Dataset schema: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
514,534
https://en.wikipedia.org/wiki/Canonical%20transformation
In Hamiltonian mechanics, a canonical transformation is a change of canonical coordinates that preserves the form of Hamilton's equations. This is sometimes known as form invariance. Although Hamilton's equations are preserved, it need not preserve the explicit form of the Hamiltonian itself. Canonical transformations are useful in their own right, and also form the basis for the Hamilton–Jacobi equations (a useful method for calculating conserved quantities) and Liouville's theorem (itself the basis for classical statistical mechanics). Since Lagrangian mechanics is based on generalized coordinates, transformations of the coordinates do not affect the form of Lagrange's equations and, hence, do not affect the form of Hamilton's equations if the momentum is simultaneously changed by a Legendre transformation into where are the new co‑ordinates, grouped in canonical conjugate pairs of momenta and corresponding positions for with being the number of degrees of freedom in both co‑ordinate systems. Therefore, coordinate transformations (also called point transformations) are a type of canonical transformation. However, the class of canonical transformations is much broader, since the old generalized coordinates, momenta and even time may be combined to form the new generalized coordinates and momenta. Canonical transformations that do not include the time explicitly are called restricted canonical transformations (many textbooks consider only this type). Modern mathematical descriptions of canonical transformations are considered under the broader topic of symplectomorphism which covers the subject with advanced mathematical prerequisites such as cotangent bundles, exterior derivatives and symplectic manifolds. Notation Boldface variables such as represent a list of generalized coordinates that need not transform like a vector under rotation and similarly represents the corresponding generalized momentum, e.g., A dot over a variable or list signifies the time derivative, e.g., and the equalities are read to be satisfied for all coordinates, for example: The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, e.g., The dot product (also known as an "inner product") maps the two coordinate lists into one variable representing a single numerical value. The coordinates after transformation are similarly labelled with for transformed generalized coordinates and for transformed generalized momentum. Conditions for restricted canonical transformation Restricted canonical transformations are coordinate transformations where transformed coordinates and do not have explicit time dependence, i.e., and . The functional form of Hamilton's equations is In general, a transformation does not preserve the form of Hamilton's equations but in the absence of time dependence in transformation, some simplifications are possible. Following the formal definition for a canonical transformation, it can be shown that for this type of transformation, the new Hamiltonian (sometimes called the Kamiltonian) can be expressed as:where it differs by a partial time derivative of a function known as a generator, which reduces to being only a function of time for restricted canonical transformations. 
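The displayed formulas in this passage did not survive extraction. As a hedged reconstruction of the standard relations the paragraph refers to (the symbols (q, p) for the old variables, (Q, P) for the new ones, and G for the generator are assumed here, since they are not shown explicitly in the text), Hamilton's equations and the transformed Hamiltonian read:

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}
\qquad\longrightarrow\qquad
\dot{Q}_i = \frac{\partial K}{\partial P_i}, \qquad
\dot{P}_i = -\frac{\partial K}{\partial Q_i},
\qquad
K(Q,P,t) = H\bigl(q(Q,P),\,p(Q,P),\,t\bigr) + \frac{\partial G}{\partial t}.
```

For a restricted canonical transformation the additional term reduces to at most a function of time alone, so K and H generate the same dynamics.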
In addition to leaving the form of the Hamiltonian unchanged, it also permits the use of the unchanged Hamiltonian in Hamilton's equations of motion, owing to the above form: Although canonical transformations refer to a more general set of transformations of phase space, corresponding with less permissive transformations of the Hamiltonian, the restricted case provides simpler conditions to obtain results that can be further generalized. All of the following conditions, with the exception of the bilinear invariance condition, can be generalized for canonical transformations, including time dependence. Indirect conditions Since restricted transformations have no explicit time dependence (by definition), the time derivative of a new generalized coordinate is where is the Poisson bracket. Similarly, for the identity for the conjugate momentum Pm, using the form of the "Kamiltonian" it follows that: Due to the form of the Hamiltonian equations of motion, if the transformation is canonical, the two derived results must be equal, resulting in the equations: The analogous argument for the generalized momenta Pm leads to two other sets of equations: These are the indirect conditions to check whether a given transformation is canonical. Symplectic condition Sometimes the Hamiltonian relations are represented as: where and . Similarly, let . From the relation of partial derivatives, rewriting the relation in terms of partial derivatives with respect to the new variables gives where . Similarly for , Due to the form of the Hamiltonian equations for , where can be used due to the form of the Kamiltonian. Equating the two expressions gives the symplectic condition: The left-hand side of the above is called the Poisson matrix of , denoted as . Similarly, a Lagrange matrix of can be constructed as . It can be shown that the symplectic condition is also equivalent to by using the property. The set of all matrices which satisfy the symplectic condition forms the symplectic group. The symplectic conditions are equivalent to the indirect conditions, as they both lead to the equation , which is used in both derivations. Invariance of the Poisson bracket The Poisson bracket, which is defined as: can be represented in matrix form as: Hence, using the partial derivative relations and the symplectic condition gives: The symplectic condition can also be recovered by taking and , which shows that . Thus these conditions are equivalent to the symplectic condition. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it. Invariance of the Lagrange bracket The Lagrange bracket, which is defined as: can be represented in matrix form as: A similar derivation gives: The symplectic condition can also be recovered by taking and , which shows that . Thus these conditions are equivalent to the symplectic condition. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it. Bilinear invariance conditions This set of conditions applies only to restricted canonical transformations, i.e., canonical transformations that are independent of the time variable. Consider arbitrary variations of two kinds, in a single pair of generalized coordinate and the corresponding momentum: The area of the infinitesimal parallelogram is given by: It follows from the symplectic condition that the infinitesimal area is conserved under canonical transformation: Note that the new coordinates need not be completely oriented in one coordinate–momentum plane.
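The matrix expressions referred to above were also lost in extraction. A standard reconstruction, assuming the phase-space column vector notation η = (q₁, …, qₙ, p₁, …, pₙ)ᵀ with Jacobian M = ∂(Q, P)/∂(q, p), is:

```latex
J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}, \qquad
\dot{\eta} = J\,\frac{\partial H}{\partial \eta}, \qquad
M J M^{T} = J \quad\text{(symplectic condition)},
```

```latex
\{u, v\} = \sum_{i}\left(
\frac{\partial u}{\partial q_i}\frac{\partial v}{\partial p_i}
- \frac{\partial u}{\partial p_i}\frac{\partial v}{\partial q_i}\right)
= \left(\frac{\partial u}{\partial \eta}\right)^{\!T} J\, \frac{\partial v}{\partial \eta},
\qquad
\{Q_i, Q_j\} = \{P_i, P_j\} = 0, \quad \{Q_i, P_j\} = \delta_{ij}.
```

The last three fundamental brackets are the form the Poisson-bracket invariance takes when evaluated on the new coordinates themselves.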
Hence, the condition is more generally stated as an invariance of the form under canonical transformation, expanded as: If the above is to hold for arbitrary variations, it is only possible if the indirect conditions are met. The form of the equation, , is also known as a symplectic product of the vectors and , and the bilinear invariance condition can be stated as a local conservation of the symplectic product. Liouville's theorem The indirect conditions allow us to prove Liouville's theorem, which states that the volume in phase space is conserved under canonical transformations, i.e., By calculus, the latter integral must equal the former times the determinant of the Jacobian, where Exploiting the "division" property of Jacobians yields Eliminating the repeated variables gives Application of the indirect conditions above yields . Generating function approach To guarantee a valid transformation between and , we may resort to a direct generating function approach. Both sets of variables must obey Hamilton's principle. That is, the action integral over the Lagrangians and , obtained from the respective Hamiltonians via an "inverse" Legendre transformation, must be stationary in both cases (so that one can use the Euler–Lagrange equations to arrive at Hamiltonian equations of motion of the designated form, as shown, for example, here): One way for both variational integral equalities to be satisfied is to have the two integrands differ by at most a constant scale factor and a total time derivative. Lagrangians are not unique: one can always multiply by a constant and add a total time derivative and yield the same equations of motion (as discussed on Wikibooks). In general, the scaling factor is set equal to one; canonical transformations for which it differs from one are called extended canonical transformations. The total time derivative term is kept, otherwise the problem would be rendered trivial and there would be not much freedom for the new canonical variables to differ from the old ones. Here, is a generating function of one old canonical coordinate ( or ), one new canonical coordinate ( or ) and (possibly) the time . Thus, there are four basic types of generating functions (although mixtures of these four types can exist), depending on the choice of variables. As will be shown below, the generating function will define a transformation from old to new canonical coordinates, and any such transformation is guaranteed to be canonical. The various generating functions and their properties, tabulated below, are discussed in detail: Type 1 generating function The type 1 generating function depends only on the old and new generalized coordinates To derive the implicit transformation, we expand the defining equation above Since the new and old coordinates are each independent, the following equations must hold These equations define the transformation as follows: The first set of equations defines relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates . In practice, this procedure is easier than it sounds, because the generating function is usually simple.
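A hedged reconstruction of the type 1 relations derived above (the symbol F₁(q, Q, t) is the conventional name for this generator and is assumed here, since the formulas themselves were lost in extraction):

```latex
p_i = \frac{\partial F_1}{\partial q_i}, \qquad
P_i = -\frac{\partial F_1}{\partial Q_i}, \qquad
K = H + \frac{\partial F_1}{\partial t}.
```

With these relations, the standard example discussed next, F₁ = q · Q, gives pᵢ = Qᵢ and Pᵢ = −qᵢ, i.e., coordinates and momenta are exchanged up to a sign.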
For example, let This results in swapping the generalized coordinates for the momenta and vice versa and . This example illustrates how independent the coordinates and momenta are in the Hamiltonian formulation; they are equivalent variables. Type 2 generating function The type 2 generating function depends only on the old generalized coordinates and the new generalized momenta where the terms represent a Legendre transformation to change the right-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above Since the old coordinates and new momenta are each independent, the following equations must hold These equations define the transformation as follows: The first set of equations define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates . In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let where is a set of functions. This results in a point transformation of the generalized coordinates Type 3 generating function The type 3 generating function depends only on the old generalized momenta and the new generalized coordinates where the terms represent a Legendre transformation to change the left-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above Since the new and old coordinates are each independent, the following equations must hold These equations define the transformation as follows: The first set of equations define relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates . In practice, this procedure is easier than it sounds, because the generating function is usually simple. Type 4 generating function The type 4 generating function depends only on the old and new generalized momenta where the terms represent a Legendre transformation to change both sides of the equation below. To derive the implicit transformation, we expand the defining equation above Since the new and old coordinates are each independent, the following equations must hold These equations define the transformation as follows: The first set of equations define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. 
Substitution of these formulae for the coordinates into the second set of equations yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates . Restrictions on generating functions For example, using generating function of second kind: and , the first set of equations consisting of variables , and has to be inverted to get . This process is possible when the matrix defined by is non-singular. Hence, restrictions are placed on generating functions to have the matrices: , , and , being non-singular. Limitations of generating functions Since is non-singular, it implies that is also non-singular. Since the matrix is inverse of , the transformations of type 2 generating functions always have a non-singular matrix. Similarly, it can be stated that type 1 and type 4 generating functions always have a non-singular matrix whereas type 2 and type 3 generating functions always have a non-singular matrix. Hence, the canonical transformations resulting from these generating functions are not completely general. In other words, since and are each independent functions, it follows that to have generating function of the form and or and , the corresponding Jacobian matrices and are restricted to be non singular, ensuring that the generating function is a function of independent variables. However, as a feature of canonical transformations, it is always possible to choose such independent functions from sets or , to form a generating function representation of canonical transformations, including the time variable. Hence, it can be proved that every finite canonical transformation can be given as a closed but implicit form that is a variant of the given four simple forms. Canonical transformation conditions Canonical transformation relations From: , calculate : Since the left hand side is which is independent of dynamics of the particles, equating coefficients of and to zero, canonical transformation rules are obtained. This step is equivalent to equating the left hand side as . Similarly: Similarly the canonical transformation rules are obtained by equating the left hand side as . The above two relations can be combined in matrix form as: (which will also retain same form for extended canonical transformation) where the result , has been used. The canonical transformation relations are hence said to be equivalent to in this context. The canonical transformation relations can now be restated to include time dependance:Since and , if and do not explicitly depend on time, can be taken. The analysis of restricted canonical transformations is hence consistent with this generalization. Symplectic condition Applying transformation of co-ordinates formula for , in Hamiltonian's equations gives: Similarly for :or:Where the last terms of each equation cancel due to condition from canonical transformations. Hence leaving the symplectic relation: which is also equivalent with the condition . It follows from the above two equations that the symplectic condition implies the equation , from which the indirect conditions can be recovered. Thus, symplectic conditions and indirect conditions can be said to be equivalent in the context of using generating functions. 
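The symplectic condition lends itself to a quick numerical check. Below is a minimal sketch (not from the article; the example maps and all function names are illustrative) that builds the Jacobian of a phase-space map by finite differences and tests M J Mᵀ = J for one degree of freedom:

```python
import numpy as np

# Standard symplectic matrix for the ordering (q, p).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def jacobian(transform, q, p, eps=1e-6):
    """Forward-difference Jacobian M = d(Q, P)/d(q, p) of a phase-space map."""
    base = np.array(transform(q, p))
    M = np.zeros((2, 2))
    for col, (dq, dp) in enumerate([(eps, 0.0), (0.0, eps)]):
        M[:, col] = (np.array(transform(q + dq, p + dp)) - base) / eps
    return M

def swap(q, p):
    return p, -q          # Q = p, P = -q (generated by F1 = q*Q)

def stretch(q, p):
    return 2.0 * q, p     # rescales phase-space area, hence not canonical

M = jacobian(swap, q=0.3, p=1.2)
print(np.allclose(M @ J @ M.T, J))   # True: the swap is canonical

M = jacobian(stretch, q=0.3, p=1.2)
print(np.allclose(M @ J @ M.T, J))   # False: the stretch violates the condition
```

The second map fails the test precisely because its Jacobian changes the phase-space area, which is what the symplectic condition (and, equivalently, Liouville's theorem) forbids.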
Invariance of the Poisson and Lagrange brackets Since and where the symplectic condition is used in the last equalities. Using , the equalities and are obtained, which imply the invariance of the Poisson and Lagrange brackets. Extended canonical transformation Canonical transformation relations By solving for: with various forms of the generating function, the relation between K and H becomes instead, which also applies for the case . All results presented below can also be obtained by replacing , and in known solutions, since this retains the form of Hamilton's equations. The extended canonical transformations are hence said to be the result of a canonical transformation () and a trivial canonical transformation () which has (for the given example, , which satisfies the condition). Using the same steps as in the previous generalization, with in the general case, and retaining the equation , the partial differential relations for extended canonical transformations are obtained as: Symplectic condition Following the same steps to derive the symplectic conditions: and where using instead gives: The second part of each equation cancels. Hence, the condition for an extended canonical transformation instead becomes: . Poisson and Lagrange brackets The Poisson brackets are changed as follows: whereas the Lagrange brackets are changed as: Hence, the Poisson bracket scales by the inverse of , whereas the Lagrange bracket scales by a factor of . Infinitesimal canonical transformation Consider a canonical transformation that depends on a continuous parameter , as follows: For infinitesimal values of , the corresponding transformations are called infinitesimal canonical transformations, also known as differential canonical transformations. Consider the following generating function: Since, for , the resulting canonical transformation is the identity ( and ), this type of generating function can be used for an infinitesimal canonical transformation by restricting to an infinitesimal value. From the conditions on generators of the second type: Since , changing the variables of the function to and neglecting terms of higher order in , gives: Infinitesimal canonical transformations can also be derived using the matrix form of the symplectic condition. Active canonical transformations In the passive view of transformations, the coordinate system is changed without the physical system changing, whereas in the active view of transformations, the coordinate system is retained and the physical system is said to undergo the transformation. Thus, using the relations from infinitesimal canonical transformations, the change in the system's state in the active view of the canonical transformation is: or, in matrix form, . For any function , it changes in the active view of the transformation according to: Considering the change of Hamiltonians in the active view, i.e., for a fixed point, where are mapped to the point by the infinitesimal canonical transformation, and a similar change of variables for to is considered up to first order in . Hence, if the Hamiltonian is invariant under infinitesimal canonical transformations, its generator is a constant of motion.
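A hedged reconstruction of the infinitesimal relations discussed above (with ε the infinitesimal parameter and G(q, p) the generator; these symbol names are assumptions, as the original formulas are missing):

```latex
\delta q_i = \epsilon\,\frac{\partial G}{\partial p_i}, \qquad
\delta p_i = -\epsilon\,\frac{\partial G}{\partial q_i}, \qquad
\delta u = \epsilon\,\{u, G\},
```

so invariance of the Hamiltonian, δH = ε{H, G} = 0, implies {G, H} = 0 and hence that a time-independent generator G is a constant of motion, which is the statement made in the examples that follow.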
Examples of ICT Time evolution Taking and , then . Thus the continuous application of such a transformation maps the coordinates to . Hence, if the Hamiltonian is time-translation invariant, i.e., it does not have explicit time dependence, its value is conserved for the motion. Translation Taking , and . Hence, the canonical momentum generates a shift in the corresponding generalized coordinate and, if the Hamiltonian is invariant under translation, the momentum is a constant of motion. Rotation Consider an orthogonal system for an N-particle system: Choosing the generator to be: and the infinitesimal value of , then the change in the coordinates is given for x by: and similarly for y: whereas the z component of all particles is unchanged: . These transformations correspond to a rotation about the z axis by an angle , in its first-order approximation. Hence, repeated application of the infinitesimal canonical transformation generates a rotation of the system of particles about the z axis. If the Hamiltonian is invariant under rotation about the z axis, the generator, the component of angular momentum along the axis of rotation, is an invariant of motion. Motion as canonical transformation Motion itself (or, equivalently, a shift in the time origin) is a canonical transformation. If and , then Hamilton's principle is automatically satisfied, since a valid trajectory should always satisfy Hamilton's principle, regardless of the endpoints. Examples The translation where are two constant vectors is a canonical transformation. Indeed, the Jacobian matrix is the identity, which is symplectic: . Setting and , the transformation where is a rotation matrix of order 2 is canonical. Keeping in mind that special orthogonal matrices obey , it is easy to see that the Jacobian is symplectic. However, this example only works in dimension 2: is the only special orthogonal group in which every matrix is symplectic. Note that the rotation here acts on and not on and independently, so these are not the same as a physical rotation of an orthogonal spatial coordinate system. The transformation , where is an arbitrary function of , is canonical. The Jacobian matrix is indeed given by , which is symplectic. Modern mathematical description In mathematical terms, canonical coordinates are any coordinates on the phase space (cotangent bundle) of the system that allow the canonical one-form to be written as up to a total differential (exact form). The change of variable between one set of canonical coordinates and another is a canonical transformation. The index of the generalized coordinates is written here as a superscript (), not as a subscript as done above (). The superscript conveys the contravariant transformation properties of the generalized coordinates, and does not mean that the coordinate is being raised to a power. Further details may be found at the symplectomorphism article. History The first major application of the canonical transformation was in 1846, by Charles Delaunay, in the study of the Earth-Moon-Sun system. This work resulted in the publication of a pair of large volumes as Mémoires by the French Academy of Sciences, in 1860 and 1867. See also Symplectomorphism Hamilton–Jacobi equation Liouville's theorem (Hamiltonian) Mathieu transformation Linear canonical transformation Notes References Hamiltonian mechanics Transforms
Canonical transformation
[ "Physics", "Mathematics" ]
4,408
[ "Functions and mappings", "Theoretical physics", "Mathematical objects", "Classical mechanics", "Hamiltonian mechanics", "Mathematical relations", "Transforms", "Dynamical systems" ]
515,339
https://en.wikipedia.org/wiki/Zinc%20oxide
Zinc oxide is an inorganic compound with the formula . It is a white powder which is insoluble in water. ZnO is used as an additive in numerous materials and products including cosmetics, food supplements, rubbers, plastics, ceramics, glass, cement, lubricants, paints, sunscreens, ointments, adhesives, sealants, pigments, foods, batteries, ferrites, fire retardants, semi conductors, and first-aid tapes. Although it occurs naturally as the mineral zincite, most zinc oxide is produced synthetically. History Early humans probably used zinc compounds in processed and unprocessed forms, as paint or medicinal ointment; however, their composition is uncertain. The use of pushpanjan, probably zinc oxide, as a salve for eyes and open wounds is mentioned in the Indian medical text the Charaka Samhita, thought to date from 500 BC or before. Zinc oxide ointment is also mentioned by the Greek physician Dioscorides (1st century AD). Galen suggested treating ulcerating cancers with zinc oxide, as did Avicenna in his The Canon of Medicine. It is used as an ingredient in products such as baby powder and creams against diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments. The Romans produced considerable quantities of brass (an alloy of zinc and copper) as early as 200 BC by a cementation process where copper was reacted with zinc oxide. The zinc oxide is thought to have been produced by heating zinc ore in a shaft furnace. This liberated metallic zinc as a vapor, which then ascended the flue and condensed as the oxide. This process was described by Dioscorides in the 1st century AD. Zinc oxide has also been recovered from zinc mines at Zawar in India, dating from the second half of the first millennium BC. From the 12th to the 16th century, zinc and zinc oxide were recognized and produced in India using a primitive form of the direct synthesis process. From India, zinc manufacturing moved to China in the 17th century. In 1743, the first European zinc smelter was established in Bristol, United Kingdom. Around 1782, Louis-Bernard Guyton de Morveau proposed replacing lead white pigment with zinc oxide. The main usage of zinc oxide (zinc white) was in paints and as an additive to ointments. Zinc white was accepted as a pigment in oil paintings by 1834 but it did not mix well with oil. This problem was solved by optimizing the synthesis of ZnO. In 1845, Edme-Jean Leclaire in Paris was producing the oil paint on a large scale; by 1850, zinc white was being manufactured throughout Europe. The success of zinc white paint was due to its advantages over the traditional white lead: zinc white is essentially permanent in sunlight, it is not blackened by sulfur-bearing air, it is non-toxic and more economical. Because zinc white is so "clean" it is valuable for making tints with other colors, but it makes a rather brittle dry film when unmixed with other colors. For example, during the late 1890s and early 1900s, some artists used zinc white as a ground for their oil paintings. These paintings developed cracks over time. In recent times, most zinc oxide has been used in the rubber industry to resist corrosion. In the 1970s, the second largest application of ZnO was photocopying. High-quality ZnO produced by the "French process" was added to photocopying paper as a filler. This application was soon displaced by titanium. Chemical properties Pure ZnO is a white powder. 
However, in nature, it occurs as the rare mineral zincite, which usually contains manganese and other impurities that confer a yellow to red color. Crystalline zinc oxide is thermochromic, changing from white to yellow when heated in air and reverting to white on cooling. This color change is caused by a small loss of oxygen to the environment at high temperatures to form the non-stoichiometric Zn1+xO, where at 800 °C, x = 0.00007. Zinc oxide is an amphoteric oxide. It is nearly insoluble in water, but it will dissolve in most acids, such as hydrochloric acid: ZnO + 2 HCl → ZnCl2 + H2O Solid zinc oxide will also dissolve in alkalis to give soluble zincates: ZnO + 2 NaOH + H2O → Na2[Zn(OH)4] ZnO reacts slowly with fatty acids in oils to produce the corresponding carboxylates, such as oleate or stearate. When mixed with a strong aqueous solution of zinc chloride, ZnO forms cement-like products best described as zinc hydroxy chlorides. This cement was used in dentistry. ZnO also forms cement-like material when treated with phosphoric acid; related materials are used in dentistry. A major component of zinc phosphate cement produced by this reaction is hopeite, Zn3(PO4)2·4H2O. ZnO decomposes into zinc vapor and oxygen at around 1975 °C with a standard oxygen pressure. In a carbothermic reaction, heating with carbon converts the oxide into zinc vapor at a much lower temperature (around 950 °C). ZnO + C → Zn(Vapor) + CO Physical properties Structure Zinc oxide crystallizes in two main forms, hexagonal wurtzite and cubic zincblende. The wurtzite structure is most stable at ambient conditions and thus most common. The zincblende form can be stabilized by growing ZnO on substrates with cubic lattice structure. In both cases, the zinc and oxide centers are tetrahedral, the most characteristic geometry for Zn(II). ZnO converts to the rocksalt motif at relatively high pressures about 10 GPa. Hexagonal and zincblende polymorphs have no inversion symmetry (reflection of a crystal relative to any given point does not transform it into itself). This and other lattice symmetry properties result in piezoelectricity of the hexagonal and zincblende ZnO, and pyroelectricity of hexagonal ZnO. The hexagonal structure has a point group 6 mm (Hermann–Mauguin notation) or C6v (Schoenflies notation), and the space group is P63mc or C6v4. The lattice constants are a = 3.25 Å and c = 5.2 Å; their ratio c/a ~ 1.60 is close to the ideal value for hexagonal cell c/a = 1.633. As in most group II-VI materials, the bonding in ZnO is largely ionic (Zn2+O2−) with the corresponding radii of 0.074 nm for Zn2+ and 0.140 nm for O2−. This property accounts for the preferential formation of wurtzite rather than zinc blende structure, as well as the strong piezoelectricity of ZnO. Because of the polar Zn−O bonds, zinc and oxygen planes are electrically charged. To maintain electrical neutrality, those planes reconstruct at atomic level in most relative materials, but not in ZnO – its surfaces are atomically flat, stable and exhibit no reconstruction. However, studies using wurtzoid structures explained the origin of surface flatness and the absence of reconstruction at ZnO wurtzite surfaces in addition to the origin of charges on ZnO planes. Mechanical properties ZnO is a wide-band gap semiconductor of the II-VI semiconductor group. The native doping of the semiconductor due to oxygen vacancies or zinc interstitials is n-type. ZnO is a relatively soft material with approximate hardness of 4.5 on the Mohs scale. 
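As a quick arithmetic check of the ZnO lattice constants quoted in the Structure subsection above, the following sketch (values taken from the text; the ideal ratio √(8/3) is a standard crystallographic result) reproduces the stated c/a ≈ 1.60 versus the ideal 1.633:

```python
import math

# Wurtzite ZnO lattice constants quoted above (approximate values).
a = 3.25   # angstrom
c = 5.20   # angstrom

ideal = math.sqrt(8.0 / 3.0)        # ideal ratio for hexagonal close packing
print(f"c/a       = {c / a:.3f}")   # ~ 1.600
print(f"ideal c/a = {ideal:.3f}")   # ~ 1.633
```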
Its elastic constants are smaller than those of relevant III-V semiconductors, such as GaN. The high heat capacity and heat conductivity, low thermal expansion and high melting temperature of ZnO are beneficial for ceramics. The E2 optical phonon in ZnO exhibits an unusually long lifetime of 133 ps at 10 K. Among the tetrahedrally bonded semiconductors, it has been stated that ZnO has the highest piezoelectric tensor, or at least one comparable to that of GaN and AlN. This property makes it a technologically important material for many piezoelectrical applications, which require a large electromechanical coupling. Therefore, ZnO in the form of thin film has been one of the most studied and used resonator materials for thin-film bulk acoustic resonators. Electrical and optical properties Favourable properties of zinc oxide include good transparency, high electron mobility, wide band gap, and strong room-temperature luminescence. Those properties make ZnO valuable for a variety of emerging applications: transparent electrodes in liquid crystal displays, energy-saving or heat-protecting windows, and electronics as thin-film transistors and light-emitting diodes. ZnO has a relatively wide direct band gap of ~3.3 eV at room temperature. Advantages associated with a wide band gap include higher breakdown voltages, ability to sustain large electric fields, lower electronic noise, and high-temperature and high-power operation. The band gap of ZnO can further be tuned to ~3–4 eV by its alloying with magnesium oxide or cadmium oxide. Due to this large band gap, there have been efforts to create visibly transparent solar cells utilising ZnO as a light absorbing layer. However, these solar cells have so far proven highly inefficient. Most ZnO has n-type character, even in the absence of intentional doping. Nonstoichiometry is typically the origin of n-type character, but the subject remains controversial. An alternative explanation has been proposed, based on theoretical calculations, that unintentional substitutional hydrogen impurities are responsible. Controllable n-type doping is easily achieved by substituting Zn with group-III elements such as Al, Ga, In or by substituting oxygen with group-VII elements chlorine or iodine. Reliable p-type doping of ZnO remains difficult. This problem originates from low solubility of p-type dopants and their compensation by abundant n-type impurities. This problem is observed with GaN and ZnSe. Measurement of p-type in "intrinsically" n-type material is complicated by the inhomogeneity of samples. Current limitations to p-doping limit electronic and optoelectronic applications of ZnO, which usually require junctions of n-type and p-type material. Known p-type dopants include group-I elements Li, Na, K; group-V elements N, P and As; as well as copper and silver. However, many of these form deep acceptors and do not produce significant p-type conduction at room temperature. Electron mobility of ZnO strongly varies with temperature and has a maximum of ~2000 cm2/(V·s) at 80 K. Data on hole mobility are scarce with values in the range 5–30 cm2/(V·s). ZnO discs, acting as a varistor, are the active material in most surge arresters. Zinc oxide is noted for its strongly nonlinear optical properties, especially in bulk. The nonlinearity of ZnO nanoparticles can be fine-tuned according to their size. 
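To connect the band-gap energies quoted above with absorption-edge wavelengths, here is a small illustrative conversion using λ = hc/E. The constant 1239.84 eV·nm and the sample energies are chosen for illustration, matching the ~3.3 eV intrinsic gap and the 3–4 eV alloy-tuning range mentioned in this section; they are not taken verbatim from the article.

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def gap_to_wavelength_nm(gap_ev: float) -> float:
    """Absorption-edge wavelength corresponding to a band gap in eV."""
    return HC_EV_NM / gap_ev

for gap in (3.3, 3.0, 4.0):   # intrinsic ZnO and the MgO/CdO alloying range
    print(f"{gap:.1f} eV  ->  {gap_to_wavelength_nm(gap):.0f} nm")
# 3.3 eV -> ~376 nm (near-UV), consistent with the ~375 nm figure given later
```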
Production For industrial use, ZnO is produced at levels of 10⁵ tons per year by three main processes: Indirect process In the indirect or French process, metallic zinc is melted in a graphite crucible and vaporized at temperatures above 907 °C (typically around 1000 °C). Zinc vapor reacts with the oxygen in the air to give ZnO, accompanied by a drop in its temperature and bright luminescence. Zinc oxide particles are transported into a cooling duct and collected in a bag house. This indirect method was popularized by Edme Jean LeClaire of Paris in 1844 and therefore is commonly known as the French process. Its product normally consists of agglomerated zinc oxide particles with an average size of 0.1 to a few micrometers. By weight, most of the world's zinc oxide is manufactured via the French process. Direct process The direct or American process starts with diverse contaminated zinc composites, such as zinc ores or smelter by-products. The zinc precursors are reduced (carbothermal reduction) by heating with a source of carbon such as anthracite to produce zinc vapor, which is then oxidized as in the indirect process. Because of the lower purity of the source material, the final product is also of lower quality in the direct process as compared to the indirect one. Wet chemical process A small amount of industrial production involves wet chemical processes, which start with aqueous solutions of zinc salts, from which zinc carbonate or zinc hydroxide is precipitated. The solid precipitate is then calcined at temperatures around 800 °C. Laboratory synthesis Numerous specialised methods exist for producing ZnO for scientific studies and niche applications. These methods can be classified by the resulting ZnO form (bulk, thin film, nanowire), temperature ("low", that is close to room temperature, or "high", that is T ~ 1000 °C), process type (vapor deposition or growth from solution) and other parameters. Large single crystals (many cubic centimeters) can be grown by gas transport (vapor-phase deposition), hydrothermal synthesis, or melt growth. However, because of the high vapor pressure of ZnO, growth from the melt is problematic. Growth by gas transport is difficult to control, leaving the hydrothermal method as a preference. Thin films can be produced by a variety of methods including chemical vapor deposition, metalorganic vapour phase epitaxy, electrodeposition, sputtering, spray pyrolysis, thermal oxidation, sol–gel synthesis, atomic layer deposition, and pulsed laser deposition. Zinc oxide can be produced in bulk by precipitation from zinc compounds, mainly zinc acetate, in various solutions, such as aqueous sodium hydroxide or aqueous ammonium carbonate. Synthetic methods characterized in the literature since the year 2000 aim to produce ZnO particles with high surface area and minimal size distribution, including precipitation, mechanochemical, sol-gel, microwave, and emulsion methods. ZnO nanostructures Nanostructures of ZnO can be synthesized into a variety of morphologies, including nanowires, nanorods, tetrapods, nanobelts, nanoflowers, nanoparticles, etc. Nanostructures can be obtained with most of the above-mentioned techniques, under certain conditions, and also with the vapor–liquid–solid method. The synthesis is typically carried out at temperatures of about 90 °C, in an equimolar aqueous solution of zinc nitrate and hexamine, the latter providing the basic environment. Certain additives, such as polyethylene glycol or polyethylenimine, can improve the aspect ratio of the ZnO nanowires. 
Doping of the ZnO nanowires has been achieved by adding other metal nitrates to the growth solution. The morphology of the resulting nanostructures can be tuned by changing the parameters relating to the precursor composition (such as the zinc concentration and pH) or to the thermal treatment (such as the temperature and heating rate). Aligned ZnO nanowires on pre-seeded silicon, glass, and gallium nitride substrates have been grown using aqueous zinc salts such as zinc nitrate and zinc acetate in basic environments. Pre-seeding substrates with ZnO creates sites for homogeneous nucleation of ZnO crystal during the synthesis. Common pre-seeding methods include in-situ thermal decomposition of zinc acetate crystallites, spin coating of ZnO nanoparticles, and the use of physical vapor deposition methods to deposit ZnO thin films. Pre-seeding can be performed in conjunction with top down patterning methods such as electron beam lithography and nanosphere lithography to designate nucleation sites prior to growth. Aligned ZnO nanowires can be used in dye-sensitized solar cells and field emission devices. Applications The applications of zinc oxide powder are numerous, and the principal ones are summarized below. Most applications exploit the reactivity of the oxide as a precursor to other zinc compounds. For material science applications, zinc oxide has high refractive index, high thermal conductivity, binding, antibacterial and UV-protection properties. Consequently, it is added into materials and products including plastics, ceramics, glass, cement, rubber, lubricants, paints, ointments, adhesive, sealants, concrete manufacturing, pigments, foods, batteries, ferrites, and fire retardants. Rubber industry Between 50% and 60% of ZnO use is in the rubber industry. Zinc oxide along with stearic acid is used in the sulfur vulcanization of rubber. ZnO additives in the form of nanoparticles are used in rubber as a pigment and to enhance its durability, and have been used in composite rubber materials such as those based on montmorillonite to impart germicidal properties. Ceramic industry Ceramic industry consumes a significant amount of zinc oxide, in particular in ceramic glaze and frit compositions. The relatively high heat capacity, thermal conductivity and high temperature stability of ZnO coupled with a comparatively low coefficient of expansion are desirable properties in the production of ceramics. ZnO affects the melting point and optical properties of the glazes, enamels, and ceramic formulations. Zinc oxide as a low expansion, secondary flux improves the elasticity of glazes by reducing the change in viscosity as a function of temperature and helps prevent crazing and shivering. By substituting ZnO for BaO and PbO, the heat capacity is decreased and the thermal conductivity is increased. Zinc in small amounts improves the development of glossy and brilliant surfaces. However, in moderate to high amounts, it produces matte and crystalline surfaces. With regard to color, zinc has a complicated influence. Medicine Skin treatment Zinc oxide as a mixture with about 0.5% iron(III) oxide (Fe2O3) is called calamine and is used in calamine lotion, a topical skin treatment. Historically, the name calamine was ascribed to a mineral that contained zinc used in powdered form as medicine, but it was determined in 1803 that ore described as calamine was actually a mixture of the zinc minerals smithsonite and hemimorphite. 
Zinc oxide is widely used to treat a variety of skin conditions, including atopic dermatitis, contact dermatitis, itching due to eczema, diaper rash and acne. It is used in products such as baby powder and barrier creams to treat diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments. It is often combined with castor oil to form an emollient and astringent, zinc and castor oil cream, commonly used to treat infants. It is also a component in tape (called "zinc oxide tape") used by athletes as a bandage to prevent soft tissue damage during workouts. Antibacterial Zinc oxide is used in mouthwash products and toothpastes as an anti-bacterial agent proposed to prevent plaque and tartar formation, and to control bad breath by reducing the volatile gases and volatile sulfur compounds (VSC) in the mouth. Along with zinc oxide or zinc salts, these products also commonly contain other active ingredients, such as cetylpyridinium chloride, xylitol, hinokitiol, essential oils and plant extracts. Powdered zinc oxide has deodorizing and antibacterial properties. ZnO is added to cotton fabric, rubber, oral care products, and food packaging. Enhanced antibacterial action of fine particles compared to bulk material is not exclusive to ZnO and is observed for other materials, such as silver. The mechanism of ZnO's antibacterial effect has been variously described as the generation of reactive oxygen species, the release of Zn2+ ions, and a general disturbance of the bacterial cell membrane by nanoparticles. Sunscreen Zinc oxide is used in sunscreen to absorb ultraviolet light. It is the broadest spectrum UVA and UVB absorber that is approved for use as a sunscreen by the U.S. Food and Drug Administration (FDA), and is completely photostable. When used as an ingredient in sunscreen, zinc oxide blocks both UVA (320–400 nm) and UVB (280–320 nm) rays of ultraviolet light. Zinc oxide and the other most common physical sunscreen, titanium dioxide, are considered to be nonirritating, nonallergenic, and non-comedogenic. Zinc from zinc oxide is, however, slightly absorbed into the skin. Many sunscreens use nanoparticles of zinc oxide (along with nanoparticles of titanium dioxide) because such small particles do not scatter light and therefore do not appear white. The nanoparticles are not absorbed into the skin more than regular-sized zinc oxide particles are and are only absorbed into the outermost layer of the skin but not into the body. Dental restoration When mixed with eugenol, zinc oxide eugenol is formed, which has applications as a restorative and prosthodontic in dentistry. Food additive Zinc oxide is added to many food products, including breakfast cereals, as a source of zinc, a necessary nutrient. Zinc may be added to food in the form of zinc oxide nanoparticles, or as zinc sulfate, zinc gluconate, zinc acetate, or zinc citrate. Some foods also include trace amounts of ZnO even if it is not intended as a nutrient. Pigment Zinc oxide (zinc white) is used as a pigment in paints and is more opaque than lithopone, but less opaque than titanium dioxide. It is also used in coatings for paper. Chinese white is a special grade of zinc white used in artists' pigments. The use of zinc white as a pigment in oil painting started in the middle of 18th century. It has partly replaced the poisonous lead white and was used by painters such as Böcklin, Van Gogh, Manet, Munch and others. It is also a main ingredient of mineral makeup (CI 77947). 
UV absorber Micronized and nano-scale zinc oxide provide strong protection against UVA and UVB ultraviolet radiation, and are consequently used in sunscreens, and also in UV-blocking sunglasses for use in space and for protection when welding, following research by scientists at the Jet Propulsion Laboratory (JPL). Coatings Paints containing zinc oxide powder have long been utilized as anticorrosive coatings for metals. They are especially effective for galvanized iron. Iron is difficult to protect because its reactivity with organic coatings leads to brittleness and lack of adhesion. Zinc oxide paints retain their flexibility and adherence on such surfaces for many years. ZnO highly n-type doped with aluminium, gallium, or indium is transparent and conductive (transparency ~90%, lowest resistivity ~10⁻⁴ Ω·cm). ZnO:Al coatings are used for energy-saving or heat-protecting windows. The coating lets the visible part of the spectrum in but either reflects the infrared (IR) radiation back into the room (energy saving) or does not let the IR radiation into the room (heat protection), depending on which side of the window has the coating. Plastics, such as polyethylene naphthalate (PEN), can be protected by applying a zinc oxide coating. The coating reduces the diffusion of oxygen through PEN. Zinc oxide layers can also be used on polycarbonate in outdoor applications. The coating protects polycarbonate from solar radiation, and decreases its oxidation rate and photo-yellowing. Corrosion prevention in nuclear reactors Zinc oxide depleted in 64Zn (the zinc isotope with atomic mass 64) is used in corrosion prevention in nuclear pressurized water reactors. The depletion is necessary, because 64Zn is transformed into radioactive 65Zn under irradiation by the reactor neutrons. Methane reforming Zinc oxide (ZnO) is used in a pretreatment step to remove hydrogen sulfide (H2S), which can poison the reforming catalyst, from natural gas following hydrogenation of any sulfur compounds, prior to a methane reformer. At elevated temperatures, H2S is converted to water by the following reaction: H2S + ZnO → H2O + ZnS Electronics ZnO has a wide direct band gap (3.37 eV or 375 nm at room temperature). Therefore, its most common potential applications are in laser diodes and light emitting diodes (LEDs). Moreover, ultrafast nonlinearities and photoconductive functions have been reported in ZnO. Some optoelectronic applications of ZnO overlap with those of GaN, which has a similar band gap (~3.4 eV at room temperature). Compared to GaN, ZnO has a larger exciton binding energy (~60 meV, 2.4 times the room-temperature thermal energy), which results in bright room-temperature emission from ZnO. ZnO can be combined with GaN for LED applications. For instance, a transparent conducting oxide layer and ZnO nanostructures provide better light outcoupling. Other properties of ZnO favorable for electronic applications include its stability to high-energy radiation and its ability to be patterned by wet chemical etching. Radiation resistance makes ZnO a suitable candidate for space applications. Nanostructured ZnO is an effective medium both in powder and polycrystalline forms in random lasers, due to its high refractive index and aforementioned light emission properties. Gas sensors Zinc oxide is used in semiconductor gas sensors for detecting airborne compounds such as hydrogen sulfide, nitrogen dioxide, and volatile organic compounds. 
ZnO is a semiconductor that becomes n-doped by adsorption of reducing compounds, which reduces the detected electrical resistance through the device, in a manner similar to the widely used tin oxide semiconductor gas sensors. It is formed into nanostructures such as thin films, nanoparticles, nanopillars, or nanowires to provide a large surface area for interaction with gases. The sensors are made selective for specific gases by doping or surface-attaching materials such as catalytic noble metals. Aspirational applications Transparent electrodes Aluminium-doped ZnO layers are used as transparent electrodes. The components Zn and Al are much cheaper and less toxic compared to the generally used indium tin oxide (ITO). One application which has begun to be commercially available is the use of ZnO as the front contact for solar cells or for liquid crystal displays. Transparent thin-film transistors (TTFT) can be produced with ZnO. As field-effect transistors, they do not need a p–n junction, thus avoiding the p-type doping problem of ZnO. Some of the field-effect transistors even use ZnO nanorods as conducting channels. Piezoelectricity The piezoelectricity of textile fibers coated in ZnO has been shown capable of fabricating "self-powered nanosystems" with everyday mechanical stress from wind or body movements. Photocatalysis ZnO, both at macro- and nano-scales, could in principle be used as an electrode in photocatalysis, mainly as an anode in green chemistry applications. As a photocatalyst, ZnO reacts when exposed to UV radiation and is used in photodegradation reactions to remove organic pollutants from the environment. It is also used to replace catalysts used in photochemical reactions that would ordinarily require costly or inconvenient reaction conditions with low yields. Other The pointed tips of ZnO nanorods could be used as field emitters. ZnO is a promising anode material for lithium-ion batteries because it is cheap, biocompatible, and environmentally friendly. ZnO has a higher theoretical capacity (978 mAh g⁻¹) than many other transition metal oxides such as CoO (715 mAh g⁻¹), NiO (718 mAh g⁻¹) and CuO (674 mAh g⁻¹). ZnO is also used as an electrode in supercapacitors. Safety As a food additive, zinc oxide is on the U.S. Food and Drug Administration's list of generally recognized as safe substances. Zinc oxide itself is non-toxic; it is hazardous, however, to inhale high concentrations of zinc oxide fumes, such as those generated when zinc or zinc alloys are melted and oxidized at high temperature. This problem occurs while melting alloys containing brass because the melting point of brass is close to the boiling point of zinc. Inhalation of zinc oxide, which may occur when welding galvanized (zinc-plated) steel, can result in a malady called metal fume fever. In sunscreen formulations that combined zinc oxide with small-molecule UV absorbers, UV light caused photodegradation of the small-molecule absorbers and toxicity in embryonic zebrafish assays. See also Depleted zinc oxide Zinc oxide nanoparticle Gallium(III) nitride List of inorganic pigments Zinc Zinc oxide eugenol Zinc peroxide Zinc smelting Zinc–air battery Zinc–zinc oxide cycle ZnO nanostructures References Cited sources Reviews External links Zincite properties International Chemical Safety Card 0208. NIOSH Pocket Guide to Chemical Hazards. 
Zinc white pigment at ColourLex Amphoteric compounds Antipruritics Ceramic materials Corrosion inhibitors II-VI semiconductors Inorganic pigments Nonlinear optical materials Oxides Piezoelectric materials Sunscreening agents Wurtzite structure type oxide
Zinc oxide
[ "Physics", "Chemistry", "Engineering" ]
6,224
[ "Physical phenomena", "Amphoteric compounds", "Acids", "Inorganic compounds", "Semiconductor materials", "Process chemicals", "Oxides", "Salts", "Inorganic pigments", "Materials", "Electrical phenomena", "II-VI semiconductors", "Ceramic materials", "Corrosion inhibitors", "Ceramic engine...
515,382
https://en.wikipedia.org/wiki/Beamline
In accelerator physics, a beamline refers to the trajectory of the beam of particles, including the overall construction of the path segment (guide tubes, diagnostic devices) along a specific path of an accelerator facility. This part is either the line in a linear accelerator along which a beam of particles travels, or the path leading from the particle generator (e.g., a cyclic accelerator, synchrotron light source, cyclotron, or spallation source) to the experimental end-station. Beamlines usually end in experimental stations that utilize particle beams or synchrotron light obtained from a synchrotron, or neutrons from a spallation source or research reactor. Beamlines are used in experiments in particle physics, materials science, life science, chemistry, and molecular biology, but can also be used for irradiation tests or to produce isotopes. Beamline in a particle accelerator In particle accelerators the beamline is usually housed in a tunnel and/or underground, cased inside a concrete housing for shielding purposes. The beamline is usually a cylindrical metal pipe, typically called a beam pipe, and/or a drift tube, evacuated to a high vacuum so there are few gas molecules in the path for the beam of accelerated particles to hit, which otherwise could scatter them before they reach their destination. There are specialized devices and equipment on the beamline that are used for producing, maintaining, monitoring, and accelerating the particle beam. These devices may be in proximity to, or attached directly to, the beamline. These devices include sophisticated transducers, diagnostics (position monitors and wire scanners), lenses, collimators, thermocouples, ion pumps, ion gauges, ion chambers (for diagnostic purposes; usually called "beam monitors"), vacuum valves ("isolation valves"), and gate valves, to mention a few. It is imperative to have all beamline sections, magnets, etc., aligned (often by a survey and alignment crew using a laser tracker); beamlines must be within micrometre tolerance. Good alignment helps to prevent beam loss and keeps the beam from colliding with the pipe walls, which would create secondary emissions and/or radiation. Synchrotron radiation beamline Regarding synchrotrons, a beamline may also refer to the instrumentation that carries beams of synchrotron radiation to an experimental end station, which uses the radiation produced by the bending magnets and insertion devices in the storage ring of a synchrotron radiation facility. A typical application for this kind of beamline is crystallography, although many other techniques utilising synchrotron light exist. At a large synchrotron facility there will be many beamlines, each optimised for a particular field of research. The differences will depend on the type of insertion device (which, in turn, determines the intensity and spectral distribution of the radiation); the beam conditioning equipment; and the experimental end station. A typical beamline at a modern synchrotron facility will be 25 to 100 m long from the storage ring to the end station, and may cost up to millions of US dollars. For this reason, a synchrotron facility is often built in stages, with the first few beamlines opening on day one of operation, and other beamlines being added later as the funding permits. The beamline elements are located in radiation shielding enclosures, called hutches, which are the size of a small room (cabin). 
A typical beamline consists of two hutches, an optical hutch for the beam conditioning elements and an experimental hutch, which houses the experiment. Between hutches, the beam travels in a transport tube. Entrance to the hutches is forbidden when the beam shutter is open and radiation can enter the hutch. This is enforced by the use of elaborate safety systems with redundant interlocking functions, which make sure that no one is inside the hutch when the radiation is turned on. The safety system will also shut down the radiation beam if the door to the hutch is accidentally opened when the beam is on. In this case, the beam is dumped, meaning the stored beam is diverted into a target designed to absorb and contain its energy. Elements that are used in beamlines by experimenters for conditioning the radiation beam between the storage ring and the end station include the following: Windows: windows are used to separate UHV and HV vacuum sections and to terminate the beamline. They are also used between UHV vacuum sections to provide protection from vacuum accidents. The foils used for the window membrane also attenuate the radiation spectrum in the region below 6 keV. 1- Beryllium Windows: Beryllium windows can be supplied cooled or uncooled, with various sizes (and numbers) of window apertures. Windows are sized to suit specific requirements; however, the maximum size of a window is determined by the foil thickness and the pressure differential to be withstood. Windows can be supplied fitted with a range of beam entry/exit flange sizes to suit specific requirements. 2- CVD Diamond Windows: Chemical Vapour Deposition (CVD) diamond offers extreme hardness, high thermal conductivity, chemical inertness, and high transparency over a very wide spectral range. Stronger and stiffer than beryllium, with lower thermal expansion and lower toxicity, it is ideal for UHV isolation windows in X-ray beamlines. Windows can be supplied embedded in UHV flanges and with efficient water cooling. 3- Exit Windows: Vacuum exit windows come in a variety of materials, including the beryllium and CVD diamond detailed above. Slits: Slits are used to define the beam either horizontally or vertically. They can be used in pairs to define the beam in both directions. The maximum aperture size is selected to suit specific requirements. Options include cooled (white beam operation) or uncooled (monochromatic beam operation) slits and a phosphor coating on the upstream side of the slit to assist with beam location. There are four main types of slits: blade slits, high heat load slits, inline slits, and high precision slits. Shutters: Beam shutters are used to interrupt radiation from the front end or optics enclosures when it is not required downstream. They have an equipment and personnel safety function. There are three types of shutters: photon shutters, monochromatic beam shutters, and custom shutters. Beam Filters (or attenuators): remove unwanted energy ranges from the beam by passing the incident synchrotron radiation through a thin transmissive foil. They are often used to manage the heat loads of white beams to optimize beamline performance according to the energy of operation. A typical filter has two or three racks, with each rack holding three or four separate foils, depending upon the beam cross-section. 
Focusing mirrors - one or more mirrors, which may be flat, bent-flat, or toroidal, which help to collimate (focus) the beam Monochromators - devices based on diffraction by crystals which select particular wavelength bands and absorb other wavelengths, and which are sometimes tunable to varying wavelengths, and sometimes fixed to a particular wavelength Spacing tubes - vacuum maintaining tubes which provide the proper space between optical elements, and shield any scattered radiation Sample stages - for mounting and manipulating the sample under study and subjecting it to various external conditions, such as varying temperature, pressure, etc. Radiation detectors - for measuring the radiation which has interacted with the sample The combination of beam conditioning devices controls the thermal load (heating caused by the beam) at the end station; the spectrum of radiation incident at the end station; and the focus or collimation of the beam. Devices along the beamline which absorb significant power from the beam may need to be actively cooled by water or liquid nitrogen. The entire length of a beamline is normally kept under ultra high vacuum conditions. Software for beamline modeling Although the design of a synchrotron radiation beamline may be seen as an application of X-ray optics, there are dedicated tools for modeling the propagation of the x-ray beam down the beamline and its interaction with various components. There are ray-tracing codes such as Shadow and McXTrace that treat the x-ray beam in the geometric optics limit, and there are wave-propagation codes that take into account diffraction and the intrinsic wavelike properties of the radiation. For the purposes of understanding full or partial coherence of the synchrotron radiation, the wave properties need to be taken into account. The codes SRW, Spectra and xrt include this possibility; the latter supports a "hybrid" regime that allows switching from the geometric to the wave approach on a given optical segment. Neutron beamline Superficially, neutron beamlines differ from synchrotron radiation beamlines mostly by the fact that they use neutrons from a research reactor or a spallation source instead of photons. Since neutrons do not carry charge and are difficult to redirect, the components are quite different (see e.g. choppers or neutron super mirrors). The experiments usually measure neutron scattering from or energy transfer to the sample under study. See also Ion beam Neutron facilities Klystron References External links What is a synchrotron beamline? Search for the synchrotron beamline you need Macromolecular Crystallography at Synchrotrons: A Historical Introduction Accelerator physics Synchrotron instrumentation Neutron instrumentation Materials science Beamline
Beamline
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,943
[ "Applied and interdisciplinary physics", "Synchrotron instrumentation", "Materials science", "Vacuum", "Measuring instruments", "Neutron instrumentation", "Experimental physics", "nan", "Accelerator physics", "Matter" ]
21,116,542
https://en.wikipedia.org/wiki/Generalized%20map
In mathematics, a generalized map is a topological model which allows one to represent and to handle subdivided objects. This model was defined starting from combinatorial maps in order to represent non-orientable and open subdivisions, which is not possible with combinatorial maps. The main advantage of generalized maps is the homogeneity of the one-to-one mappings (involutions) in all dimensions, which simplifies definitions and algorithms compared to combinatorial maps. For this reason, generalized maps are sometimes used instead of combinatorial maps, even to represent orientable closed partitions. Like combinatorial maps, generalized maps are used as an efficient data structure in image representation and processing and in geometrical modeling. They are related to simplicial sets and to combinatorial topology, and a generalized map is a boundary representation model (B-rep or BREP), i.e. it represents an object by its boundaries. General definition The definition of a generalized map in any dimension is as follows: An nD generalized map (or nG-map) is an (n + 2)-tuple G = (D, α0, ..., αn) such that: D is a finite set of darts; α0, ..., αn are involutions on D; αi ∘ αj is an involution if i + 2 ≤ j (i, j ∈ {0, ..., n}). An nD generalized map represents the subdivision of an nD space, which may be open or closed and orientable or not. See also Boundary representation Combinatorial map Quad-edge data structure Rotation system Simplicial set Winged edge References Algebraic topology Topological graph theory Data structures
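The axioms above translate directly into a small dictionary-based data structure. The following is a minimal illustrative sketch; the class and function names are invented for this example and are not part of any standard generalized-map library:

```python
# Illustrative sketch of an n-dimensional generalized map (nG-map) as a plain
# data structure. Names are hypothetical, not from any standard library.
from itertools import product


class GMap:
    def __init__(self, n, darts):
        self.n = n
        self.darts = set(darts)
        # alphas[i] maps each dart to its image under the involution alpha_i.
        # Fixed points (dart -> itself) model free/boundary darts.
        self.alphas = [dict((d, d) for d in self.darts) for _ in range(n + 1)]

    def sew(self, i, d1, d2):
        """Link two darts by the involution alpha_i."""
        self.alphas[i][d1] = d2
        self.alphas[i][d2] = d1

    def is_valid(self):
        """Check the nG-map axioms: each alpha_i is an involution on the darts,
        and alpha_i o alpha_j is an involution whenever i + 2 <= j."""
        for a in self.alphas:
            if any(a[a[d]] != d for d in self.darts):
                return False
        for i, j in product(range(self.n + 1), repeat=2):
            if i + 2 <= j:
                comp = {d: self.alphas[i][self.alphas[j][d]] for d in self.darts}
                if any(comp[comp[d]] != d for d in self.darts):
                    return False
        return True


# A 2G-map describing a single square face: 8 darts, alpha_0 pairs darts into
# edges, alpha_1 links consecutive edges around the face, and alpha_2 is the
# identity because every edge lies on the boundary.
g = GMap(2, range(1, 9))
for d1, d2 in [(1, 2), (3, 4), (5, 6), (7, 8)]:
    g.sew(0, d1, d2)
for d1, d2 in [(2, 3), (4, 5), (6, 7), (8, 1)]:
    g.sew(1, d1, d2)
print(g.is_valid())  # True
```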
Generalized map
[ "Mathematics" ]
350
[ "Graph theory", "Algebraic topology", "Fields of abstract algebra", "Topology", "Mathematical relations", "Topological graph theory" ]
21,117,094
https://en.wikipedia.org/wiki/Amorphous%20silica-alumina
Amorphous silica-alumina is a synthetic substance that is used as a catalyst or catalyst support. It can be prepared in a number of ways, for example: Precipitation of hydrous alumina onto amorphous silica hydrogel Reacting a silica sol with an alumina sol Coprecipitation from sodium silicate / aluminium salt solution Water-soluble contaminants, e.g. sodium salts, are removed by washing. Some of the alumina is present in tetrahedral coordination, as shown by 29Si MAS NMR and 27Al NMR studies. Amorphous silica-alumina contains both Brønsted acid (protic) sites, with an ionizable hydrogen atom, and Lewis acid (aprotic), electron-accepting sites. These different types of acidic site can be distinguished by the ways in which a probe molecule such as pyridine attaches: on Lewis acid sites it forms complexes, and on the Brønsted sites it adsorbs as the pyridinium ion. As of 2000, examples of processes that use silica-alumina catalysts are the production of pyridine from crotonaldehyde, formaldehyde, steam, air and ammonia, and the cracking of hydrocarbons. References Catalysis Inorganic silicon compounds Acid catalysts
Amorphous silica-alumina
[ "Chemistry" ]
282
[ "Catalysis", "Acids", "Inorganic compounds", "Acid catalysts", "Inorganic silicon compounds", "Chemical kinetics" ]
21,117,851
https://en.wikipedia.org/wiki/Composite%20laminate
In materials science, a composite laminate is an assembly of layers of fibrous composite materials which can be joined to provide required engineering properties, including in-plane stiffness, bending stiffness, strength, and coefficient of thermal expansion. The individual layers consist of high-modulus, high-strength fibers in a polymeric, metallic, or ceramic matrix material. Typical fibers used include cellulose, graphite, glass, boron, and silicon carbide, and some matrix materials are epoxies, polyimides, aluminium, titanium, and alumina. Layers of different materials may be used, resulting in a hybrid laminate. The individual layers generally are orthotropic (that is, with principal properties in orthogonal directions) or transversely isotropic (with isotropic properties in the transverse plane), with the laminate then exhibiting anisotropic (with variable direction of principal properties), orthotropic, or quasi-isotropic properties. Quasi-isotropic laminates exhibit isotropic (that is, independent of direction) in-plane response but are not restricted to isotropic out-of-plane (bending) response. Depending upon the stacking sequence of the individual layers, the laminate may exhibit coupling between in-plane and out-of-plane response. An example of bending-stretching coupling is the presence of curvature developing as a result of in-plane loading. Classical laminate analysis Composite laminates may be regarded as a type of plate or thin-shell structure, and as such their stiffness properties may be found by integration of in-plane stress in the direction normal to the laminate's surface. The broad majority of ply or lamina materials obey Hooke's law and hence all of their stresses and strains may be related by a system of linear equations. Laminates are assumed to deform by developing three strains of the mid-plane/surface, $\varepsilon^0_x$, $\varepsilon^0_y$ and $\gamma^0_{xy}$, and three changes in curvature, $\kappa_x$, $\kappa_y$ and $\kappa_{xy}$, where $x$ and $y$ define the co-ordinate system at the laminate level. Individual plies have local co-ordinate axes which are aligned with the material's characteristic directions, such as the principal directions of its elasticity tensor. Uni-directional plies, for example, always have their first axis aligned with the direction of the reinforcement. A laminate is a stack of individual plies having a set of ply orientations which have a strong influence on both the stiffness and strength of the laminate as a whole. Rotating an anisotropic material results in a variation of its elasticity tensor. If in its local co-ordinates a ply is assumed to behave according to the stress-strain law $\sigma = Q\,\varepsilon$, then under a rotation transformation (see transformation matrix) through the ply angle $\theta$ it has the modified elasticity terms $\bar{Q}(\theta)$. Hence, in laminate co-ordinates, $\sigma = \bar{Q}(\theta)\,\varepsilon$. An important assumption in the theory of classical laminate analysis is that the strains resulting from curvature vary linearly in the thickness direction, and that the total in-plane strains are a sum of those derived from membrane loads and bending loads. Hence $\varepsilon(z) = \varepsilon^0 + z\,\kappa$, where $z$ is the distance from the mid-plane. Furthermore, a three-dimensional stress field is replaced by six stress resultants: three membrane forces $N_x$, $N_y$, $N_{xy}$ (forces per unit length) and three bending moments $M_x$, $M_y$, $M_{xy}$ (moments per unit length). It is assumed that if these resultants are known at any location (x, y), then the stresses may be computed from them.
Once part of a laminate, the transformed elasticity is treated as a piecewise function of the thickness direction, hence the integration operation may be treated as the sum of a finite series, giving $N = A\,\varepsilon^0 + B\,\kappa$ and $M = B\,\varepsilon^0 + D\,\kappa$, where $A = \sum_{k=1}^{n} \bar{Q}_k\,(z_k - z_{k-1})$, $B = \tfrac{1}{2}\sum_{k=1}^{n} \bar{Q}_k\,(z_k^2 - z_{k-1}^2)$ and $D = \tfrac{1}{3}\sum_{k=1}^{n} \bar{Q}_k\,(z_k^3 - z_{k-1}^3)$, with $z_{k-1}$ and $z_k$ the through-thickness coordinates of the bottom and top of ply $k$. See also Carbon-fiber-reinforced polymer Composite material High-pressure laminate Laminate Lay-up process Void (composites) References External links Advanced Composites Centre for Innovation and Science Composite materials Fibre-reinforced polymers
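The ply-by-ply summation above is straightforward to implement. Below is a minimal sketch under the assumption that each ply's transformed reduced stiffness matrix $\bar{Q}$ is already available in laminate co-ordinates; the function name and numerical values are illustrative only:

```python
# Minimal sketch of the classical-laminate-theory summation: builds the A, B, D
# stiffness matrices from per-ply transformed stiffnesses and ply thicknesses.
# The Qbar matrices (3x3, in laminate coordinates) are assumed to be given;
# computing them from ply engineering constants and orientation is omitted.
import numpy as np

def abd_matrices(qbars, thicknesses):
    """qbars: list of 3x3 arrays, one per ply, ordered bottom to top.
    thicknesses: list of ply thicknesses in the same order.
    Returns the 3x3 A, B and D matrices."""
    total = sum(thicknesses)
    z = [-total / 2.0]                      # through-thickness ply interfaces
    for t in thicknesses:
        z.append(z[-1] + t)

    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, qbar in enumerate(qbars):
        z0, z1 = z[k], z[k + 1]
        A += qbar * (z1 - z0)               # membrane stiffness
        B += qbar * (z1**2 - z0**2) / 2.0   # membrane-bending coupling
        D += qbar * (z1**3 - z0**3) / 3.0   # bending stiffness
    return A, B, D

# Example: two identical plies with the same orientation (a symmetric stack),
# using representative stiffness numbers; the coupling matrix B then vanishes.
q = np.array([[181e9, 2.9e9, 0.0], [2.9e9, 10.3e9, 0.0], [0.0, 0.0, 7.2e9]])
A, B, D = abd_matrices([q, q], [0.125e-3, 0.125e-3])
print(np.allclose(B, 0))  # True: symmetric laminates have no bending-stretching coupling
```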
Composite laminate
[ "Physics" ]
758
[ "Materials", "Composite materials", "Matter" ]
21,120,310
https://en.wikipedia.org/wiki/Delayed%20extraction
Delayed extraction is a method used with a time-of-flight mass spectrometer in which the accelerating voltage is applied after a short time delay following pulsed laser desorption/ionization from the flat surface of a target plate or, in another implementation, pulsed electron ionization or resonance-enhanced multiphoton ionization in a narrow space between two plates of the ion extraction system. The extraction delay can produce time-of-flight compensation for ion energy spread and improve mass resolution. Implementation Resolution can be improved in a time-of-flight mass spectrometer with ions produced at high vacuum conditions (better than a few microtorr) by allowing the initial packet of ions to spread in space due to their translational energy before being accelerated into the flight tube. With ions produced by electron ionization or laser ionization of atoms or molecules from a rarefied gas, this is referred to as "time-lag focusing". With ions produced by laser desorption/ionization or MALDI from the conductive surface of a target plate, this is referred to as "delayed extraction." With delayed extraction, the mass resolution is improved due to the correlation between the velocity and position of the ions after they have been produced in the ion source. Ions produced with greater kinetic energy have a higher velocity and during the delay time move closer to the extraction electrode before the accelerating voltage is applied across the target or pulsed electrode. The slower ions with less kinetic energy stay closer to the surface of the target electrode or pulsed electrode when the accelerating voltage is applied and therefore start being accelerated at a greater potential compared to the ions farther from the target electrode. With the proper delay time, the slower ions will receive enough extra potential energy to catch the faster ions after flying some distance from the pulsed acceleration system. Ions of the same mass-to-charge ratio will then drift through the flight tube and reach the detector at the same time. See also Time of flight References Mass spectrometry
Delayed extraction
[ "Physics", "Chemistry" ]
392
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
27,355,562
https://en.wikipedia.org/wiki/Organostannane%20addition
Organostannane addition is a reaction involving the nucleophilic addition of an allyl-, allenyl-, or propargyl-stannane to an aldehyde, imine, or (in rare cases) a ketone. This reaction is widely used for carbonyl allylation. The addition of an organostannane to a carbonyl group is one of the most common and efficient methods for the production of contiguous, oxygen-containing stereocenters in organic molecules. Since many naturally occurring compounds, such as polypropionates and polyacetates, contain this stereochemical motif, organostannane addition has been studied extensively by natural products chemists as a synthetically and commercially important reaction. Organostannanes are very stable molecules, favoured for their ease of handling and selective reactivity. Chiral allylstannanes are known to react stereoselectively, yielding single diastereomers. The production of substituted allylstannanes containing either one or two new stereocenters can be achieved by this method with a very high degree of stereocontrol. However, stoichiometric amounts of metal-containing byproducts are generated by this reaction, and additions to the sterically encumbered pi bonds of ketones are uncommon. Mechanism and stereochemistry Prevailing mechanism Three modes allow the addition of allylstannanes to carbonyls: thermal addition, Lewis-acid-promoted addition, and addition involving prior transmetalation. Each of these modes invokes a unique model for stereocontrol, but in all cases, a distinction is made between reagent and substrate control. Substrate-controlled additions typically involve chiral aldehydes or imines and invoke the Felkin-Anh model. When all reagents are achiral, only simple diastereoselectivity (syn versus anti, see above) must be considered. Addition takes place via an SE' mechanism involving concerted dissociation of tin and C-C bond formation at the γ position. With the allylstannane and aldehyde under high-temperature conditions, addition proceeds through a six-membered, cyclic transition state, with the tin center serving as an organizing element. The configuration of the double bond in the allylstannane controls the sense of diastereoselectivity of the reaction. This is not the case in Lewis-acid-promoted reactions, in which either the (Z)- or (E)-stannane affords the syn product predominantly (Type II). The origin of this selectivity has been debated, and depends on the relative energies of a number of acyclic transition states. (E)-Stannanes exhibit higher syn selectivity than the corresponding (Z)-stannanes. In the presence of certain Lewis acids, transmetalation may occur before addition. Complex reaction mixtures may result if transmetalation is not complete or if an equilibrium between allylic isomers exists. Tin(IV) chloride and indium(III) chloride have been employed for useful reactions in this mode. Enantioselective variants A wide variety of enantioselective additions employing chiral, non-racemic Lewis acids are known. The chiral (acyloxy)borane or "CAB" catalyst 1, titanium-BINOL system 2, and silver-BINAP system 3 provide addition products in high ee via the Lewis-acid-promoted mechanism described above. Scope and limitations Thermal additions of stannanes are limited (because of the high temperatures and pressures required) to only simple aldehyde substrates. Lewis acid promoted and transmetalation reactions are much milder and have achieved synthetic utility.
Intramolecular addition gives five- or six-membered rings under Lewis acidic or thermal conditions. The possibility of incorporating oxygen-containing substituents into allyl- and allenylstannanes expands their scope and utility substantially over methods relying on more reactive organometallics. These compounds are usually prepared by enantioselective reduction with a chiral reducing agent such as BINAL-H. In the presence of a Lewis acid, isomerization of α-alkoxy allylstannanes to the corresponding γ-alkoxy isomers takes place. The use of chiral electrophiles is common and can provide "double diastereoselection" if the stannane is also chiral. Chelation control using Lewis acids such as magnesium bromide can lead to high stereoselectivities for reactions of α-alkoxy aldehydes. Nucleophilic addition to propargyl mesylates or tosylates is used to form allenylstannanes. These compounds react similarly to allylstannanes to afford homopropargyl alcohols, and any of the three reaction modes described above can be used with this class of reagents as well. Imines are less reactive than the corresponding aldehydes, but palladium catalysis can be used to facilitate addition into imines. The use of iminium ions as electrophiles has also been reported. Synthetic applications The chiral allylic stannane 1 adds to acrolein to yield the 1,5-syn diastereomer as a single stereoisomer. A subsequent sigmatropic rearrangement increased the distance between the stereocenters even further. This step was carried out en route to (±)-patulolide C. The allylic stannane addition was used repeatedly in an intramolecular sense in the synthesis of hemibrevetoxin B (one example is shown below). The pseudoequatorial positions of both "appendages" in the starting material lead to the observed stereoisomer. Related articles Krische allylation Carbonyl allylation References Organic reactions
Organostannane addition
[ "Chemistry" ]
1,271
[ "Organic reactions" ]
27,356,706
https://en.wikipedia.org/wiki/Dock%20Road%20Edwardian%20Pumping%20Station
Dock Road Edwardian Pumping Station is a sewage pumping station in Northwich, Cheshire, United Kingdom. The pumping station is recorded in the National Heritage List for England as a designated Grade II listed building. History Towards the end of the 19th century, the Brunner Mond chemical company provided new housing for its employees, and Northwich's sewage facilities were not adequate. The first sewers were laid in the town and discharged into the River Weaver, causing widespread pollution and providing no defence against disease. The Wallerscote Sewage Works was opened in 1902, and was successful in improving the conditions within the area it served. However, there were large parts of the town that were too low-lying for sewage to flow to Wallerscote. To solve the problem, Northwich Urban District Council built the pumping station next to the River Weaver at Dock Road in 1913. The station intercepted sewage before it entered the river and pumped it across the river to the top of Castle Hill and onwards to the Wallerscote Works. The pumping station was in use for over 60 years and is now maintained by United Utilities as a monument that is open to the public. Equipment The station was equipped with two single-cylinder Crossley 'N'-type gas-fired engines and two Haywood Tyler triplex lift and force pumps, capable of pumping 9,600 gallons per hour. In the 1970s, a new station containing electric pumps capable of pumping 36,000 gallons per hour was built alongside the Edwardian building, but the original engines and pumps have now been restored. See also Listed buildings in Northwich References External links Hydrology Buildings and structures in Cheshire Grade II listed buildings in Cheshire Infrastructure completed in 1913 Northwich Sewage pumping stations 1913 establishments in England
Dock Road Edwardian Pumping Station
[ "Chemistry", "Engineering", "Environmental_science" ]
346
[ "Hydrology", "Environmental engineering" ]
27,358,159
https://en.wikipedia.org/wiki/Cheung%E2%80%93Marks%20theorem
In information theory, the Cheung–Marks theorem, named after K. F. Cheung and Robert J. Marks II, specifies conditions where restoration of a signal by the sampling theorem can become ill-posed. It offers conditions whereby "reconstruction error with unbounded variance [results] when a bounded variance noise is added to the samples." Background In the sampling theorem, the uncertainty of the interpolation as measured by noise variance is the same as the uncertainty of the sample data when the noise is i.i.d. In his classic 1948 paper founding information theory, Claude Shannon offered a generalization of the sampling theorem in which a signal is reconstructed from samples of the signal and of its derivatives, taken at correspondingly reduced rates. Although true in the absence of noise, many of the expansions proposed by Shannon become ill-posed. An arbitrarily small amount of noise on the data renders restoration unstable. Such sampling expansions are not useful in practice since sampling noise, such as quantization noise, rules out stable interpolation and therefore any practical use. Example Shannon's suggestion of simultaneous sampling of the signal and its derivative at half the Nyquist rate results in well-behaved interpolation. The Cheung–Marks theorem shows counter-intuitively that interlacing signal and derivative samples makes the restoration problem ill-posed. The theorem also shows sensitivity increases with derivative order. The theorem Generally, the Cheung–Marks theorem shows the sampling theorem becomes ill-posed when the area (integral) of the squared magnitude of the interpolation function over all time is not finite. "While the generalized sampling concept is relatively straightforward, the reconstruction is not always feasible because of potential instabilities." References Information theory Digital signal processing Mathematical theorems in theoretical computer science
Cheung–Marks theorem
[ "Mathematics", "Technology", "Engineering" ]
340
[ "Mathematical theorems", "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Mathematical theorems in theoretical computer science", "Mathematical problems" ]
13,144,608
https://en.wikipedia.org/wiki/Verification%20and%20validation
Verification and validation (also abbreviated as V&V) are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded with "independent", indicating that the verification and validation is to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V". In reality, as quality management terms, the definitions of verification and validation can be inconsistent. Sometimes they are even used interchangeably. However, the PMBOK guide, a standard adopted by the Institute of Electrical and Electronics Engineers (IEEE), defines them as follows in its 4th edition: "Validation. The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification." "Verification. The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition. It is often an internal process. Contrast with validation." Similarly, for a medical device, the FDA (21 CFR) defines validation and verification as procedures that ensure that the device fulfils its intended purpose. Validation: Ensuring that the device meets the needs and requirements of its intended users and the intended use environment. Verification: Ensuring that the device meets its specified design requirements ISO 9001:2015 considers Validation: To ensure that the resulting product is capable of meeting the requirements for the specified application or intended use, where known. Design validation is similar to verification, except that the designed product is checked under conditions of actual use. Verification: Design verification is confirmation by examination and provision of objective evidence that the specified input requirements have been fulfilled. Verification activities include modelling, simulations, alternative calculations, comparison with other proven designs, experiments, tests, and specialist technical reviews. Overview Verification Verification is intended to check that a product, service, or system meets a set of design specifications. In the development phase, verification procedures involve performing special tests to model or simulate a portion, or the entirety, of a product, service, or system, then performing a review or analysis of the modeling results. In the post-development phase, verification procedures involve regularly repeating tests devised specifically to ensure that the product, service, or system continues to meet the initial design requirements, specifications, and regulations as time progresses. It is a process that is used to evaluate whether a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process. Validation Validation is intended to ensure a product, service, or system (or portion thereof, or set thereof) results in a product, service, or system (or portion thereof, or set thereof) that meets the operational needs of the user.
For a new development flow or verification flow, validation procedures may involve modeling either flow and using simulations to predict faults or gaps that might lead to invalid or incomplete verification or development of a product, service, or system (or portion thereof, or set thereof). A set of validation requirements (as defined by the user), specifications, and regulations may then be used as a basis for qualifying a development flow or verification flow for a product, service, or system (or portion thereof, or set thereof). Additional validation procedures also include those that are designed specifically to ensure that modifications made to an existing qualified development flow or verification flow will have the effect of producing a product, service, or system (or portion thereof, or set thereof) that meets the initial design requirements, specifications, and regulations; these validations help to keep the flow qualified. It is a process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders. This is often an external process. It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?". "Building the right thing" refers back to the user's needs, while "building it right" checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both as well as formal procedures or protocols for determining compliance. It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications but the specifications themselves fail to address the user's needs. Activities Verification of machinery and equipment usually consists of design qualification (DQ), installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ). DQ may be performed by a vendor or by the user, by confirming through review and testing that the equipment meets the written acquisition specification. If the relevant documents or manuals of the machinery/equipment are provided by vendors, the latter three qualifications (IQ, OQ, and PQ) need to be thoroughly performed by users who work in an industrial regulatory environment. Otherwise, the process of IQ, OQ and PQ is the task of validation. A typical example of such a case is the loss or absence of vendor documentation for legacy equipment or do-it-yourself (DIY) assemblies (e.g., cars, computers, etc.); therefore, users should endeavour to acquire the DQ document beforehand. Templates for DQ, IQ, OQ and PQ can usually be found on the internet, whereas DIY qualification of machinery/equipment can be assisted either by the vendor's training course materials and tutorials, or by published guidance books, such as step-by-step series, if the acquisition of the machinery/equipment is not bundled with on-site qualification services. This kind of DIY approach is also applicable to the qualification of software, computer operating systems and manufacturing processes. The most important and critical task, as the last step of the activity, is generating and archiving machinery/equipment qualification reports for auditing purposes, if regulatory compliance is mandatory.
Qualification of machinery/equipment is venue dependent, in particular for items that are shock-sensitive and require balancing or calibration, and re-qualification needs to be conducted once the objects are relocated. The full scales of some equipment qualifications are even time dependent as consumables are used up (e.g., filters) or springs stretch out, requiring recalibration, and hence re-certification is necessary once a specified period of time has elapsed. Re-qualification of machinery/equipment should also be conducted when parts are replaced, when the equipment is coupled with another device, or when new application software is installed or the computer is restructured in a way that affects the pre-settings, such as the BIOS, registry, disk drive partition table, dynamically-linked (shared) libraries, or ini files. In such a situation, the specifications of the parts/devices/software and restructuring proposals should be appended to the qualification document whether the parts/devices/software are genuine or not. Torres and Hyman have discussed the suitability of non-genuine parts for clinical use and provided guidelines for equipment users to select appropriate substitutes which are capable of avoiding adverse effects. In cases where genuine parts/devices/software are demanded by regulatory requirements, re-qualification does not need to be conducted on the non-genuine assemblies; instead, the asset has to be recycled for non-regulatory purposes. When machinery/equipment qualification is conducted by a standards-endorsed third party, such as an ISO-accredited company for a particular division, the process is called certification. Currently, the coverage of ISO/IEC 15408 certification by an ISO/IEC 27001 accredited organization is limited; the scheme requires a fair amount of effort to become popular. Categories of validation Validation work can generally be categorized by the following functions: Prospective validation – the activities conducted before new items are released, to make sure that their characteristics function properly and meet safety standards. Some examples could be legislative rules, guidelines or proposals, methods, theories/hypotheses/models, products and services. Retrospective validation – a process for items that are already in use and distribution or production. The validation is performed against the written specifications or predetermined expectations, based upon their historical data/evidence that are documented/recorded. If any critical data are missing, then the work cannot be processed or can only be completed partially. The tasks are considered necessary if: prospective validation is missing, inadequate or flawed; a change of legislative regulations or standards affects the compliance of the items being released to the public or market; out-of-use items are being revived. Some examples could be validation of: ancient scriptures that remain controversial clinical decision rules data systems Full-scale validation Partial validation – often used for research and pilot studies if time is constrained. The most important and significant effects are tested. From an analytical chemistry perspective, those effects are selectivity, accuracy, repeatability, linearity and its range. Cross-validation Re-validation/locational or periodical validation – carried out for an item of interest that has been dismissed, repaired, integrated/coupled, or relocated, or after a specified time lapse.
Examples of this category could be renewing a driver's license, recertifying an analytical balance whose certification has expired or that has been relocated, and even revalidating professionals. Re-validation may also be conducted when/where a change occurs during the course of activities, such as scientific research or phases of clinical trial transitions. Examples of these changes could be sample matrices production scales population profiles and sizes out-of-specification (OOS) investigations, due to the contamination of testing reagents or glassware, the aging of equipment/devices, or the depreciation of associated assets etc. In GLP-accredited laboratories, verification/revalidation is often conducted against the monographs of the Ph.Eur. or IP to cater for multinational needs, or the USP and BP etc. to cater for national needs. These laboratories must have method validation as well. Concurrent validation – conducted during routine processing of services, manufacturing or engineering etc. Examples of these could be duplicate sample analysis for a chemical assay triplicate sample analysis for trace impurities at marginal levels near the detection limit and/or quantification limit single sample analysis for a chemical assay by a skilled operator with multiple online system suitability tests Aspects of analytical methods validation The most tested attributes in validation tasks may include, but are not limited to Sensitivity and specificity Accuracy and precision Repeatability Reproducibility Limit of detection – especially for trace elements Limit of quantification Curve fitting and its range System suitability – A test run each time an analysis is performed to ensure the test method is acceptable and is performing as written. This type of check is often run in a QC Lab. Usually, system suitability is performed by analyzing a standard material (house standard or reference standard) before the unknowns are run in an analytical method. Statistical analysis and other parameters must pass preset conditions to ensure the method and system are performing correctly. For example, in an HPLC purity analysis of a drug substance, a standard material of the highest purity would be run before the test samples. The parameters analyzed might be (for example) the % RSD of area counts for triplicate injections, or chromatographic parameters such as retention time. The HPLC run would be considered valid if the system suitability test passes, ensuring the subsequent data collected for the unknown analytes are valid. For a longer HPLC run of over 20 samples, an additional system suitability standard (called a "check standard") might be run at the end or interspersed in the HPLC run and would be included in the statistical analysis. If all system suitability standards pass, this ensures all samples yield acceptable data throughout the run, and not just at the beginning. All system suitability standards must be passed to accept the run. In a broad sense, system suitability usually includes a test of ruggedness among inter-collaborators, or a test of robustness within an organization. However, the U.S. Food and Drug Administration (FDA) has specifically defined it, for its administration, as follows: "System suitability testing is an integral part of many analytical procedures. The tests are based on the concept that the equipment, electronics, analytical operations and samples to be analyzed constitute an integral system that can be evaluated as such.
System suitability test parameters to be established for a particular procedure depend on the type of procedure being validated". In some cases of analytical chemistry, a system suitability test may be method-specific rather than universal. One such example is chromatographic analysis, which is usually media (column, paper or mobile solvent) sensitive. However, as of this writing, this kind of approach is limited to some pharmaceutical compendial methods, in which the detection of impurities or the quality of the analyte of interest is critical (i.e., a matter of life and death). This is probably largely due to: their intensive labour demands and time consumption; the differing definitions of the term across standards. To solve this kind of difficulty, some regulatory bodies or methods provide advice on when a specified system suitability test should be applied and made compulsory. Industry references These terms generally apply broadly across industries and institutions. In addition, they may have very specific meanings and requirements for specific products, regulations, and industries. Some examples: Software and computer systems Food and drug manufacturing Pharmaceuticals The design, production, and distribution of drugs are highly regulated. This includes software systems. For example, in the US, the Food and Drug Administration has regulations in Part 21 of the Code of Federal Regulations. Nash et al. have published a book which provides comprehensive coverage of the various validation topics of pharmaceutical manufacturing processes. Some companies take a risk-based approach to validating their GAMP system, if the regulatory requirements are understood very well, while most others follow the conventional process. It is a part of GxP management. The aspects of validation and verification are even more intense and emphasized if an OOS occurs. Very often under this circumstance, replicated sample analysis is required for conducting the OOS investigation in a testing laboratory. Medical devices The FDA (21 CFR) has validation and verification requirements for medical devices, as outlined in ASME V&V 40; see also the associated guidance and ISO 13485. Manufacturing process and cleaning validation are compulsory and regulated by the U.S.
Food and Drug Administration Food hygiene: example Clinical laboratory medicine: ISO 15198:2004 Clinical laboratory medicine—In vitro diagnostic medical devices—Validation of user quality control procedures by the manufacturer Engineering Engineering in general Engineering validation test Civil engineering Buildings – Roads – Bridges – Health care: example Greenhouse gas: ISO 14064 ANSI/ISO: Greenhouse gases – Requirements for greenhouse gas validation and verification bodies for use in accreditation or other forms of recognition Traffic and transport Road safety audit Periodic motor vehicle inspection Aircraft noise: example Aircraft: Model: (Ni-Cd) cells: example ICT Industry: example Economics Accounting Agriculture – applications vary from verifying agricultural methodology and production processes to validating agricultural modeling Real estate appraisal – audit reporting and authentication Arms control See also Certification of voting machines Change control Comparability Data validation Formal verification Functional verification ISO 17025 Positive recall Process validation Software verification and validation Statistical model validation System testing Usability testing Validation master plan Verification and validation of computer simulation models Notes and references Further reading External links Maturity of verification and validation in ICT companies Organisational maturity and functional performance Quality management Product testing Systems engineering Drug manufacturing Food safety
Verification and validation
[ "Engineering" ]
3,254
[ "Systems engineering" ]
13,147,078
https://en.wikipedia.org/wiki/Emetine
Emetine is a drug used both as an anti-protozoal and to induce vomiting. It is produced from the ipecac root. It takes its name from its emetic properties. Early preparations The mechanism of action of emetine was studied by François Magendie during the nineteenth century. Early use of emetine was in the form of oral administration of the extract of ipecac root, or ipecacuanha. This extract was originally thought to contain only one alkaloid, emetine, but was found to contain several, including cephaeline, psychotrine and others. Although this therapy was reportedly successful, the extract caused vomiting in many patients, which reduced its utility. In some cases, it was given with opioids to reduce nausea. Other approaches to reduce nausea involved coated tablets, allowing the drug to be released after digestion in the stomach. Use as anti-amoebic The identification of emetine as a more potent agent improved the treatment of amoebiasis. While use of emetine still caused nausea, it was more effective than the crude extract of ipecac root. Additionally, emetine could be administered hypodermically, which still produced nausea, but not to the degree experienced in oral administration. Although it is a potent antiprotozoal, the drug can also interfere with muscle contractions, leading to cardiac failure in some cases. Because of this, in some uses it is required to be administered in a hospital so that adverse events can be addressed. Dehydroemetine Dehydroemetine is a synthetically produced antiprotozoal agent similar to emetine in its anti-amoebic properties and structure (they differ only in a double bond next to the ethyl group), but it produces fewer side effects. Cephaeline Cephaeline is a desmethyl analog of emetine also found in ipecac root. Use in blocking protein synthesis Emetine dihydrochloride hydrate is used in the laboratory to block protein synthesis in eukaryotic cells. It does this by binding to the 40S subunit of the ribosome. This can thus be used in the study of protein degradation in cells. Mutants resistant to emetine are altered in the 40S ribosomal subunit (S14 protein), and they exhibit cross-resistance to cryptopleurine, tylocrebrine, cephaeline and tubulosine, but not other inhibitors of protein synthesis. The compounds to which these mutants exhibit cross-resistance have been shown to share common structural determinants with emetine that are responsible for their biological activities. Biosynthesis The biosynthesis of cephaeline and emetine draws on two main pathways: the biosynthesis of dopamine from L-tyrosine and the biosynthesis of secologanin from geranyl diphosphate. Biosynthesis begins with the reaction between dopamine and secologanin, forming N-deacetylisoipecoside (S-form) and N-deacetylipecoside (R-form). The S-form then goes through a Pictet-Spengler type reaction followed by a series of O-methylations and the removal of glucose, with O-methyltransferases and a glycosidase, to form proemetine. Proemetine then reacts with another dopamine molecule to form 7'-O-demethylcephaeline. The final products are then produced by a 7'-O-methylation to make cephaeline and a successive 6'-O-methylation to make emetine. Side effects Heavy use or overuse of emetine carries the risk of developing proximal myopathy and/or cardiomyopathy. Research A 2018 study at Princeton University and Thomas Jefferson University has demonstrated that emetine blocks the dissemination of rabies virus inside nerve cells, but the exact mechanism is still under investigation.
Emetine had no effect on the transport of endosomes devoid of the rabies virus. (Rabies resides in nerve endosomes). But endosomes carrying the virus were either completely immobilized, or were only able to move short distances at slower-than-normal speeds. In 2016, a study found that low doses of emetine inhibited cytomegalovirus replication and was synergistic with ganciclovir. References Antiprotozoal agents Isoquinoline alkaloids Norsalsolinol ethers Emetics Protein synthesis inhibitors
Emetine
[ "Chemistry", "Biology" ]
970
[ "Alkaloids by chemical classification", "Tetrahydroisoquinoline alkaloids", "Biocides", "Antiprotozoal agents" ]
13,148,240
https://en.wikipedia.org/wiki/Ground%20reaction%20force
In physics, and in particular in biomechanics, the ground reaction force (GRF) is the force exerted by the ground on a body in contact with it. For example, a person standing motionless on the ground exerts a contact force on it (equal to the person's weight) and at the same time an equal and opposite ground reaction force is exerted by the ground on the person. In the above example, the ground reaction force coincides with the notion of a normal force. However, in a more general case, the GRF will also have a component parallel to the ground, for example when the person is walking – a motion that requires the exchange of horizontal (frictional) forces with the ground. The use of the word reaction derives from Newton's third law, which essentially states that if a force, called action, acts upon a body, then an equal and opposite force, called reaction, must act upon another body. The force exerted by the ground is conventionally referred to as the reaction, although, since the distinction between action and reaction is completely arbitrary, the expression ground action would be, in principle, equally acceptable. The component of the GRF parallel to the surface is the frictional force. When slippage occurs, the ratio of the magnitude of the frictional force to the normal force yields the coefficient of static friction. GRF is often measured to evaluate force production in various groups within the community. One group often studied is athletes, to help evaluate a subject's ability to exert force and power. This can help create baseline parameters when creating strength and conditioning regimens from a rehabilitation and coaching standpoint. Plyometric jumps such as the drop jump are often used to build greater power and force, which can lead to overall better ability on the playing field. In bilateral comparisons of landings from a safe height, the literature has shown no significant differences in the vertical GRF of drop-jump landings between landing with the dominant foot first, followed by the non-dominant limb, and landing with the non-dominant foot first. References Mechanics Biomechanics
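As a simple illustration of the quantities described above (the numbers are assumed for the example and are not taken from the article):

```latex
% Illustrative worked example with assumed numbers.
% Standing still: the vertical GRF (normal force) balances the weight.
\[
  N = mg = 70\ \mathrm{kg} \times 9.81\ \mathrm{m\,s^{-2}} \approx 687\ \mathrm{N}.
\]
% At the onset of slipping, the horizontal (frictional) component F and the
% normal component N of the GRF give the coefficient of static friction:
\[
  \mu_s = \frac{F}{N}, \qquad \text{e.g. } F = 343\ \mathrm{N},\ N = 687\ \mathrm{N}
  \ \Rightarrow\ \mu_s \approx 0.5 .
\]
```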
Ground reaction force
[ "Physics", "Engineering" ]
452
[ "Biomechanics", "Mechanics", "Mechanical engineering" ]
3,663,969
https://en.wikipedia.org/wiki/HNCA%20experiment
HNCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: The magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbons of both the starting residue and the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCOCA experiment transfers magnetization only to the alpha carbon of the previous residue. The HNCA experiment is used, often in tandem with HNCOCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins. The spectrum produced by this experiment has 3 dimensions: A proton axis, a 15N axis and a 13C axis. For residue i peaks will appear at {HN(i), N(i), Cα(i)} and {HN(i), N(i), Cα(i-1)}, while for the complementary HNCOCA experiment peaks appear only at {HN(i), N(i), Cα(i-1)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence. References Citations General references Protein NMR Spectroscopy : Principles and Practice (1995) John Cavanagh, Wayne J. Fairbrother, Arthur G. Palmer III, Nicholas J. Skelton, Academic Press Protein methods Biophysics Protein structure Nuclear magnetic resonance experiments
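Because the HNCA and HNCOCA peak patterns follow directly from the residue numbering, the expected peak positions can be tabulated from a set of assigned chemical shifts. The following is an illustrative sketch only; the shift values and function name are invented for the example:

```python
# Sketch: expected HNCA and HNCOCA peak positions from assigned chemical shifts.
# Each residue i contributes {HN(i), N(i), CA(i)} and {HN(i), N(i), CA(i-1)} in
# HNCA, but only {HN(i), N(i), CA(i-1)} in HNCOCA. Shift values are invented.
shifts = {
    # residue index: (HN ppm, N ppm, CA ppm)
    2: (8.21, 118.4, 56.1),
    3: (8.05, 121.7, 58.3),
    4: (7.93, 119.2, 54.9),
}

def expected_peaks(shifts, experiment):
    peaks = []
    for i, (hn, n, _) in shifts.items():
        prev = shifts.get(i - 1)
        if prev is not None:
            peaks.append((hn, n, prev[2]))        # sequential peak, CA(i-1)
        if experiment == "HNCA":
            peaks.append((hn, n, shifts[i][2]))   # intra-residue peak, CA(i)
    return peaks

print(len(expected_peaks(shifts, "HNCA")))    # 5 peaks (3 intra-residue + 2 sequential)
print(len(expected_peaks(shifts, "HNCOCA")))  # 2 peaks (sequential only)
```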
HNCA experiment
[ "Physics", "Chemistry", "Biology" ]
366
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Protein stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance experiments", "Protein methods", "Protein biochemistry", "Biochemistry stubs", "Biophysics", "Structural biology", "Protein structure" ]
3,664,001
https://en.wikipedia.org/wiki/HNCOCA%20experiment
HNCOCA is a 3D triple-resonance NMR experiment commonly used in the field of protein NMR. The name derives from the experiment's magnetization transfer pathway: The magnetization of the amide proton of an amino acid residue is transferred to the amide nitrogen, and then to the alpha carbon of the previous residue in the protein's amino acid sequence. In contrast, the complementary HNCA experiment transfers magnetization to the alpha carbons of both the starting residue and the previous residue in the sequence. The HNCOCA experiment is used, often in tandem with HNCA, to assign alpha carbon resonance signals to specific residues in the protein. This experiment requires a purified sample of protein prepared with 13C and 15N isotopic labelling, at a concentration greater than 0.1 mM, and is thus generally only applied to recombinant proteins. The spectrum produced by this experiment has 3 dimensions: A proton axis, a 15N axis and a 13C axis. For residue i peaks will appear at {HN(i), N(i), Cα(i-1)} only, while for the complementary HNCA experiment peaks appear at {HN(i), N(i), Cα(i-1)} and {HN(i), N(i), Cα (i)}. Together, these two experiments reveal the alpha carbon chemical shift for each amino acid residue in a protein, and provide information linking adjacent residues in the protein's sequence. References Citations General references Protein methods Biophysics Protein structure Nuclear magnetic resonance experiments
HNCOCA experiment
[ "Physics", "Chemistry", "Biology" ]
329
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Protein stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance experiments", "Protein methods", "Protein biochemistry", "Biochemistry stubs", "Biophysics", "Structural biology", "Protein structure" ]
3,665,494
https://en.wikipedia.org/wiki/Smena%20%28camera%29
Smena is a series of low-cost 35 mm film cameras manufactured in the Soviet Union by the LOMO factory from 1953 to 1991. They were designed to be inexpensive and accessible to the public, and were made of bakelite or, for the later models, black plastic. Their mode of operation was exclusively manual, to the extent that film winding is separate from shutter cocking. In the 1960s and 1970s they were exported by the Soviet-era export conglomerate Mashpriborintorg. The Austrian company Lomographische AG now promotes Smenas as the exclusive distributor under an agreement with LOMO PLC. Specifications Smena 8M Lens: Triplet 43, 40 mm, f/4, 3 elements Focal range: 1 m to infinity, scale-focus Shutter speeds: B, 1/15, 1/30, 1/60, 1/125, 1/250 Shutter type: 3-blade diaphragm shutter Apertures: f/4, f/5.6, f/8, f/11, f/16 Film type: 35 mm film Size: 70 x 100 x 60 mm Weight: 289 g Models The Smena models are: Smena Smena-2 Smena-2M Smena-3 Smena-4 Smena-5 Smena-6 Smena-7 Smena-8 or Cosmic 35 for the UK market. Smena-8M Smena-9 Smena-35 Smena-Rapid Smena-Symbol Smena-M Smena-Sl Model gallery See also Lubitel Lomography Citations External links Smena page on the Lomography website We use film Smena 1- Specifications and pics weusefilm – smena 8 / cosmic – Manual & Specifications Few images taken with Smena Cosmic Cameras Soviet cameras Toy cameras
Smena (camera)
[ "Technology" ]
367
[ "Recording devices", "Cameras" ]
3,666,839
https://en.wikipedia.org/wiki/SMC%20protein
SMC complexes represent a large family of ATPases that participate in many aspects of higher-order chromosome organization and dynamics. SMC stands for Structural Maintenance of Chromosomes. Classification Eukaryotic SMCs Eukaryotes have at least six SMC proteins in individual organisms, and they form three distinct heterodimers with specialized functions: A pair of SMC1 and SMC3 constitutes the core subunits of the cohesin complexes involved in sister chromatid cohesion. SMC1 and SMC3 also have functions in the repair of DNA double-strand breaks in the process of homologous recombination. Likewise, a pair of SMC2 and SMC4 acts as the core of the condensin complexes implicated in chromosome condensation. SMC2 and SMC4 have a function in DNA repair as well. Condensin I plays a role in single-strand break repair but not in double-strand breaks. The opposite is true for condensin II, which plays a role in homologous recombination. A dimer composed of SMC5 and SMC6 functions as part of a yet-to-be-named complex implicated in DNA repair and checkpoint responses. Each complex contains a distinct set of non-SMC regulatory subunits. Some organisms have variants of SMC proteins. For instance, mammals have a meiosis-specific variant of SMC1, known as SMC1β. The nematode Caenorhabditis elegans has an SMC4-variant that has a specialized role in dosage compensation. The following table shows the SMC protein names for several model organisms and vertebrates: Prokaryotic SMCs SMC proteins are conserved from bacteria to humans. Most bacterial species have a single SMC protein that forms a homodimer. Recently, SMC proteins have been shown to act on the daughter cells' DNA at the origin of replication to guarantee proper segregation. In a subclass of Gram-negative bacteria, including Escherichia coli, a distantly related protein known as MukB plays an equivalent role. Molecular structure Primary structure SMC proteins are 1,000–1,500 amino acids long. They have a modular structure that is composed of the following domains: Walker A ATP-binding motif coiled-coil region I hinge region coiled-coil region II Walker B ATP-binding motif; signature motif Secondary and tertiary structure SMC dimers form a V-shaped molecule with two long coiled-coil arms. To make such a unique structure, an SMC protomer is self-folded through anti-parallel coiled-coil interactions, forming a rod-shaped molecule. At one end of the molecule, the N-terminal and C-terminal domains form an ATP-binding domain. The other end is called a hinge domain. Two protomers then dimerize through their hinge domains and assemble a V-shaped dimer. The coiled-coil arms are ~50 nm long. Such long "antiparallel" coiled coils are very rare and found only among SMC proteins (and their relatives such as Rad50). The ATP-binding domain of SMC proteins is structurally related to that of ABC transporters, a large family of transmembrane proteins that actively transport small molecules across cellular membranes. It is thought that the cycle of ATP binding and hydrolysis modulates the cycle of closing and opening of the V-shaped molecule. Still, the detailed mechanisms of action of SMC proteins remain to be determined. Aggregation of SMC The SMC proteins have the potential to form larger ring-like structures. The ability to create different architectural arrangements allows their functions to be regulated in various ways. Some of the possible configurations are double rings, filaments, and rosettes.
Double rings consist of four SMC proteins bound at the heads and hinges, forming a ring. Filaments are chains of alternating SMCs. Rosettes are rose-like structures with the terminal segments in the inner region and the hinges in the outer region. Genes The following human genes encode SMC proteins: SMC1A SMC1B SMC2 SMC3 SMC4 SMC5 SMC6 See also cohesin condensin Cornelia de Lange Syndrome References EC 3.6.3 Cell biology Mitosis Cell cycle
SMC protein
[ "Biology" ]
896
[ "Cell biology", "Cellular processes", "Cell cycle", "Mitosis" ]
3,668,370
https://en.wikipedia.org/wiki/Carath%C3%A9odory%E2%80%93Jacobi%E2%80%93Lie%20theorem
The Carathéodory–Jacobi–Lie theorem is a theorem in symplectic geometry which generalizes Darboux's theorem. Statement Let M be a 2n-dimensional symplectic manifold with symplectic form ω. For p ∈ M and r ≤ n, let f1, f2, ..., fr be smooth functions defined on an open neighborhood V of p whose differentials are linearly independent at each point and which satisfy {fi, fj} = 0 for all i, j. (In other words, they are pairwise in involution.) Here {–,–} is the Poisson bracket. Then there are functions fr+1, ..., fn, g1, g2, ..., gn defined on an open neighborhood U ⊂ V of p such that (fi, gi) is a symplectic chart of M, i.e., ω is expressed on U as $\omega = \sum_{i=1}^{n} df_i \wedge dg_i$. Applications As a direct application we have the following. Given a Hamiltonian system (M, ω, H), where M is a symplectic manifold with symplectic form ω and H is the Hamiltonian function, around every point where dH ≠ 0 there is a symplectic chart such that one of its coordinates is H. References Symplectic geometry Theorems in differential geometry
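A minimal worked example of the application above (a standard one-degree-of-freedom Hamiltonian, assumed here for illustration rather than taken from the article):

```latex
% Worked example: the planar harmonic oscillator away from the origin.
\[
  M = \mathbb{R}^2 \setminus \{0\}, \qquad \omega = dq \wedge dp, \qquad
  H = \tfrac{1}{2}\,(q^2 + p^2).
\]
% In polar coordinates q = r\cos\varphi, p = r\sin\varphi one computes
\[
  dq \wedge dp = r\, dr \wedge d\varphi
               = d\!\left(\tfrac{r^2}{2}\right) \wedge d\varphi
               = dH \wedge d\varphi ,
\]
% so (f_1, g_1) = (H, \varphi) is a symplectic chart in which one of the
% coordinates is the Hamiltonian, as guaranteed wherever dH \neq 0.
```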
Carathéodory–Jacobi–Lie theorem
[ "Mathematics" ]
265
[ "Theorems in differential geometry", "Theorems in geometry" ]
3,668,693
https://en.wikipedia.org/wiki/Nilmanifold
In mathematics, a nilmanifold is a differentiable manifold which has a transitive nilpotent group of diffeomorphisms acting on it. As such, a nilmanifold is an example of a homogeneous space and is diffeomorphic to the quotient space , the quotient of a nilpotent Lie group N modulo a closed subgroup H. This notion was introduced by Anatoly Mal'cev in 1949. In the Riemannian category, there is also a good notion of a nilmanifold. A Riemannian manifold is called a homogeneous nilmanifold if there exists a nilpotent group of isometries acting transitively on it. The requirement that the transitive nilpotent group acts by isometries leads to the following rigid characterization: every homogeneous nilmanifold is isometric to a nilpotent Lie group with left-invariant metric (see Wilson). Nilmanifolds are important geometric objects and often arise as concrete examples with interesting properties; in Riemannian geometry these spaces always have mixed curvature, almost flat spaces arise as quotients of nilmanifolds, and compact nilmanifolds have been used to construct elementary examples of collapse of Riemannian metrics under the Ricci flow. In addition to their role in geometry, nilmanifolds are increasingly being seen as having a role in arithmetic combinatorics (see Green–Tao) and ergodic theory (see, e.g., Host–Kra). Compact nilmanifolds A compact nilmanifold is a nilmanifold which is compact. One way to construct such spaces is to start with a simply connected nilpotent Lie group N and a discrete subgroup . If the subgroup acts cocompactly (via right multiplication) on N, then the quotient manifold will be a compact nilmanifold. As Mal'cev has shown, every compact nilmanifold is obtained this way. Such a subgroup as above is called a lattice in N. It is well known that a nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Mal'cev's criterion. Not all nilpotent Lie groups admit lattices; for more details, see also M. S. Raghunathan. A compact Riemannian nilmanifold is a compact Riemannian manifold which is locally isometric to a nilpotent Lie group with left-invariant metric. These spaces are constructed as follows. Let be a lattice in a simply connected nilpotent Lie group N, as above. Endow N with a left-invariant (Riemannian) metric. Then the subgroup acts by isometries on N via left-multiplication. Thus the quotient is a compact space locally isometric to N. Note: this space is naturally diffeomorphic to . Compact nilmanifolds also arise as principal bundles. For example, consider a 2-step nilpotent Lie group N which admits a lattice (see above). Let be the commutator subgroup of N. Denote by p the dimension of Z and by q the codimension of Z; i.e. the dimension of N is p+q. It is known (see Raghunathan) that is a lattice in Z. Hence, is a p-dimensional compact torus. Since Z is central in N, the group G acts on the compact nilmanifold with quotient space . This base manifold M is a q-dimensional compact torus. It has been shown that every principal torus bundle over a torus is of this form. More generally, a compact nilmanifold is a torus bundle, over a torus bundle, over...over a torus. As mentioned above, almost flat manifolds are intimately related to compact nilmanifolds. See that article for more information. Complex nilmanifolds Historically, a complex nilmanifold meant a quotient of a complex nilpotent Lie group over a cocompact lattice. An example of such a nilmanifold is an Iwasawa manifold. 
From the 1980s, another (more general) notion of a complex nilmanifold gradually replaced this one. An almost complex structure on a real Lie algebra g is an endomorphism which squares to −Idg. This operator is called a complex structure if its eigenspaces, corresponding to eigenvalues , are subalgebras in . In this case, I defines a left-invariant complex structure on the corresponding Lie group. Such a manifold (G,I) is called a complex group manifold. It is easy to see that every connected complex homogeneous manifold equipped with a free, transitive, holomorphic action by a real Lie group is obtained this way. Let G be a real, nilpotent Lie group. A complex nilmanifold is a quotient of a complex group manifold (G,I), equipped with a left-invariant complex structure, by a discrete, cocompact lattice, acting from the right. Complex nilmanifolds are usually not homogeneous, as complex varieties. In complex dimension 2, the only complex nilmanifolds are a complex torus and a Kodaira surface. Properties Compact nilmanifolds (except a torus) are never homotopy formal. This implies immediately that compact nilmanifolds (except a torus) cannot admit a Kähler structure. Topologically, all nilmanifolds can be obtained as iterated torus bundles over a torus. This is easily seen from a filtration by the ascending central series. Examples Nilpotent Lie groups From the above definition of homogeneous nilmanifolds, it is clear that any nilpotent Lie group with left-invariant metric is a homogeneous nilmanifold. The most familiar nilpotent Lie groups are matrix groups whose diagonal entries are 1 and whose lower diagonal entries are all zeros. For example, the Heisenberg group is a 2-step nilpotent Lie group. This nilpotent Lie group is also special in that it admits a compact quotient. The group would be the upper triangular matrices with integral coefficients. The resulting nilmanifold is 3-dimensional. One possible fundamental domain is (isomorphic to) [0,1]3 with the faces identified in a suitable way. This is because an element of the nilmanifold can be represented by the element in the fundamental domain. Here denotes the floor function of x, and the fractional part. The appearance of the floor function here is a clue to the relevance of nilmanifolds to additive combinatorics: the so-called bracket polynomials, or generalised polynomials, seem to be important in the development of higher-order Fourier analysis. Abelian Lie groups A simpler example would be any abelian Lie group. This is because any such group is a nilpotent Lie group. For example, one can take the group of real numbers under addition, and the discrete, cocompact subgroup consisting of the integers. The resulting 1-step nilmanifold is the familiar circle . Another familiar example might be the compact 2-torus or Euclidean space under addition. Generalizations A parallel construction based on solvable Lie groups produces a class of spaces called solvmanifolds. Important examples of solvmanifolds are the Inoue surfaces, known in complex geometry. References Differential geometry Homogeneous spaces Lie groups Manifolds Riemannian geometry Smooth manifolds
Nilmanifold
[ "Physics", "Mathematics" ]
1,600
[ "Lie groups", "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Topology", "Algebraic structures", "Manifolds", "Geometry", "Symmetry" ]
2,704,518
https://en.wikipedia.org/wiki/Congruence%20%28manifolds%29
In the theory of smooth manifolds, a congruence is the set of integral curves defined by a nonvanishing vector field defined on the manifold. Congruences are an important concept in general relativity, and are also important in parts of Riemannian geometry. A motivational example The idea of a congruence is probably better explained by giving an example than by a definition. Consider the smooth manifold R². Vector fields can be specified as first order linear partial differential operators, such as These correspond to a system of first order linear ordinary differential equations, in this case where dot denotes a derivative with respect to some (dummy) parameter. The solutions of such systems are families of parameterized curves, in this case This family is what is often called a congruence of curves, or just congruence for short. This particular example happens to have two singularities, where the vector field vanishes. These are fixed points of the flow. (A flow is a one-dimensional group of diffeomorphisms; a flow defines an action by the one-dimensional Lie group R, having locally nice geometric properties.) These two singularities correspond to two points, rather than two curves. In this example, the other integral curves are all simple closed curves. Many flows are considerably more complicated than this. To avoid complications arising from the presence of singularities, usually one requires the vector field to be nonvanishing. If we add more mathematical structure, our congruence may acquire new significance. Congruences in Riemannian manifolds For example, if we make our smooth manifold into a Riemannian manifold by adding a Riemannian metric tensor, say the one defined by the line element our congruence might become a geodesic congruence. Indeed, in the example from the preceding section, our curves become geodesics on an ordinary round sphere (with the North pole excised). If we had added the standard Euclidean metric instead, our curves would have become circles, but not geodesics. An interesting example of a Riemannian geodesic congruence, related to our first example, is the Clifford congruence on P³, which is also known as the Hopf bundle or Hopf fibration. The integral curves or fibers respectively are certain pairwise linked great circles, the orbits in the space of unit norm quaternions under left multiplication by a given unit quaternion. Congruences in Lorentzian manifolds In a Lorentzian manifold, such as a spacetime model in general relativity (which will usually be an exact or approximate solution to the Einstein field equation), congruences are called timelike, null, or spacelike if the tangent vectors are everywhere timelike, null, or spacelike respectively. A congruence is called a geodesic congruence if the tangent vector field has vanishing covariant derivative. See also Congruence (general relativity) References A textbook on manifold theory. See also the same author's textbooks on topological manifolds (a lower level of structure) and Riemannian geometry (a higher level of structure). Differential topology
Congruence (manifolds)
[ "Mathematics" ]
657
[ "Topology", "Differential topology" ]
2,704,720
https://en.wikipedia.org/wiki/Sustainable%20architecture
Sustainable architecture is architecture that seeks to minimize the negative environmental impact of buildings through improved efficiency and moderation in the use of materials, energy, development space and the ecosystem at large. Sustainable architecture uses a conscious approach to energy and ecological conservation in the design of the built environment. The idea of sustainability, or ecological design, is to ensure that use of currently available resources does not end up having detrimental effects to a future society's well-being or making it impossible to obtain resources for other applications in the long run. Background Shift from narrow to broader approach The term "sustainability" in relation to architecture has so far been mostly considered through the lens of building technology and its transformations. Going beyond the technical sphere of "green design", invention and expertise, some scholars are starting to position architecture within a much broader cultural framework of the human interrelationship with nature. Adopting this framework allows tracing a rich history of cultural debates about humanity's relationship to nature and the environment, from the point of view of different historical and geographical contexts. Operational carbon vs Embodied carbon Global construction accounts for 38% of total global emissions. While sustainable architecture and construction standards have traditionally focused on reducing operational carbon emissions, there are to date few standards or systems in place to track and reduce embodied carbon. While steel and other materials are responsible for large-scale emissions, cement alone is responsible for 8% of all emissions. Changing pedagogues Critics of the reductionism of modernism often noted the abandonment of the teaching of architectural history as a causal factor. The fact that a number of the major players in the deviation from modernism were trained at Princeton University's School of Architecture, where recourse to history continued to be a part of design training in the 1940s and 1950s, was significant. The increasing rise of interest in history had a profound impact on architectural education. History courses became more typical and regularized. With the demand for professors knowledgeable in the history of architecture, several PhD programs in schools of architecture arose in order to differentiate themselves from art history PhD programs, where architectural historians had previously trained. In the US, MIT and Cornell were the first, created in the mid-1970s, followed by Columbia, Berkeley, and Princeton. Among the founders of new architectural history programs were Bruno Zevi at the Institute for the History of Architecture in Venice, Stanford Anderson and Henry Millon at MIT, Alexander Tzonis at the Architectural Association, Anthony Vidler at Princeton, Manfredo Tafuri at the University of Venice, Kenneth Frampton at Columbia University, and Werner Oechslin and Kurt Forster at ETH Zürich. Sustainable energy use Energy efficiency over the entire life cycle of a building is the most important goal of sustainable architecture. Architects use many different passive and active techniques to reduce the energy needs of buildings and increase their ability to capture or generate their own energy. 
To minimize cost and complexity, sustainable architecture prioritizes passive systems to take advantage of building location with incorporated architectural elements, supplementing with renewable energy sources and then fossil fuel resources only as needed. Site analysis can be employed to optimize use of local environmental resources such as daylight and ambient wind for heating and ventilation. Energy use very often depends on whether the building gets its energy on-grid or off-grid. Off-grid buildings do not use energy provided by utility services and instead have their own independent energy production. They use on-site electricity storage, while on-grid sites feed excess electricity back into the grid. Heating, ventilation and cooling system efficiency Numerous passive architectural strategies have been developed over time. Examples of such strategies include the arrangement of rooms or the sizing and orientation of windows in a building, and the orientation of facades and streets or the ratio between building heights and street widths for urban planning. An important and cost-effective element of an efficient heating, ventilation, and air conditioning (HVAC) system is a well-insulated building. A more efficient building requires less heat-generating or heat-dissipating power, but may require more ventilation capacity to expel polluted indoor air. Significant amounts of energy are flushed out of buildings in the water, air and compost streams. Off-the-shelf, on-site energy recycling technologies can effectively recapture energy from waste hot water and stale air and transfer that energy into incoming fresh cold water or fresh air. Recapture of energy for uses other than gardening from compost leaving buildings requires centralized anaerobic digesters. HVAC systems are powered by motors. Copper, versus other metal conductors, helps to improve the electrical energy efficiencies of motors, thereby enhancing the sustainability of electrical building components. Site and building orientation have some major effects on a building's HVAC efficiency. Passive solar building design allows buildings to harness the energy of the sun efficiently without the use of any active solar mechanisms such as photovoltaic cells or solar hot water panels. Typically passive solar building designs incorporate materials with high thermal mass that retain heat effectively and strong insulation that works to prevent heat escape. Low energy designs also require the use of solar shading, by means of awnings, blinds or shutters, to relieve the solar heat gain in summer and to reduce the need for artificial cooling. In addition, low energy buildings typically have a very low surface area to volume ratio to minimize heat loss. This means that sprawling multi-winged building designs (often thought to look more "organic") are often avoided in favor of more centralized structures. Traditional cold climate buildings such as American colonial saltbox designs provide a good historical model for centralized heat efficiency in a small-scale building. Windows are placed to maximize the input of heat-creating light while minimizing the loss of heat through glass, a poor insulator. In the northern hemisphere this usually involves installing a large number of south-facing windows to collect direct sun and severely restricting the number of north-facing windows. 
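To make the effect of glazing choice concrete, the sketch below compares steady-state conductive heat loss through a fixed window area using Q = U·A·ΔT. The U-values, window area and temperature difference are illustrative assumptions (typical published ranges), not figures taken from the text or from any particular product.

```python
# Illustrative comparison of steady-state conductive heat loss through glazing,
# using Q = U * A * dT. The U-values below are typical published figures and are
# assumptions for illustration only.

GLAZING_U_VALUES = {          # W/(m^2*K), approximate
    "single pane": 5.8,
    "double glazed (air gap)": 2.8,
    "triple glazed, low-E, gas filled": 0.8,
}

def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    """Steady-state conductive heat loss through a glazed area, in watts."""
    return u_value * area_m2 * delta_t

if __name__ == "__main__":
    area = 10.0      # m^2 of south-facing glazing (assumed)
    delta_t = 20.0   # K difference between indoors and outdoors (assumed)
    for name, u in GLAZING_U_VALUES.items():
        print(f"{name:35s} {heat_loss_watts(u, area, delta_t):6.0f} W")
```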
Certain window types, such as double or triple glazed insulated windows with gas filled spaces and low emissivity (low-E) coatings, provide much better insulation than single-pane glass windows. Preventing excess solar gain by means of solar shading devices in the summer months is important to reduce cooling needs. Deciduous trees are often planted in front of windows to block excessive sun in summer with their leaves but allow light through in winter when their leaves fall off. Louvers or light shelves are installed to allow the sunlight in during the winter (when the sun is lower in the sky) and keep it out in the summer (when the sun is high in the sky). They are slatted like shutters and reflect light and radiation to reduce glare on the interior space. Advanced louver systems are automated to maximize daylight and monitor the interior temperature by adjusting their tilt. Coniferous or evergreen plants are often planted to the north of buildings to shield against cold north winds. In colder climates, heating systems are a primary focus for sustainable architecture because they are typically one of the largest single energy drains in buildings. In warmer climates where cooling is a primary concern, passive solar designs can also be very effective. Masonry building materials with high thermal mass are very valuable for retaining the cool temperatures of night throughout the day. In addition builders often opt for sprawling single story structures in order to maximize surface area and heat loss. Buildings are often designed to capture and channel existing winds, particularly the especially cool winds coming from nearby bodies of water. Many of these valuable strategies are employed in some way by the traditional architecture of warm regions, such as south-western mission buildings. In climates with four seasons, an integrated energy system will increase in efficiency: when the building is well insulated, when it is sited to work with the forces of nature, when heat is recaptured (to be used immediately or stored), when the heat plant relying on fossil fuels or electricity is greater than 100% efficient, and when renewable energy is used. Renewable energy generation Solar panels Active solar devices such as photovoltaic solar panels help to provide sustainable electricity for any use. Electrical output of a solar panel is dependent on orientation, efficiency, latitude, and climate—solar gain varies even at the same latitude. Typical efficiencies for commercially available PV panels range from 4% to 28%. The low efficiency of certain photovoltaic panels can significantly affect the payback period of their installation. This low efficiency does not mean that solar panels are not a viable energy alternative. In Germany for example, Solar Panels are commonly installed in residential home construction. Roofs are often angled toward the sun to allow photovoltaic panels to collect at maximum efficiency. In the northern hemisphere, a true-south facing orientation maximizes yield for solar panels. If true-south is not possible, solar panels can produce adequate energy if aligned within 30° of south. However, at higher latitudes, winter energy yield will be significantly reduced for non-south orientation. To maximize efficiency in winter, the collector can be angled above horizontal Latitude +15°. To maximize efficiency in summer, the angle should be Latitude -15°. However, for an annual maximum production, the angle of the panel above horizontal should be equal to its latitude. 
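The seasonal tilt rule just described can be written down directly. The following sketch assumes a northern-hemisphere site and covers only the tilt angle; the azimuth (ideally true south, or within 30° of it) is discussed separately above.

```python
# A minimal sketch of the tilt rule of thumb described above: winter tilt of
# latitude + 15 degrees, summer tilt of latitude - 15 degrees, and a tilt equal
# to the latitude for the best annual production.

def collector_tilt(latitude_deg: float, season: str = "annual") -> float:
    """Return the suggested panel tilt above horizontal, in degrees."""
    offsets = {"winter": +15.0, "summer": -15.0, "annual": 0.0}
    if season not in offsets:
        raise ValueError(f"season must be one of {sorted(offsets)}")
    tilt = latitude_deg + offsets[season]
    # Keep the result in a physically meaningful range.
    return max(0.0, min(90.0, tilt))

if __name__ == "__main__":
    for season in ("winter", "annual", "summer"):
        # e.g. a site at 40 degrees north latitude (assumed for illustration)
        print(season, collector_tilt(40.0, season))
```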
Wind turbines The use of undersized wind turbines in energy production in sustainable structures requires the consideration of many factors. In considering costs, small wind systems are generally more expensive than larger wind turbines relative to the amount of energy they produce. For small wind turbines, maintenance costs can be a deciding factor at sites with marginal wind-harnessing capabilities. At low-wind sites, maintenance can consume much of a small wind turbine's revenue. Wind turbines begin operating when winds reach 8 mph, achieve energy production capacity at speeds of 32-37 mph, and shut off to avoid damage at speeds exceeding 55 mph. The energy potential of a wind turbine is proportional to the square of the length of its blades and to the cube of the wind speed. Though wind turbines are available that can supplement power for a single building, because of these factors, the efficiency of the wind turbine depends much upon the wind conditions at the building site. For these reasons, for wind turbines to be at all efficient, they must be installed at locations that are known to receive a constant amount of wind (with average wind speeds of more than 15 mph), rather than locations that receive wind sporadically. A small wind turbine can be installed on a roof. Installation issues then include the strength of the roof, vibration, and the turbulence caused by the roof ledge. Small-scale rooftop wind turbines have been known to be able to generate power from 10% up to 25% of the electricity required by a regular domestic household dwelling. Turbines for residential scale use are usually between 7 feet (2 m) and 25 feet (8 m) in diameter and produce electricity at a rate of 900 watts to 10,000 watts at their tested wind speed. The reliability of wind turbine systems is important to the success of a wind energy project. Unanticipated breakdowns can have a significant impact on a project's profitability due to the logistical and practical difficulties of replacing critical components in a wind turbine. Uncertainty with the long-term component reliability has a direct impact on the amount of confidence associated with cost of energy (COE) estimates. Solar water heating Solar water heaters, also called solar domestic hot water systems, can be a cost-effective way to generate hot water for a home. They can be used in any climate, and the fuel they use—sunshine—is free. There are two types of solar water systems: active and passive. An active solar collector system can produce about 80 to 100 gallons of hot water per day. A passive system will have a lower capacity. An active solar water system's efficiency is 35–80%, while a passive system's is 30–50%, making active solar systems more effective. There are also two types of circulation: direct circulation systems and indirect circulation systems. Direct circulation systems loop the domestic water through the panels. They should not be used in climates with temperatures below freezing. Indirect circulation loops glycol or some other fluid through the solar panels and uses a heat exchanger to heat up the domestic water. The two most common types of collector panels are flat-plate and evacuated-tube. The two work similarly except that evacuated tubes do not convectively lose heat, which greatly improves their efficiency (5%–25% more efficient). With these higher efficiencies, evacuated-tube solar collectors can also produce higher-temperature space heating, and even higher temperatures for absorption cooling systems. 
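As a rough illustration of what the quoted 80 to 100 gallons per day of hot water represents in energy terms, the sketch below applies the basic water-heating relation Q = m·c·ΔT. The inlet and delivery temperatures are assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope estimate of the daily heat delivered by a solar water
# heater producing 80-100 gallons of hot water per day (figure from the text).
# Inlet and delivery temperatures are illustrative assumptions.

LITRES_PER_GALLON = 3.785
WATER_SPECIFIC_HEAT = 4186.0   # J/(kg*K); 1 litre of water is roughly 1 kg

def daily_heat_kwh(gallons_per_day: float, inlet_c: float = 15.0,
                   delivery_c: float = 55.0) -> float:
    """Thermal energy (kWh/day) needed to heat the given volume of water."""
    mass_kg = gallons_per_day * LITRES_PER_GALLON          # ~1 kg per litre
    joules = mass_kg * WATER_SPECIFIC_HEAT * (delivery_c - inlet_c)
    return joules / 3.6e6                                   # J -> kWh

if __name__ == "__main__":
    for gallons in (80, 100):
        print(f"{gallons} gal/day is roughly {daily_heat_kwh(gallons):.1f} kWh of heat")
```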
Electric-resistance water heaters that are common in homes today have an electrical demand around 4500 kW·h/year. With the use of solar collectors, the energy use is cut in half. The up-front cost of installing solar collectors is high, but with the annual energy savings, payback periods are relatively short. Heat pumps Air source heat pumps (ASHP) can be thought of as reversible air conditioners. Like an air conditioner, an ASHP can take heat from a relatively cool space (e.g. a house at 70 °F) and dump it into a hot place (e.g. outside at 85 °F). However, unlike an air conditioner, the condenser and evaporator of an ASHP can switch roles and absorb heat from the cool outside air and dump it into a warm house. Air-source heat pumps are inexpensive relative to other heat pump systems. The efficiency of air-source heat pumps declines when the outdoor temperature is very cold or very hot; therefore, they are most efficiently used in temperate climates. However, contrary to earlier expectations, they have proven to also be well suited for regions with cold outdoor temperatures, such as Scandinavia or Alaska. In Norway, Finland and Sweden, the use of heat pumps has grown strongly over the last two decades: in 2019, there were 15–25 heat pumps per 100 inhabitants in these countries, with ASHP the dominant heat pump technology. Similarly, earlier assumptions that ASHP would only work well in fully insulated buildings have proven wrong—even old, partially insulated buildings can be retrofitted with ASHPs and thereby strongly reduce their energy demand. Effects of EAHPs (exhaust air heat pumps) have also been studied within the aforementioned regions, displaying promising results. An exhaust air heat pump uses electricity to extract heat from exhaust air leaving a building, redirecting it towards DHW (domestic hot water), space heating, and warming supply air. In colder countries, an EAHP may be able to recover around 2–3 times more energy than an air-to-air exchange system. A 2022 study of projected emission decreases within Finland's Kymenlaakso region explored the aspect of retrofitting existing apartment buildings (of varying ages) with EAHP systems. Select buildings were chosen in the cities of Kotka and Kouvola, their projected carbon emissions decreasing by about 590 tCO2 and 944 tCO2 respectively with a 7–13 year payoff period. It is, however, important to note that EAHP systems may not produce favourable results if installed in a building exhibiting incompatible exhaust output rates or electricity consumption. In this case, EAHP systems may increase energy bills without providing reasonable cuts to carbon emissions (see EAHP). Ground-source (or geothermal) heat pumps provide an efficient alternative. The difference between the two heat pumps is that the ground-source has one of its heat exchangers placed underground—usually in a horizontal or vertical arrangement. Ground-source takes advantage of the relatively constant, mild temperatures underground, which means their efficiencies can be much greater than that of an air-source heat pump. The in-ground heat exchanger generally needs a considerable amount of area. Designers have placed them in an open area next to the building or underneath a parking lot. Energy Star ground-source heat pumps can be 40% to 60% more efficient than their air-source counterparts. They are also quieter and can also be applied to other functions like domestic hot water heating. 
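A simple way to see what a 40–60% efficiency advantage means in practice is to compare the electricity needed to deliver the same amount of heat at different coefficients of performance (COP). The heat demand and COP values in the sketch below are illustrative assumptions, not measured figures.

```python
# Sketch comparing seasonal electricity use for the same heating demand under an
# air-source and a ground-source heat pump. The COP values are illustrative
# assumptions, roughly consistent with the 40-60% advantage mentioned above.

def electricity_use_kwh(heat_demand_kwh: float, cop: float) -> float:
    """Electricity needed to deliver heat_demand_kwh of heat at a given COP."""
    return heat_demand_kwh / cop

if __name__ == "__main__":
    annual_heat_demand = 12000.0          # kWh of delivered heat (assumed)
    cop_air_source = 2.8                  # assumed seasonal average
    cop_ground_source = 4.2               # assumed, ~50% higher than air-source
    ashp = electricity_use_kwh(annual_heat_demand, cop_air_source)
    gshp = electricity_use_kwh(annual_heat_demand, cop_ground_source)
    print(f"air-source:    {ashp:7.0f} kWh of electricity")
    print(f"ground-source: {gshp:7.0f} kWh of electricity")
    print(f"savings:       {ashp - gshp:7.0f} kWh ({100*(ashp-gshp)/ashp:.0f}%)")
```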
In terms of initial cost, the ground-source heat pump system costs about twice as much as a standard air-source heat pump to install. However, the up-front costs can be more than offset by the decrease in energy costs. The reduction in energy costs is especially apparent in areas with typically hot summers and cold winters. Other types of heat pumps are water-source and air-earth. If the building is located near a body of water, the pond or lake could be used as a heat source or sink. Air-earth heat pumps circulate the building's air through underground ducts. With higher fan power requirements and inefficient heat transfer, air-earth heat pumps are generally not practical for major construction. Passive daytime radiative cooling Passive daytime radiative cooling harvests the extreme coldness of outer space as a renewable energy source to achieve daytime cooling. Being high in solar reflectance to reduce solar heat gain and strong in longwave infrared (LWIR) thermal radiation heat transfer, daytime radiative cooling surfaces can achieve sub-ambient cooling for indoor and outdoor spaces when applied to roofs, which can significantly lower energy demand and costs devoted to cooling. These cooling surfaces can be applied as sky-facing panels, similar to other renewable energy sources like solar energy panels, allowing for simple integration into architectural design. A passive daytime radiative cooling roof application can double the energy savings of a white roof, and when applied as a multilayer surface to 10% of a building's roof, it can replace 35% of air conditioning used during the hottest hours of daytime. Daytime radiative cooling applications for indoor space cooling are growing, with an estimated "market size of ~$27 billion in 2025." Sustainable building materials Some examples of sustainable building materials include recycled denim or blown-in fiber glass insulation, sustainably harvested wood, trass, linoleum, sheep wool, hempcrete, Roman concrete, panels made from paper flakes, baked earth, rammed earth, clay, vermiculite, flax linen, sisal, seagrass, expanded clay grains, coconut, wood fiber plates, calcium sandstone, locally obtained stone and rock, and bamboo, which is one of the strongest and fastest growing woody plants, and non-toxic low-VOC glues and paints. Bamboo flooring can be useful in ecological spaces since it helps reduce pollution particles in the air. Vegetative cover or shielding over building envelopes also helps in the same way. Paper which is fabricated or manufactured out of forest wood is supposedly one hundred percent recyclable, thus it regenerates and saves almost all the forest wood that it takes during its manufacturing process. There is an underutilized potential for systematically storing carbon in the built environment. Natural products The use of natural building materials for their sustainable qualities is a practice seen in vernacular architecture. Regional architectural styles develop over generations, utilizing local materials. This practice reduces transportation and production emissions. Regenerative sources, use of waste material, and the ability to reuse are sustainable qualities of timber, thatching, and stone and clay. Laminated timber products, straw, and stone are low carbon construction materials with major potential for scalability. Timber products can sequester carbon, while stone has a low extraction energy. Straw, including straw-bale construction, sequesters carbon while providing a high level of insulation. 
High thermal performance of natural materials contributes to regulating interior conditions without the use of modern technologies. The uses of timber, straw, and stone in sustainable architecture were the subject of a major exhibit at the UK's Design Museum. Recycled materials Sustainable architecture often incorporates the use of recycled or second hand materials, such as reclaimed lumber and recycled copper. The reduction in use of new materials creates a corresponding reduction in embodied energy (energy used in the production of materials). Often sustainable architects attempt to retrofit old structures to serve new needs in order to avoid unnecessary development. Architectural salvage and reclaimed materials are used when appropriate. When older buildings are demolished, frequently any good wood is reclaimed, renewed, and sold as flooring. Any good dimension stone is similarly reclaimed. Many other parts are reused as well, such as doors, windows, mantels, and hardware, thus reducing the consumption of new goods. When new materials are employed, green designers look for materials that are rapidly replenished, such as bamboo, which can be harvested for commercial use after only six years of growth, sorghum or wheat straw, both of which are waste material that can be pressed into panels, or cork oak, in which only the outer bark is removed for use, thus preserving the tree. When possible, building materials may be gleaned from the site itself; for example, if a new structure is being constructed in a wooded area, wood from the trees which were cut to make room for the building would be re-used as part of the building itself. For insulation in building envelopes, more experimental materials such as “waste sheep’s wool” alongside other waste fibers originating from textile and agri-industrial operations are being researched for use as well, with recent studies suggesting the recycled insulation is effective for architectural purposes. Lower volatile organic compounds Low-impact building materials are used wherever feasible: for example, insulation may be made from low VOC (volatile organic compound)-emitting materials such as recycled denim or cellulose insulation, rather than the building insulation materials that may contain carcinogenic or toxic materials such as formaldehyde. To discourage insect damage, these alternate insulation materials may be treated with boric acid. Organic or milk-based paints may be used. However, a common fallacy is that "green" materials are always better for the health of occupants or the environment. Many harmful substances (including formaldehyde, arsenic, and asbestos) are naturally occurring and are not without their histories of use with the best of intentions. A study of emissions from materials by the State of California has shown that there are some green materials that have substantial emissions whereas some more "traditional" materials actually were lower emitters. Thus, the subject of emissions must be carefully investigated before concluding that natural materials are always the healthiest alternatives for occupants and for the Earth. Volatile organic compounds (VOC) can be found in any indoor environment coming from a variety of different sources. VOCs have a high vapor pressure and low water solubility, and are suspected of causing sick building syndrome type symptoms. 
This is because many VOCs have been known to cause sensory irritation and central nervous system symptoms characteristic of sick building syndrome, indoor concentrations of VOCs are higher than in the outdoor atmosphere, and when there are many VOCs present, they can cause additive and multiplicative effects. Green products are usually considered to contain fewer VOCs and be better for human and environmental health. A case study conducted by the Department of Civil, Architectural, and Environmental Engineering at the University of Miami that compared three green products and their non-green counterparts found that even though both the green products and the non-green counterparts emitted VOCs, the amount and intensity of the VOCs emitted from the green products were much safer and more comfortable for human exposure. Lab-grown organic materials Commonly used building materials such as wood require deforestation that is, without proper care, unsustainable. As of October 2022, researchers at MIT have made developments in growing lab-grown Zinnia elegans cells with specific characteristics under conditions within their control. These characteristics include the "shape, thickness, [and] stiffness," as well as mechanical properties that can mimic wood. David N. Bengston from the USDA suggests that this alternative would be more efficient than traditional wood harvesting, with future developments potentially saving on transportation energy and conserving forests. However, Bengston notes that this breakthrough would change paradigms and raises new economic and environmental questions, such as timber-dependent communities' jobs or how conservation would impact wildfires. Materials sustainability standards Despite the importance of materials to overall building sustainability, quantifying and evaluating the sustainability of building materials has proven difficult. There is little coherence in the measurement and assessment of materials sustainability attributes, resulting in a landscape today that is littered with hundreds of competing, inconsistent and often imprecise eco-labels, standards and certifications. This discord has led both to confusion among consumers and commercial purchasers and to the incorporation of inconsistent sustainability criteria in larger building certification programs such as LEED. Various proposals have been made regarding rationalization of the standardization landscape for sustainable building materials. Sustainable design and plan Building Building information modelling Building information modelling (BIM) is used to help enable sustainable design by allowing architects and engineers to integrate and analyze building performance. BIM services, including conceptual and topographic modelling, offer a new channel to green building with the successive and immediate availability of internally coherent and trustworthy project information. BIM enables designers to quantify the environmental impacts of systems and materials to support the decisions needed to design sustainable buildings. Consulting A sustainable building consultant may be engaged early in the design process, to forecast the sustainability implications of building materials, orientation, glazing and other physical factors, so as to identify a sustainable approach that meets the specific requirements of a project. Norms and standards have been formalized by performance-based rating systems e.g. LEED and Energy Star for homes. 
They define benchmarks to be met and provide metrics and testing to meet those benchmarks. It is up to the parties involved in the project to determine the best approach to meet those standards. As sustainable building consulting is often associated with a cost premium, organisations such as Architects Assist aim for equity of access to sustainable and resilient design. Building placement One central and often ignored aspect of sustainable architecture is building placement. Although the ideal environmental home or office structure is often envisioned as an isolated place, this kind of placement is usually detrimental to the environment. First, such structures often serve as the unknowing frontlines of suburban sprawl. Second, they usually increase the energy consumption required for transportation and lead to unnecessary auto emissions. Ideally, most buildings should avoid suburban sprawl in favor of the kind of light urban development articulated by the New Urbanist movement. Careful mixed use zoning can make commercial, residential, and light industrial areas more accessible for those traveling by foot, bicycle, or public transit, as proposed in the Principles of Intelligent Urbanism. The study of permaculture, in its holistic application, can also greatly help in proper building placement that minimizes energy consumption and works with the surroundings rather than against them, especially in rural and forested zones. Water Usage Sustainable buildings look for ways to conserve water. One strategic water-saving design that green buildings incorporate is green roofs. Green roofs have rooftop vegetation which captures storm drainage water. This function not only collects the water for further uses but also serves as a good insulator that can help mitigate the urban heat island effect. Another strategic water efficient design is treating wastewater so it can be reused. Urban design Sustainable urbanism takes action beyond sustainable architecture and takes a broader view of sustainability. Typical solutions include eco-industrial parks (EIP), urban agriculture, etc. International programs that are being supported include the Sustainable Urban Development Network, supported by UN-HABITAT, and Eco2 Cities, supported by the World Bank. Concurrently, the recent movements of New Urbanism, New Classical architecture and complementary architecture promote a sustainable approach towards construction that appreciates and develops smart growth, architectural tradition and classical design. This is in contrast to modernist and globally uniform architecture, as well as a reaction against solitary housing estates and suburban sprawl. Both trends started in the 1980s. The Driehaus Architecture Prize is an award that recognizes efforts in New Urbanism and New Classical architecture, and is endowed with prize money twice as high as that of the modernist Pritzker Prize. Waste management Waste takes the form of spent or useless materials generated from households and businesses, construction and demolition processes, and manufacturing and agricultural industries. These materials are loosely categorized as municipal solid waste, construction and demolition (C&D) debris, and industrial or agricultural by-products. Sustainable architecture focuses on the on-site use of waste management, incorporating things such as grey water systems for use on garden beds, and composting toilets to reduce sewage. 
These methods, when combined with on-site food waste composting and off-site recycling, can reduce a house's waste to a small amount of packaging waste. See also References External links World Green Building Council Passivhaus Institut German institute for passive buildings Low-energy building Sustainable building Sustainable design Environmental social science Sustainable development Sustainability
Sustainable architecture
[ "Engineering", "Environmental_science" ]
5,832
[ "Sustainable building", "Sustainable architecture", "Building engineering", "Construction", "Environmental social science", "Architecture" ]
2,704,841
https://en.wikipedia.org/wiki/Air%20preheater
An air preheater is any device designed to heat air before another process (for example, combustion in a boiler), with the primary objective of increasing the thermal efficiency of the process. They may be used alone, or to replace a recuperative heat system or a steam coil. In particular, this article describes the combustion air preheaters used in large boilers found in thermal power stations producing electric power from e.g. fossil fuels, biomass or waste. For instance, the Ljungström air preheater has been credited with worldwide fuel savings estimated at 4,960,000,000 tons of oil; "few inventions have been as successful in saving fuel as the Ljungström Air Preheater", which was marked as the 44th International Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers. The purpose of the air preheater is to recover the heat from the boiler flue gas which increases the thermal efficiency of the boiler by reducing the useful heat lost in the flue gas. As a consequence, the flue gases are also conveyed to the flue gas stack (or chimney) at a lower temperature, allowing simplified design of the conveyance system and the flue gas stack. It also allows control over the temperature of gases leaving the stack (to meet emissions regulations, for example). It is installed between the economizer and chimney. Types There are two types of air preheaters for use in steam generators in thermal power stations: One is a tubular type built into the boiler flue gas ducting, and the other is a regenerative air preheater. These may be arranged so the gas flows horizontally or vertically across the axis of rotation. Another type of air preheater is the regenerator used in iron or glass manufacture. Tubular type Construction features Tubular preheaters consist of straight tube bundles which pass through the outlet ducting of the boiler and open at each end outside of the ducting. Inside the ducting, the hot furnace gases pass around the preheater tubes, transferring heat from the exhaust gas to the air inside the preheater. Ambient air is forced by a fan through ducting at one end of the preheater tubes and at the other end the heated air from inside of the tubes emerges into another set of ducting, which carries it to the boiler furnace for combustion. Problems The tubular preheater ductings for cold and hot air require more space and structural supports than a rotating preheater design. Further, due to dust-laden abrasive flue gases, the tubes outside the ducting wear out faster on the side facing the gas current. Many advances have been made to eliminate this problem, such as the use of ceramic and hardened steel. Many new circulating fluidized bed (CFB) and bubbling fluidized bed (BFB) steam generators are currently incorporating tubular air heaters offering an advantage with regards to the moving parts of a rotary type. Dew point corrosion Dew point corrosion occurs for a variety of reasons. The type of fuel used, its sulfur content and moisture content are contributing factors. However, by far the most significant factor in dew point corrosion is the metal temperature of the tubes. If the metal temperature within the tubes drops below the acid saturation temperature, usually at between 190 °F (88 °C) and 230 °F (110 °C), but sometimes at temperatures as high as 260 °F (127 °C), then the risk of dew point corrosion damage becomes considerable. 
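A minimal sketch of the two quantities discussed above: the heat recovered from the flue gas (Q = ṁ·cp·ΔT) and a check of the cold-end metal temperature against the acid dew point range quoted for dew point corrosion. The flow rate, temperatures and specific heat below are illustrative assumptions, not data from any particular plant.

```python
# Illustrative air preheater numbers: heat given up by the flue gas and a simple
# dew point corrosion check. All input values are assumptions for illustration.

FLUE_GAS_CP = 1.05   # kJ/(kg*K), approximate specific heat of boiler flue gas

def heat_recovered_kw(mass_flow_kg_s: float, t_in_c: float, t_out_c: float) -> float:
    """Heat given up by the flue gas across the preheater, in kW."""
    return mass_flow_kg_s * FLUE_GAS_CP * (t_in_c - t_out_c)

def dew_point_risk(cold_end_metal_c: float, acid_dew_point_c: float = 127.0) -> bool:
    """True if the cold-end metal temperature falls below the assumed acid dew point
    (here taken at the upper end of the quoted 88-127 C range)."""
    return cold_end_metal_c < acid_dew_point_c

if __name__ == "__main__":
    q = heat_recovered_kw(mass_flow_kg_s=400.0, t_in_c=350.0, t_out_c=140.0)
    print(f"heat recovered from flue gas: {q / 1000:.1f} MW")
    print("dew point corrosion risk:", dew_point_risk(cold_end_metal_c=110.0))
```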
Regenerative air preheaters There are two types of regenerative air preheaters: the rotating-plate regenerative air preheaters (RAPH) and the stationary-plate regenerative air preheaters (Rothemuhle). Rotating-plate regenerative air preheater The rotating-plate design (RAPH) consists of a central rotating-plate element installed within a casing that is divided into two (bi-sector type), three (tri-sector type) or four (quad-sector type) sectors containing seals around the element. The seals allow the element to rotate through all the sectors, but keep gas leakage between sectors to a minimum while providing separate gas air and flue gas paths through each sector. Tri-sector types are the most common in modern power generation facilities. In the tri-sector design, the largest sector (usually spanning about half the cross-section of the casing) is connected to the boiler hot gas outlet. The hot exhaust gas flows over the central element, transferring some of its heat to the element, and is then ducted away for further treatment in dust collectors and other equipment before being expelled from the flue gas stack. The second, smaller sector, is fed with ambient air by a fan, which passes over the heated element as it rotates into the sector, and is heated before being carried to the boiler furnace for combustion. The third sector is the smallest one and it heats air which is routed into the pulverizers and used to carry the coal-air mixture to coal boiler burners. Thus, the total air heated in the RAPH provides: heating air to remove the moisture from the pulverised coal dust, carrier air for transporting the pulverised coal to the boiler burners and the primary air for combustion. The rotor itself is the medium of heat transfer in this system, and is usually composed of some form of steel and/or ceramic structure. It rotates quite slowly (around 1-2 RPM) to allow optimum heat transfer first from the hot exhaust gases to the element, then as it rotates, from the element to the cooler air in the other sectors. Construction features In this design the whole air preheater casing is supported on the boiler supporting structure itself with necessary expansion joints in the ducting. The vertical rotor is supported on thrust bearings at the lower end and has an oil bath lubrication, cooled by water circulating in coils inside the oil bath. This arrangement is for cooling the lower end of the shaft, as this end of the vertical rotor is on the hot end of the ducting. The top end of the rotor has a simple roller bearing to hold the shaft in a vertical position. The rotor is built up on the vertical shaft with radial supports and cages for holding the baskets in position. Radial and circumferential seal plates are also provided to avoid leakages of gases or air between the sectors or between the duct and the casing while in rotation. For on line cleaning of the deposits from the baskets steam jets are provided such that the blown out dust and ash are collected at the bottom ash hopper of the air preheater. This dust hopper is connected for emptying along with the main dust hoppers of the dust collectors. The rotor is turned by an air driven motor and gearing, and is required to be started before starting the boiler and also to be kept in rotation for some time after the boiler is stopped, to avoid uneven expansion and contraction resulting in warping or cracking of the rotor. 
The station air is generally totally dry (dry air is required for the instrumentation), so the air used to drive the rotor is injected with oil to lubricate the air motor. Safety protected inspection windows are provided for viewing the preheater's internal operation under all operating conditions. The baskets are in the sector housings provided on the rotor and are renewable. The life of the baskets depend on the ash abrasiveness and corrosiveness of the boiler outlet gases. Problems The boiler flue gas contains many dust particles (due to high ash content) not contributing towards combustion, such as silica, which cause abrasive wear of the baskets, and may also contain corrosive gases depending on the composition of the fuel. For example, Indian coals generally result in high levels of ash and silica in the flue gas. The wear of the baskets therefore is generally more than other, cleaner-burning fuels. In this RAPH, the dust laden, corrosive boiler gases have to pass between the elements of air preheater baskets. The elements are made up of zig zag corrugated plates pressed into a steel basket giving sufficient annular space in between for the gas to pass through. These plates are corrugated to give more surface area for the heat to be absorbed and also to give it the rigidity for stacking them into the baskets. Hence frequent replacements are called for and new baskets are always kept ready. In the early days, Cor-ten steel was being used for the elements. Today due to technological advance many manufacturers may use their own patents. Some manufacturers supply different materials for the use of the elements to lengthen the life of the baskets. In certain cases the unburnt deposits may occur on the air preheater elements causing it to catch fire during normal operations of the boiler, giving rise to explosions inside the air preheater. Sometimes mild explosions may be detected in the control room by variations in the inlet and outlet temperatures of the combustion air. Stationary-plate regenerative air preheater The heating plate elements in this type of regenerative air preheater are also installed in a casing, but the heating plate elements are stationary rather than rotating. Instead the air ducts in the preheater are rotated so as to alternatively expose sections of the heating plate elements to the upflowing cool air. As indicated in the adjacent drawing, there are rotating inlet air ducts at the bottom of the stationary plates similar to the rotating outlet air ducts at the top of the stationary plates. Stationary-plate regenerative air preheaters are also known as Rothemuhle preheaters, manufactured for over 25 years by Balke-Dürr GmbH of Ratingen, Germany. Regenerator A regenerator consists of a brick checkerwork: bricks laid with spaces equivalent to a brick's width between them, so that air can flow relatively easily through the checkerwork. The idea is that as hot exhaust gases flow through the checkerwork, they give up heat to the bricks. The airflow is then reversed, so that the hot bricks heat up the incoming combustion air and fuel. For a glass-melting furnace, a regenerator sits on either side of the furnace, often forming an integral whole. For a blast furnace, the regenerators (commonly called Cowper stoves) sit separate to the furnace. A furnace needs no less than two stoves, but may have three. 
One of the stoves is 'on gas', receiving hot gases from the furnace top and heating the checkerwork inside, whilst the other is 'on blast', receiving cold air from the blowers, heating it and passing it to the blast furnace. See also Recuperator Economiser Regenerative heat exchanger Thermal wheel References External links History of the Ljungström Air Preheater Chemical equipment Mechanical engineering Power station technology Engineering thermodynamics
Air preheater
[ "Physics", "Chemistry", "Engineering" ]
2,213
[ "Applied and interdisciplinary physics", "Chemical equipment", "Engineering thermodynamics", "Thermodynamics", "nan", "Mechanical engineering" ]
2,704,993
https://en.wikipedia.org/wiki/Electronic%20speed%20control
An electronic speed control (ESC) is an electronic circuit that controls and regulates the speed of an electric motor. It may also provide reversing of the motor and dynamic braking. Miniature electronic speed controls are used in electrically powered radio controlled models. Full-size electric vehicles also have systems to control the speed of their drive motors. Function An electronic speed control follows a speed reference signal (derived from a throttle lever, joystick, or other manual input) and varies the switching rate of a network of field effect transistors (FETs). By adjusting the duty cycle or switching frequency of the transistors, the speed of the motor is changed. The rapid switching of the current flowing through the motor is what causes the motor itself to emit its characteristic high-pitched whine, especially noticeable at lower speeds. Different types of speed controls are required for brushed DC motors and brushless DC motors. A brushed motor can have its speed controlled by varying the voltage on its armature. (Industrially, motors with electromagnet field windings instead of permanent magnets can also have their speed controlled by adjusting the strength of the motor field current.) A brushless motor requires a different operating principle. The speed of the motor is varied by adjusting the timing of pulses of current delivered to the several windings of the motor. Brushless ESC systems basically create three-phase AC power, like a variable frequency drive, to run brushless motors. Brushless motors are popular with radio controlled airplane hobbyists because of their efficiency, power, longevity and light weight in comparison to traditional brushed motors. Brushless DC motor controllers are much more complicated than brushed motor controllers. The correct phase of the current fed to the motor varies with the motor rotation, which is to be taken into account by the ESC: Usually, back EMF from the motor windings is used to detect this rotation, but variations exist that use separate magnetic (Hall effect) sensors or optical detectors. Computer-programmable speed controls generally have user-specified options which allow setting low voltage cut-off limits, timing, acceleration, braking and direction of rotation. Reversing the motor's direction may also be accomplished by switching any two of the three leads from the ESC to the motor. Classification ESCs are normally rated according to maximum current, for example, 25 amperes (25 A). Generally the higher the rating, the larger and heavier the ESC tends to be, which is a factor when calculating mass and balance in airplanes. Many modern ESCs support nickel metal hydride, lithium ion polymer and lithium iron phosphate batteries with a range of input and cut-off voltages. The type of battery and number of cells connected is an important consideration when choosing a battery eliminator circuit (BEC), whether built into the controller or as a stand-alone unit. A higher number of cells connected will result in a reduced power rating and therefore a lower number of servos supported by an integrated BEC, if it uses a linear voltage regulator. A well designed BEC using a switching regulator should not have a similar limitation. ESC firmware Most modern ESCs contain a microcontroller interpreting the input signal and appropriately controlling the motor using a built-in program, or firmware. In some cases it is possible to change the factory built-in firmware for an alternate, publicly available, open source firmware. 
This is done generally to adapt the ESC to a particular application. Some ESCs are factory built with the capability of user upgradable firmware. Others require soldering to connect a programmer. ESC are usually sold as black boxes with proprietary firmware. As of 2014, a Swedish engineer named Benjamin Vedder started an open source ESC project later called VESC. The VESC project has since attracted attention for its advanced customization options and relatively reasonable build price compared to other high end ESCs. Vehicle applications Electric cars Large, high-current ESCs are used in electric cars, such as the Nissan Leaf, Tesla Roadster (2008), Model S, Model X, Model 3, and the Chevrolet Bolt. The energy draw is usually measured in kilowatts (the Nissan Leaf, for instance, uses a 160 kW motor that produces up to 340 Nm torque ). Most mass-produced electric cars feature ESCs that capture energy when the car coasts or brakes, using the motor as a generator and slowing the car down. The captured energy is used to charge the batteries and thus extend the driving range of the car (this is known as regenerative braking). In some vehicles, such as those produced by Tesla, this can be used to slow down so effectively that the car's conventional brakes are only needed at very low speeds (the motor braking effect diminishes as the speed is reduced). In others, such as the Nissan Leaf, there is only a slight "drag" effect when coasting, and the ESC modulates the energy capture in tandem with the conventional brakes to bring the car to a stop. ESCs used in mass-produced electric cars usually have reversing capability, allowing the motor to run in both directions. The car may only have one gear ratio, and the motor simply runs in the opposite direction to make the car go in reverse. Some electric cars with DC motors also have this feature, using an electrical switch to reverse the direction of the motor, but others run the motor in the same direction all the time and use a traditional manual or automatic transmission to reverse direction (usually this is easier, since the vehicle used for the conversion already has the transmission, and the electric motor is simply installed in place of the original engine). Electric bicycles and scooters A motor used in an electric bicycle application requires high initial torque and therefore uses Hall effect sensors for speed measurement. Electric bicycle controllers generally use brake application sensors and pedal rotation sensors, and provide potentiometer-adjustable motor speed, closed-loop speed control for precise speed regulation, protection logic for over-voltage, over-current, and thermal protection. Sometimes pedal torque sensors are used to enable motor assistance proportional to applied torque and sometimes support is provided for regenerative braking; however, infrequent braking and the low mass of bicycles limit recovered energy. An implementation is described in a whitepaper by Zilog on an ebike hub motor controller for a 200 W, 24 V brushless DC electric (BLDC) motor. P.A.S or PAS may appear within the list of components of electric conversion kits for bicycles, which implies Pedal Assistance Sensor or sometimes Pulse Pedal Assistance Sensor. Pulse usually relates to a magnet and sensor which measures the rotational velocity of the crank. Pedal pressure sensors under the feet are possible but not common. 
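As an illustration of the torque-proportional assistance mentioned above, the sketch below scales motor power with the rider's pedalling power and caps it at a rated value. The assist factor, cadence and 250 W cap are illustrative assumptions, not values taken from any particular controller or regulation.

```python
# A simplified sketch of proportional pedal assistance: motor assistance scales
# with the torque the rider applies, capped by an assumed rated motor power.
import math

def assist_power_w(rider_torque_nm: float, cadence_rpm: float,
                   assist_factor: float = 1.0, max_power_w: float = 250.0) -> float:
    """Motor power command proportional to the rider's pedalling power."""
    cadence_rad_s = cadence_rpm * 2 * math.pi / 60.0
    rider_power_w = max(0.0, rider_torque_nm) * cadence_rad_s
    return min(assist_factor * rider_power_w, max_power_w)

if __name__ == "__main__":
    for torque in (0.0, 10.0, 25.0, 40.0):
        print(f"rider torque {torque:4.0f} N*m -> assist {assist_power_w(torque, 70):6.1f} W")
```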
Remote control applications An ESC can be a stand-alone unit which plugs into the receiver's throttle control channel or incorporated into the receiver itself, as is the case in most toy-grade R/C vehicles. Some R/C manufacturers that install proprietary hobby-grade electronics in their entry-level vehicles, vessels or aircraft use onboard electronics that combine the two on a single circuit board. Electronic speed controls for model RC vehicles may incorporate a battery eliminator circuit to regulate voltage for the receiver, removing the need for separate receiver batteries. The regulator may be linear or switched mode. ESCs, in a broader sense, are PWM controllers for electric motors. The ESC generally accepts a nominal 50 Hz PWM servo input signal whose pulse width varies from 1 ms to 2 ms. When supplied with a 1 ms width pulse at 50 Hz, the ESC responds by turning off the motor attached to its output. A 1.5 ms pulse-width input signal drives the motor at approximately half-speed. When presented with 2.0 ms input signal, the motor runs at full speed. Cars ESCs designed for sport use in cars generally have reversing capability; newer sport controls can have the reversing ability overridden so that it can not be used in a race. Controls designed specifically for racing and even some sport controls have the added advantage of dynamic braking capability. The ESC forces the motor to act as a generator by placing an electrical load across the armature. This in turn makes the armature harder to turn, thus slowing or stopping the model. Some controllers add the benefit of regenerative braking. Helicopters ESCs designed for radio-control helicopters do not require a braking feature (since the one-way bearing would render it useless anyhow) nor do they require reverse direction (although it can be helpful since the motor wires can often be difficult to access and change once installed). Many high-end helicopter ESCs provide a "governor mode" which fixes the motor RPM to a set speed, greatly aiding CCPM-based flight. It is also used in quadcopters. Airplanes ESCs designed for radio-control airplanes usually contain a few safety features. If the power coming from the battery is insufficient to continue running the electric motor, the ESC will reduce or cut off power to the motor while allowing continued use of ailerons, rudder and elevator function. This allows the pilot to retain control of the airplane to glide or fly on low power to safety. Boats ESCs designed for boats are by necessity waterproof. The watertight structure is significantly different from that of non-marine type ESCs, with a more packed air trapping enclosure. Thus arises the need to cool the motor and ESC effectively to prevent rapid failure. Most marine-grade ESCs are cooled by circulated water run by the motor, or negative propeller vacuum near the drive shaft output. Like car ESCs, boat ESCs have braking and reverse capability. Quadcopters Electronic Speed Controllers (ESC) are an essential component of modern quadcopters (and all multirotors), offering high power, high frequency, high resolution 3-phase AC power to a motor in an extremely compact miniature package. These craft depend entirely on the variable speed of the motors driving the propellers. Fine speed control over a wide range in motor/prop speed gives all of the control necessary for a quadcopter (and all multirotors) to fly. Quadcopter ESCs usually can use a faster update rate compared to the standard 50 Hz signal used in most other RC applications. 
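To make the servo-style input convention concrete, here is a minimal sketch (in Python, not the language of any actual ESC firmware) of the mapping from pulse width to commanded throttle. The function name and the fixed 1–2 ms endpoints are illustrative assumptions; real controllers usually allow these endpoints to be calibrated.

```python
def pulse_width_to_throttle(pulse_ms, min_ms=1.0, max_ms=2.0):
    """Map a servo-style PWM pulse width (in milliseconds) to a throttle
    fraction between 0.0 (motor off) and 1.0 (full speed).

    Assumes the conventional 1-2 ms range described above; real ESCs
    typically allow these endpoints to be calibrated.
    """
    # Clamp out-of-range pulses so they do not over- or under-drive the motor.
    pulse_ms = max(min_ms, min(max_ms, pulse_ms))
    return (pulse_ms - min_ms) / (max_ms - min_ms)

# 1.0 ms -> 0.0 (off), 1.5 ms -> 0.5 (about half speed), 2.0 ms -> 1.0 (full)
for width in (1.0, 1.5, 2.0):
    print(f"{width:.1f} ms -> throttle {pulse_width_to_throttle(width):.2f}")
```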
A variety of ESC protocols beyond PWM are utilized for modern-day multirotors, including Oneshot42, Oneshot125, Multishot, and DShot. DShot is a digital protocol that offers certain advantages over classical analog control, such as higher resolution, CRC checksums, and lack of oscillator drift (removing the need for calibration). Modern-day ESC protocols can communicate at speeds of 37.5 kHz or greater, with a DSHOT2400 frame only taking 6.5 μs. Model trains Most electric model trains are powered by electricity supplied to the vehicle by the rails or by an overhead wire, so the electronic speed control does not have to be on board. This is, however, not the case for model trains with digital control systems that allow multiple trains to run on the same track at different speeds at the same time. See also Electronic control unit, automotive electronics control system JST connector, family of electric connectors Motor controller, used for electric motor performance coordination Throttle External links What is an Electronic Speed Controller & How Does an ESC Work. References Electric motor control Electronic engineering Power electronics
Electronic speed control
[ "Technology", "Engineering" ]
2,347
[ "Electrical engineering", "Electronic engineering", "Computer engineering", "Power electronics" ]
2,705,889
https://en.wikipedia.org/wiki/System%20requirements%20specification
A System Requirements Specification (SysRS), abbreviated SysRS to distinguish it from a software requirements specification (SRS), is a structured collection of information that embodies the requirements of a system. A business analyst (BA), sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers. See also Business analysis Business process reengineering Business requirements Concept of operations Data modeling Information technology Process modeling Requirement Requirements analysis Software requirements specification Systems analysis Use case References External links IEEE Guide for Developing System Requirements Specifications (IEEE Std 1233, 1999 Edition) IEEE Guide for Developing System Requirements Specifications (IEEE Std 1233, 1998 Edition) DAU description System/Subsystem Specification, Data Item Description (SSS-DID) System Requirements Specification for STEWARDS example SRS at USDA Software engineering Systems analysis Systems engineering
System requirements specification
[ "Technology", "Engineering" ]
235
[ "Software engineering", "Systems engineering", "Information technology", "Computer engineering" ]
2,705,942
https://en.wikipedia.org/wiki/F-ATPase
F-ATPase, also known as F-Type ATPase, is an ATPase/synthase found in bacterial plasma membranes, in mitochondrial inner membranes (in oxidative phosphorylation, where it is known as Complex V), and in chloroplast thylakoid membranes. It uses a proton gradient to drive ATP synthesis by allowing the passive flux of protons across the membrane down their electrochemical gradient and using the energy released by the transport reaction to release newly formed ATP from the active site of F-ATPase. Together with V-ATPases and A-ATPases, F-ATPases belong to the superfamily of related rotary ATPases. F-ATPase consists of two domains: the Fo domain, which is integral to the membrane and is composed of three different types of integral proteins, classified as a, b and c, and the F1 domain, which is peripheral (on the side of the membrane that the protons move into). F1 is composed of 5 polypeptide units α3β3γδε that bind to the surface of the Fo domain. F-ATPases usually work as ATP synthases rather than ATPases in cellular environments; that is to say, they usually make ATP from the proton gradient instead of working in the other direction, as V-ATPases typically do. They do occasionally run in reverse as ATPases in bacteria. Structure Fo-F1 particles are mainly formed of polypeptides. The F1 particle contains 5 types of polypeptides, with the composition ratio 3α:3β:1δ:1γ:1ε. The Fo has the 1a:2b:12c composition. Together they form a rotary motor. As protons bind to the subunits of the Fo domain, they cause part of it to rotate. This rotation is propagated by a 'camshaft' to the F1 domain. ADP and Pi (inorganic phosphate) bind spontaneously to the three β subunits of the F1 domain, so that ATP is released every time the complex goes through a 120° rotation (rotational catalysis). The Fo domain sits within the membrane, spanning the phospholipid bilayer, while the F1 domain extends into the cytosol of the cell to facilitate the use of newly synthesized ATP. The structure of the bovine mitochondrial F1-ATPase complexed with the inhibitor protein IF1 is commonly cited in the relevant literature. The enzyme takes part in many fundamental cellular metabolic activities, such as responses to acidosis and alkalosis and respiratory gas exchange. The o in Fo stands for oligomycin, because oligomycin is able to inhibit its function. N-ATPase N-ATPases are a group of F-type ATPases without a delta/OSCP subunit, found in bacteria and, via horizontal gene transfer, in a group of archaea. They transport sodium ions instead of protons and tend to hydrolyze ATP. They form a distinct group that is further apart from usual F-ATPases than A-ATPases are from V-ATPases. References Membrane proteins
F-ATPase
[ "Biology" ]
651
[ "Protein classification", "Membrane proteins" ]
2,707,059
https://en.wikipedia.org/wiki/Impervious%20surface
Impervious surfaces are mainly artificial structures—such as pavements (roads, sidewalks, driveways and parking lots, as well as industrial areas such as airports, ports and logistics and distribution centres, all of which use considerable paved areas) that are covered by water-resistant materials such as asphalt, concrete, brick, stone—and rooftops. Soils compacted by urban development are also highly impervious. Environmental effects Impervious surfaces are an environmental concern because their construction initiates a chain of events that modifies urban air and water resources: The pavement materials seal the soil surface, eliminating rainwater infiltration and natural groundwater recharge. This can cause urban flooding. An article in the Seattle Times states that "while urban areas cover only 3 percent of the U.S., it is estimated that their runoff is the primary source of pollution in 13 percent of rivers, 18 percent of lakes and 32 percent of estuaries." Some of these pollutants include excess nutrients from fertilizers; pathogens; pet waste; gasoline, motor oil and heavy metals from vehicles; high sediment loads from stream bed erosion and construction sites; and waste such as cigarette butts, 6-pack holders and plastic bags carried by surges of stormwater. In some cities, the flood waters get into combined sewers, causing them to overflow, flushing their raw sewage into streams. Polluted runoff can have many negative effects on fish, animals, plants and people. Impervious surfaces collect solar heat in their dense mass. When the heat is released, it raises air temperatures, producing urban "heat islands", and increasing energy consumption in buildings. The warm runoff from impervious surfaces reduces dissolved oxygen in stream water, making life difficult in aquatic ecosystems. Impervious pavements deprive tree roots of aeration, eliminating the "urban forest" and the canopy shade that would otherwise moderate urban climate. Because impervious surfaces displace living vegetation, they reduce ecological productivity, and interrupt atmospheric carbon cycling. The total coverage by impervious surfaces in an area, such as a municipality or a watershed, is usually expressed as a percentage of the total land area. The coverage increases with rising urbanization. In rural areas, impervious cover may only be one or two percent. In residential areas, coverage increases from about 10 percent in low-density subdivisions to over 50 percent in multifamily communities. In industrial and commercial areas, coverage rises above 70 percent. In regional shopping centers and dense urban areas, it is over 90 percent. In the contiguous 48 states of the US, urban impervious cover adds up to . Development adds annually. Typically, two-thirds of the cover is pavements and one-third is building roofs. Mitigation of environmental impacts Impervious surface coverage can be limited by restricting land use density (such as a number of homes per acre in a subdivision), but this approach causes land elsewhere (outside the subdivision) to be developed, to accommodate the growing population. (See urban sprawl.) Alternatively, urban structures can be built differently to make them function more like naturally pervious soils; examples of such alternative structures are porous pavements, green roofs and infiltration basins. Rainwater from impervious surfaces can be collected in rainwater tanks and used in place of main water. 
The island of Catalina located West of the Port of Long Beach has put extensive effort into capturing rainfall to minimize the cost of transportation from the mainland. Partly in response to recent criticism by municipalities, a number of concrete manufacturers such as CEMEX and Quikrete have begun producing permeable materials which partly mitigate the environmental impact of conventional impervious concrete. These new materials are composed of various combinations of naturally derived solids including fine to coarse-grained rocks and minerals, organic matter (including living organisms), ice, weathered rock and precipitates, liquids (primarily water solutions), and gases. The COVID-19 pandemic gave birth to proposals for radical change in the organisation of the city, being the drastic reduction of the presence of impermeable surfaces and the recovery of the permeability of the soil one of the elements of the Manifesto for the Reorganisation of the city, published in Barcelona by architecture and urban theorist Massimo Paolini and signed by 160 academics and 350 architects. Percentage imperviousness The percentage imperviousness, commonly referred to as PIMP in calculations, is an important factor when considering drainage of water. It is calculated by measuring the percentage of a catchment area which is made up of impervious surfaces such as roads, roofs and other paved surfaces. An estimation of PIMP is given by PIMP = 6.4J^0.5 where J is the number of dwellings per hectare (Butler and Davies 2000). For example, woodland has a PIMP value of 10%, whereas dense commercial areas have a PIMP value of 100%. This variable is used in the Flood Estimation Handbook. Homer and others (2007) indicate that about 76 percent of the conterminous United States is classified as having less than 1 percent impervious cover, 11 percent with impervious cover of 1 to 10 percent, 4 percent with an estimated impervious cover of 11 to 20 percent, 4.4 percent with an estimated impervious cover of 21 to 40 percent, and about 4.4 percent with an estimated impervious cover greater than 40 percent. Total impervious area The total impervious area (TIA), commonly referred to as impervious cover (IC) in calculations, can be expressed as a fraction (from zero to one) or a percentage. There are many methods for estimating TIA, including the use of the National Land Cover Data Set (NLCD) with a Geographic information system (GIS), land-use categories with categorical TIA estimates, a generalized percent developed area, and relations between population density and TIA. The U.S. NLCD impervious surface data set may provide a high-quality nationally consistent land cover data set in a GIS-ready format that can be used to estimate TIA value. The NLCD consistently quantifies the percent anthropogenic TIA for the NLCD at a 30-meter (a 900 m2) pixel resolution throughout the Nation. Within the data set, each pixel is quantified as having a TIA value that ranges from 0 to 100 percent. TIA estimates made with the NLCD impervious surface data set represent an aggregated TIA value for each pixel rather than a TIA value for an individual impervious feature. For example, a two lane road in a grassy field has a TIA value of 100 percent, but the pixel containing the road would have a TIA value of 26 percent. If the road (equally) straddles the boundary of two pixels, each pixel would have a TIA value of 13 percent. 
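As an illustration of the PIMP estimate quoted above (PIMP = 6.4J^0.5, after Butler and Davies 2000), the short sketch below evaluates the relation for a few housing densities. The example densities and the cap at 100 percent are illustrative assumptions rather than part of the published relation.

```python
import math

def pimp_from_dwelling_density(dwellings_per_hectare):
    """Estimate percentage imperviousness (PIMP) from housing density J
    using the relation PIMP = 6.4 * J**0.5 (Butler and Davies 2000).

    The result is capped at 100, since PIMP is a percentage of the
    catchment area (an assumption for very high densities).
    """
    pimp = 6.4 * math.sqrt(dwellings_per_hectare)
    return min(pimp, 100.0)

# Hypothetical dwelling densities, in dwellings per hectare
for j in (5, 20, 50, 100):
    print(f"J = {j:3d} dwellings/ha -> PIMP ≈ {pimp_from_dwelling_density(j):.1f}%")
```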
Data-quality analysis of the NLCD 2001 data set with manually delimited TIA sample areas indicates that the average error of predicted versus actual TIA may range from 8.8 to 11.4 percent. TIA estimates from land use are made by identifying land use categories for large blocks of land, summing the total area of each category, and multiplying each area by a characteristic TIA coefficient. Land use categories commonly are used to estimate TIA because areas with a common land use can be identified from field studies, from maps, from planning and zoning information, and from remote imagery. Land use coefficient methods commonly are used because planning and zoning maps that identify similar areas are increasingly available in GIS formats. Also, land use methods are selected to estimate potential effects of future development on TIA with planning maps that quantify projected changes in land use. There are substantial differences between actual and estimated TIA values across studies in the literature. Terms like low density and high density may mean different things in different areas. A residential density of one-half acre per house may be classified as high density in a rural area, medium density in a suburban area, and low density in an urban area. Granato (2010) provides a table with TIA values for different land-use categories from 30 studies in the literature. The percent developed area (PDA) is commonly used to estimate TIA manually by using maps. The Multi-Resolution Land Characteristics Consortium (MRLCC) defines a developed area as being covered by at least 30 percent constructed materials. Southard (1986) defined non-developed areas as areas of natural, agricultural, or scattered residential development. He developed a regression equation to predict TIA from percent developed area, using a logarithmic power function with data from 23 basins in Missouri. He noted that this method was advantageous because large basins could quickly be delineated and TIA estimated manually from available maps. Granato (2010) developed a regression equation by using data from 262 stream basins in 10 metropolitan areas of the conterminous United States with drainage areas ranging from 0.35 to 216 square miles and PDA values ranging from 0.16 to 99.06 percent. TIA also is estimated from population density data by estimating the population in an area of interest and using regression equations to calculate the associated TIA. Population-density data are used because nationally consistent census-block data are available in GIS formats for the entire United States. Population-density methods also can be used for predicting potential effects of future development. Although there may be substantial variation in the relations between population density and TIA, the accuracy of such estimates tends to improve with increasing drainage area as local variations are averaged out. Granato (2010) provides a table with 8 population-density relations from the literature and a new equation developed by using data from 6,255 stream basins in the USGS GAGESII dataset. Granato (2010) also provides four equations to estimate TIA from housing density, which is related to population density. TIA is also estimated from impervious maps extracted through remote sensing. Remote sensing has been extensively utilized to detect impervious surfaces.
Detection of impervious areas using deep learning in conjunction with satellite images has emerged as a transformative method in remote sensing and environmental monitoring. Deep learning algorithms, particularly convolutional neural networks (CNNs), have revolutionized our capacity to identify and quantify impervious surfaces from high-resolution satellite imagery. These models can automatically extract intricate spatial and spectral features, enabling them to discriminate between impervious and non-impervious surfaces with exceptional accuracy. Natural impervious area Natural impervious areas are defined herein as land covers that can contribute a substantial amount of surface runoff during small and large storms, but commonly are classified as pervious areas. These areas are not commonly considered as an important source of stormflow in most highway and urban runoff-quality studies, but may produce a substantial amount of stormflow. These natural impervious areas may include open water, wetlands, rock outcrops, barren ground (natural soils with low imperviousness), and areas of compacted soils. Natural impervious areas, depending on their nature and antecedent conditions, may produce stormflow from infiltration excess overland flow, saturation overland flow, or direct precipitation. The effects of natural impervious areas on runoff generation are expected to be more important in areas with low TIA than highly developed areas. The NLCD provides land-cover statistics that can be used as a qualitative measure of the prevalence of different land covers that may act as natural impervious areas. Open water may act as a natural impervious area if direct precipitation is routed through the channel network and arrives as stormflow at the site of interest. Wetlands may act as a natural impervious area during storms when groundwater discharge and saturation overland flow are a substantial proportion of stormflow. Barren ground in riparian areas may act as a natural impervious area during storms because these areas are a source of infiltration excess overland flows. Seemingly pervious areas that have been affected by development activities may act as impervious areas and generate infiltration excess overland flows. These stormflows may occur even during storms that do not meet precipitation volume or intensity criteria to produce runoff based on nominal infiltration rates. Developed pervious areas may behave like impervious areas because development and subsequent use tends to compact soils and reduce infiltration rates. For example, Felton and Lull (1963) measured infiltration rates for forest soils and lawns to indicate a potential 80 percent reduction in infiltration as a result of development activities. Similarly, Taylor (1982) did infiltrometer tests in areas before and after suburban development and noted that topsoil alteration and compaction by construction activities reduced infiltration rates by more than 77 percent. See also Bioswale Hardscape Hydraulic conductivity Hydrophobic soil Permeability (fluid) Soil crust Soil sealing Sustainable urban drainage systems Urban flooding References Bibliography Butler, D. and Davies, J.W., 2000, Urban Drainage, London: Spon. Ferguson, Bruce K., 2005, Porous Pavements, Boca Raton: CRC Press. Frazer, Lance, 2005, Paving Paradise: The Peril of Impervious Surfaces, Environmental Health Perspectives, Vol. 113, No. 7, pg. A457-A462. U.S. Environmental Protection Agency. Washington, DC. "After the Storm." Document No. EPA 833-B-03-002. January 2003. 
This article incorporates public domain material from websites or documents of the United States Geological Survey and the Federal Highway Administration. External links YouTube presentation: The total impervious area (TIA) affects the volume and timing of runoff Environmental soil science Hydrology Hydrology and urban planning Urban studies and planning terminology Water pollution Soil degradation
Impervious surface
[ "Chemistry", "Engineering", "Environmental_science" ]
2,822
[ "Hydrology", "Water pollution", "Soil degradation", "Hydrology and urban planning", "Environmental engineering", "Environmental soil science" ]
2,707,511
https://en.wikipedia.org/wiki/Condorcet%27s%20jury%20theorem
Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions. The assumptions of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2: If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases. On the other hand, if p is less than 1/2 (each voter is more likely to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter. Since Condorcet, many other researchers have proved various other jury theorems, relaxing some or all of Condorcet's assumptions. Proofs Proof 1: Calculating the probability that two additional voters change the outcome To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by adding a single voter. Now suppose we start with n voters, and let m of these voters vote correctly. Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases: m was one vote too small to get a majority of the n votes, but both new voters voted correctly. m was just equal to a majority of the n votes, but both new voters voted incorrectly. The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a difference. So we only care what happens when a single vote (among the first n) separates a correct from an incorrect majority. Restricting our attention to this case, we can imagine that the first n − 1 votes cancel out and that the deciding vote is cast by the n-th voter. In this case the probability of getting a correct majority is just p. Now suppose we send in the two extra voters. The probability that they change an incorrect majority to a correct majority is (1 − p)p², while the probability that they change a correct majority to an incorrect majority is p(1 − p)². The first of these probabilities is greater than the second if and only if p > 1/2, proving the theorem. Proof 2: Calculating the probability that the decision is correct This proof is direct; it just sums up the probabilities of the majorities. Each term of the sum multiplies the number of combinations of a majority by the probability of that majority. Each majority is counted using a combination, n items taken k at a time, where n is the jury size, and k is the size of the majority. Probabilities range from 0 (= the vote is always wrong) to 1 (= always right). Each person decides independently, so the probabilities of their decisions multiply. The probability of each correct decision is p. The probability of an incorrect decision, q, is the complement of p, i.e. 1 − p. The power notation p^x is a shorthand for x multiplications of p. Committee or jury accuracies can be easily estimated by using this approach in computer spreadsheets or programs.
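For instance, one possible spreadsheet-free implementation of the summation in Proof 2 is sketched below; the function name is arbitrary, and odd n is assumed so that no tie-breaking is needed.

```python
from math import comb

def majority_correct_probability(n, p):
    """Probability that a simple majority of n independent voters is correct,
    when each voter is correct with probability p (n assumed odd, so no ties).
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Reproduces the worked example below: three voters with p = 0.8 give 0.896,
# and the probability rises toward 1 as the jury grows (for p > 1/2).
print(majority_correct_probability(3, 0.8))   # 0.896
print(majority_correct_probability(11, 0.8))  # ~0.988
print(majority_correct_probability(3, 0.4))   # ~0.352 (worse than one voter)
```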
As an example, let us take the simplest case of n = 3, p = 0.8. We need to show that 3 people have a higher than 0.8 chance of being right. Indeed: 0.8 × 0.8 × 0.8 + 0.8 × 0.8 × 0.2 + 0.8 × 0.2 × 0.8 + 0.2 × 0.8 × 0.8 = 0.896. Asymptotics Asymptotics is "The Calculus of Approximations". It is used to solve hard problems that cannot be solved exactly and to provide simpler forms of complicated results, from early results like Taylor's and Stirling's formulas to the prime number theorem. An important topic in the study of asymptotics is the asymptotic distribution, which is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. The probability of a correct majority decision P(n, p), when the individual probability p is close to 1/2, grows linearly in terms of p − 1/2. For n voters, each having probability p of deciding correctly, and for odd n (where there are no possible ties), the leading term of the expansion is proportional to (p − 1/2)√n, and the asymptotic approximation in terms of n is very accurate. The expansion contains only odd powers of p − 1/2. In simple terms, this says that when the decision is difficult (p close to 1/2), the gain from having n voters grows proportionally to √n. The theorem in other disciplines The Condorcet jury theorem has recently been used to conceptualize score integration when several physician readers (radiologists, endoscopists, etc.) independently evaluate images for disease activity. This task arises in central reading performed during clinical trials and has similarities to voting. According to the authors, the application of the theorem can translate individual reader scores into a final score in a fashion that is both mathematically sound (by avoiding averaging of ordinal data), mathematically tractable for further analysis, and consistent with the scoring task at hand (based on decisions about the presence or absence of features, a subjective classification task). The Condorcet jury theorem is also used in ensemble learning in the field of machine learning. An ensemble method combines the predictions of many individual classifiers by majority voting. Assuming that each of the individual classifiers predicts with slightly greater than 50% accuracy and that their predictions are independent, the accuracy of the ensemble's majority vote will be far greater than the individual predictive scores. Applicability to democratic processes Many political theorists and philosophers use Condorcet's jury theorem (CJT) to defend democracy; see Brennan and references therein. Nevertheless, it is an empirical question whether the theorem holds in real life or not. Note that the CJT is a double-edged sword: it can either prove that majority rule is an (almost) perfect mechanism to aggregate information, when p > 1/2, or an (almost) perfect disaster, when p < 1/2. A disaster would mean that the wrong option is chosen systematically. Some authors have argued that we are in the latter scenario. For instance, Bryan Caplan has extensively argued that voters' knowledge is systematically biased toward (probably) wrong options. In the CJT setup, this could be interpreted as evidence for p < 1/2. Recently, another approach to study the applicability of the CJT was taken. Instead of considering the homogeneous case, each voter is allowed to have their own probability p_i, possibly different from that of other voters.
This case was previously studied by Daniel Berend and Jacob Paroush and includes the classical theorem of Condorcet (when every voter has the same probability p) and other results, like the Miracle of Aggregation (when p_i = 1/2 for most voters and p_i is close to 1 for a small proportion of them). Then, following a Bayesian approach, the prior probability (in this case, a priori) of the thesis predicted by the theorem is estimated. That is, if we choose an arbitrary sequence of voters (i.e., a sequence of probabilities p_1, p_2, ...), will the thesis of the CJT hold? The answer is no. More precisely, if a random sequence of probabilities is taken following an unbiased distribution that favors neither competence (p_i > 1/2) nor incompetence (p_i < 1/2), then, almost surely, the thesis predicted by the theorem will not hold. With this new approach, proponents of the CJT should present strong evidence of competence to overcome the low prior probability. That is, it is not only the case that there is evidence against competence (posterior probability), but also that we cannot expect the CJT to hold in the absence of any evidence (prior probability). Further reading Condorcet method Condorcet paradox Jury theorem Law of Large Numbers Wisdom of the crowd Notes Probability theorems Voting theory
Condorcet's jury theorem
[ "Mathematics" ]
1,730
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
2,709,092
https://en.wikipedia.org/wiki/Mashup%20%28web%20application%20hybrid%29
A mashup (computer industry jargon), in web development, is a web page or web application that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup. The term implies easy, fast integration, frequently using open application programming interfaces (open API) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data. The term mashup originally comes from creating something by combining elements from two or more sources. The main characteristics of a mashup are combination, visualization, and aggregation. It is important to make existing data more useful, for personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online. In the past years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions the SOA way, instead of building them by themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end-users. They generally do not require programming skills and rather support visual wiring of GUI widgets, services and components together. Therefore, these tools contribute to a new vision of the Web, where users are able to contribute. The term "mashup" is not formally defined by any standard-setting body. History The broader context of the history of the Web provides a background for the development of mashups. Under the Web 1.0 model, organizations stored consumer data on portals and updated them regularly. They controlled all the consumer data, and the consumer had to use their products and services to get the information. The advent of Web 2.0 introduced Web standards that were commonly and widely adopted across traditional competitors and which unlocked the consumer data. At the same time, mashups emerged, allowing mixing and matching competitors' APIs to develop new services. The first mashups used mapping services or photo services to combine these services with data of any kind and therefore to produce visualizations of data. In the beginning, most mashups were consumer-based, but recently the mashup is to be seen as an interesting concept useful also to enterprises. Business mashups can combine existing internal data with external services to generate new views on the data. There was also the free Yahoo! Pipes to build mashups for free using the Yahoo! Query Language. Types of mashup There are many types of mashup, such as business mashups, consumer mashups, and data mashups. The most common type of mashup is the consumer mashup, aimed at the general public. Business (or enterprise) mashups define applications that combine their own resources, application and data, with other external Web services. They focus data into a single presentation and allow for collaborative action among businesses and developers. This works well for an agile development project, which requires collaboration between the developers and customer (or customer proxy, typically a product manager) for defining and implementing the business requirements. 
Enterprise mashups are secure, visually rich Web applications that expose actionable information from diverse internal and external information sources. Consumer mashups combine data from multiple public sources in the browser and organize it through a simple browser user interface. (e.g.: Wikipediavision combines Google Map and a Wikipedia API) Data mashups, opposite to the consumer mashups, combine similar types of media and information from multiple sources into a single representation. The combination of all these resources create a new and distinct Web service that was not originally provided by either source. By API type Mashups can also be categorized by the basic API type they use but any of these can be combined with each other or embedded into other applications. Data types Indexed data (documents, weblogs, images, videos, shopping articles, jobs ...) used by metasearch engines Cartographic and geographic data: geolocation software, geovisualization Feeds, podcasts: news aggregators Functions Data converters: language translators, speech processing, URL shorteners... Communication: email, instant messaging, notification... Visual data rendering: information visualization, diagrams Security related: electronic payment systems, ID identification... Editors Mashup enabler In technology, a mashup enabler is a tool for transforming incompatible IT resources into a form that allows them to be easily combined, in order to create a mashup. Mashup enablers allow powerful techniques and tools (such as mashup platforms) for combining data and services to be applied to new kinds of resources. An example of a mashup enabler is a tool for creating an RSS feed from a spreadsheet (which cannot easily be used to create a mashup). Many mashup editors include mashup enablers, for example, Presto Mashup Connectors, Convertigo Web Integrator or Caspio Bridge. Mashup enablers have also been described as "the service and tool providers, [sic] that make mashups possible". History Early mashups were developed manually by enthusiastic programmers. However, as mashups became more popular, companies began creating platforms for building mashups, which allow designers to visually construct mashups by connecting together mashup components. Mashup editors have greatly simplified the creation of mashups, significantly increasing the productivity of mashup developers and even opening mashup development to end-users and non-IT experts. Standard components and connectors enable designers to combine mashup resources in all sorts of complex ways with ease. Mashup platforms, however, have done little to broaden the scope of resources accessible by mashups and have not freed mashups from their reliance on well-structured data and open libraries (RSS feeds and public APIs). Mashup enablers evolved to address this problem, providing the ability to convert other kinds of data and services into mashable resources. Web resources Of course, not all valuable data is located within organizations. In fact, the most valuable information for business intelligence and decision support is often external to the organization. With the emergence of rich web applications and online Web portals, a wide range of business-critical processes (such as ordering) are becoming available online. Unfortunately, very few of these data sources syndicate content in RSS format and very few of these services provide publicly accessible APIs. Mashup editors therefore solve this problem by providing enablers or connectors. 
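As a concrete illustration of the data/consumer mashup idea described above, the following sketch combines two JSON data sources on a server and emits a merged view. It is only a schematic example: the endpoint URLs, field names, and join key are hypothetical placeholders rather than real public APIs, and a real mashup would often do this aggregation in the browser or behind a mashup platform.

```python
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical endpoints standing in for two independent data sources,
# e.g. a list of library branches and a separate photo service.
BRANCHES_URL = "https://example.org/api/branches.json"
PHOTOS_URL = "https://example.org/api/photos.json"

def fetch_json(url):
    """Fetch a JSON document; a real mashup would add caching and error handling."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

def build_mashup():
    """Join the two sources on a shared 'branch_id' key (a hypothetical field)
    to produce a single combined record per branch."""
    branches = {item["branch_id"]: item for item in fetch_json(BRANCHES_URL)}
    combined = []
    for photo in fetch_json(PHOTOS_URL):
        branch = branches.get(photo.get("branch_id"))
        if branch:
            combined.append({**branch, "photo_url": photo.get("url")})
    return combined

if __name__ == "__main__":
    for record in build_mashup():
        print(record)
```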
Mashups versus portals Mashups and portals are both content aggregation technologies. Portals are an older technology designed as an extension to traditional dynamic Web applications, in which the process of converting data content into marked-up Web pages is split into two phases: generation of markup "fragments" and aggregation of the fragments into pages. Each markup fragment is generated by a "portlet", and the portal combines them into a single Web page. Portlets may be hosted locally on the portal server or remotely on a separate server. Portal technology defines a complete event model covering reads and updates. A request for an aggregate page on a portal is translated into individual read operations on all the portlets that form the page ("render" operations on local, JSR 168 portlets or "getMarkup" operations on remote, WSRP portlets). If a submit button is pressed on any portlet on a portal page, it is translated into an update operation on that portlet alone (processAction on a local portlet or performBlockingInteraction on a remote, WSRP portlet). The update is then immediately followed by a read on all portlets on the page. Portal technology is about server-side, presentation-tier aggregation. It cannot be used to drive more robust forms of application integration such as two-phase commit. Mashups differ from portals in the following respects: The portal model has been around longer and has had greater investment and product research. Portal technology is therefore more standardized and mature. Over time, increasing maturity and standardization of mashup technology will likely make it more popular than portal technology because it is more closely associated with Web 2.0 and lately Service-oriented Architectures (SOA). New versions of portal products are expected to eventually add mashup support while still supporting legacy portlet applications. Mashup technologies, in contrast, are not expected to provide support for portal standards. Business mashups Mashup uses are expanding in the business environment. Business mashups are useful for integrating business and data services, as business mashups technologies provide the ability to develop new integrated services quickly, to combine internal services with external or personalized information, and to make these services tangible to the business user through user-friendly Web browser interfaces. Business mashups differ from consumer mashups in the level of integration with business computing environments, security and access control features, governance, and the sophistication of the programming tools (mashup editors) used. Another difference between business mashups and consumer mashups is a growing trend of using business mashups in commercial software as a service (SaaS) offering. Many of the providers of business mashups technologies have added SOA features. Architectural aspects of mashups The architecture of a mashup is divided into three layers: Presentation / user interaction: this is the user interface of mashups. The technologies used are HTML/XHTML, CSS, JavaScript, Asynchronous JavaScript and Xml (Ajax). Web Services: the product's functionality can be accessed using API services. The technologies used are XMLHTTPRequest, XML-RPC, JSON-RPC, SOAP, REST. Data: handling the data like sending, storing and receiving. The technologies used are XML, JSON, KML. Architecturally, there are two styles of mashups: Web-based and server-based. 
Whereas Web-based mashups typically use the user's web browser to combine and reformat the data, server-based mashups analyze and reformat the data on a remote server and transmit the data to the user's browser in its final form. Mashups appear to be a variation of a façade pattern. That is: a software engineering design pattern that provides a simplified interface to a larger body of code (in this case the code to aggregate the different feeds with different APIs). Mashups can be used with software provided as a service (SaaS). After several years of standards development, mainstream businesses are starting to adopt service-oriented architectures (SOA) to integrate disparate data by making them available as discrete Web services. Web services provide open, standardized protocols to provide a unified means of accessing information from a diverse set of platforms (operating systems, programming languages, applications). These Web services can be reused to provide completely new services and applications within and across organizations, providing business flexibility. See also Mashup (culture) Mashup (music) Open Mashup Alliance Open API Yahoo! Pipes Webhook Web portal Web scraping References Further reading Ahmet Soylu, Felix Mödritscher, Fridolin Wild, Patrick De Causmaecker, Piet Desmet. 2012 . “Mashups by Orchestration and Widget-based Personal Environments: Key Challenges, Solution Strategies, and an Application.” Program: Electronic Library and Information Systems 46 (4): 383–428. Endres-Niggemeyer, Brigitte ed. 2013. Semantic Mashups. Intelligent Reuse of Web Resources. Springer. (Print) Software architecture Web 2.0 Web 2.0 neologisms Web development
Mashup (web application hybrid)
[ "Engineering" ]
2,463
[ "Software engineering", "Web development" ]
2,709,093
https://en.wikipedia.org/wiki/Carbonyldiimidazole
1,1'-Carbonyldiimidazole (CDI) is an organic compound with the molecular formula (C3H3N2)2CO. It is a white crystalline solid. It is often used for the coupling of amino acids for peptide synthesis and as a reagent in organic synthesis. Preparation CDI can be prepared straightforwardly by the reaction of phosgene with four equivalents of imidazole under anhydrous conditions. Removal of the side product, imidazolium chloride, and solvent results in the crystalline product in ~90% yield. In this conversion, the imidazole serves both as the nucleophile and the base. An alternative precursor, 1-(trimethylsilyl)imidazole, requires more preparative effort but has the advantage that the coproduct trimethylsilyl chloride is volatile. CDI hydrolyzes readily to give back imidazole, releasing carbon dioxide. The purity of CDI can be determined by the amount of CO2 that is formed upon hydrolysis. Use in synthesis CDI is mainly employed to convert amines into amides, carbamates, and ureas. It can also be used to convert alcohols into esters. Acid derivatives The formation of amides is promoted by CDI. Although the reactivity of CDI is less than that of acid chlorides, it is more easily handled and avoids the use of thionyl chloride in acid chloride formation, which can cause side reactions. An early application of this type of reaction was noted in the formation of peptide bonds (with CO2 formation as a driving force). A mechanism has been proposed for the reaction between a carboxylic acid and CDI. In the realm of peptide synthesis, this product may be treated with an amine such as that found on an amino acid to release the imidazole group and couple the peptides. The side products, carbon dioxide and imidazole, are relatively innocuous. Racemization of the amino acids also tends to be minimal, reflecting the mild reaction conditions. CDI can also be used for esterification, although alcoholysis requires heat or the presence of a potent nucleophile such as sodium ethoxide, or other strong bases like NaH. This reaction has generally good yield and wide scope, although forming the ester from tertiary alcohols when the acid reagent has a relatively acidic α-proton is troublesome, since C-C condensations can occur, though this itself may be a desirable reaction. A similar reaction involving thiols and selenols can yield the corresponding esters. The alcohol reaction can also be used to form glycosidic bonds. Similarly, an acid can be used in the place of an alcohol to form the anhydride, although dicyclohexylcarbodiimide is a more typical reagent. The equilibrium can be shifted in favor of the anhydride by utilizing an acid in a 2:1 ratio that forms an insoluble salt with the imidazole. Typical acids are trifluoro- and trichloroacetic acids. Symmetric anhydrides can thus be formed by replacing this trifluoro- or trichloroacetyl group with the acid that was used to form the original reagent. Another related reaction is the reaction of formic acid with CDI to form the formylated imidazole. This reagent is a good formylating agent and can regenerate the unsubstituted imidazole (with formation of carbon monoxide) upon heating. Yet another reaction involves the acylation of triphenylalkylidenephosphoranes. These can undergo the Wittig reaction to form α,β-unsaturated ketones or aldehydes. The reagent can even undergo reaction with peroxide to form the peroxycarboxylic acid, which can react further to form diacyl peroxides. The acylimidazole intermediate is also reduced by LiAlH4 to form aldehydes from the carboxylic acid (rather than amines or alcohols).
The reagent can also be reacted with Grignard reagents to form ketones. A C-C acylation reaction can occur with a malonic ester-type compound, in the following scheme useful for syntheses of macrolide antibiotics. Other reactions The N-phenylimino derivative of CDI can be formed in a Wittig-like reaction with triphenylphosphine phenylimide. CDI can act as a carbonyl equivalent in the formation of tetronic acids or pulvinones from hydroxyketones and diketones in basic conditions. An alcohol treated with at least 3 equivalents of an activated halide (such as allyl bromide or iodomethane) and CDI yields the corresponding halide with good yield. Bromination and iodination work best, though this reaction does not preserve the stereochemistry of the alcohol. In a similar context, CDI is often used in dehydration reactions. As CDI is an equivalent of phosgene, it can be used in similar reaction, however, with increased selectivity: it allows the synthesis of asymmetric bis alkyl carbonates Safety The safety characteristics of CDI have been investigated as part of a broader evaluation of amide bond forming reagents. CDI was demonstrated to exhibit dermal corrosion and eye irritation. The sensitization potential of CDI was shown to be low compared to other common amide bond forming agents (non-sensitizing at 1% in LLNA testing according to OECD 429). Thermal hazard analysis by differential scanning calorimetry (DSC) shows CDI poses minimal explosion risks. See also Thiocarbonyldiimidazole (TCDI) the thiourea analogue References Reagents for organic chemistry Imidazoles
Carbonyldiimidazole
[ "Chemistry" ]
1,246
[ "Reagents for organic chemistry" ]
2,709,792
https://en.wikipedia.org/wiki/Copper%28I%29%20iodide
Copper(I) iodide is an inorganic compound with the chemical formula . It is also known as cuprous iodide. It is useful in a variety of applications ranging from organic synthesis to cloud seeding. Copper(I) iodide is white, but samples often appear tan or even, when found in nature as rare mineral marshite, reddish brown, but such color is due to the presence of impurities. It is common for samples of iodide-containing compounds to become discolored due to the facile aerobic oxidation of the iodide anion to molecular iodine. Structure Copper(I) iodide, like most binary (containing only two elements) metal halides, is an inorganic polymer. It has a rich phase diagram, meaning that it exists in several crystalline forms. It adopts a zinc blende structure below 390 °C (γ-CuI), a wurtzite structure between 390 and 440 °C (β-CuI), and a rock salt structure above 440 °C (α-CuI). The ions are tetrahedrally coordinated when in the zinc blende or the wurtzite structure, with a Cu-I distance of 2.338 Å. Copper(I) bromide and copper(I) chloride also transform from the zinc blende structure to the wurtzite structure at 405 and 435 °C, respectively. Therefore, the longer the copper–halide bond length, the lower the temperature needs to be to change the structure from the zinc blende structure to the wurtzite structure. The interatomic distances in copper(I) bromide and copper(I) chloride are 2.173 and 2.051 Å, respectively. Consistent with its covalency, CuI is a p-type semiconductor. Preparation Copper(I) iodide can be prepared by heating iodine and copper in concentrated hydroiodic acid. In the laboratory however, copper(I) iodide is prepared by simply mixing an aqueous solution of potassium iodide and a soluble copper(II) salt such as copper(II) sulfate. Reactions Copper(I) iodide reacts with mercury vapors to form brown copper(I) tetraiodomercurate(II): This reaction can be used for the detection of mercury since the white CuI to brown color change is dramatic. Copper(I) iodide is used in the synthesis of Cu(I) clusters such as . Copper(I) iodide dissolves in acetonitrile, yielding diverse complexes. Upon crystallization, molecular or polymeric compounds can be isolated. Dissolution is also observed when a solution of the appropriate complexing agent in acetone or chloroform is used. For example, thiourea and its derivatives can be used. Solids that crystallize out of those solutions are composed of hybrid inorganic chains. Uses In combination with 1,2- or 1,3-diamine ligands, CuI catalyzes the conversion of aryl, heteroaryl, and vinyl bromides into the corresponding iodides. NaI is the typical iodide source and dioxane is a typical solvent (see aromatic Finkelstein reaction). CuI is used as a co-catalyst with palladium catalyst in the Sonogashira coupling. CuI is used in cloud seeding, altering the amount or type of precipitation of a cloud, or their structure by dispersing substances into the atmosphere which increase water's ability to form droplets or crystals. CuI provides a sphere for moisture in the cloud to condense around, causing precipitation to increase and cloud density to decrease. The structural properties of CuI allow CuI to stabilize heat in nylon in commercial and residential carpet industries, automotive engine accessories, and other markets where durability and weight are a factor. CuI is used as a source of dietary iodine in table salt and animal feed. 
References Further reading External links Chemicalland properties database National Pollutant Inventory – Copper and compounds fact sheet Copper(I) compounds Iodides Metal halides Zincblende crystal structure Wurtzite structure type Semiconductor materials
Copper(I) iodide
[ "Chemistry" ]
865
[ "Semiconductor materials", "Inorganic compounds", "Metal halides", "Salts" ]
8,498,996
https://en.wikipedia.org/wiki/R-matrix
The term R-matrix has several meanings, depending on the field of study. The term R-matrix is used in connection with the Yang–Baxter equation, first introduced in the field of statistical mechanics in the works of J. B. McGuire in 1964 and C. N. Yang in 1967, and in the group algebra of the symmetric group in the work of A. A. Jucys in 1966. The classical R-matrix arises in the definition of the classical Yang–Baxter equation. In a quasitriangular Hopf algebra, the R-matrix is a solution of the Yang–Baxter equation. The numerical modeling of diffraction gratings in optical science can be performed using the R-matrix propagation algorithm. R-matrix method in quantum mechanics There is a method in computational quantum mechanics for studying scattering known as the R-matrix method. This method was originally formulated for studying resonances in nuclear scattering by Wigner and Eisenbud. Using that work as a basis, an R-matrix method was developed for electron, positron and photon scattering by atoms. This approach was later adapted for electron, positron and photon scattering by molecules. The R-matrix method is used in the UKRmol and UKRmol+ code suites. The user-friendly software Quantemol Electron Collisions (Quantemol-EC) and its predecessor Quantemol-N are based on UKRmol/UKRmol+ and employ the MOLPRO package for electron configuration calculations. See also UK Molecular R-matrix Codes References Matrices
R-matrix
[ "Physics", "Mathematics" ]
312
[ "Matrices (mathematics)", "Mathematical objects", "Nuclear physics" ]
8,499,571
https://en.wikipedia.org/wiki/Negative%20probability
The probability of the outcome of an experiment is never negative, although a quasiprobability distribution allows a negative probability, or quasiprobability, for some events. These distributions may apply to unobservable events or conditional probabilities. Physics and mathematics In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" in which he introduced the concept of negative energies and negative probabilities. The idea of negative probabilities later received increased attention in physics and particularly in quantum mechanics. Richard Feynman argued that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is valid. Similarly, he argued that negative probabilities, as well as probabilities above unity, could possibly be useful in probability calculations. Negative probabilities have since been suggested as a way to resolve several problems and paradoxes. Half-coins provide simple examples of negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely. Half-coins have infinitely many sides numbered with 0, 1, 2, ..., and the positive even numbers are taken with negative probabilities. Two half-coins make a complete coin in the sense that if we flip two half-coins then the sum of the outcomes is 0 or 1 with probability 1/2 each, as if we simply flipped a fair coin. In "Convolution quotients of nonnegative definite functions" and "Algebraic Probability Theory", Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed or quasi distribution where some of the probabilities are negative then one can always find two random variables, Y and Z, with ordinary (not signed / not quasi) distributions such that X, Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is Z, then the negative regions of the distribution of X are masked / shielded by the error Y. Another example, known as the Wigner distribution in phase space, introduced by Eugene Wigner in 1932 to study quantum corrections, often leads to negative probabilities. For this reason, it later became better known as the Wigner quasiprobability distribution. In 1945, M. S. Bartlett worked out the mathematical and logical consistency of such negative valuedness. The Wigner distribution function is routinely used in physics nowadays, and provides the cornerstone of phase-space quantization. Its negative features are an asset to the formalism, and often indicate quantum interference. The negative regions of the distribution are shielded from direct observation by the quantum uncertainty principle: typically, the moments of such a non-positive-semidefinite quasiprobability distribution are highly constrained, and prevent direct measurability of the negative regions of the distribution. Nevertheless, these regions contribute negatively and crucially to the expected values of observable quantities computed through such distributions. Engineering The concept of negative probabilities has also been proposed for reliable facility location models where facilities are subject to negatively correlated disruption risks when facility locations, customer allocation, and backup service plans are determined simultaneously. Li et al.
proposed a virtual station structure that transforms a facility network with positively correlated disruptions into an equivalent one with added virtual supporting stations, where these virtual stations are subject to independent disruptions. This approach reduces a problem with correlated disruptions to one without. Xie et al. later showed how negatively correlated disruptions can also be addressed by the same modeling framework, except that a virtual supporting station may now be disrupted with a “failure propensity” that need not lie in the conventional range [0,1]. This finding paves the way for using compact mixed-integer mathematical programs to optimally design reliable locations of service facilities under site-dependent and positive/negative/mixed facility disruption correlations. The proposed “propensity” concept in Xie et al. turns out to be what Feynman and others referred to as “quasi-probability.” Note that when a quasi-probability is larger than 1, then 1 minus this value gives a negative probability. In the reliable facility location context, the truly physically verifiable observation is the set of facility disruption states (whose probabilities are ensured to be within the conventional range [0,1]), but there is no direct information on the station disruption states or their corresponding probabilities. Hence the disruption "probabilities" of the stations, interpreted as “probabilities of imagined intermediary states,” could exceed unity, and thus are referred to as quasi-probabilities. Finance Negative probabilities have more recently been applied to mathematical finance. In quantitative finance most probabilities are not real probabilities but pseudo probabilities, often so-called risk-neutral probabilities: theoretical "probabilities" defined under a series of assumptions that help simplify calculations, and which can be negative in certain cases, as first pointed out by Espen Gaarder Haug in 2004. A rigorous mathematical definition of negative probabilities and their properties was derived by Mark Burgin and Gunter Meissner (2011). The authors also show how negative probabilities can be applied to financial option pricing. Machine learning and signal processing Some problems in machine learning use graph- or hypergraph-based formulations with edges assigned weights, most commonly positive. A positive weight from one vertex to another can be interpreted in a random walk as the probability of getting from the former vertex to the latter. In a Markov chain, the probability of each event depends only on the state attained in the previous event. Some problems in machine learning, e.g., correlation clustering, naturally deal with a signed graph where the edge weight indicates whether two nodes are similar (correlated with a positive edge weight) or dissimilar (anticorrelated with a negative edge weight). Treating a graph weight as the probability that two vertices are related is here replaced by a correlation, which can legitimately be either negative or positive. 
Positive and negative graph weights are uncontroversial if interpreted as correlations rather than probabilities, but they raise similar issues, e.g., challenges for the normalization of the graph Laplacian and for the explainability of spectral clustering in signed graph partitioning. Similarly, in spectral graph theory, the eigenvalues of the Laplacian matrix represent frequencies and the eigenvectors form what is known as a graph Fourier basis, substituting for the classical Fourier transform in graph-based signal processing. In applications to imaging, the graph Laplacian is formulated analogously to the anisotropic diffusion operator, where a Gaussian-smoothed image is interpreted as a single time slice of the solution to the heat equation that has the original image as its initial condition. If the graph weight were negative, that would correspond to a negative conductivity in the heat equation, stimulating heat concentration at the graph vertices connected by the edge, rather than the normal heat dissipation. While negative heat conductivity is non-physical, this effect is useful for edge-enhancing image smoothing, e.g., resulting in sharpened corners of one-dimensional signals, when used in graph-based edge-preserving smoothing. See also Existence of states of negative norm (or fields with the wrong sign of the kinetic term, such as Pauli–Villars ghosts) allows the probabilities to be negative. See Ghosts (physics). Signed measure Wigner quasiprobability distribution References Quantum mechanics Exotic probabilities Mathematical finance
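As a concrete illustration of the half-coin construction described above, the following short Python sketch (added here for illustration; it is not taken from any of the cited works, and the function names are ours) builds the signed half-coin distribution from the Taylor coefficients of sqrt((1 + x)/2) and checks numerically that convolving it with itself reproduces a fair coin.

    # Numerical check of Székely's half-coin (an illustrative sketch):
    # its signed distribution is given by the Taylor coefficients of
    # sqrt((1 + x) / 2), so convolving the distribution with itself must
    # give a fair coin, P(0) = P(1) = 1/2, even though the positive even
    # sides carry negative "probabilities".

    def half_coin(n_sides):
        """Signed probabilities p[0..n_sides-1]: coefficients of sqrt((1 + x)/2)."""
        c = [1.0]
        for k in range(1, n_sides):
            # binomial(1/2, k) via the recurrence C(1/2, k) = C(1/2, k-1) * (3/2 - k) / k
            c.append(c[-1] * (1.5 - k) / k)
        return [x / 2 ** 0.5 for x in c]

    def convolve(a, b):
        out = [0.0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                out[i + j] += x * y
        return out

    p = half_coin(40)
    print(p[:5])               # [0.707..., 0.353..., -0.088..., 0.044..., -0.027...]
    print(convolve(p, p)[:4])  # [0.5, 0.5, 0.0, 0.0]: two half-coins behave like a fair coin

The low-order coefficients of the self-convolution are exactly 1/2, 1/2, 0, 0 regardless of where the distribution is truncated, which is the sense in which "two half-coins make a complete coin".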
Negative probability
[ "Physics", "Mathematics" ]
1,579
[ "Applied mathematics", "Theoretical physics", "Mathematical finance", "Quantum mechanics" ]
8,504,422
https://en.wikipedia.org/wiki/CFD-ACE%2B
CFD-ACE+ is a commercial computational fluid dynamics solver developed by Applied Materials. It solves the conservation equations of mass, momentum, energy, chemical species and other scalar transport equations using the finite volume method. These equations enable coupled simulations of fluid, thermal, chemical, biological, electrical and mechanical phenomena. The CFD-ACE+ solver allows for coupled heat and mass transport along with complex multi-step gas-phase and surface reactions, which makes it especially useful for designing and optimizing semiconductor equipment and processes such as chemical vapor deposition (CVD). Researchers at the Ecole Nationale Superieure d'Arts et Metiers used CFD-ACE+ to simulate the rapid thermal chemical vapor deposition (RTCVD) process. They predicted the deposition rate along the substrate diameter for silicon deposition from silane. They also used CFD-ACE+ to model transparent conductive oxide (TCO) thin film deposition with ultrasonic spray chemical vapor deposition (CVD). The University of Louisville and the Oak Ridge National Laboratory used CFD-ACE+ to develop the yttria-stabilized zirconia CVD process for application of thermal barrier coatings for fossil energy systems. CFD-ACE+ was used by the Indian Institute of Technology Bombay to model the interplay of multiphysics phenomena involved in microfluidic devices, such as fluid flow, structure, and surfaces and interfaces. Numerical simulation of the electroosmotic effect on pressure-driven flows in the serpentine channel of a micro fuel cell with variable zeta potential on the side walls was investigated and reported. Based on their extensive study of CFD software tools for microfluidic applications, researchers at IMTEK, University of Freiburg concluded that CFD-ACE+ can generally be recommended for simulation of free surface flows involving capillary forces. CFD-ACE+ has also been used to design and optimize various fuel cell components and stacks. Researchers at Ballard Power Systems used the PEMFC module in CFD-ACE+ to improve the design of their latest fuel cell. Amongst other energy applications, CFD-ACE+ was employed by ABB researchers to simulate the three-dimensional geometry of a high-current constricted vacuum arc driven by a strong magnetic field. Flow velocities were up to several thousand meters per second, so the time step of the simulation was in the range of tens of nanoseconds. A movement of the arc over almost one full circle was simulated. Researchers at the University of Akron used CFD-ACE+ to simulate flow patterns and pressure profiles inside a rectangular pocket of a hydrostatic journal bearing. The numerical results made it possible to determine the three-dimensional flow field and pressure profile throughout the pocket, clearance and adjoining lands. The inertia effects and pressure drops across the pocket were incorporated in the numerical model. Stanford University researchers used CFD-ACE+ to investigate the suppression of wake instabilities of a pair of circular cylinders in a freestream flow at a Reynolds number of 150. The simulation showed that when the cylinders are counter-rotated, unsteady vortex wakes can be eliminated. References Physics software
CFD-ACE+
[ "Physics" ]
640
[ "Physics software", "Computational physics" ]
25,499,097
https://en.wikipedia.org/wiki/N-body%20units
N-body units are a completely self-contained system of units used for N-body simulations of self-gravitating systems in astrophysics. In this system, the base physical units are chosen so that the total mass, M, the gravitational constant, G, and the virial radius, R, are normalized, i.e. M = G = R = 1. The underlying assumption is that the system of N objects (stars) satisfies the virial theorem. The consequence of standard N-body units is that the velocity dispersion of the system, v, is 1/√2 and that the dynamical or crossing time, t, is 2√2. The use of standard N-body units was advocated by Michel Hénon in 1971. Early adopters of this system of units included H. Cohn in 1979 and D. Heggie and R. Mathieu in 1986. At the conference MODEST14 in 2014, D. Heggie proposed that the community abandon the name "N-body units" and replace it with the name "Hénon units" to commemorate the originator. References External links N-Body Units STELLAR DYNAMICS by D.C. Heggie, p.6–7 Physical cosmology Systems of units 1971 introductions
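As a practical illustration (not part of the original article), the following Python sketch converts a cluster's physical mass and virial radius, both assumed example values, into the scale factors implied by Hénon's normalization and recovers the physical velocity dispersion and crossing time from the normalized values quoted above.

    # Illustrative sketch: converting physical parameters into standard
    # (Hénon) N-body units, in which G = M_total = R_virial = 1.  The scale
    # factors follow from dimensional analysis; the cluster mass and radius
    # below are made-up example values.
    G = 6.674e-11            # m^3 kg^-1 s^-2
    M_sun = 1.989e30         # kg
    pc = 3.086e16            # m

    M_total = 1.0e5 * M_sun  # assumed cluster mass
    R_vir = 3.0 * pc         # assumed virial radius

    unit_mass = M_total
    unit_length = R_vir
    unit_velocity = (G * M_total / R_vir) ** 0.5      # m/s per N-body velocity unit
    unit_time = (R_vir ** 3 / (G * M_total)) ** 0.5   # s per N-body time unit

    # In these units the virial theorem gives a velocity dispersion of 1/sqrt(2)
    # and a crossing time of 2*sqrt(2); converting back to physical values:
    v_disp = unit_velocity / 2 ** 0.5
    t_cross = 2 * 2 ** 0.5 * unit_time
    print(f"velocity dispersion ~ {v_disp / 1e3:.2f} km/s")
    print(f"crossing time ~ {t_cross / 3.15e7 / 1e6:.2f} Myr")   # ~3.15e7 s per year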
N-body units
[ "Physics", "Astronomy", "Mathematics" ]
246
[ "Systems of units", "Units of measurement", "Quantity", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
24,101,751
https://en.wikipedia.org/wiki/Surface%20integrity
Surface integrity is the surface condition of a workpiece after being modified by a manufacturing process. The term was coined by Michael Field and John F. Kahles in 1964. The surface integrity of a workpiece or item changes the material's properties. The consequences of changes to surface integrity are a mechanical engineering design problem, but the preservation of those properties is a manufacturing consideration. Surface integrity can have a great impact on a part's function; for example, Inconel 718 can have a fatigue limit that is far higher after gentle grinding than after electrical discharge machining (EDM). Definition There are two aspects to surface integrity: topography characteristics and surface layer characteristics. The topography is made up of surface roughness, waviness, errors of form, and flaws. The surface layer characteristics that can change through processing are: plastic deformation, residual stresses, cracks, hardness, overaging, phase changes, recrystallization, intergranular attack, and hydrogen embrittlement. When a traditional manufacturing process is used, such as machining, the surface layer sustains local plastic deformation. The processes that affect surface integrity can be conveniently broken up into three classes: traditional processes, non-traditional processes, and finishing treatments. Traditional processes are defined as processes where the tool contacts the workpiece surface; for example: grinding, turning, and machining. These processes will only damage the surface integrity if improper parameters are used, such as dull tools, excessive feed speeds, improper coolant or lubrication, or incorrect grinding wheel hardness. Non-traditional processes are defined as processes where the tool does not contact the workpiece; examples of this type of process include EDM, electrochemical machining, and chemical milling. These processes will produce different surface integrity depending on how the processes are controlled; for instance, they can leave a stress-free surface, a remelted surface, or excessive surface roughness. Finishing treatments are defined as processes that negate surface finishes imparted by traditional and non-traditional processes or improve the surface integrity. For example, compressive residual stress can be enhanced via peening or roller burnishing, or the recast layer left by EDM can be removed via chemical milling. Finishing treatments can affect the workpiece surface in a wide variety of manners. Some clean and/or remove defects, such as scratches, pores, burrs, flash, or blemishes. Other processes improve or modify the surface appearance by improving smoothness, texture, or color. They can also improve corrosion resistance, wear resistance, and/or reduce friction. Coatings are another type of finishing treatment that may be used to plate an expensive or scarce material onto a less expensive base material. Variables Manufacturing processes have five main variables: the workpiece, the tool, the machine tool, the environment, and process variables. All of these variables can affect the surface integrity of the workpiece by producing: High temperatures involved in various machining processes Plastic deformation in the workpiece (residual stresses) Surface geometry (roughness, cracks, distortion) Chemical reactions, especially between the tool and the workpiece References Bibliography Mechanical engineering Metallurgy Plastics industry 1964 introductions
Surface integrity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
651
[ "Applied and interdisciplinary physics", "Metallurgy", "Materials science", "nan", "Mechanical engineering" ]
24,104,934
https://en.wikipedia.org/wiki/C21H20O9
{{DISPLAYTITLE:C21H20O9}} The molecular formula C21H20O9 (molar mass: 416.38 g/mol, exact mass: 416.1107 u) may refer to: Aleuritin, a coumarinolignoid Daidzin, an isoflavone Puerarin, an isoflavone Molecular formulas
C21H20O9
[ "Physics", "Chemistry" ]
84
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,106,109
https://en.wikipedia.org/wiki/Dental%20prosthesis
A dental prosthesis is an intraoral (inside the mouth) prosthesis used to restore (reconstruct) intraoral defects such as missing teeth, missing parts of teeth, and missing soft or hard structures of the jaw and palate. Prosthodontics is the dental specialty that focuses on dental prostheses. Such prostheses are used to rehabilitate mastication (chewing), improve aesthetics, and aid speech. A dental prosthesis may be held in place by connecting to teeth or dental implants, by suction, or by being held passively by surrounding muscles. Like other types of prostheses, they can either be fixed permanently or removable; fixed prosthodontics and removable dentures are made in many variations. Permanently fixed dental prostheses use dental adhesive or screws to attach to teeth or dental implants. Removable prostheses may use friction against parallel hard surfaces and undercuts of adjacent teeth or dental implants, suction using mucous retention (with or without aid from denture adhesives), or the surrounding muscles and anatomical contours of the jaw to be held passively in place. Examples Some examples of dental prostheses include: dentures partial denture palatal obturator orthodontic appliance dental implant crown bridge pivot tooth See also Prosthodontics Dental restoration Dental braces References Dentistry Prosthetics Prosthodontology
Dental prosthesis
[ "Engineering", "Biology" ]
314
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
24,107,400
https://en.wikipedia.org/wiki/Computational%20complexity%20of%20matrix%20multiplication
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest algorithm for matrix multiplication is of major practical relevance. Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n³ field operations to multiply two n × n matrices over that field (O(n³) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". The optimal number of field operations needed to multiply two square matrices up to constant factors is still unknown. This is a major open question in theoretical computer science. The best bound currently known on the asymptotic complexity of a matrix multiplication algorithm is slightly below O(n^2.38). However, this and similar improvements to Strassen are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices that are too large to handle on present-day computers. Simple algorithms If A, B are n × n matrices over a field, then their product AB is also an n × n matrix over that field, defined entrywise as (AB)ᵢⱼ = Σₖ AᵢₖBₖⱼ. Schoolbook algorithm The simplest approach to computing the product of two matrices A and B is to compute the arithmetic expressions coming from the definition of matrix multiplication. In pseudocode:

    input A and B, both n by n matrices
    initialize C to be an n by n matrix of all zeros
    for i from 1 to n:
        for j from 1 to n:
            for k from 1 to n:
                C[i][j] = C[i][j] + A[i][k]*B[k][j]
    output C (as A*B)

This algorithm requires, in the worst case, n³ multiplications of scalars and n³ − n² additions for computing the product of two square n × n matrices. Its computational complexity is therefore O(n³), in a model of computation where field operations (addition and multiplication) take constant time (in practice, this is the case for floating point numbers, but not necessarily for integers). Strassen's algorithm Strassen's algorithm improves on naive matrix multiplication through a divide-and-conquer approach. The key observation is that multiplying two 2 × 2 matrices can be done with only 7 multiplications, instead of the usual 8 (at the expense of 11 additional addition and subtraction operations). This means that, treating the input n × n matrices as block 2 × 2 matrices, the task of multiplying n × n matrices can be reduced to 7 subproblems of multiplying n/2 × n/2 matrices. Applying this recursively gives an algorithm needing O(n^(log₂ 7)) ≈ O(n^2.807) field operations. Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. Its numerical stability is reduced compared to the naive algorithm, but it is faster in cases where n is on the order of 100 or larger, and it appears in several libraries, such as BLAS. Fast matrix multiplication algorithms cannot achieve component-wise stability, but some can be shown to exhibit norm-wise stability. Strassen's algorithm is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue. Matrix multiplication exponent The matrix multiplication exponent, usually denoted ω, is the smallest real number for which any two n × n matrices over a field can be multiplied together using n^(ω + o(1)) field operations. 
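To make the schoolbook pseudocode and Strassen's recursion above concrete, here is a short illustrative Python sketch (not taken from any reference implementation); it assumes the matrix dimension is a power of two and uses plain integer arithmetic so that the two results can be compared exactly.

    # Schoolbook O(n^3) multiplication and Strassen's divide-and-conquer scheme
    # with 7 recursive products (illustrative code, n assumed to be a power of two).
    import random

    def schoolbook(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]
        return C

    def add(A, B, sign=1):
        # entrywise A + sign*B
        return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def strassen(A, B):
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        h = n // 2
        q = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]   # extract a block
        A11, A12, A21, A22 = q(A, 0, 0), q(A, 0, h), q(A, h, 0), q(A, h, h)
        B11, B12, B21, B22 = q(B, 0, 0), q(B, 0, h), q(B, h, 0), q(B, h, h)
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, add(B12, B22, -1))
        M4 = strassen(A22, add(B21, B11, -1))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(add(A21, A11, -1), add(B11, B12))
        M7 = strassen(add(A12, A22, -1), add(B21, B22))
        C11 = add(add(M1, M4), add(M7, M5, -1))
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(add(M1, M3), add(M6, M2, -1))
        top = [r1 + r2 for r1, r2 in zip(C11, C12)]
        bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
        return top + bottom

    A = [[random.randint(-9, 9) for _ in range(8)] for _ in range(8)]
    B = [[random.randint(-9, 9) for _ in range(8)] for _ in range(8)]
    assert schoolbook(A, B) == strassen(A, B)   # both give the same exact product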
This notation is commonly used in algorithms research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that can update as bounds on ω improve. Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that 2 ≤ ω ≤ 3. Whether ω = 2 is a major open question in theoretical computer science, and there is a line of research developing matrix multiplication algorithms to get improved bounds on ω. All recent algorithms in this line of research use the laser method, a generalization of the Coppersmith–Winograd algorithm, which was given by Don Coppersmith and Shmuel Winograd in 1990 and was the best matrix multiplication algorithm until 2010. The conceptual idea of these algorithms is similar to Strassen's algorithm: a method is devised for multiplying two k × k matrices with fewer than k³ multiplications, and this technique is applied recursively. The laser method has limitations to its power: Ambainis, Filmus and Le Gall prove that it cannot be used to show that ω = 2 by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd, and neither can a wide class of variants of this approach. In 2022 Duan, Wu and Zhou devised a variant breaking the first of these two barriers; they do so by identifying a source of potential optimization in the laser method termed combination loss, for which they compensate using an asymmetric version of the hashing method in the Coppersmith–Winograd algorithm. Nonetheless, the above are classical examples of galactic algorithms. By contrast, Strassen's algorithm of 1969 and Pan's algorithm of 1978, whose respective exponents are slightly above and below 2.78, have constant coefficients that make them feasible. Group theory reformulation of matrix multiplication algorithms Henry Cohn, Robert Kleinberg, Balázs Szegedy and Chris Umans put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This implies that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case. One such conjecture is that families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP. Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the Slice Rank method. Further, Alon, Shpilka and Chris Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture, which in turn is related to the cap set problem. Lower bounds for ω There is a trivial lower bound of ω ≥ 2. Since any algorithm for multiplying two n × n matrices has to process all 2n² entries, there is a trivial asymptotic lower bound of Ω(n²) operations for any matrix multiplication algorithm. Thus ω ≥ 2. It is unknown whether ω = 2. The best known lower bound for matrix-multiplication complexity is Ω(n² log n), for bounded coefficient arithmetic circuits over the real or complex numbers, and is due to Ran Raz. The exponent ω is defined to be a limit point, in that it is the infimum of the exponent over all matrix multiplication algorithms. 
It is known that this limit point is not achieved. In other words, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely O(n^ω) operations; there must be an additional factor of n^(o(1)). Rectangular matrix multiplication Similar techniques also apply to rectangular matrix multiplication. The central object of study is ω(k), which is the smallest exponent such that one can multiply a matrix of size n × n^k with a matrix of size n^k × n using n^(ω(k) + o(1)) arithmetic operations. A result in algebraic complexity states that multiplying matrices of size m × n and n × p requires the same number of arithmetic operations as multiplying matrices of size n × p and p × m and of size p × m and m × n, so this encompasses the complexity of rectangular matrix multiplication. This generalizes the square matrix multiplication exponent, since ω(1) = ω. Since the output of the matrix multiplication problem is of size n², we have ω(k) ≥ 2 for all values of k. If one can prove for some value of k between 0 and 1 that ω(k) = 2, then that value of k is a lower bound for the dual exponent defined next. The largest k such that ω(k) = 2 is known as the dual matrix multiplication exponent, usually denoted α. α is referred to as the "dual" because showing that α = 1 is equivalent to showing that ω = 2. Like the matrix multiplication exponent, the dual matrix multiplication exponent sometimes appears in the complexity of algorithms in numerical linear algebra and optimization. The first bound on α was given by Coppersmith in 1982. The current best peer-reviewed bound on α is given by Williams, Xu, Xu, and Zhou. Related problems Problems that have the same asymptotic complexity as matrix multiplication include determinant, matrix inversion, and Gaussian elimination (see next section). Problems with complexity that is expressible in terms of ω include characteristic polynomial, eigenvalues (but not eigenvectors), Hermite normal form, and Smith normal form. Matrix inversion, determinant and Gaussian elimination In his 1969 paper, where he proved the complexity O(n^(log₂ 7)) for matrix multiplication, Strassen proved also that matrix inversion, determinant and Gaussian elimination have, up to a multiplicative constant, the same computational complexity as matrix multiplication. The proof does not make any assumptions on the matrix multiplication algorithm that is used, except that its complexity is O(n^ω) for some ω ≥ 2. The starting point of Strassen's proof is using block matrix multiplication. Specifically, a matrix of even dimension 2n × 2n may be partitioned into four n × n blocks, written as a block matrix [[A, B], [C, D]]. Under this form, its inverse is [[A⁻¹ + A⁻¹B S⁻¹ C A⁻¹, −A⁻¹B S⁻¹], [−S⁻¹ C A⁻¹, S⁻¹]], where S = D − C A⁻¹ B is the Schur complement, provided that A and S are invertible. Thus, the inverse of a 2n × 2n matrix may be computed with two inversions, six multiplications and four additions or additive inverses of n × n matrices. It follows that, denoting respectively by I(n), M(n) and A(n) the number of operations needed for inverting, multiplying and adding n × n matrices, one has I(2n) ≤ 2 I(n) + 6 M(n) + 4 A(n). If n is a power of two, one may apply this formula recursively: if M(n) = O(n^ω) and A(n) = O(n²) with ω ≥ 2, one gets eventually I(n) ≤ C n^ω for some constant C. For matrices whose dimension is not a power of two, the same complexity is reached by increasing the dimension of the matrix to a power of two, by padding the matrix with rows and columns whose entries are 1 on the diagonal and 0 elsewhere. This proves the asserted complexity for matrices such that all submatrices that have to be inverted are indeed invertible. This complexity is thus proved for almost all matrices, as a matrix with randomly chosen entries is invertible with probability one. 
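The block-inversion reduction just described can be sketched in a few lines of Python (an illustration only, with the usual caveat that the leading blocks encountered during the recursion must be invertible; the test matrix below is made diagonally dominant to ensure this, and NumPy's built-in products stand in for a fast multiplication routine).

    # Inverting a power-of-two-sized matrix using only block multiplications,
    # additions and two recursive half-size inversions, via the Schur complement
    # S = D - C A^{-1} B (illustrative sketch).
    import numpy as np

    def block_inverse(M):
        n = M.shape[0]
        if n == 1:
            return np.array([[1.0 / M[0, 0]]])
        h = n // 2
        A, B = M[:h, :h], M[:h, h:]
        C, D = M[h:, :h], M[h:, h:]
        Ai = block_inverse(A)              # recursive inversion 1
        S = D - C @ Ai @ B                 # Schur complement
        Si = block_inverse(S)              # recursive inversion 2
        top_left = Ai + Ai @ B @ Si @ C @ Ai
        top_right = -Ai @ B @ Si
        bottom_left = -Si @ C @ Ai
        return np.block([[top_left, top_right], [bottom_left, Si]])

    rng = np.random.default_rng(0)
    M = rng.standard_normal((8, 8)) + 10 * np.eye(8)   # diagonally dominant test matrix
    assert np.allclose(block_inverse(M) @ M, np.eye(8))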
The same argument applies to LU decomposition, as, if the block A is invertible, the equality [[A, B], [C, D]] = [[I, 0], [C A⁻¹, I]] · [[A, B], [0, D − C A⁻¹ B]] defines a block LU decomposition that may be applied recursively to A and to D − C A⁻¹ B for getting eventually a true LU decomposition of the original matrix. The argument applies also for the determinant, since it results from the block LU decomposition that det [[A, B], [C, D]] = det(A) · det(D − C A⁻¹ B). Minimizing number of multiplications Related to the problem of minimizing the number of arithmetic operations is minimizing the number of multiplications, which is typically a more costly operation than addition. An algorithm achieving the exponent ω necessarily uses only n^(ω + o(1)) multiplication operations, but such algorithms are impractical. Improving on the naive count of schoolbook multiplication, 4 × 4 matrices over the field with two elements can be multiplied using 47 multiplications, while 3 × 3 matrix multiplication over a commutative ring can be done in 21 multiplications (23 if non-commutative). The lower bound on the number of multiplications needed is 2mn + 2n − m − 2 (multiplication of n×m-matrices with m×n-matrices using the substitution method, m ⩾ n ⩾ 3), which means the case n = 3 requires at least 19 multiplications and n = 4 at least 34. For n = 2, 7 multiplications with 15 additions are minimal, compared to only 4 additions needed when 8 multiplications are used. See also Computational complexity of mathematical operations CYK algorithm, §Valiant's algorithm Freivalds' algorithm, a simple Monte Carlo algorithm that, given matrices A, B and C, verifies in Θ(n²) time whether AB = C. Matrix chain multiplication Matrix multiplication, for abstract definitions Matrix multiplication algorithm, for practical implementation details Sparse matrix–vector multiplication References External links Yet another catalogue of fast matrix multiplication algorithms Computer arithmetic algorithms Computational complexity theory Matrix theory Unsolved problems in computer science
Computational complexity of matrix multiplication
[ "Mathematics" ]
2,420
[ "Unsolved problems in computer science", "Unsolved problems in mathematics", "Mathematical problems" ]
24,107,729
https://en.wikipedia.org/wiki/MARCM
Mosaic analysis with a repressible cell marker, or MARCM, is a genetics technique for creating individually labeled homozygous cells in an otherwise heterozygous Drosophila melanogaster. It has been a crucial tool in studying the development of the Drosophila nervous system. This technique relies on recombination during mitosis mediated by FLP-FRT recombination. As one copy of a gene, provided by the balancer chromosome, is often enough to rescue a mutant phenotype, MARCM clones can be used to study a mutant phenotype in an otherwise wildtype animal. Crosses In order to label small populations of cells from a common progenitor, MARCM uses the GAL4-UAS system. A marker gene such as GFP is placed under control of a UAS promoter. GAL4 is ubiquitously expressed in these flies, thus driving marker expression. In addition, GAL80 is driven by a strong promoter such as tubP. GAL80 is an inhibitor of GAL4, and will suppress GFP expression under normal conditions. This tubP-GAL80 element is placed distal to an FRT site. A second FRT site is placed in trans to the GAL80 site, usually with a gene or mutation of interest distal to it. Finally, FLP recombinase is driven by an inducible promoter such as heat shock. When FLP transcription is induced, it will recombine the chromosomes at the two FRT sites in cells undergoing mitosis. These cells will divide into two homozygous daughter cells: one carrying both GAL80 elements, and one carrying none. The daughter cell lacking GAL80 will be labeled due to expression of the marker via the GAL4-UAS system. All subsequent daughter cells from this progenitor will also express the marker. Labs will often have MARCM-ready lines which have the inducible FLP, GAL80 distal to an FRT site, GAL4, and UAS-Marker. These can be readily crossed with flies that have a mutation of interest distal to an FRT site. Uses By taking advantage of MARCM, one can easily trace all the cells that have been generated from a single progenitor. This is a useful tool in tracking development and specific cell lineages in various environmental conditions. Applications for MARCM include studying neuronal circuits, clonal analysis, genetic screens, spermatogenesis, growth cone development, neurogenesis, and tumor metastasis. Many advances in the understanding of Drosophila development have been achieved through MARCM. The development, lineages, and characterizations of secondary axon tracts, anatomical maps of cholinergic neurons in the visual systems, lineages giving rise to a thoracic hemineuromere scaffold and the developmental framework for CNS architecture, the role of Delta in developmental programming in the ventral nerve cord, the wake-promoting octopaminergic cells in the medial protocerebrum, genes involved in neuronal morphogenesis of the mushroom bodies, and the regulation of commissural axon guidance have all been identified through MARCM techniques. Variations There are many variations of MARCM. Twin-spot MARCM allows for labeling of sister clones with two separate markers, thus allowing for a higher resolution of lineage tracing. In reverse MARCM, the mutation of interest is placed on the same chromosome as GAL80, so that the wild-type homozygous clones will be labeled. Flip-Out MARCM highlights individual cells inside of mutant clones (see "Drosophila Dscam is required for divergent segregation of sister branches and suppresses ectopic bifurcation of axons", Neuron, 2002). The Q system allows for GAL4-independent MARCM by using the QF/QS system. 
Lethal MARCM allows for the generation of large marked homozygous populations by including a lethal mutation near the GAL80 site. Dual-expression control MARCM uses the LexA-VP16 transcriptional system in concordance with GAL4-UAS. MARCM is also often used as a genetic screen. See also Mosaic (genetics) References Molecular genetics
MARCM
[ "Chemistry", "Biology" ]
857
[ "Molecular genetics", "Molecular biology" ]
24,108,157
https://en.wikipedia.org/wiki/Collectin%20of%2043%20kDa
Collectin of 43 kDa (CL-43) is a collectin protein that acts as an antigen recognition protein. When an agent, zymosan, was injected into the tunicate Styela plicata (causing inflammation), secretion of this collectin was tripled within 96 hours. References Collectins Proteins
Collectin of 43 kDa
[ "Chemistry" ]
69
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
24,109,039
https://en.wikipedia.org/wiki/Quench%20press
A quench press is a machine that uses concentrated forces to hold an object as it is quenched. These types of quench facilities are used to quench large gears and other circular parts so that they remain circular. They are also used to quench saw blades and other flat or plate-shaped objects so that they remain flat. Quench presses are able to quench the part while it is being held because of the unique structure of the clamps holding the part. Clamps are slotted so that oil or water can flow through each slot and cool the part and the ribs of the clamps can hold the part in place. References Gears Metalworking tools Metallurgical processes
Quench press
[ "Chemistry", "Materials_science" ]
140
[ "Metallurgical processes", "Metallurgy" ]
4,976,789
https://en.wikipedia.org/wiki/Hydrogen%20potassium%20ATPase
Gastric hydrogen potassium ATPase, also known as H+/K+ ATPase, is an enzyme which functions to acidify the stomach. It is a member of the P-type ATPases, also known as E1-E2 ATPases due to its two states. Biological function and location The gastric hydrogen potassium ATPase or H+/K+ ATPase is the proton pump of the stomach. It exchanges potassium from the intestinal lumen with cytoplasmic hydronium and is the enzyme primarily responsible for the acidification of the stomach contents and the activation of the digestive enzyme pepsin (see gastric acid). The H+/K+ ATPase is found in parietal cells, which are highly specialized epithelial cells located in the inner cell lining of the stomach called the gastric mucosa. Parietal cells possess an extensive secretory membrane system and the H+/K+ ATPase is the major protein constituent of these membranes. A small amount of H+/K+ ATPase is also found in the renal medulla. Genes and protein structure The H+/K+ ATPase is a heterodimeric protein, the product of 2 genes. The gene ATP4A encodes the H+/K+ ATPase α subunit, and is a ~1000-amino acid protein that contains the catalytic sites of the enzyme and forms the pore through the cell membrane that allows the transport of ions. Hydronium ions bind to two active sites present in the α subunit. The α subunit also has a phosphorylation site (Asp385). The gene ATP4B encodes the β subunit of the H+/K+ ATPase, which is a ~300-amino acid protein with a 36-amino acid N-terminal cytoplasmic domain, a single transmembrane domain, and a highly glycosylated extracellular domain. The H+/K+ ATPase β subunit stabilizes the H+/K+ ATPase α subunit and is required for function of the enzyme. The β subunit prevents the pump from running in reverse, and it also appears to contain signals that direct the heterodimer to membrane destinations within the cell, although some of these signals are subordinate to signals found in H+/K+ ATPase α subunit. The structure of H+/K+ ATPase has been determined for humans, dogs, hogs, rats, and rabbits and is 98% homologous across all species. Enzyme mechanism and activity H+/K+ ATPase is a P2-type ATPase, a member of the eukaryotic class of P-type ATPases. Like the Ca2+ and the Na+/K+ ATPases, the H+/K+ ATPase functions as an α, β protomer. Unlike other eukaryotic ATPases, the H+/K+ ATPase is electroneutral, transporting one proton into the stomach lumen per potassium ion retrieved from the gastric lumen. As an ion pump the H+/K+ ATPase is able to transport ions against a concentration gradient using energy derived from the hydrolysis of ATP. Like all P-type ATPases, a phosphate group is transferred from adenosine triphosphate (ATP) to the H+/K+ ATPase during the transport cycle. This phosphate transfer powers a conformational change in the enzyme that helps drive ion transport. The hydrogen potassium ATPase is activated indirectly by gastrin that causes ECL cells to release histamine. The histamine binds to H2 receptors on the parietal cell, activating a cAMP-dependent pathway which causes the enzyme to move from the cytoplasmic tubular membranes to deeply folded canaliculi of the stimulated parietal cell. Once localized, the enzyme alternates between two conformations, E1 and E2, to transport ions across the membrane. The E1 conformation binds a phosphate from ATP and hydronium ion on the cytoplasmic side. The enzyme then changes to the E2 conformation, allowing hydronium to be released in the lumen. 
The E2 conformation binds potassium, and reverts to the E1 conformation to release phosphate and K+ into the cytoplasm, where another ATP can be hydrolyzed to repeat the cycle. The β subunit prevents the E2-P conformation from reverting to the E1-P conformation, making proton pumping unidirectional. The number of ions transported per ATP varies from 2H+/2K+ to 1H+/1K+ depending on the pH of the stomach. Disease relevance and inhibition Inhibiting the hydrogen potassium pump to decrease stomach acidity has been the most common method of treating diseases including gastroesophageal reflux disease (GERD/GORD) and peptic ulcer disease (PUD). Reducing acidity alleviates disease symptoms but does not treat the actual cause of GERD (abnormal relaxation of the esophageal sphincter) or PUD (Helicobacter pylori and NSAIDs). Three drug classes have been used to inhibit H+/K+-ATPases. H2-receptor antagonists, like cimetidine (Tagamet), inhibit the signaling pathway that leads to activation of the ATPase. This type of inhibitor is effective in treating ulcers but does not prevent them from forming, and patients develop tolerance to them after about one week, leading to a 50% reduction in efficacy. Proton pump inhibitors (PPIs) were later developed, starting with timoprazole in 1975. PPIs are acid-activated prodrugs that inhibit the hydrogen-potassium ATPase by binding covalently to active pumps. Current PPIs like omeprazole have a short half-life of approximately 90 minutes. Acid pump antagonists (APAs) or potassium-competitive acid blockers (PCABs) are a third type of inhibitor that blocks acid secretion by binding to the K+ active site. APAs provide faster inhibition than PPIs since they do not require acid activation. Revaprazan was the first APA used clinically in east Asia, and other APAs are being developed since they appear to provide better acid control in clinical trials. Inactivation of the proton pump can also lead to health problems. A study in mice by Krieg et al. found that a mutation of the pump's α-subunit led to achlorhydria, resulting in problems with iron absorption and leading to iron deficiency and anemia. The use of PPIs has not been correlated with an elevated risk of anemia, so the H+/K+-ATPase is thought to aid iron absorption but is not necessarily required. An association between dementia and PPIs has been documented in Germany and in research articles describing how benzimidazole derivatives, astemizole (AST) and lansoprazole (LNS), interact with anomalous aggregates of tau protein (neurofibrillary tangles). Current theories include the non-selective blockade of sodium-potassium pumps in the brain causing osmotic imbalances or swelling in the cells. Interaction of PPIs with other drugs affecting the sodium-potassium pump, e.g., digoxin and warfarin, has been well documented. Memory has been associated with astrocytes, and the alpha3 subunit of the adenosine receptor found in hydrogen/sodium-potassium pumps may be a focal point in dementia. Chronic use of PPIs may cause downregulation of the alpha3 subunit, increasing damage to astrocytes. Osteopetrosis via the TCIRG1 gene has a strong association with pre-senile dementia. See also Discovery and development of proton pump inhibitors Solvated electron References External links Body fluids Digestive system Hydrogen biology Transport proteins EC 3.6.3
Hydrogen potassium ATPase
[ "Biology" ]
1,651
[ "Digestive system", "Organ systems" ]
4,978,101
https://en.wikipedia.org/wiki/CANFLEX
CANFLEX (the name is derived from its function: CANDU FLEXible fuelling) is an advanced fuel bundle design developed by Atomic Energy of Canada Ltd. (AECL), along with the Korean Atomic Energy Research Institute (KAERI), for use in CANDU-design nuclear reactors. The designers claim that it will deliver many benefits to current and future CANDU reactors using natural uranium or other advanced nuclear fuel cycles. These include greater operating and safety margins, extended plant life, better economics and increased power. The CANFLEX bundle has 43 fuel elements, with two element sizes. It is about 10 cm (four inches) in diameter, 0.5 m (20 inches) long, weighs about 20 kg (44 lbs), and replaces the standard 37-pin bundle. It has been designed specifically to increase fuel performance by utilizing two different pin diameters. This reduces the power rating of the hottest pins in the bundle, for the same total bundle power output. Also, the design incorporates special geometry modifications that enhance the heat transfer between the fuel and the surrounding coolant. Twenty-four of these fuel bundles have been tested in the Point Lepreau CANDU 6 reactor in New Brunswick, Canada, and results indicate CANFLEX meets all expectations and regulatory requirements. The Bruce Nuclear Generating Station announced a conversion to CANFLEX fuel for its reactors in 2006. See also Nuclear fuel Nuclear fuel cycle Uranium market Reprocessed uranium Global Nuclear Energy Partnership References Development of CANFLEX-NU Fuel as a CANDU Advanced Fuel Nuclear materials Nuclear reprocessing Nuclear technology in Canada
CANFLEX
[ "Physics" ]
321
[ "Materials", "Nuclear materials", "Matter" ]
4,979,244
https://en.wikipedia.org/wiki/Schaffel
Schaffel (the German spelling to match the English pronunciation of "shuffle") is a fusion style of techno and rock in which minimal techno's straight-up drum kick is shuffled to offbeat emphasis. Often triplet eighths are used to create swinging rhythms. History Originating from swing and R&B roots, the beat was popularized by glam rock performers like T. Rex with their 1971 hit "Hot Love" and Gary Glitter in his 1972 hit "Rock and Roll Part 2". The schaffel beat has remained in use in electronic music genres and can be found in such releases as "Personal Jesus" by Depeche Mode. Michael Mayer's label Kompakt has put out a series of compilations titled Schaffelfieber ("Schaffel Fever"). References Further reading 20th-century music genres German styles of music Techno genres Rhythm and meter Electronic rock
Schaffel
[ "Physics" ]
188
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
4,980,124
https://en.wikipedia.org/wiki/Faying
Welding
Faying
[ "Engineering" ]
3
[ "Welding", "Mechanical engineering" ]
4,980,632
https://en.wikipedia.org/wiki/Solar%20humidification
The solar humidification–dehumidification method (HDH) is a thermal water desalination method. It is based on evaporation of sea water or brackish water and subsequent condensation of the generated humid air, mostly at ambient pressure. This process mimics the natural water cycle, but over a much shorter time frame. Overview The simplest configuration is implemented in the solar still, which evaporates the sea water inside a glass-covered box and condenses the water vapor on the lower side of the glass cover, but not below the unevaporated seawater. More sophisticated designs separate the solar heat gain section from the evaporation-condensation chamber. An optimized design comprises separated evaporation and condensation sections. A significant part of the heat consumed for evaporation can be regained during condensation. An example of such an optimized thermal desalination cycle is the multiple-effect humidification (MEH) method of desalination. Solar humidification takes place in every greenhouse. Water evaporates from the surfaces of soil, water and plants because of thermal input. In this way the humidification process is naturally integrated within the architecture of the greenhouse. Several companies, like Seawater Greenhouse, utilize this inherent feature of a greenhouse in order to conduct desalination inside the atmosphere of the facility. Design The method can be optimized by using various effects in the categories of thermal energy collection and storage for continued nocturnal operation, choice of site location, various evaporation effects, as well as condenser design and provision of cooling energy to harvest distillate from the moist air. A desalination greenhouse using all of the effects in all categories, with an emphasis on the optimized combination of the effects including synergies, is the IBTS Greenhouse. The global water cycle also includes all sub-effects of HDH, like increased evaporation over the ocean surface and surface increase by wind, which make the generation of freshwater on the planet so efficient. Tests Successful small-scale agricultural experiments have been carried out in arid regions such as Israel, West Africa, and Peru. The major difficulty lies in effectively concentrating the energy of the sun on a small area to speed up evaporation. References External links Encyclopedia of Desalination and Water Resources The MEH-Method (in German with English abstract): Solar Desalination using the MEH method, Diss. Technical University of Munich Desalination Systems using the MEH-Process Water treatment Drinking water Water desalination
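To put a rough number on the heat-recovery idea above, the following back-of-the-envelope Python calculation (added for illustration; the insolation, efficiency and heat-recovery figures are assumed round values, not data from the article) estimates daily distillate per square metre of collector.

    # Illustrative yield estimate for a humidification-dehumidification unit.
    insolation = 20e6            # J per m^2 per day, assumed sunny site
    latent_heat = 2.26e6         # J per kg of water evaporated
    collector_efficiency = 0.5   # fraction of solar energy delivered as useful heat (assumed)
    gor = 3.0                    # gained output ratio: distillate per heat input, >1 only
                                 # because condensation heat is partly recovered (assumed)

    distillate = insolation * collector_efficiency / latent_heat * gor
    print(f"~{distillate:.1f} kg of distillate per m^2 of collector per day")
    # A simple solar still without heat recovery (gor around 0.5-1) yields only a few kg.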
Solar humidification
[ "Chemistry", "Engineering", "Environmental_science" ]
518
[ "Water desalination", "Water treatment", "Water pollution", "Environmental engineering", "Water technology" ]
4,982,431
https://en.wikipedia.org/wiki/Fermat%20%28computer%20algebra%20system%29
Fermat (named after Pierre de Fermat) is a program developed by Prof. Robert H. Lewis of Fordham University. It is a computer algebra system, in which items being computed can be integers (of arbitrary size), rational numbers, real numbers, complex numbers, modular numbers, finite field elements, multivariable polynomials, rational functions, or polynomials modulo other polynomials. The main areas of application are multivariate rational function arithmetic and matrix algebra over rings of multivariate polynomials or rational functions. Fermat does not do simplification of transcendental functions or symbolic integration. A session with Fermat usually starts by choosing rational or modular "mode" to establish the ground field (or ground ring) as the rationals Q or the integers modulo a prime, Z/p. On top of this may be attached any number of symbolic variables t1, t2, ..., tn, thereby creating the polynomial ring in those variables over the ground ring and its quotient field. Further, some polynomials involving some of the ti can be chosen to mod out with, creating a quotient ring. Finally, it is possible to allow Laurent polynomials, those with negative as well as positive exponents. Once the computational ring is established in this way, all computations are of elements of this ring. The computational ring can be changed later in the session. The polynomial gcd procedures, which call each other in a highly recursive manner, are about 7000 lines of code. Fermat has extensive built-in primitives for array and matrix manipulations, such as submatrix, sparse matrix, determinant, normalize, column reduce, row echelon, Smith normal form, and matrix inverse. It is consistently faster than some well known computer algebra systems, especially in multivariate polynomial gcd. It is also space efficient. The basic data item in Fermat is a multivariate rational function or quolynomial. The numerator and denominator are polynomials with no common factor. Polynomials are implemented recursively as general linked lists, unlike some systems that implement polynomials as lists of monomials. To implement (most) finite fields, the user finds an irreducible monic polynomial in a symbolic variable and commands Fermat to mod out by it. This may be continued recursively, etc. Low level data structures are set up to facilitate arithmetic and gcd over this newly created ground field. Two special fields are implemented more efficiently at the bit level. History With Windows 10, and thanks to Bogdan Radu, it is now possible (May 2021) to run Fermat Linux natively on Windows. See the main web page http://home.bway.net/lewis Fermat was last updated on 20 May 2020 (Mac and Linux; latest Windows version: 1 November 2011). In an earlier version, called FFermat (Float Fermat), the basic number type is floating point numbers of 18 digits. That version allows for numerical computing techniques, has extensive graphics capabilities, no sophisticated polynomial gcd algorithms, and is available only for Mac OS 9. Fermat was originally written in Pascal for a DEC VAX, then for the classic Mac OS during 1985–1996. It was ported to Microsoft Windows in 1998. In 2003 it was translated into C and ported to Linux (Intel machines) and Unix (Sparc/Sun). It is about 98,000 lines of C code. The FFermat and (old) Windows Fermat Pascal source code have been made available to the public under a restrictive license. The manual was extensively revised and updated on 25 July 2011 (latest small revision in June 2016, apparently another revision on 25 March 2020). 
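The finite-field construction described above (mod out the polynomial ring by an irreducible monic polynomial) can be illustrated in plain Python; this is a conceptual sketch only, it is not Fermat syntax, and the modulus polynomial below is just an assumed example.

    # Arithmetic in GF(p)[t] / (f), i.e. a finite field with p**deg(f) elements.
    p = 5
    f = [2, 0, 1]   # f(t) = t^2 + 2, monic and irreducible over GF(5); coefficients low to high

    def poly_mul_mod(a, b):
        """Multiply two field elements given as coefficient lists (low degree first)."""
        prod = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                prod[i + j] = (prod[i + j] + x * y) % p
        # reduce modulo the monic polynomial f
        while len(prod) >= len(f):
            lead = prod.pop()
            for k in range(len(f) - 1):
                idx = len(prod) - len(f) + 1 + k
                prod[idx] = (prod[idx] - lead * f[k]) % p
        return prod

    t = [0, 1]                   # the element t itself
    print(poly_mul_mod(t, t))    # t^2 reduces to -2 = 3 (mod 5), printed as [3, 0]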
See also Comparison of computer algebra systems External links Windows Fermat Pascal source code Float Fermat Pascal source code Robert H. Lewis at academia.edu C (programming language) software Computer algebra system software for Linux Computer algebra systems Proprietary freeware for Linux
Fermat (computer algebra system)
[ "Mathematics" ]
799
[ "Computer algebra systems", "Mathematical software" ]
20,017,026
https://en.wikipedia.org/wiki/Dehn%20function
In the mathematical subject of geometric group theory, a Dehn function, named after Max Dehn, is an optimal function associated to a finite group presentation which bounds the area of a relation in that group (that is, a freely reduced word in the generators representing the identity element of the group) in terms of the length of that relation. The growth type of the Dehn function is a quasi-isometry invariant of a finitely presented group. The Dehn function of a finitely presented group is also closely connected with the non-deterministic algorithmic complexity of the word problem in groups. In particular, a finitely presented group has solvable word problem if and only if the Dehn function for a finite presentation of this group is recursive. The notion of a Dehn function is motivated by isoperimetric problems in geometry, such as the classic isoperimetric inequality for the Euclidean plane and, more generally, the notion of a filling area function that estimates the area of a minimal surface in a Riemannian manifold in terms of the length of the boundary curve of that surface. History The idea of an isoperimetric function for a finitely presented group goes back to the work of Max Dehn in the 1910s. Dehn proved that the word problem for the standard presentation of the fundamental group of a closed oriented surface of genus at least two is solvable by what is now called Dehn's algorithm. A direct consequence of this fact is that for this presentation the Dehn function satisfies Dehn(n) ≤ n. This result was extended in the 1960s by Martin Greendlinger to finitely presented groups satisfying the C'(1/6) small cancellation condition. The formal notion of an isoperimetric function and a Dehn function as it is used today appeared in the late 1980s and early 1990s together with the introduction and development of the theory of word-hyperbolic groups. In his 1987 monograph "Hyperbolic groups" Gromov proved that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality, that is, if and only if the Dehn function of this group is equivalent to the function f(n) = n. Gromov's proof was in large part informed by analogy with filling area functions for compact Riemannian manifolds, where the area of a minimal surface bounding a null-homotopic closed curve is bounded in terms of the length of that curve. The study of isoperimetric and Dehn functions quickly developed into a separate major theme in geometric group theory, especially since the growth types of these functions are natural quasi-isometry invariants of finitely presented groups. One of the major results in the subject was obtained by Sapir, Birget and Rips, who showed that most "reasonable" time complexity functions of Turing machines can be realized, up to natural equivalence, as Dehn functions of finitely presented groups. Formal definition Let G = ⟨X ∣ R⟩    (∗) be a finite group presentation, where X is a finite alphabet and where R ⊆ F(X) is a finite set of cyclically reduced words. Area of a relation Let w ∈ F(X) be a relation in G, that is, a freely reduced word such that w = 1 in G. Note that this is equivalent to saying that w belongs to the normal closure of R in F(X), that is, there exists a representation of w as w = u1r1u1⁻¹u2r2u2⁻¹ ⋯ umrmum⁻¹ in F(X)    (♠) where m ≥ 0 and where ri ∈ R±1 and ui ∈ F(X) for i = 1, ..., m. 
For w ∈ F(X) satisfying w = 1 in G, the area of w with respect to (∗), denoted Area(w), is the smallest m ≥ 0 such that there exists a representation (♠) for w as the product in F(X) of m conjugates of elements of R±1. A freely reduced word w ∈ F(X) satisfies w = 1 in G if and only if the loop labeled by w in the presentation complex for G corresponding to (∗) is null-homotopic. This fact can be used to show that Area(w) is the smallest number of 2-cells in a van Kampen diagram over (∗) with boundary cycle labelled by w. Isoperimetric function An isoperimetric function for a finite presentation (∗) is a monotone non-decreasing function f: ℕ → [0, ∞) such that whenever w ∈ F(X) is a freely reduced word satisfying w = 1 in G, then Area(w) ≤ f(|w|), where |w| is the length of the word w. Dehn function Then the Dehn function of a finite presentation (∗) is defined as Dehn(n) = max { Area(w) : w ∈ F(X), w = 1 in G, w freely reduced, |w| ≤ n }. Equivalently, Dehn(n) is the smallest isoperimetric function for (∗), that is, Dehn(n) is an isoperimetric function for (∗) and for any other isoperimetric function f(n) we have Dehn(n) ≤ f(n) for every n ≥ 0. Growth types of functions Because the exact Dehn function usually depends on the presentation, one usually studies its asymptotic growth type as n tends to infinity, which only depends on the group. For two monotone non-decreasing functions f and g, one says that f is dominated by g if there exists C ≥ 1 such that f(n) ≤ C g(Cn + C) + Cn + C for every integer n ≥ 0. Say that f ≈ g if f is dominated by g and g is dominated by f. Then ≈ is an equivalence relation, and Dehn functions and isoperimetric functions are usually studied up to this equivalence relation. Thus for any a, b > 1 we have aⁿ ≈ bⁿ. Similarly, if f(n) is a polynomial of degree d (where d ≥ 1 is a real number) with non-negative coefficients, then f(n) ≈ n^d. Also, 1 ≈ n. If a finite group presentation admits an isoperimetric function f(n) that is equivalent to a linear (respectively, quadratic, cubic, polynomial, exponential, etc.) function in n, the presentation is said to satisfy a linear (respectively, quadratic, cubic, polynomial, exponential, etc.) isoperimetric inequality. Basic properties If G and H are quasi-isometric finitely presented groups and some finite presentation of G has an isoperimetric function f(n), then for any finite presentation of H there is an isoperimetric function equivalent to f(n). In particular, this fact holds for G = H, where the same group is given by two different finite presentations. Consequently, for a finitely presented group the growth type of its Dehn function, in the sense of the above definition, does not depend on the choice of a finite presentation for that group. More generally, if two finitely presented groups are quasi-isometric then their Dehn functions are equivalent. For a finitely presented group G given by a finite presentation (∗) the following conditions are equivalent: G has a recursive Dehn function with respect to (∗). There exists a recursive isoperimetric function f(n) for (∗). The group G has solvable word problem. In particular, this implies that solvability of the word problem is a quasi-isometry invariant for finitely presented groups. Knowing the area Area(w) of a relation w makes it possible to bound, in terms of |w|, not only the number of conjugates of the defining relations in (♠) but the lengths of the conjugating elements ui as well. 
As a consequence, it is known that if a finitely presented group G given by a finite presentation (∗) has computable Dehn function Dehn(n), then the word problem for G is solvable with non-deterministic time complexity Dehn(n) and deterministic time complexity Exp(Dehn(n)). However, in general there is no reasonable bound on the Dehn function of a finitely presented group in terms of the deterministic time complexity of the word problem, and the gap between the two functions can be quite large. Examples For any finite presentation of a finite group G we have Dehn(n) ≈ n. For the closed oriented surface of genus 2, the standard presentation of its fundamental group satisfies Dehn(n) ≤ n and Dehn(n) ≈ n. For every integer k ≥ 2 the free abelian group Z^k has Dehn(n) ≈ n². The Baumslag–Solitar group BS(1,2) has Dehn(n) ≈ 2ⁿ. The 3-dimensional discrete Heisenberg group satisfies a cubic but no quadratic isoperimetric inequality. Higher-dimensional Heisenberg groups H_{2k+1}, where k ≥ 2, satisfy quadratic isoperimetric inequalities. If G is a "Novikov-Boone group", that is, a finitely presented group with unsolvable word problem, then the Dehn function of G grows faster than any recursive function. For the Thompson group F the Dehn function is quadratic, that is, equivalent to n². The so-called Baumslag-Gersten group has a Dehn function growing faster than any fixed iterated tower of exponentials. Specifically, for this group Dehn(n) ≈ exp(exp(exp(...(exp(1))...))) where the number of exponentials is equal to the integral part of log₂(n). Known results A finitely presented group is a word-hyperbolic group if and only if its Dehn function is equivalent to n, that is, if and only if every finite presentation of this group satisfies a linear isoperimetric inequality. Isoperimetric gap: If a finitely presented group satisfies a subquadratic isoperimetric inequality then it is word-hyperbolic. Thus there are no finitely presented groups with Dehn functions equivalent to n^d with d ∈ (1,2). Automatic groups and, more generally, combable groups satisfy quadratic isoperimetric inequalities. A finitely generated nilpotent group has a Dehn function equivalent to n^d where d ≥ 1, and all positive integers d are realized in this way. Moreover, every finitely generated nilpotent group G admits a polynomial isoperimetric inequality of degree c + 1, where c is the nilpotency class of G. The set of real numbers d ≥ 1 such that there exists a finitely presented group with Dehn function equivalent to n^d is dense in the interval [2, ∞). If all asymptotic cones of a finitely presented group are simply connected, then the group satisfies a polynomial isoperimetric inequality. If a finitely presented group satisfies a quadratic isoperimetric inequality, then all asymptotic cones of this group are simply connected. If (M,g) is a closed Riemannian manifold and G = π1(M), then the Dehn function of G is equivalent to the filling area function of the manifold. If G is a group acting properly discontinuously and cocompactly by isometries on a CAT(0) space, then G satisfies a quadratic isoperimetric inequality. In particular, this applies to the case where G is the fundamental group of a closed Riemannian manifold of non-positive sectional curvature (not necessarily constant). The Dehn function of SL(m, Z) is at most exponential for any m ≥ 3. For SL(3,Z) this bound is sharp, and it is known in that case that the Dehn function does not admit a subexponential upper bound. The Dehn functions of SL(m,Z), where m > 4, are quadratic. 
Returning to the special linear groups: the Dehn function of SL(4, Z) has been conjectured by Thurston to be quadratic. This conjecture and, more generally, Gromov's conjecture that lattices in higher rank Lie groups have quadratic Dehn functions have been proved by Leuzinger and Young. Mapping class groups of surfaces of finite type are automatic and satisfy quadratic isoperimetric inequalities. The Dehn functions for the groups Aut(F_k) and Out(F_k) are exponential for every k ≥ 3. Exponential isoperimetric inequalities for Aut(F_k) and Out(F_k) when k ≥ 3 were found by Hatcher and Vogtmann. These bounds are sharp, and the groups Aut(F_k) and Out(F_k) do not satisfy subexponential isoperimetric inequalities, as shown for k = 3 by Bridson and Vogtmann, and for k ≥ 4 by Handel and Mosher. For every automorphism φ of a finitely generated free group F_k, the mapping torus group of φ satisfies a quadratic isoperimetric inequality. Most "reasonable" computable functions that are ≥ n^4 can be realized, up to equivalence, as Dehn functions of finitely presented groups. In particular, if f(n) ≥ n^4 is a superadditive function whose binary representation is computable in time by a Turing machine then f(n) is equivalent to the Dehn function of a finitely presented group. Although one cannot reasonably bound the Dehn function of a group in terms of the complexity of its word problem, Birget, Olʹshanskii, Rips and Sapir obtained the following result, providing a far-reaching generalization of Higman's embedding theorem: The word problem of a finitely generated group is decidable in nondeterministic polynomial time if and only if this group can be embedded into a finitely presented group with a polynomial isoperimetric function. Moreover, every group with the word problem solvable in time T(n) can be embedded into a group with isoperimetric function equivalent to n^2 T(n^2)^4. Generalizations There are several companion notions closely related to the notion of an isoperimetric function. Thus an isodiametric function bounds the smallest diameter (with respect to the simplicial metric where every edge has length one) of a van Kampen diagram for a particular relation w in terms of the length of w. A filling length function bounds the smallest filling length of a van Kampen diagram for a particular relation w in terms of the length of w. Here the filling length of a diagram is the minimum, over all combinatorial null-homotopies of the diagram, of the maximal length of intermediate loops bounding intermediate diagrams along such null-homotopies. The filling length function is closely related to the non-deterministic space complexity of the word problem for finitely presented groups. There are several general inequalities connecting the Dehn function, the optimal isodiametric function and the optimal filling length function, but the precise relationship between them is not yet understood. There are also higher-dimensional generalizations of isoperimetric and Dehn functions. For k ≥ 1 the k-dimensional isoperimetric function of a group bounds the minimal combinatorial volume of (k + 1)-dimensional ball-fillings of k-spheres mapped into a k-connected space on which the group acts properly and cocompactly; the bound is given as a function of the combinatorial volume of the k-sphere. The standard notion of an isoperimetric function corresponds to the case k = 1. Compared to the classical case, only little is known about these higher-dimensional filling functions. 
One chief result is that lattices in higher rank semisimple Lie groups are undistorted in dimensions below the rank, i.e. they satisfy the same filling functions as their associated symmetric space. In his monograph Asymptotic invariants of infinite groups, Gromov proposed a probabilistic or averaged version of the Dehn function and suggested that for many groups averaged Dehn functions should have strictly slower asymptotics than the standard Dehn functions. More precise treatments of the notion of an averaged Dehn function or mean Dehn function were given later by other researchers, who also proved that indeed averaged Dehn functions are subasymptotic to standard Dehn functions in a number of cases (such as nilpotent and abelian groups). A relative version of the notion of an isoperimetric function plays a central role in Osin's approach to relatively hyperbolic groups. Grigorchuk and Ivanov explored several natural generalizations of the Dehn function for group presentations on finitely many generators but with infinitely many defining relations. See also van Kampen diagram Word-hyperbolic group Automatic group Small cancellation theory Geometric group theory Notes Further reading Noel Brady, Tim Riley and Hamish Short. The Geometry of the Word Problem for Finitely Generated Groups. Advanced Courses in Mathematics CRM Barcelona, Birkhäuser, Basel, 2007. Martin R. Bridson. The geometry of the word problem. Invitations to geometry and topology, pp. 29–91, Oxford Graduate Texts in Mathematics, 7, Oxford University Press, Oxford, 2002. External links The Isoperimetric Inequality for SL(n,Z). A September 2008 Workshop at the American Institute of Mathematics. PDF of Bridson's article The geometry of the word problem. Geometric group theory Geometric topology Combinatorics on words
Dehn function
[ "Physics", "Mathematics" ]
3,645
[ "Geometric group theory", "Group actions", "Geometric topology", "Combinatorics", "Topology", "Combinatorics on words", "Symmetry" ]
20,024,409
https://en.wikipedia.org/wiki/Transiting%20Exoplanet%20Survey%20Satellite
Transiting Exoplanet Survey Satellite (TESS) is a space telescope for NASA's Explorer program, designed to search for exoplanets using the transit method in an area 400 times larger than that covered by the Kepler mission. It was launched on 18 April 2018, atop a Falcon 9 launch vehicle and was placed into a highly elliptical 13.70-day orbit around the Earth. The first light image from TESS was taken on 7 August 2018, and released publicly on 17 September 2018. In the two-year primary mission, TESS was expected to detect about 1,250 transiting exoplanets orbiting the targeted stars, and an additional 13,000 orbiting stars not targeted but observed. After the end of the primary mission around 4 July 2020, scientists continued to search its data for more planets, while the extended missions acquire additional data. TESS had identified 7,203 candidate exoplanets, of which 482 had been confirmed. The primary mission objective for TESS was to survey the brightest stars near the Earth for transiting exoplanets over a two-year period. The TESS satellite uses an array of wide-field cameras to perform a survey of 85% of the sky. With TESS, it is possible to study the mass, size, density and orbit of a large cohort of small planets, including a sample of rocky planets in the habitable zones of their host stars. TESS provides prime targets for further characterization by the James Webb Space Telescope (JWST), as well as other large ground-based and space-based telescopes of the future. While previous sky surveys with ground-based telescopes have mainly detected giant exoplanets and the Kepler space telescope has mostly found planets around distant stars that are too faint for characterization, TESS finds many small planets around the nearest stars in the sky. TESS records the nearest and brightest main sequence stars hosting transiting exoplanets, which are the most favorable targets for detailed investigations. Detailed information about planetary systems with hot Jupiters makes it possible to better understand the architecture of such systems. TESS is led by the Massachusetts Institute of Technology (MIT) with seed funding from Google; on 5 April 2013, it was announced that TESS, along with the Neutron Star Interior Composition Explorer (NICER), had been selected by NASA for launch. On 18 July 2019, after the first year of operation, the southern portion of the survey was completed, and the northern survey was started. The primary mission ended with the completion of the northern survey on 4 July 2020, which was followed by the first extended mission. The first extended mission concluded in September 2022 and the spacecraft entered its second extended mission, which should last for another three years. History The concept of TESS was first discussed in 2005 by the Massachusetts Institute of Technology (MIT) and the Smithsonian Astrophysical Observatory (SAO). Work on TESS began during 2006, when a design was developed with private funding from individuals, Google, and The Kavli Foundation. In 2008, MIT proposed that TESS become a full NASA mission and submitted it for the Small Explorer program at Goddard Space Flight Center, but it was not selected. It was resubmitted in 2010 as an Explorer program mission, and was approved in April 2013 as a Medium Explorer mission. TESS passed its critical design review (CDR) in 2015, allowing production of the satellite to begin. While Kepler had cost US$640 million at launch, TESS cost only US$200 million (plus US$87 million for launch). 
The mission will find exoplanets that periodically block part of the light from their host stars, events called transits. TESS will survey 200,000 of the brightest stars near the Sun to search for transiting exoplanets. TESS was launched on 18 April 2018, aboard a SpaceX Falcon 9 launch vehicle. In July 2019, an extended mission covering 2020 to 2022 was approved, and on 3 January 2020, the Transiting Exoplanet Survey Satellite reported the discovery of TOI-700 d, its first potentially habitable Earth-sized planet. Mission overview TESS is designed to carry out the first spaceborne all-sky transiting exoplanet survey. It is equipped with four wide-angle telescopes and associated charge-coupled device (CCD) detectors. Science data are transmitted to Earth every two weeks. Full-frame images with an effective exposure time of two hours are transmitted as well, enabling scientists to search for unexpected transient phenomena, such as the optical counterparts to gamma-ray bursts. TESS also hosts a Guest Investigator program, allowing scientists from other organizations to use TESS for their own research. The resources allocated to Guest programs allow an additional 20,000 celestial bodies to be observed. Orbital dynamics TESS uses a novel highly elliptical orbit around the Earth, with an apogee approximately at the distance of the Moon and a much closer perigee. TESS orbits Earth twice during the time the Moon orbits once, a 2:1 resonance with the Moon. The orbit is expected to remain stable for a minimum of ten years. In order to obtain unobstructed imagery of both the northern and southern hemispheres of the sky, TESS utilizes a 2:1 lunar resonant orbit called P/2, an orbit that has never been used before (although the Interstellar Boundary Explorer (IBEX) uses a similar P/3 orbit). The apogee of this highly elliptical orbit is timed to be positioned approximately 90° away from the position of the Moon to minimize its destabilizing effect. This orbit should remain stable for decades and will keep TESS's cameras in a stable temperature range. The orbit is entirely outside the Van Allen belts to avoid radiation damage to TESS, and most of the orbit is spent far outside the belts. Every 13.70 days, at perigee, TESS downlinks to Earth over a period of approximately three hours the data it has collected during the just-finished orbit. Science objectives TESS's two-year all-sky survey would focus on nearby G-, K-, and M-type stars with apparent magnitudes brighter than magnitude 12. Approximately 500,000 stars were to be studied, including the 1,000 closest red dwarfs across the whole sky, an area 400 times larger than that covered by the Kepler mission. TESS was expected to find more than 3,000 transiting exoplanet candidates, including 500 Earth-sized planets and super-Earths. Of those discoveries, an estimated 20 were expected to be super-Earths located in the habitable zone around a star. The stated goal of the mission was to determine the masses of at least 50 Earth-sized planets (at most 4 times the Earth's radius). Most detected exoplanets are expected to be between 30 and 300 light-years away. The survey was broken up into 26 observation sectors, each sector being 24° × 96°, with an overlap of sectors at the ecliptic poles to allow additional sensitivity toward smaller and longer-period exoplanets in that region of the celestial sphere. The spacecraft will spend two 13.70-day orbits observing each sector, mapping the southern hemisphere of sky in its first year of operation and the northern hemisphere in its second year. 
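The 2:1 lunar-resonant orbit described above can be sanity-checked with a short calculation: half of the Moon's sidereal period gives the quoted ~13.7-day orbital period, and Kepler's third law then yields a semi-major axis of roughly 240,000 km, so that with a perigee of the order of 100,000 km the apogee indeed lies near the Moon's distance. The snippet below is an illustrative editorial sketch; the constants are standard approximate values and the assumed perigee is an external approximation, not a figure taken from this article.

import math

GM_EARTH = 3.986004418e14        # Earth's gravitational parameter, m^3 s^-2
SIDEREAL_MONTH_DAYS = 27.321661  # Moon's sidereal orbital period, days
LUNAR_DISTANCE_KM = 384_400      # mean Earth-Moon distance, km

# A 2:1 resonance means two TESS orbits per lunar sidereal month.
period_days = SIDEREAL_MONTH_DAYS / 2.0
period_s = period_days * 86_400.0

# Kepler's third law: a^3 = GM * T^2 / (4 * pi^2)
a_km = (GM_EARTH * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0) / 1000.0

assumed_perigee_km = 108_000.0   # assumption (approximate published value)
apogee_km = 2.0 * a_km - assumed_perigee_km

print(f"period                   ~ {period_days:.2f} days")   # about 13.66 days
print(f"semi-major axis          ~ {a_km:,.0f} km")
print(f"apogee (assumed perigee) ~ {apogee_km:,.0f} km")
print(f"apogee / lunar distance  ~ {apogee_km / LUNAR_DISTANCE_KM:.2f}")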
The cameras actually take images every 2 seconds, but all the raw images would represent much more data volume than can be stored or downlinked. To deal with this, cutouts around 15,000 selected stars (per orbit) will be coadded over a 2-minute period and saved on board for downlink, while full-frame images will also be coadded over a 30-minute period and saved for downlink. The actual data downlinks will occur every 13.70 days near perigee. This means that over the two years, TESS surveys 85% of the sky, with each portion observed continuously for about 27 days and certain parts covered across multiple runs. The survey was designed so that the area observed essentially continuously over an entire year (351 observation days), which makes up about 5% of the entire sky, encompasses the regions of sky (near the ecliptic poles) that are observable at any time of year with the James Webb Space Telescope (JWST). In October 2019, Breakthrough Listen started a collaboration with scientists from the TESS team to look for signs of advanced extraterrestrial life. Thousands of new planets found by TESS will be scanned for "technosignatures" by Breakthrough Listen partner facilities across the globe. Data from TESS monitoring of stars will also be searched for anomalies. Asteroseismology The TESS team also plans to use a 30-minute observation cadence for full-frame images, which has been noted for imposing a hard Nyquist limit that can be problematic for asteroseismology of stars. Asteroseismology is the science that studies the internal structure of stars by the interpretation of their frequency spectra. Different oscillation modes penetrate to different depths inside the star. The Kepler and PLATO observatories are also intended for asteroseismology. Extended missions During the 27-month First Extended Mission, data collection was slightly changed: a new set of target stars was selected; the number of stars monitored at 2-minute cadence was increased from 15,000 to 20,000 per observing sector; up to 1,000 stars per sector are monitored at a new fast 20-second cadence; the full-frame image cadence was increased from every 30 minutes to every 10 minutes; and the pointings and gaps in coverage are slightly different during the extended mission, with regions near the ecliptic now covered. During the second extended mission, the full-frame image cadence will be further increased from every 10 minutes to every 200 seconds, the number of 2-minute cadence targets reduced to ~8,000 per sector, and the number of 20-second cadence targets increased to ~2,000 per sector. Launch In December 2014, SpaceX was awarded the contract to launch TESS in August 2017, for a total contract value of US$87 million. The spacecraft was originally scheduled to launch on 20 March 2018, but this was pushed back by SpaceX to allow additional time to prepare the launch vehicle and meet NASA launch service requirements. A static fire of the Falcon 9 rocket was completed on 11 April 2018, at approximately 18:30 UTC. The launch was postponed again from 16 April 2018, and TESS was eventually launched on a SpaceX Falcon 9 launch vehicle from the SLC-40 launch site at Cape Canaveral Air Force Station (CCAFS) on 18 April 2018. The Falcon 9 launch sequence included a 149-second burn by the first stage, followed by a 6-minute second-stage burn. Meanwhile, the first-stage booster performed controlled-reentry maneuvers and successfully landed on the autonomous drone ship Of Course I Still Love You. 
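The observing cadences discussed above map directly onto Nyquist limits for frequency analysis: evenly sampled photometry with spacing dt can only recover oscillation frequencies up to 1/(2*dt), which is why the 30-minute full-frame cadence is restrictive for asteroseismology of solar-like oscillators while the shorter cadences are much less so. The following is a small illustrative calculation added editorially, not material from the article.

# Nyquist frequency for evenly sampled data: f_Nyq = 1 / (2 * dt)
CADENCES_SECONDS = {
    "30-minute full-frame images": 1800,
    "10-minute full-frame images": 600,
    "200-second full-frame images": 200,
    "2-minute target cutouts": 120,
    "20-second target cutouts": 20,
}

for name, dt in CADENCES_SECONDS.items():
    f_nyq_uhz = 1.0 / (2.0 * dt) * 1e6   # in micro-hertz
    print(f"{name:30s} Nyquist ~ {f_nyq_uhz:8.1f} microHz")

# Solar-like oscillations in main-sequence stars peak near a few thousand
# micro-hertz, above the ~278 microHz limit of 30-minute sampling but
# comfortably below the ~4167 microHz limit of 2-minute sampling.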
An experimental water landing was performed for the fairing, as part of SpaceX's attempt to develop fairing reusability. After coasting for 35 minutes, the second stage performed a final 54-second burn that placed TESS into a supersynchronous transfer orbit of at an inclination of 28.50°. The second stage released the payload, after which the stage itself was placed in a heliocentric orbit. Spacecraft In 2013, Orbital Sciences Corporation received a four-year, US$75 million contract to build TESS for NASA. TESS uses an Orbital Sciences LEOStar-2 satellite bus, capable of three-axis stabilization using four hydrazine thrusters plus four reaction wheels providing better than three arcsecond fine spacecraft pointing control. Power is provided by two single-axis solar arrays generating 400 watts. A Ka-band dish antenna provides a 100 Mbit/s science downlink. Operational orbit Once injected into the initial orbit by the Falcon 9 second stage, the spacecraft performed four additional independent burns that placed it into a lunar flyby orbit. On 17 May 2018, the spacecraft underwent a gravity assist by the Moon at above the surface, and performed the final period adjustment burn on 30 May 2018. It achieved an orbital period of 13.65 days in the desired 2:1 resonance with the Moon, at 90° phase offset to the Moon at apogee, which is expected to be a stable orbit for at least 20 years, thus requiring very little fuel to maintain. The entire maneuvering phase was expected to take a total of two months, and put the craft in an eccentric orbit () at a 37° inclination. The total delta-v budget for orbit maneuvers was , which is 80% of the mission's total available reserves. If TESS receives an on-target or slightly above nominal orbit insertion by the Falcon 9, a theoretical mission duration in excess of 15 years would be possible from a consumables standpoint. Project timeline The first light image was made on 7 August 2018, and released publicly on September 17, 2018. TESS completed its commissioning phase at the end of July and the science phase officially started on 25 July 2018. For the first two years of operation TESS monitored both the southern (year 1) and northern (year 2) celestial hemispheres. During its nominal mission TESS tiles the sky in 26 separate segments, with a 27.4-day observing period per segment. The first southern survey was completed in July 2019. The first northern survey finished in July 2020. A 27-month First Extended mission ran until September 2022. A second extended mission will run approximately additional three years. Instruments The sole instrument on TESS is a package of four wide-field-of-view charge-coupled device (CCD) cameras. Each camera features four low-noise, low-power 4 megapixel CCDs created by MIT Lincoln Laboratory. The four CCDs are arranged in a 2 x 2 detector array for a total of 16 megapixels per camera and 16 CCDs for the entire instrument. Each camera has a 24° × 24° field of view, a effective pupil diameter, a lens assembly with seven optical elements, and a bandpass range of 600 to 1000 nm. The TESS lenses have a combined field of view of 24° × 96° (2300 deg2, around 5% of the entire sky) and a focal ratio of f/1.4. The ensquared energy, the fraction of the total energy of the point-spread function that is within a square of the given dimensions centered on the peak, is 50% within 15 × 15 μm and 90% within 60 × 60 μm. 
For comparison, Kepler's primary mission only covered an area of the sky measuring 105 deg2, though the K2 extension has covered many such areas for shorter times. The four telescopes in the assembly each have a 10.5-cm diameter lens entrance aperture, with a f/1.4 focal ratio, with a total of seven lenses in the optical train. Ground operations The TESS ground system is divided between eight sites around the United States. These include Space Network and the Jet Propulsion Laboratory's NASA Deep Space Network for command and telemetry, Orbital ATK's Mission Operations Center, Massachusetts Institute of Technology's Payload Operations Center, the Ames Research Center's Science Processing Operations Center, The Goddard Space Flight Center's Flight Dynamics Facility, the Smithsonian Astrophysical Observatory's TESS Science Office, and the Mikulski Archive for Space Telescopes (MAST). Stable light source for tests One of the issues facing the development of this type of instrument is having an ultra-stable light source to test on. In 2015, a group at the University of Geneva made a breakthrough in the development of a stable light source. While this instrument was created to support ESA's CHEOPS exoplanet observatory, one was also ordered by the TESS program. Although both observatories plan to look at bright nearby stars using the transit method, CHEOPS is focused on collecting more data on known exoplanets, including those found by TESS and other survey missions. Results Current mission results as of 18 November 2022: 273 confirmed exoplanets discovered by TESS, with 4079 candidate-planets that are still awaiting confirmation or rejection as false positive by the scientific community. TESS team partners include the Massachusetts Institute of Technology, the Kavli Institute for Astrophysics and Space Research, NASA's Goddard Space Flight Center, MIT's Lincoln Laboratory, Orbital ATK, NASA's Ames Research Center, the Harvard-Smithsonian Center for Astrophysics, and the Space Telescope Science Institute. C/2018 N1 TESS started science operations on 25 July 2018. The first announced finding from the mission was the observation of comet C/2018 N1. Pi Mensae The first exoplanet detection announcement was on 18 September 2018, announcing the discovery of a super-Earth in the Pi Mensae system orbiting the star every 6 days, adding to a known Super-Jupiter orbiting the same star every 5.9 years. LHS 3844 b On 20 September 2018, the discovery of an ultra-short period planet was announced, slightly larger than Earth, orbiting the red dwarf LHS 3844. With an orbital period of 11 hours, LHS 3844 b is one of the planets with the shortest known period. It orbits its star at a distance of . LHS 3844 b is also one of the closest known exoplanets to Earth, at a distance of 14.9 parsecs. HD 202772 Ab TESS's third discovered exoplanet is HD 202772 Ab, a hot Jupiter orbiting the brighter component of the visual binary star HD 202772, located in the constellation Capricornus at a distance of about 480 light-years from Earth. The discovery was announced on 5 October 2018. HD 202772 Ab orbits its host star once every 3.3 days. It is an inflated hot Jupiter, and a rare example of hot Jupiters around evolved stars. It is also one of the most strongly irradiated planets known, with an equilibrium temperature of . HD 21749 On 15 April 2019, TESS' first discovery of an earth-sized planet was reported. 
HD 21749 c is a planet described as "likely rocky", with about 89% of Earth's diameter, and it orbits the K-type main-sequence star HD 21749 in about 8 days. The planet's surface temperature is estimated to be as high as 427 °C. Both known planets in the system, HD 21749 b and HD 21749 c, were discovered by TESS. HD 21749 c represents the 10th confirmed planet discovery by TESS. MAST Data collaboration Data on exoplanet candidates continue to be made available at MAST. As of 20 April 2019, the total number of candidates on the list was up to 335. Besides candidates identified as previously discovered exoplanets, this list also includes ten newly discovered exoplanets, including the five mentioned above. Forty-four of the candidates from Sector 1 in this list were selected for follow-up observations by the TESS Follow-Up Program (TFOP), which aims to aid the discovery of 50 planets with a planetary radius of R < 4 R_E through repeated observations. The list of candidate exoplanets continues to grow as additional results are being published on the same MAST page. Changing to the Northern Sky On 18 July 2019, after the southern portion of the survey was completed in the first year of operation, TESS turned its cameras to the northern sky. By this time it had discovered 21 planets and had over 850 candidate exoplanets. DS Tucanae Ab On 23 July 2019, the discovery of the young exoplanet DS Tucanae Ab (HD 222259 Ab) in the ~45 Myr old Tucana-Horologium young moving group was published in a paper. TESS first observed the planet in November 2018 and it was confirmed in March 2019. The young planet is larger than Neptune, but smaller than Saturn. The system is bright enough to follow up with radial velocity and transmission spectroscopy. ESA's CHEOPS mission will observe the transits of the young exoplanet DS Tuc Ab. A team of scientists got 23.4 orbits approved in the first Announcement of Opportunity (AO-1) for the CHEOPS Guest Observers (GO) Programme to characterize the planet. Gliese 357 On 31 July 2019, the discovery of exoplanets around the M-type dwarf star Gliese 357, at a distance of 31 light-years from Earth, was announced. TESS directly observed the transit of GJ 357 b, a hot Earth with an equilibrium temperature of around 250 °C. Follow-up ground observations and analyses of historic data led to the discovery of GJ 357 c and GJ 357 d. While GJ 357 b and GJ 357 c are too close to the star to be habitable, GJ 357 d resides at the outer edge of the star's habitable zone and may possess habitable conditions if it has an atmosphere. With a mass of at least 6.1 M_E it is classified as a super-Earth. Count of exoplanets in 2019 As of September 2019, over 1000 TESS Objects of Interest (TOI) have been listed in the public database, at least 29 of which are confirmed planets, about 20 of which fall within the stated goal of the mission of Earth-sized planets (<4 Earth radii). ASASSN-19bt On 26 September 2019, it was announced that TESS observed its first tidal disruption event (TDE), called ASASSN-19bt. The TESS data revealed that ASASSN-19bt began to brighten on 21 January 2019, ~8.3 days before the discovery by ASAS-SN. TOI-700 On 6 January 2020, NASA reported the discovery of TOI-700 d, the first Earth-sized exoplanet in the habitable zone discovered by TESS. The exoplanet orbits the star TOI-700, 100 light-years away in the Dorado constellation. The TOI-700 system contains two other planets: TOI-700 b, another Earth-sized planet, and TOI-700 c, a super-Earth. 
This system is unique in that the larger planet is found between the two smaller planets. It is currently unknown how this arrangement of planets came to be, whether these planets formed in this order or if the larger planet migrated to its current orbit. On the same day, NASA announced that astronomers used TESS data to show that Alpha Draconis is an eclipsing binary star. TOI-1338 The same day, the discovery of TOI-1338 b was announced, the first circumbinary planet discovered with TESS. TOI-1338 b is around 6.9 times larger than Earth, or between the sizes of Neptune and Saturn. It lies in a system 1,300 light-years away in the constellation Pictor. The stars in the system make an eclipsing binary, which occurs when the stellar companions circle each other in our plane of view. One is about 10% more massive than the Sun, while the other is cooler, dimmer and only one-third the Sun's mass. TOI-1338 b's transits are irregular, between every 93 and 95 days, and vary in depth and duration thanks to the orbital motion of its stars. TESS only sees the transits crossing the larger star — the transits of the smaller star are too faint to detect. Although the planet transits irregularly, its orbit is stable for at least the next 10 million years. The orbit's angle to us, however, changes enough that the planet transit will cease after November 2023 and resume eight years later. HD 108236 On 25 January 2021, a team led by astrochemist Tansu Daylan, with the help of two high school interns as part of the Science Research Mentoring Program at Harvard & MIT, discovered and validated four extrasolar planets — composed of one super-Earth and three sub-Neptunes — hosted by the bright, nearby, Sun-like star HD 108236. The two high schoolers, 18-year-old Jasmine Wright of Bedford High School in Bedford, Massachusetts, and 16-year-old Kartik Pinglé of Cambridge Rindge and Latin School, of Cambridge, Massachusetts, are reported to be the youngest individuals in history to discover a planet, let alone four. TIC 168789840 On 27 January 2021, several news agencies reported that a team using TESS had determined that TIC 168789840, a stellar system with six stars in three binary pairs, was oriented so astronomers could observe the eclipses of all the stars. It is the first six-star system of its kind. Count of exoplanets in 2021 In March 2021, NASA announced that TESS found 2200 exoplanet candidates. By the end of 2021, TESS had discovered over 5000 candidates. TOI-1231 b On 17 May 2021, an international team of scientists, including researchers from NASA's Jet Propulsion Laboratory and the University of New Mexico, reported the space telescope's first discovery of a Neptune-sized exoplanet inside a habitable zone, TOI-1231 b, confirmed by a ground-based telescope. The planet orbits a nearby red dwarf star, 90 light-years away in the Vela constellation. Exoplanet search programs The TESS Objects of Interest (TOI) are assigned by the TESS team and the Community TOIs (CTOI) are assigned by independent researchers. The primary mission of TESS produced 2241 TOIs. Other small and large collaborations of researchers try to confirm the TOIs and CTOIs, or try to find new CTOIs. 
Some of the collaborations with names that are searching exclusively for TESS planets are: The citizen science project Planet Hunters: TESS (PHT) TESS Hunt for Young and Maturing Exoplanets (THYME) The TESS-Keck Survey (TKS) TESS Giants Transiting Giants (TESS GTG) Collaborations with currently a smaller amount of discovery papers: Warm gIaNts with tEss collaboration (WINE) The TESS Grand Unified Hot Jupiter Survey The TESS community is also producing software and programs to help validate the planet candidates, such as TRICERATOPS, DAVE, Lightkurve, Eleanor and Planet Patrol. In popular culture TESS is featured accurately in the 2018 film Clara. See also ARIEL, 2029 exoplanet atmospheres observatory CHEOPS, 2019 exoplanet observatory CoRoT, 2006–2012 exoplanet observatory Kepler, 2009–2018 exoplanet observatory MOST, 2003–2019 asteroseismology and exoplanet observatory PLATO, 2026 exoplanet observatory SWEEPS, 2006 Hubble Space Telescope exoplanet survey TOI-2119 List of transiting exoplanets References Further reading External links TESS twitter account by NASA TESS website by NASA Goddard TESS website by Massachusetts Institute of Technology (MIT) TESS discovered exoplanets by MIT TESS website by the Kavli Foundation Planet Hunters TESS: anyone can help classifying TESS data TESS listing of Southern Sky panoramas (July 18, 2019) TESS launch closeup, atop Falcon 9 rocket. APOD (April 21, 2018) Interactive 3D simulation of TESS's 2:1 lunar resonant orbit Space probes launched in 2018 Space telescopes Explorers Program NASA space probes NASA programs Exoplanet search projects SpaceX payloads contracted by NASA 2018 establishments in Florida Asteroseismology
Transiting Exoplanet Survey Satellite
[ "Physics", "Astronomy" ]
5,607
[ "Physical phenomena", "Exoplanet search projects", "Astrophysics", "Space telescopes", "Asteroseismology", "Astronomy projects", "Stellar phenomena" ]
18,839,594
https://en.wikipedia.org/wiki/Test%20strategy
A test strategy is an outline that describes the testing approach of the software development cycle. The purpose of a test strategy is to provide a rational deduction from organizational, high-level objectives to actual test activities to meet those objectives from a quality assurance perspective. The creation and documentation of a test strategy should be done in a systematic way to ensure that all objectives are fully covered and understood by all stakeholders. It should also frequently be reviewed, challenged and updated as the organization and the product evolve over time. Furthermore, a test strategy should also aim to align different stakeholders of quality assurance in terms of terminology, test and integration levels, roles and responsibilities, traceability, planning of resources, etc. Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which types of testing are to be performed, and which entry and exit criteria apply. They are created based on development design documents. System design documents are primarily used, and occasionally conceptual design documents may be referred to. Design documents describe the functionality of the software to be enabled in the upcoming release. For every stage of development design, a corresponding test strategy should be created to test the new feature sets. Test levels The test strategy describes the test level to be performed. There are primarily three levels of testing: unit testing, integration testing, and system testing. In most software development organizations, the developers are responsible for unit testing. Individual testers or test teams are responsible for integration and system testing. Roles and responsibilities The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly defined at a project level in this section. This may not have names associated, but the role must be very clearly defined. Testing strategies should be reviewed by the developers. They should also be reviewed by leads for all levels of testing to make sure the coverage is complete, yet not overlapping. Both the testing manager and the development managers should approve the test strategy before testing can begin. Environment requirements Environment requirements are an important part of the test strategy. It describes what operating systems are used for testing. It also clearly informs the necessary OS patch levels and security updates required. For example: a certain test plan may require Windows 8.1 to be installed as a prerequisite at testing. Testing tools There are two methods used in executing test cases: manual and automated. Depending on the nature of the testing, it is usually the case that a combination of manual and automated testing is the best testing method. Risks and mitigation Any risks that will affect the testing process must be listed along with the mitigation. By documenting a risk, its occurrence can be anticipated well ahead of time. Proactive action may be taken to prevent it from occurring, or to mitigate its damage. Sample risks are dependency of completion of coding done by sub-contractors, or capability of testing tools. Test schedule A test plan should make an estimation of how long it will take to complete the testing phase. There are many requirements to complete testing phases. First, testers have to execute all test cases at least once. Furthermore, if a defect was found, the developers will need to fix the problem. 
The testers should then re-test the failed test case until it is functioning correctly. Last but not least, the testers need to conduct regression testing towards the end of the cycle to make sure the developers did not accidentally break parts of the software while fixing another part. This can occur on test cases that were previously functioning properly. The test schedule should also document the number of testers available for testing. If possible, assign test cases to each tester. It is often difficult to make an accurate estimate of the test schedule since the testing phase involves many uncertainties. Planners should take into account the extra time needed to accommodate contingent issues. One way to make this approximation is to look at the time needed by the previous releases of the software. If the software is new, multiplying the initial testing schedule approximation by two is a good way to start. Regression test approach When a particular problem is identified, the programs are debugged and the fix is applied to the program. To make sure that the fix works, the program will be tested again for that criterion. Regression tests will reduce the likelihood that one fix creates some other problems in that program or in any other interface. So, a set of related test cases may have to be repeated, to test whether anything else is affected by a particular fix. How this is going to be carried out must be elaborated in this section. Consider different testing levels when selecting regression test cases. Unit, integration and system test cases are good candidates. Select cases that have a direct relationship with the fix and also include a few business-critical cases that prove basic business scenarios still work. Remember also that non-functional testing (security, performance, usability) plays an important role in proving business continuation. In some companies, whenever there is a fix in one unit, all unit test cases for that unit will be repeated. Test groups From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is a functional group; anything related to report generation is a functional group. In the same way, we have to identify the test groups based on the functionality aspect. Test priorities Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be of lesser functional importance, or even cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated. These may be mapped to the test groups also. Test status collections and reporting When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know where the project stands, the inputs from the individual testers must come to the test leader. This will include what test cases are executed, how long it took, how many test cases passed, how many failed, and how many are not executable. Also, how often the project collects the status is to be clearly stated. Some projects will have a practice of collecting the status on a daily or weekly basis. 
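As an illustration of how the test groups, test priorities and status collection described above might be recorded in practice, the sketch below tags each test case with a group and a priority and produces a simple status roll-up for the test leader. The structure and field names are hypothetical, shown only to make the ideas concrete; they are not part of any prescribed standard.

from collections import Counter
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    group: str      # functional test group, e.g. "ticket booking"
    priority: int   # 1 = release-blocking, 3 = cosmetic
    status: str     # "passed", "failed", "not executable", "not run"

# Hypothetical data for the railway reservation example used above.
cases = [
    TestCase("TC-001", "ticket booking", 1, "passed"),
    TestCase("TC-002", "ticket booking", 1, "failed"),
    TestCase("TC-003", "report generation", 2, "passed"),
    TestCase("TC-004", "report generation", 3, "not run"),
]

def regression_candidates(all_cases, fixed_group):
    """Cases related to the fixed area, plus all release-blocking cases."""
    return [c for c in all_cases if c.group == fixed_group or c.priority == 1]

def status_summary(all_cases):
    """Counts that can be rolled up into a daily or weekly status report."""
    return Counter(c.status for c in all_cases)

print([c.case_id for c in regression_candidates(cases, "ticket booking")])
print(dict(status_summary(cases)))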
Test records maintenance When the test cases are executed, it is important to keep track of the execution details such as when it is executed, who did it, how long it took, what is the result etc. This data must be available to the test leader and the project manager, along with all the team members, in a central location. This may be stored in a specific directory in a central server and the document must say clearly about the locations and the directories. The naming convention for the documents and files must also be mentioned. Requirements traceability matrix Ideally, the software must completely satisfy the set of requirements. From design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source codes, unit test cases, integration test cases and the system test cases. In a requirements traceability matrix, the rows will have the requirements. The columns represent each document. Intersecting cells are marked when a document addresses a particular requirement with information related to the requirement ID in the document. Ideally, if every requirement is addressed in every single document, all the individual cells have valid section ids or names filled in, then we know that every requirement is addressed. If any cells are empty, it represents that a requirement has not been correctly addressed. Test summary The senior management may like to have test summary on a weekly or monthly basis. If the project is very critical, they may need it even on daily basis. This section must address what kind of test summary reports will be produced for the senior management along with the frequency. The test strategy must give a clear vision of what the testing team will do for the whole project for the entire duration. This document can be presented to the client, if needed. The person who prepares this document must be functionally strong in the product domain, with very good experience, as this is the document that is going to drive the entire team for the testing activities. Test strategy must be clearly explained to the testing team members right at the beginning of the project. See also Software testing Test case Test plan Risk-based testing References Ammann, Paul and Offutt, Jeff. Introduction to software testing, New York: Cambridge University Press, 2008 Dasso, Aristides. Verification, validation and testing in software engineering, Hershey, PA: Idea Group Pub., 2007 Software testing
Test strategy
[ "Engineering" ]
1,772
[ "Software engineering", "Software testing" ]
18,841,364
https://en.wikipedia.org/wiki/IP%20in%20IP
IP in IP is an IP tunneling protocol that encapsulates one IP packet in another IP packet. To encapsulate an IP packet in another IP packet, an outer header is added with Source IP, the entry point of the tunnel, and Destination IP, the exit point of the tunnel. While doing this, the inner packet is unmodified (except the TTL field, which is decremented). The Don't Fragment and the Type of Service fields should be copied to the outer packet. If the packet size, including the outer header, is greater than the Path MTU, the encapsulator fragments the packet. The decapsulator will reassemble the packet. IP packet encapsulated in IP packet The outer IP header has the following fields: the Version and Header Length fields as for any IPv4 header; the Type of Service copied from the inner header; the Total Length covering the entire encapsulated datagram; the Don't Fragment flag copied from the inner header; a Time to Live chosen by the encapsulator; the Protocol field set to 4 (IP in IP); a recomputed Header Checksum; the Source Address set to the tunnel entry point; and the Destination Address set to the tunnel exit point. A minimal construction sketch in Python is given below. See also Internet Control Message Protocol (ICMP) Generic Routing Encapsulation (GRE) 6in4 4in6 RFC 1853 - IP in IP Tunneling References Internet Protocol Tunneling protocols
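The following sketch illustrates the encapsulation rules above in pure Python: it builds an outer IPv4 header whose Protocol field is 4 (IP in IP), copies the inner packet's Type of Service and Don't Fragment bit into the outer header, decrements the inner TTL, and sets the outer source and destination to the tunnel endpoints. It is an editorial illustration under simplifying assumptions (no IP options, no fragmentation handling, hypothetical addresses, and the inner header checksum is not recomputed here), not a complete encapsulator.

import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard 16-bit one's-complement checksum of an IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap an IPv4 packet in an outer IPv4 header with protocol 4 (IP in IP)."""
    inner = bytearray(inner_packet)
    tos = inner[1]                          # copy Type of Service outward
    flags_frag = struct.unpack("!H", bytes(inner[6:8]))[0]
    df_only = flags_frag & 0x4000           # copy only the Don't Fragment bit
    inner[8] = max(inner[8] - 1, 0)         # decrement the inner TTL

    outer = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,                       # version 4, header length 5 words
        tos,
        20 + len(inner),                    # total length of the outer datagram
        0,                                  # identification (illustrative)
        df_only,                            # flags / fragment offset
        64,                                 # outer TTL (illustrative choice)
        4,                                  # protocol 4 = IP in IP
        0,                                  # checksum placeholder
        socket.inet_aton(tunnel_src),       # tunnel entry point
        socket.inet_aton(tunnel_dst),       # tunnel exit point
    )
    checksum = ipv4_checksum(outer)
    return outer[:10] + struct.pack("!H", checksum) + outer[12:] + bytes(inner)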
IP in IP
[ "Engineering" ]
205
[ "Computer networks engineering", "Tunneling protocols" ]
18,842,002
https://en.wikipedia.org/wiki/Shampoo
Shampoo () is a hair care product, typically in the form of a viscous liquid, that is formulated to be used for cleaning (scalp) hair. Less commonly, it is available in solid bar format. ("Dry shampoo" is a separate product.) Shampoo is used by applying it to wet hair, massaging the product in the hair, roots and scalp, and then rinsing it out. Some users may follow a shampooing with the use of hair conditioner. Shampoo is typically used to remove the unwanted build-up of sebum (natural oils) in the hair without stripping out so much as to make hair unmanageable. Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water. The sulfate ingredient acts as a surfactant, trapping oils and other contaminants, similarly to soap. Shampoos are marketed to people with hair. There are also shampoos intended for animals that may contain insecticides or other medications to treat skin conditions or parasite infestations such as fleas. History Indian subcontinent In the Indian subcontinent, a variety of herbs and their extracts have been used as shampoos since ancient times. The first origin of shampoo came from the Indus Valley Civilization. A very effective early shampoo was made by boiling Sapindus with dried Indian gooseberry (amla) and a selection of other herbs, using the strained extract. Sapindus, also known as soapberries or soapnuts, a tropical tree widespread in India, is called ksuna (Sanskrit: क्षुण) in ancient Indian texts and its fruit pulp contains saponins which are a natural surfactant. The extract of soapberries creates a lather which Indian texts called phenaka (Sanskrit: फेनक). It leaves the hair soft, shiny and manageable. Other products used for hair cleansing were shikakai (Acacia concinna), hibiscus flowers, ritha (Sapindus mukorossi) and arappu (Albizzia amara). Guru Nanak, the founder and the first Guru of Sikhism, made references to soapberry tree and soap in the 16th century. Cleansing the hair and body massage (champu) during one's daily bath was an indulgence of early colonial traders in India. When they returned to Europe, they introduced the newly learned habits, including the hair treatment they called shampoo. The word shampoo entered the English language from the Indian subcontinent during the colonial era. It dated to 1762 and was derived from the Hindi word (, ), itself derived from the Sanskrit root (), which means 'to press, knead, or soothe'. Europe Sake Dean Mahomed, an Indian traveller, surgeon, and entrepreneur, is credited with introducing the practice of shampoo or "shampooing" to Britain. In 1814, Mahomed, with his Irish wife Jane Daly, opened the first commercial "shampooing" vapour masseur bath in England, in Brighton. He described the treatment in a local paper as "The Indian Medicated Vapour Bath (type of Turkish bath), a cure to many diseases and giving full relief when everything fails; particularly Rheumatic and paralytic, gout, stiff joints, old sprains, lame legs, aches and pains in the joints". This medical work featured testimonies from his patients, as well as the details of the treatment made him famous. The book acted as a marketing tool for his unique baths in Brighton and capitalised on the early 19th-century trend for seaside spa treatments. During the early stages of shampoo in Europe, English hair stylists boiled shaved soap in water and added herbs to give the hair shine and fragrance. 
Commercially made shampoo was available from the turn of the 20th century. A 1914 advertisement for Canthrox Shampoo in American Magazine showed young women at camp washing their hair with Canthrox in a lake; magazine advertisements in 1914 by Rexall featured Harmony Hair Beautifier and Shampoo (Victoria Sherrow, Encyclopedia of Hair: A Cultural History, 2007, s.v. "Advertising", p. 7). In 1900, German perfumer and hair-stylist Josef Wilhelm Rausch developed the first liquid hair washing soap and named it "Champooing" in Emmishofen, Switzerland. Later, in 1919, J.W. Rausch developed an antiseptic chamomile shampooing with a pH of 8.5. In 1927, liquid shampoo was improved for mass production by German inventor Hans Schwarzkopf in Berlin; his name became a shampoo brand sold in Europe. Originally, soap and shampoo were very similar products; both contained the same naturally derived surfactants, a type of detergent. Modern shampoo as it is known today was first introduced in the 1930s with Drene, the first shampoo using synthetic surfactants instead of soap. Indonesia Early shampoos used in Indonesia were made from the husk and straw (merang) of rice. The husks and straws were burned into ash, and the ashes (which have alkaline properties) were mixed with water to form lather. The ashes and lather were scrubbed into the hair and rinsed out, leaving the hair clean, but very dry. Afterwards, coconut oil was applied to the hair in order to moisturize it. Philippines Filipinos have been traditionally using gugo before commercial shampoos were sold in stores. The shampoo is obtained by soaking and rubbing the bark of the vine Gugo (Entada phaseoloides), producing a lather that cleanses the scalp effectively. Gugo is also used as an ingredient in hair tonics. Pre-Columbian North America Certain Native American tribes used extracts from North American plants as hair shampoo; for example the Costanoans of present-day coastal California used extracts from the coastal woodfern, Dryopteris expansa. Pre-Columbian South America Before quinoa can be eaten the saponin must be washed out from the grain prior to cooking. Pre-Columbian Andean civilizations used this soapy by-product as a shampoo. Types Shampoos can be classified into four main categories: deep cleansing shampoos, sometimes marketed under descriptions such as volumizing, clarifying, balancing, oil control, or thickening, which have a slightly higher amount of detergent and create a lot of foam; conditioning shampoos, sometimes marketed under descriptions such as moisturizing, 2-in-1, smoothing, anti-frizz, color care, and hydrating, which contain an ingredient like silicone or polyquaternium-10 to smooth the hair; baby shampoos, sometimes marketed as tear-free, which contain less detergent and produce less foam; and anti-dandruff shampoos, which are medicated to reduce dandruff. Composition Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water to form a thick, viscous liquid. Other essential ingredients include salt (sodium chloride), which is used to adjust the viscosity, a preservative and fragrance. 
Other ingredients are generally included in shampoo formulations to maximize the following qualities: pleasing foam ease of rinsing minimal skin and eye irritation thick or creamy feeling pleasant fragrance low toxicity good biodegradability slight acidity (pH less than 7) no damage to hair repair of damage already done to hair Many shampoos are pearlescent. This effect is achieved by the addition of tiny flakes of suitable materials, e.g. glycol distearate, chemically derived from stearic acid, which may have either animal or vegetable origins. Glycol distearate is a wax. Many shampoos also include silicone to provide conditioning benefits. Commonly used ingredients Ammonium chloride Ammonium lauryl sulfate Glycol Sodium laureth sulfate is derived from coconut oils and is used to soften water and create a lather. Hypromellose cellulose ethers are widely used as thickeners, rheology modifiers, emulsifiers and dispersants in Shampoo products. Sodium lauroamphoacetate is naturally derived from coconut oils and is used as a cleanser and counter-irritant. This is the ingredient that makes the product tear-free. Polysorbate 20 (abbreviated as PEG(20)) is a mild glycol-based surfactant that is used to solubilize fragrance oils and essential oils, meaning it causes liquid to spread across and penetrate the surface of a solid (i.e. hair). Polysorbate 80 (abbreviated as PEG(80)) is a glycol used to emulsify (or disperse) oils in water so the oils do not float on top. PEG-150 distearate is a simple thickener. Citric acid is produced biochemically and is used as an antioxidant to preserve the oils in the product. While it is a severe eye-irritant, the sodium lauroamphoacetate counteracts that property. Citric acid is used to adjust the pH down to approximately 5.5. It is a fairly weak acid which makes the adjustment easier. Shampoos usually are at pH 5.5 because at slightly acidic pH, the scales on a hair follicle lie flat, making the hair feel smooth and look shiny. It also has a small amount of preservative action. Citric acid, as opposed to any other acid, will prevent bacterial growth. Quaternium-15 is used as a bacterial and fungicidal preservative. Polyquaternium-10 acts as the conditioning ingredient, providing moisture and fullness to the hair. Di-PPG-2 myreth-10 adipate is a water-dispersible emollient that forms clear solutions with surfactant systems. Chloromethylisothiazolinone, or CMIT, is a powerful biocide and preservative. Benefit claims regarding ingredients In the United States, the Food and Drug Administration (FDA) mandates that shampoo containers accurately list ingredients on the products container. The government further regulates what shampoo manufacturers can and cannot claim as any associated benefit. Shampoo producers often use these regulations to challenge marketing claims made by competitors, helping to enforce these regulations. While the claims may be substantiated, however, the testing methods and details of such claims are not as straightforward. For example, many products are purported to protect hair from damage due to ultraviolet radiation. While the ingredient responsible for this protection does block UV, it is not often present in a high enough concentration to be effective. The North American Hair Research Society has a program to certify functional claims based on third-party testing. Shampoos made for treating medical conditions such as dandruff or itchy scalp are regulated as OTC drugs in the US marketplace. 
In the European Union, there is a requirement for the anti-dandruff claim to be substantiated as with any other advertising claim, but it is not considered to be a medical problem. Health risks A number of contact allergens are used as ingredients in shampoos, and contact allergy caused by shampoos is well known. Patch testing can identify ingredients to which patients are allergic, after which a physician can help the patient find a shampoo that is free of the ingredient to which they are allergic. The US bans 11 ingredients from shampoos, Canada bans 587, and the EU bans 1328. Specialized shampoos Dandruff Cosmetic companies have developed shampoos specifically for those who have dandruff. These contain fungicides such as ketoconazole, zinc pyrithione and selenium disulfide, which reduce loose dander by killing fungi like Malassezia furfur''. Coal tar and salicylate derivatives are often used as well. Alternatives to medicated shampoos are available for people who wish to avoid synthetic fungicides. Such shampoos often use tea tree oil, essential oils or herbal extracts. Colored hair Many companies have also developed color-protection shampoos suitable for colored hair; some of these shampoos contain gentle cleansers according to their manufacturers. Shampoos for color-treated hair are a type of moisturizing shampoo. Baby Shampoo for infants and young children is formulated so that it is less irritating and usually less prone to produce a stinging or burning sensation if it were to get into the eyes. For example, Johnson's Baby Shampoo advertises under the premise of "No More Tears". This is accomplished by one or more of the following formulation strategies. dilution, in case the product comes in contact with eyes after running off the top of the head with minimal further dilution adjusting pH to that of non-stress tears, approximately 7, which may be a higher pH than that of shampoos which are pH adjusted for skin or hair effects, and lower than that of shampoo made of soap Use of surfactants which, alone or in combination, are less irritating than those used in other shampoos (e.g. Sodium lauroamphoacetate) use of nonionic surfactants of the form of polyethoxylated synthetic glycolipids and polyethoxylated synthetic monoglycerides, which counteract the eye sting of other surfactants without producing the anesthetizing effect of alkyl polyethoxylates or alkylphenol polyethoxylates The distinction in 4 above does not completely surmount the controversy over the use of shampoo ingredients to mitigate eye sting produced by other ingredients, or the use of the products so formulated. The considerations in 3 and 4 frequently result in a much greater multiplicity of surfactants being used in individual baby shampoos than in other shampoos, and the detergency or foaming of such products may be compromised thereby. The monoanionic sulfonated surfactants and viscosity-increasing or foam stabilizing alkanolamides seen so frequently in other shampoos are much less common in the better baby shampoos. Sulfate-free shampoos Sulfate-free shampoos are composed of natural ingredients and free from both sodium lauryl sulfate and sodium laureth sulfate. These shampoos use alternative surfactants to cleanse the hair. Animal Shampoo intended for animals may contain insecticides or other medications for treatment of skin conditions or parasite infestations such as fleas or mange. These must never be used on humans. 
While some human shampoos may be harmful when used on animals, any human haircare products that contain active ingredients or drugs (such as zinc in anti-dandruff shampoos) are potentially toxic when ingested by animals. Special care must be taken not to use those products on pets. Cats are at particular risk due to their instinctive method of grooming their fur with their tongues. Shampoos that are especially designed to be used on pets, commonly dogs and cats, are normally intended to do more than just clean the pet's coat or skin. Most of these shampoos contain ingredients which act different and are meant to treat a skin condition or an allergy or to fight against fleas. The main ingredients contained by pet shampoos can be grouped in insecticidals, antiseborrheic, antibacterials, antifungals, emollients, emulsifiers and humectants. Whereas some of these ingredients may be efficient in treating some conditions, pet owners are recommended to use them according to their veterinarian's indications because many of them cannot be used on cats or can harm the pet if it is misused. Generally, insecticidal pet shampoos contain pyrethrin, pyrethroids (such as permethrin and which may not be used on cats) and carbaryl. These ingredients are mostly found in shampoos that are meant to fight against parasite infestations. Antifungal shampoos are used on pets with yeast or ringworm infections. These might contain ingredients such as miconazole, chlorhexidine, providone iodine, ketoconazole or selenium sulfide (which cannot be used on cats). Bacterial infections in pets are sometimes treated with antibacterial shampoos. They commonly contain benzoyl peroxide, chlorhexidine, povidone iodine, triclosan, ethyl lactate, or sulfur. Antipruritic shampoos are intended to provide relief of itching due to conditions such as atopy and other allergies. These usually contain colloidal oatmeal, hydrocortisone, Aloe vera, pramoxine hydrochloride, menthol, diphenhydramine, sulfur or salicylic acid. These ingredients are aimed to reduce the inflammation, cure the condition and ease the symptoms at the same time while providing comfort to the pet. Antiseborrheic shampoos are those especially designed for pets with scales or those with excessive oily coats. These shampoos are made of sulfur, salicylic acid, refined tar (which cannot be used on cats), selenium sulfide (cannot be used on cats) and benzoyl peroxide. All these are meant to treat or prevent seborrhea oleosa, which is a condition characterized by excess oils. Dry scales can be prevented and treated with shampoos that contain sulfur or salicylic acid and which can be used on both cats and dogs. Emollient shampoos are efficient in adding oils to the skin and relieving the symptoms of a dry and itchy skin. They usually contain oils such as almond, corn, cottonseed, coconut, olive, peanut, Persia, safflower, sesame, lanolin, mineral or paraffin oil. The emollient shampoos are typically used with emulsifiers as they help distributing the emollients. These include ingredients such as cetyl alcohol, laureth-5, lecithin, PEG-4 dilaurate, stearic acid, stearyl alcohol, carboxylic acid, lactic acid, urea, sodium lactate, propylene glycol, glycerin, or polyvinylpyrrolidone. Although some of the pet shampoos are highly effective, some others may be less effective for some condition than another. Yet, although natural pet shampoos exist, it has been brought to attention that some of these might cause irritation to the skin of the pet. 
Natural ingredients that might be potential allergens for some pets include eucalyptus, lemon or orange extracts and tea tree oil. By contrast, oatmeal appears to be one of the most widely tolerated ingredients found in pet shampoos. Most ingredients found in a shampoo meant to be used on animals are safe for the pet to ingest; this matters because there is a high likelihood that pets will lick their coats, especially in the case of cats. Pet shampoos which include fragrances, deodorants or colors may harm the skin of the pet by causing inflammation or irritation. Shampoos that do not contain any unnatural additives are known as hypoallergenic shampoos and are increasing in popularity. Solid shampoo bars Solid shampoos or shampoo bars can either be soap-based or use other plant-based surfactants, such as sodium cocoyl isethionate or sodium coco-sulfate combined with oils and waxes. Soap-based shampoo bars are high in pH (alkaline) compared to human hair and scalps, which are slightly acidic. Alkaline pH increases the friction of the hair fibres, which may cause damage to the hair cuticle, making it feel rough and drying out the scalp. Jelly and gel Stiff, non-pourable clear gels to be squeezed from a tube were once popular forms of shampoo, and can be produced by increasing a shampoo's viscosity. This type of shampoo cannot be spilled, but unlike a solid, it can still be lost down the drain by sliding off wet skin or hair. Paste and cream Shampoos in the form of pastes or creams were formerly marketed in jars or tubes. The contents were wet but not completely dissolved. They would apply faster than solids and dissolve quickly. Antibacterial Antibacterial shampoos are often used in veterinary medicine for various conditions, as well as in humans before some surgical procedures. No Poo Movement Closely associated with environmentalism, the "no poo" movement consists of people rejecting the societal norm of frequent shampoo use. Some adherents of the no poo movement use baking soda or vinegar to wash their hair, while others use diluted honey. Further methods include the use of raw eggs (potentially mixed with salt water), rye flour, or chickpea flour dissolved in water. Other people use nothing or rinse their hair only with conditioner. Theory In the 1970s, ads featuring Farrah Fawcett and Christie Brinkley asserted that it was unhealthy not to shampoo several times a week. This mindset is reinforced by the greasy feeling of the scalp after a day or two of not shampooing. Using shampoo every day removes sebum, the oil produced by the scalp. This causes the sebaceous glands to produce oil at a higher rate, to compensate for what is lost during shampooing. According to Michelle Hanjani, a dermatologist at Columbia University, a gradual reduction in shampoo use will cause the sebum glands to produce at a slower rate, resulting in a less greasy scalp. Although this approach might seem unappealing to some individuals, many people try alternative shampooing techniques like baking soda and vinegar in order to avoid ingredients used in many shampoos that make hair greasy over time. Whereas the use of baking soda for hair cleansing has been associated with hair damage and skin irritation, likely due to its high pH value and exfoliating properties, honey, egg, rye flour, and chickpea flour hair washes seem gentler for long-term use.
See also Soap Dry shampoo Baby shampoo Hair conditioner Exfoliant No Poo References External links Drug delivery devices Hairdressing Indian inventions Personal hygiene products Toiletry
Shampoo
[ "Chemistry" ]
4,757
[ "Pharmacology", "Drug delivery devices" ]
18,842,308
https://en.wikipedia.org/wiki/Stream
A stream is a continuous body of surface water flowing within the bed and banks of a channel. Depending on its location or certain characteristics, a stream may be referred to by a variety of local or regional names. Long, large streams are usually called rivers, while smaller, less voluminous and more intermittent streams are known as streamlets, brooks or creeks. The flow of a stream is controlled by three inputs – surface runoff (from precipitation or meltwater), daylighted subterranean water, and surfaced groundwater (spring water). The surface and subterranean water are highly variable between periods of rainfall. Groundwater, on the other hand, has a relatively constant input and is controlled more by long-term patterns of precipitation. The stream encompasses surface, subsurface and groundwater fluxes that respond to geological, geomorphological, hydrological and biotic controls. Streams are important as conduits in the water cycle, instruments in groundwater recharge, and corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone. Given the status of the ongoing Holocene extinction, streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology and is a core element of environmental geography. Types Brook A brook is a stream smaller than a creek, especially one that is fed by a spring or seep. It is usually small and easily forded. A brook is characterised by its shallowness. Creek A creek or crick: In Australia, Canada, New Zealand and the United States, a (narrow) stream that is smaller than a river; a minor tributary of a river; a brook. Sometimes navigable by water craft and may be intermittent. In the United Kingdom, India, and parts of Maryland and New England, a tidal inlet, typically in a salt marsh or mangrove swamp, or between enclosed and drained former salt marshes or swamps (e.g. Portsbridge Creek separating Portsea Island from the mainland). In these cases, the "stream" is the tidal stream, the course of the seawater through the creek channel at low and high tide. In hydrography, a gut is a small creek; this is seen in proper names in eastern North America from the Mid-Atlantic states (for instance, The Gut in Pennsylvania, Ash Gut in Delaware, and other streams) down into the Caribbean (for instance, Guinea Gut, Fish Bay Gut, Cob Gut, Battery Gut and other rivers and streams in the United States Virgin Islands, in Jamaica (Sandy Gut, Bens Gut River, White Gut River), and in many streams and creeks of the Dutch Caribbean). River A river is a large natural stream that is much wider and deeper than a creek and not easily fordable, and may be a navigable waterway. Runnel The linear channel between the parallel ridges or bars on a shoreline beach or river floodplain, or between a bar and the shore. Also called a swale. Tributary A tributary is a contributory stream to a larger stream, or a stream which does not reach a static body of water such as a lake, bay or ocean but joins another river (a parent river). Sometimes also called a branch or fork. Distributary A distributary, or a distributary channel, is a stream that branches off and flows away from a main stream channel, and the phenomenon is known as river bifurcation.
Distributaries are common features of river deltas, and are often found where a valleyed stream enters wide flatlands or approaches the coastal plains around a lake or an ocean. They can also occur inland, on alluvial fans, or where a tributary stream bifurcates as it nears its confluence with a larger stream. Common terms for individual river distributaries in English-speaking countries are arm and channel. Other names There are a number of regional names for a stream. Northern America Branch is used to name streams in Maryland and Virginia. Creek is common throughout the United States, as well as Australia. Falls is also used to name streams in Maryland, for streams/rivers which have waterfalls on them, even if such falls only have a small vertical drop. Little Gunpowder Falls and the Jones Falls are actually rivers named in this manner, unique to Maryland. Kill in New York, Pennsylvania, Delaware, and New Jersey comes from a Dutch language word meaning "riverbed" or "water channel", and can also be used for the UK meaning of 'creek'. Run in Ohio, Maryland, Michigan, New Jersey, Pennsylvania, Virginia, or West Virginia can be the name of a stream. Run in Florida is the name given to streams coming out of small natural springs. River is used for streams from larger springs like the Silver River and Rainbow River. Stream and brook are used in Midwestern states, Mid-Atlantic states, and New England. United Kingdom Allt is used in the Scottish Highlands. Beck is used in areas between Lincolnshire and Cumbria in areas which were once occupied by the Danes and Norwegians. Bourne or winterbourne is used in the chalk downland of southern England for ephemeral rivers. When permanent, they are chalk streams. Brook. Burn is used in Scotland and North East England. Gill or ghyll is seen in the north of England and Kent and Surrey influenced by Old Norse. The variant "ghyll" is used in the Lake District and appears to have been an invention of William Wordsworth. Nant is used in Wales. Rivulet is a term encountered in Victorian era publications. Syke is used in the Scottish Lowlands and Cumbria for a seasonal stream. Related terminology Bar A shoal that develops in a stream as sediment is deposited as the current slows or is impeded by wave action at the confluence. Bifurcation A fork into two or more streams. Channel A depression created by constant erosion that carries the stream's flow. Confluence The point at which the two streams merge. If the two tributaries are of approximately equal size, the confluence may be called a fork. Drainage basin (also known as a watershed in the United States) The area of land where water flows into a stream. A large drainage basin such as the Amazon River contains many smaller drainage basins. Floodplain Lands adjacent to the stream that are subject to flooding when a stream overflows its banks. Headwaters or source The part of a stream or river proximate to its source. The word is most commonly used in the plural where there is no single point source. Knickpoint The point on a stream's profile where a sudden change in stream gradient occurs. Mouth The point at which the stream discharges, possibly via an estuary or delta, into a static body of water such as a lake or ocean. Pool A segment where the water is deeper and slower moving. Rapids A turbulent, fast-flowing stretch of a stream or river. Riffle A segment where the flow is shallower and more turbulent. River A large natural stream, which may be a waterway. 
Run A somewhat smoothly flowing segment of the stream. Spring The point at which a stream emerges from an underground course through unconsolidated sediments or through caves. A stream can, especially with caves, flow aboveground for part of its course, and underground for part of its course. Stream bed The bottom of a stream. Stream corridor Stream, its floodplains, and the transitional upland fringe (adapted from Stream Corridor Restoration: Principles, Processes, and Practices). Streamflow The water moving through a stream channel. Stream gauge A site along the route of a stream or river, used for reference marking or water monitoring. Thalweg The river's longitudinal section, or the line joining the deepest point in the channel at each stage from source to mouth. Watercourse The channel followed by a stream (a flowing body of water) or the stream itself. In the UK, some aspects of criminal law, such as the Rivers (Prevention of Pollution) Act 1951, specify that a watercourse includes those rivers which are dry for part of the year. In some jurisdictions, owners of land over which the water flows may have the legal right to use or retain some or much of that water. This right may extend to estuaries, rivers, streams, anabranches and canals. Waterfall or cascade The fall of water where the stream goes over a sudden drop called a knickpoint; some knickpoints are formed by erosion when water flows over an especially resistant stratum, followed by one less so. The stream expends kinetic energy in "trying" to eliminate the knickpoint. Wetted perimeter The line on which the stream's surface meets the channel walls. Sources A stream's source depends on the surrounding landscape and its function within larger river networks. While perennial and intermittent streams are typically supplied by smaller upstream waters and groundwater, headwater and ephemeral streams often derive most of their water from precipitation in the form of rain and snow. Most of this precipitated water re-enters the atmosphere by evaporation from soil and water bodies, or by the evapotranspiration of plants. Some of the water proceeds to sink into the earth by infiltration and becomes groundwater, much of which eventually enters streams. Some precipitated water is temporarily locked up in snow fields and glaciers, to be released later by evaporation or melting. The rest of the water flows off the land as runoff, the proportion of which varies according to many factors, such as wind, humidity, vegetation, rock types, and relief. This runoff starts as a thin film called sheet wash, combined with a network of tiny rills, together constituting sheet runoff; when this water is concentrated in a channel, a stream has its birth. Some creeks may start from ponds or lakes. The streams typically derive most of their water from rain and snow precipitation. Most of this water re-enters the atmosphere either by evaporation from soil and water bodies, or by plant evapotranspiration. By infiltration some of the water sinks into the earth and becomes groundwater, much of which eventually enters streams. Some precipitated water is temporarily held up by freezing in snow fields and glaciers. The remaining water flows off the land as runoff; the proportion of this varies depending on several factors, such as climate, temperature, vegetation, types of rock, and relief.
This runoff begins as a thin layer called sheet wash, combined with a network of tiny rills, which together form the sheet runoff; when this water is focused in a channel, a stream is born. Some rivers and streams may begin from lakes or ponds. The primary sources of fresh water are precipitation and mountain snowmelt. Rivers typically originate in highlands, where meltwater and runoff gradually erode channels that feed lakes or larger rivers. Rivers usually flow downslope from their source and erode their beds along the way until they reach their base level of erosion. Researchers have proposed data-based methods for defining where a river begins; one example comes from researchers at the University of Chinese Academy of Sciences. Because the river source is an essential marker of the river-forming environment, it needs an objective, straightforward and effective method of identification. A calculation model of the river source catchment area based on the critical support flow (CSD) was proposed, and the relationship between the CSA and the CSD was established through a minimum catchment area. Applying the model to two basins in Tibet (the Helongqu and the White Water, a tributary of the Niyang River), the results show a critical support flow (Qc) of 0.0028 m3/s for the Helongqu and 0.0085 m3/s for the White Water. The critical support flow can vary with hydroclimatic conditions: Qc in the wetter basin (White Water) is larger than in the semi-arid basin (Helongqu). The proposed critical support flow (CSD) concept and model method can be used to determine the hydrographic indicators of river sources in geographically complex areas, and can also reflect the impact of hydroclimatic change on river recharge in different regions. The source of a river or stream (its point of origin) can consist of lakes, swamps, springs, or glaciers. A typical river has several tributaries; each of these may be made up of several other smaller tributaries, so that together this stream and all its tributaries are called a drainage network. Although each tributary has its own source, international practice is to take the point farthest from the river mouth as the source of the entire river system; the length measured from that point is taken as the length of the whole river system, and that furthest starting point is conventionally taken as the source of the whole river system. For example, the origin of the Nile River is the confluence of the White Nile and the Blue Nile, but the source of the whole river system is in its upper reaches. If there is no specific designation, "length of the Nile" refers to the "river length of the Nile system", rather than to the length of the Nile river from the point where it is formed by a confluence of tributaries. The Nile's source is often cited as Lake Victoria, but the lake has significant feeder rivers. The Kagera River, which flows into Lake Victoria near the Tanzanian town of Bukoba, is the longest feeder, though sources do not agree on which is the Kagera's longest tributary and therefore the Nile's most remote source. Characteristics Ranking To qualify as a stream, a body of water must be either recurring or perennial. Recurring (intermittent) streams have water in the channel for at least part of the year. A stream of the first order is a stream which does not have any other recurring or perennial stream feeding into it.
When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream. (A short illustrative sketch of this ordering rule is given at the end of this entry.) Gradient The gradient of a stream is a critical factor in determining its character and is entirely determined by its base level of erosion. The base level of erosion is the point at which the stream either enters the ocean, a lake or pond, or enters a stretch in which it has a much lower gradient, and may be specifically applied to any particular stretch of a stream. In geological terms, the stream will erode down through its bed to achieve the base level of erosion throughout its course. If this base level is low, then the stream will rapidly cut through underlying strata and have a steep gradient, and if the base level is relatively high, then the stream will form a flood plain and meander. Profile Typically, streams are said to have a particular elevation profile, beginning with steep gradients, no flood plain, and little shifting of channels, eventually evolving into streams with low gradients, wide flood plains, and extensive meanders. The initial stage is sometimes termed a "young" or "immature" stream, and the later state a "mature" or "old" stream. Meander Meanders are looping changes of direction of a stream caused by the erosion and deposition of bank materials. These are typically serpentine in form. Typically, over time the meanders gradually migrate downstream. If some resistant material slows or stops the downstream movement of a meander, a stream may erode through the neck between two legs of a meander to become temporarily straighter, leaving behind an arc-shaped body of water termed an oxbow lake or bayou. A flood may also cause a meander to be cut through in this way. Stream load The stream load is defined as the solid matter carried by a stream. Streams can carry sediment, or alluvium. The amount of load a stream can carry (capacity) and the largest object it can carry (competence) both depend on the velocity of the stream. Classification Perennial or not A perennial stream is one which flows continuously all year. Some perennial streams may have continuous flow only in segments of their stream bed year round during years of normal rainfall. Blue-line streams are perennial streams and are marked on topographic maps with a solid blue line. The word "perennial", attested from the 1640s with the meaning "evergreen", comes from Latin perennis, "lasting through the whole year", from per "through" plus annus "year". The botanical sense of a plant that lives for several years is attested from the 1670s, and the figurative sense of "enduring, eternal" dates from around 1750; see biennial for the same vowel shift. Perennial streams have one or more of these characteristics: Direct observation or compelling evidence suggests that there is no interruption in the flow at the ground surface. The existence of one or more specific features of perennial streams, including: Riverbed forms, for example, riffles, pools, runs, gravel bars, other depositional characteristics, bed armor layer. Riverbank erosion and/or polishing. Indications of waterborne debris and sediment transport. Defined river or stream bed and banks. The catchment area exceeds . A USGS regression applied to the VHD data layer for the probability of intermittent flow. The existence of aquatic organisms that require uninterrupted flow.
As shown by bank seepage, springs, or other indicators, flow is supported mainly by groundwater discharge (baseflow). The channel lies in highly permeable material, especially at stratigraphic boundaries, although groundwater in such strata can also decline at times. Existence of native aquatic organisms which require undisturbed flow for survival. The surrounding topography exhibits features of being formed by fluvial processes. Absence of such characteristics supports classifying a stream as intermittent, "showing interruptions in time or space". Ephemeral stream Generally, streams that flow only during and immediately after precipitation are termed ephemeral. There is no clear demarcation between surface runoff and an ephemeral stream, and some ephemeral streams can be classed as intermittent: flow all but disappears in the normal course of the seasons, but ample flow (backups) restores the stream's presence. Such circumstances have been documented where stream beds have opened up a path into mines or other underground chambers. According to official U.S. definitions, the channels of intermittent streams are well-defined, as opposed to ephemeral streams, which may or may not have a defined channel, and rely mainly on storm runoff, as their aquatic bed is above the water table. An ephemeral stream does not have the biological, hydrological, and physical characteristics of a continuous or intermittent stream. The same non-perennial channel might change characteristics from intermittent to ephemeral over its course. Intermittent or seasonal stream Washes can fill up quickly during rains, and there may be a sudden torrent of water after a thunderstorm begins upstream, such as during monsoonal conditions. In the United States, an intermittent or seasonal stream is one that only flows for part of the year and is marked on topographic maps with a line of blue dashes and dots. A wash, desert wash, or arroyo is normally a dry streambed in the deserts of the American Southwest, which flows after sufficient rainfall. In Italy, an intermittent stream is termed a torrent. In full flood the stream may or may not be "torrential" in the dramatic sense of the word, but there will be one or more seasons in which the flow is reduced to a trickle or less. Typically torrents have Apennine rather than Alpine sources, and in the summer they are fed by little precipitation and no melting snow. In this case the maximum discharge will be during the spring and autumn. An intermittent stream can also be called a winterbourne in Britain, a wadi in the Arabic-speaking world, or torrente or rambla (the latter of Arabic origin) in Spain and Latin America. In Australia, an intermittent stream is usually called a creek and marked on topographic maps with a solid blue line. Consequential or not There are five generic classifications: Consequent streams are streams whose course is a direct consequence of the original slope of the surface upon which they developed, i.e., streams that follow the slope of the land over which they originally formed. Subsequent streams are streams whose course has been determined by selective headward erosion along weak strata. These streams have generally developed after the original stream. Subsequent streams developed independently of the original relief of the land and generally follow paths determined by the weak rock belts.
Resequent streams are streams whose course follows the original relief, but at a lower level than the original slope (e.g., flowing down a course determined by the underlying strata in the same direction). These streams develop later and are generally a tributary to a subsequent stream. Obsequent streams are streams flowing in the opposite direction of the consequent drainage. Insequent streams have an almost random drainage, often forming dendritic patterns. These are typically tributaries and have developed by headward erosion on a horizontally stratified belt or on homogeneous rocks. These streams follow courses that apparently were not controlled by the original slope of the surface, its structure or the type of rock. According to the groundwater underneath Gaining: A stream or reach of a stream that receives water from groundwater. Losing: A stream or reach of a stream which shows a net loss of water to groundwater or evaporation. Isolated: A stream or channel that neither supplies water to nor removes water from the saturated zone. Perched: A losing or isolated stream that is separated from the underlying groundwater by an unsaturated (vadose) zone. Indicators of a perennial stream Benthic macroinvertebrates "Macroinvertebrate" refers to easily seen invertebrates, larger than 0.5 mm, found in stream and river bottoms. Macroinvertebrates are larval stages of most aquatic insects and their presence is a good indicator that the stream is perennial. Larvae of caddisflies, mayflies, stoneflies, and damselflies require a continuous aquatic habitat until they reach maturity. Crayfish and other crustaceans, snails, bivalves (clams), and aquatic worms also indicate the stream is perennial. These require a persistent aquatic environment for survival. Vertebrates Fish and amphibians are secondary indicators in assessment of a perennial stream because some fish and amphibians can inhabit areas without a persistent water regime. When assessing for fish, all available habitat should be assessed: pools, riffles, root clumps and other obstructions. Fish will seek cover if alerted to human presence, but should be easily observed in perennial streams. Amphibians also indicate a perennial stream and include tadpoles, frogs, salamanders, and newts. These amphibians can be found in stream channels, along stream banks, and even under rocks. Frogs and tadpoles usually inhabit shallow and slow moving waters near the sides of stream banks. Frogs will typically jump into water when alerted to human presence.
This is evidence that iron-oxidizing bacteria are present, indicating persistent expression of oxygen-depleted ground water. In a forested area, leaf and needle litter in the stream channel is an additional indicator. Accumulation of leaf litter does not occur in perennial streams since such material is continuously flushed. In the adjacent overbank of a perennial stream, fine sediment may cling to riparian plant stems and tree trunks. Organic debris drift lines or piles may be found within the active overbank area after recent high flow. Importance Streams, headwaters, and streams flowing only part of the year provide many benefits upstream and downstream. They defend against floods, remove contaminants, recycle nutrients that are potentially dangerous as well as provide food and habitat for many forms of fish. Such streams also play a vital role in preserving our drinking water quality and supply, ensuring a steady flow of water to surface waters and helping to restore deep aquifers. Clean drinking water Flood and erosion protection Groundwater recharge Pollution reduction Wildlife habitat Economic importance in fishing, hunting, manufacturing and agriculture. Drainage basins The extent of land basin drained by a stream is termed its drainage basin (also known in North America as the watershed and, in British English, as a catchment). A basin may also be composed of smaller basins. For instance, the Continental Divide in North America divides the mainly easterly-draining Atlantic Ocean and Arctic Ocean basins from the largely westerly-flowing Pacific Ocean basin. The Atlantic Ocean basin, however, may be further subdivided into the Atlantic Ocean and Gulf of Mexico drainages. (This delineation is termed the Eastern Continental Divide.) Similarly, the Gulf of Mexico basin may be divided into the Mississippi River basin and several smaller basins, such as the Tombigbee River basin. Continuing in this vein, a component of the Mississippi River basin is the Ohio River basin, which in turn includes the Kentucky River basin, and so forth. Crossings Stream crossings are where streams are crossed by roads, pipelines, railways, or any other thing which might restrict the flow of the stream in ordinary or flood conditions. Any structure over or in a stream which results in limitations on the movement of fish or other ecological elements may be an issue. See also Aqueduct (water supply) Environmental flow Fluvial sediment processes Head cut Playfair's Law River ecosystem Rock-cut basin Tidal stream generator Winterbourne, a stream that flows only in winter References Further reading Nile Basin Initiative. 2011. Archived from the original on 2 September 2010. Retrieved 1 February 2011. Cheng Haining, Liu Shaoyuan. Discussion on criteria for the determination of sources of large rivers [J]. Qinghai Land Survey 2009, 06:24–28. External links Glossary of stream-related terms, StreamNet Bodies of water Fluvial landforms Geomorphology Hydrology Rivers
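The stream-ordering rule described under "Ranking" above is straightforward to compute for a network represented as a tree of confluences. The following short Python sketch is purely illustrative and not part of the original article; the nested-list representation of the network is an assumption chosen for brevity. A headwater with no feeders is first order; two streams of equal order meeting raise the order by one; a lower-order stream joining a higher-order one leaves the order unchanged.

def stream_order(tributaries):
    # `tributaries` is a list of upstream branches, each itself a list;
    # an empty list represents a headwater (first-order stream).
    if not tributaries:
        return 1
    orders = [stream_order(t) for t in tributaries]
    top = max(orders)
    # two or more streams of the same highest order raise the order by one;
    # a lower-order stream joining a higher-order stream does not change it
    return top + 1 if orders.count(top) >= 2 else top

# Two first-order headwaters meet (forming order 2), then a first-order
# stream joins downstream: the order remains 2, as described above.
network = [[[], []], []]
print(stream_order(network))   # -> 2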
Stream
[ "Chemistry", "Engineering", "Environmental_science" ]
5,514
[ "Hydrology", "Environmental engineering" ]
18,842,511
https://en.wikipedia.org/wiki/Klincewicz%20method
In thermodynamic modelling, the Klincewicz method is a predictive method based both on group contributions and on a correlation with some basic molecular properties. The method estimates the critical temperature, the critical pressure, and the critical volume of pure components. It is named after Karen Klincewicz Gleason, who developed it in 1984 in collaboration with Robert C. Reid. Model description As a group-contribution method, the Klincewicz method correlates some structural information of a chemical molecule with the critical data. The structural information used consists of small functional groups, which are assumed to have no interactions. This assumption makes it possible to calculate the thermodynamic properties directly from the sums of the group contributions. The correlation variant does not even use these functional groups; only the molecular weight and the number of atoms are used as molecular descriptors. The prediction of the critical temperature relies on the knowledge of the normal boiling point because the method only predicts the relation between the normal boiling point and the critical temperature, not the critical temperature directly. The critical volume and pressure, however, are predicted directly. Model quality The quality of the Klincewicz method is not superior to that of older methods; in particular, the method of Ambrose gives somewhat better results, as stated by the original authors and by Reid et al. The advantage of the Klincewicz method is that it is less complex. The quality and complexity of the Klincewicz method are comparable to those of the Lydersen method from 1955, which has been used widely in chemical engineering. The aspect in which the Klincewicz method is unique and useful is its alternative equations, in which only very basic molecular data such as the molecular weight and the atom count are used. Deviation diagrams The diagrams show estimated critical data of hydrocarbons together with experimental data. An estimation would be perfect if all data points lay directly on the diagonal line. Only the simple correlation of the Klincewicz method with the molecular weight and the atom count has been used in this example. Equations Klincewicz published two sets of equations. The first uses contributions of 35 different groups. These group-contribution-based equations give somewhat better results than the very simple equations based only on correlations with the molecular weight and the atom count. Group-contribution-based equations Equations based on correlation with molecular weight and atom count only Group contributions The group XCX is used to take the pairwise interaction of halogens connected to a single carbon into account. Its contribution has to be added once for two halogens but three times for three halogens (interactions between the halogens 1 and 2, 1 and 3, and 2 and 3). Example calculations Example calculation for acetone with group contributions (using the normal boiling point Tb = 329.250 K). Example calculation for acetone with molecular weight and atom count only Used molecular weight: 58.080 g/mol Used atom count: 10 For comparison, experimental values for Tc, Pc and Vc are 508.1 K, 47.0 bar and 209 cm3/mol, respectively. References Thermodynamic models
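To make the structure of the two estimation routes described above concrete, the sketch below shows how a Klincewicz-type critical-temperature estimate is assembled: a linear correlation in the normal boiling point and molecular weight, optionally refined by a sum of group contributions. This is an illustration only; the coefficients A, B, C and the group values are deliberate placeholders, not the published Klincewicz–Reid constants, which should be taken from the original 1984 paper or from Reid et al.

# Illustrative skeleton of a Klincewicz-type estimate (placeholder coefficients).
# A, B, C and GROUP_TC are NOT the published values; substitute the constants
# from Klincewicz and Reid (1984) before using this for real estimates.
A, B, C = 0.0, 0.0, 1.0              # placeholder correlation constants
GROUP_TC = {"CH3": 0.0, "C=O": 0.0}  # placeholder group contributions (K)

def tc_from_correlation(molecular_weight, tb_kelvin):
    # Simple variant: uses only molecular weight and the normal boiling point.
    return A + B * molecular_weight + C * tb_kelvin

def tc_from_groups(group_counts, molecular_weight, tb_kelvin):
    # Group-contribution variant: adds the summed contributions of the
    # functional groups present in the molecule (counts given per group).
    delta = sum(GROUP_TC[g] * n for g, n in group_counts.items())
    return A + B * molecular_weight + C * tb_kelvin + delta

# Acetone (CH3-CO-CH3): two CH3 groups and one carbonyl, Tb = 329.25 K, M = 58.08 g/mol
print(tc_from_groups({"CH3": 2, "C=O": 1}, 58.08, 329.25))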
Klincewicz method
[ "Physics", "Chemistry" ]
624
[ "Thermodynamic models", "Thermodynamics" ]
18,844,051
https://en.wikipedia.org/wiki/Synthetic%20genetic%20array
Synthetic genetic array analysis (SGA) is a high-throughput technique for exploring synthetic lethal and synthetic sick genetic interactions (SSL). SGA allows for the systematic construction of double mutants using a combination of recombinant genetic techniques, mating and selection steps. Using SGA methodology a query gene deletion mutant can be crossed to an entire genome deletion set to identify any SSL interactions, yielding functional information of the query gene and the genes it interacts with. A large-scale application of SGA in which ~130 query genes were crossed to the set of ~5000 viable deletion mutants in yeast revealed a genetic network containing ~1000 genes and ~4000 SSL interactions. The results of this study showed that genes with similar function tend to interact with one another and genes with similar patterns of genetic interactions often encode products that tend to work in the same pathway or complex. Synthetic Genetic Array analysis was initially developed using the model organism S. cerevisiae. This method has since been extended to cover 30% of the S. cerevisiae genome. Methodology has since been developed to allow SGA analysis in S.pombe and E. coli. Background Synthetic genetic array analysis was initially developed by Tong et al. in 2001 and has since been used by many groups working in a wide range of biomedical fields. SGA utilizes the entire genome yeast knock-out set created by the yeast genome deletion project. Procedure Synthetic genetic array analysis is generally conducted using colony arrays on petriplates at standard densities (96, 384, 768, 1536). To perform a SGA analysis in S.cerevisiae, the query gene deletion is crossed systematically with a deletion mutant array (DMA) containing every viable knockout ORF of the yeast genome (currently 4786 strains). The resulting diploids are then sporulated by transferring to a media containing reduced nitrogen. The haploid progeny are then put through a series of selection platings and incubations to select for double mutants. The double mutants are screened for SSL interactions visually or using imaging software by assessing the size of the resulting colonies. Robotics Due to the large number of precise replication steps in SGA analysis, robots are widely used to perform the colony manipulations. There are a few systems specifically designed for SGA analysis, which greatly decrease the time to analyse a query gene. Generally these have a series of pins which are used to transfer cells to and from plates, with one system utilizing disposable pads of pins to eliminate washing cycles. Computer programs can be used to analyze the colony sizes from images of the plates thus automating the SGA scoring and chemical-genetics profiling. Steps for a yeast high content genome-wide genetic screening system (SGA-road map) There are six major components: Mutant collection Material and tools for handling the mutants Image analysis system Automatic quantification and scoring system Confirmation approaches Data analysis tools See also Tetrad (genetics) Yeast two-hybrid Synthetic lethality Synthetic viability References Genetics Microarrays
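As a rough illustration of the automated scoring step mentioned above, the sketch below computes a simple multiplicative-model interaction score from normalized colony sizes. This is one common way of quantifying synthetic sick or lethal effects, not necessarily the exact score used in any particular SGA pipeline; the normalization to a wild-type reference and the -0.2 threshold are assumptions made for the example.

def interaction_score(double_size, query_size, array_size, wild_type_size):
    # Normalize colony sizes to wild type to obtain relative fitness values;
    # under a multiplicative null model the expected double-mutant fitness
    # is the product of the two single-mutant fitnesses.
    f_query = query_size / wild_type_size
    f_array = array_size / wild_type_size
    f_double = double_size / wild_type_size
    expected = f_query * f_array
    return f_double - expected          # strongly negative => sick/lethal tendency

# A double mutant much smaller than expected suggests a synthetic sick/lethal pair.
eps = interaction_score(double_size=40, query_size=180, array_size=170, wild_type_size=200)
print(eps, "candidate SSL" if eps < -0.2 else "no strong interaction")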
Synthetic genetic array
[ "Chemistry", "Materials_science", "Biology" ]
635
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Genetics", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
18,844,117
https://en.wikipedia.org/wiki/Russian%20stove
The Russian stove is a type of masonry stove that first appeared in the 15th century or earlier. These stoves combine the functions of a traditional stove, oven, and fireplace into a single unit, and serve a broad range of purposes, including cooking (boiling, baking, and smoking), drying plants and mushrooms, providing interior heating and ventilation, bathing, and providing a warm place to sleep (many units include a sleeping berth atop the stove). They can be found in traditional Russian, Ukrainian, Romanian, and Belarusian households. Such stoves burn only firewood. Design A Russian stove is designed to retain heat for long periods of time. This is achieved by channeling the smoke and hot air produced by combustion through a complex labyrinth of passages, warming the bricks from which the stove is constructed. A brick flue in the attic, sometimes with a chamber for smoking food, is required to slow down the cooling of the stove. The Russian stove is usually in the centre of the log hut (izba). The builders of Russian stoves are referred to as pechniki, "stovemakers". Good stovemakers always had a high status among the population. A badly built Russian stove may be very difficult to repair, bake unevenly, smoke, or retain heat poorly. There are many designs for the Russian stove. For example, there is a variant with two hearths (one of the hearths is used mainly for fast cooking, the other mainly for heating in winter). Early Russian culture also made use of a tiled cocklestove. Uses Cooking The stove was, and still is, used for cooking, and it has had a strong influence on the taste of Russian cuisine. Typical dishes baked in the stove include pancakes and pies. The porridge or the pancakes prepared in such a stove may differ in taste from the same meal prepared on a modern stove or range. The characteristic process of cooking in the Russian stove is slow simmering: holding dishes for a long period of time at a steady temperature. Foods that are believed to acquire a distinctive character from being prepared in a Russian stove include baked milk, pastila candies, mushrooms cooked in sour cream, or even a simple potato. Bread is put in and taken out from the stove using a special wooden paddle on a long shank. Cast iron pots of a special shape, called chugun, are handled in the oven with a special tool, a long wooden handle with a two-pronged metal "grabber". Bathing As well as warming and cooking, Russian stoves were used for bathing. Once the stove became hot, the burning wood was removed, and cast iron containers were put into the stove and filled with water. That allowed people to bathe inside of the stove. A grown man can easily fit inside, and during World War II some people escaped the Nazis by hiding in the stoves. Heated sleeping area Besides its use for domestic heating, in winter people may sleep on top of the stove to keep warm: the large thermal mass (a proper Russian stove weighs about two tonnes) and layered design (in many variants the hot flue is separated from the outer brick shell with a layer of sand or pebbles) ensure that the outer surface of the stove is safe to touch. In former times the stove was used to treat winter diseases by warming a sick person inside of it. In Russian culture Because of the harsh winter, the Russian stove was a major element of Russian life and consequently it often appears in folklore, in particular in Russian fairy tales. The bogatyr Ilya Muromets, unable to walk, could only lie on a Russian stove for 33 years, until he was miraculously healed by two pilgrims.
Emelya from "At the Pike's Behest" was so reluctant to leave it that he simply rode on it. Baba Yaga from fairy tales baked lost children in her stove. In fairy tales the stove may receive human characteristics. For example, in "The Magic Swan Geese" a girl meets a Russian stove, and asks it for directions. The stove offers the girl rye buns, and subsequently, on the girl's return, hides her from the swan geese. See also List of cooking appliances Hearth Brazier Wash copper Kitchen stove Wood-burning stove Firebox Crucible Masonry heater Kamado (Japanese) Japanese kitchen Hibachi Kama (Japanese tea ceremony) Wok stove Agungi/Buttumak (Korean) References Russian inventions Masonry Baking Fireplaces Stoves Culture of Russia Residential heating appliances
Russian stove
[ "Engineering" ]
915
[ "Construction", "Masonry" ]
18,844,184
https://en.wikipedia.org/wiki/Institut%20d%27%C3%A9lectronique%20de%20micro%C3%A9lectronique%20et%20de%20nanotechnologie
The Institute of Electronics, Microelectronics and Nanotechnology or IEMN (Institut d'électronique de microélectronique et de nanotechnologie in French) is a research institute of the University of Lille, CNRS and École Centrale de Lille (UMR CNRS 8520). IEMN research activities The main research focus is on six major scientific areas: Materials and Nanostructures Physics, Microtechnologies - Microsystems, Micro and optoelectronics, Communication circuits and systems, Acoustics, Instrumentation. With 200 permanent researchers and 100 doctoral students for a total staff of more than 500, IEMN focuses on the following research areas: physics of matter, nanostructures, microsystems and microtechnologies, microwave components and microelectronic circuits, RF and microwave circuits, digital communications, optoelectronics and photonic circuits, acoustic and ultrasonic sensors, microwave instrumentation. IEMN lab sites IEMN central laboratory: The 12,000 m² premises of the IEMN central laboratory are located on the University of Lille science campus at Villeneuve d'Ascq. The site is 200 metres from the École Centrale de Lille and near the Quatre Cantons metro station. IEMN premises include 2,000 m² of cleanroom. IEMN - Antenne USTL: This IEMN site is also located on the University of Lille science campus at Villeneuve d'Ascq, near the Cité Scientifique metro station. IEMN - Antenne OAE: This IEMN site is located on the Valenciennes campus. IEMN - Antenne ISEN: This IEMN site is located at the Institut supérieur de l'électronique et du numérique (ISEN) in the historical centre of Lille. See also Institut des molécules et de la matière condensée de Lille References External links IEMN web site University of Lille Nord de France Nanotechnology institutions Materials science institutes Physics education in France Physics research institutes
Institut d'électronique de microélectronique et de nanotechnologie
[ "Materials_science" ]
424
[ "Nanotechnology", "Nanotechnology institutions", "Materials science organizations", "Materials science institutes" ]
863,494
https://en.wikipedia.org/wiki/Lead%28II%29%20sulfide
Lead(II) sulfide (also spelled sulphide) is an inorganic compound with the formula PbS. Galena is the principal ore and the most important compound of lead. It is a semiconducting material with niche uses. Formation, basic properties, related materials Addition of hydrogen sulfide or sulfide salts to a solution containing a lead salt, such as PbCl2, gives a black precipitate of lead sulfide. Pb2+ + H2S → PbS↓ + 2 H+ This reaction is used in qualitative inorganic analysis. The presence of hydrogen sulfide or sulfide ions may be tested using "lead acetate paper." Like the related materials PbSe and PbTe, PbS is a semiconductor. In fact, lead sulfide was one of the earliest materials to be used as a semiconductor. Lead sulfide crystallizes in the sodium chloride motif, unlike many other IV-VI semiconductors. Since PbS is the main ore of lead, much effort has focused on its conversion. A major process involves smelting of PbS followed by reduction of the resulting oxide. Idealized equations for these two steps are: 2 PbS + 3 O2 → 2 PbO + 2 SO2 PbO + C → Pb + CO The sulfur dioxide is converted to sulfuric acid. Nanoparticles Lead sulfide-containing nanoparticle and quantum dots have been well studied. Traditionally, such materials are produced by combining lead salts with a variety of sulfide sources. In 2009, PbS nanoparticles have been examined for use in solar cells. Applications Photodetector PbS was one of the first materials used for electrical diodes that could detect electromagnetic radiation, including infrared light. As an infrared sensor, PbS directly detects light, as opposed to thermal detectors, which respond to a change in detector element temperature caused by the radiation. A PbS element can be used to measure radiation in either of two ways: by measuring the tiny photocurrent the photons cause when they hit the PbS material, or by measuring the change in the material's electrical resistance that the photons cause. Measuring the resistance change is the more commonly used method. At room temperature, PbS is sensitive to radiation at wavelengths between approximately 1 and 2.5 μm. This range corresponds to the shorter wavelengths in the infra-red portion of the spectrum, the so-called short-wavelength infrared (SWIR). Only very hot objects emit radiation in these wavelengths. Cooling the PbS elements, for example using liquid nitrogen or a Peltier element system, shifts its sensitivity range to between approximately 2 and 4 μm. Objects that emit radiation in these wavelengths still have to be quite hot—several hundred degrees Celsius—but not as hot as those detectable by uncooled sensors. (Other compounds used for this purpose include indium antimonide (InSb) and mercury-cadmium telluride (HgCdTe), which have somewhat better properties for detecting the longer IR wavelengths.) The high dielectric constant of PbS leads to relatively slow detectors (compared to silicon, germanium, InSb, or HgCdTe). Planetary science Elevations above 2.6 km (1.63 mi) on the planet Venus are coated with a shiny substance. Though the composition of this coat is not entirely certain, one theory is that Venus "snows" crystallized lead sulfide much as Earth snows frozen water. If this is the case, it would be the first time the substance was identified on a foreign planet. Other less likely candidates for Venus' "snow" are bismuth sulfide and tellurium. 
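Returning to the resistance-change readout described under "Photodetector" above: a photoconductive PbS element is commonly read out by placing it in a simple series divider (or bridge) and observing the output voltage change as illumination lowers the element's resistance. The sketch below is a generic illustration of that idea only; the bias voltage, resistor value, and the dark and illuminated resistances are assumed numbers, not a specification of any particular detector.

def divider_output(v_bias, r_load, r_pbs):
    # Voltage across the load resistor in a simple series divider.
    return v_bias * r_load / (r_load + r_pbs)

V_BIAS = 5.0          # volts (assumed)
R_LOAD = 1.0e6        # ohms, load resistor (assumed)

r_dark = 2.0e6        # dark resistance of the PbS element (illustrative)
r_lit = 1.2e6         # resistance under IR illumination (illustrative; lower than dark)

v_dark = divider_output(V_BIAS, R_LOAD, r_dark)
v_lit = divider_output(V_BIAS, R_LOAD, r_lit)
print(f"signal swing: {v_lit - v_dark:.3f} V")   # illumination raises the divider output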
Safety Lead(II) sulfide is so insoluble that it is almost nontoxic, but pyrolysis of the material, as in smelting, gives dangerous toxic fumes of lead and oxides of sulfur. Lead sulfide is insoluble and a stable compound in the pH of blood and so is probably one of the less toxic forms of lead. A large safety risk occurs in the synthesis of PbS using lead carboxylates, as they are particularly soluble and can cause negative physiological conditions. References Cited sources External links Case Studies in Environmental Medicine (CSEM): Lead Toxicity ToxFAQs: Lead National Pollutant Inventory – Lead and Lead Compounds Fact Sheet Lead(II) compounds Monosulfides IV-VI semiconductors Infrared sensor materials Rock salt crystal structure
Lead(II) sulfide
[ "Chemistry" ]
922
[ "Semiconductor materials", "IV-VI semiconductors" ]
863,608
https://en.wikipedia.org/wiki/AN%20thread
The AN thread (also A-N) is a particular type of fitting used to connect flexible hoses and rigid metal tubing that carry fluid. It is a US military-derived specification that dates back to World War II and stems from a joint standard agreed upon by the Army Air Corps and Navy, hence AN. The Air Corps-Navy involvement is also the origin of the red/blue color combination that was traditionally used in the anodized finishing process. AN sizes range from -2 (dash two) to -32 in irregular steps, with each step equating to the OD (outside diameter) of the tubing in 1/16-inch increments. Therefore, a -8 AN size would be equal to a 1/2-inch (8/16-inch) OD tube. However, this system does not specify the ID (inside diameter) of the tubing because the tube wall can vary in thickness. Each AN size also uses its own standard thread size. AN fittings are flare fittings, using 37° flared tubing to form a metal-to-metal seal. They are similar to other 37° flared fittings, such as JIC, which is their industrial variant. The two are interchangeable in theory, though this is typically not recommended due to the exacting specifications and demands of the aerospace industry. The differences between them relate to thread class and shape (how tight of a fit the threads are), and the metals used. Although similar, 37° AN and 45° SAE fittings and tooling are not interchangeable due to the different flare angles. Mixing them can cause leakage at the flare. Note that AN threads are different for bolts and fittings. In bolts the number refers to the diameter of the bolt, whereas in a fitting it refers to the OD of the tube, so the two use different threads for the same dash number. For example, an AN6 bolt has a 3/8-24 thread, whereas a -6 AN fitting has a 9/16-18 thread. Originally, parts were made compliant with the specification MIL-F-5509, but they are now controlled under SAE AS (Aerospace Standards) specifications AS4841 through AS4843 and AS4875. See also Flare fitting National pipe thread O-ring boss seal Threaded pipe References External links Article on difference between AN and JIC PDF displaying various AN fittings Thread standards Piping Plumbing Standards of the United States
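Since the dash size encodes the tube OD in sixteenths of an inch, as noted above, converting between the two is simple arithmetic. The helper below is a small illustrative sketch; the function name is ours and not part of any standard.

from fractions import Fraction

def an_dash_to_od_inches(dash_size):
    # Tube outside diameter, in inches, for a given AN dash size.
    return Fraction(dash_size, 16)

# -8 AN corresponds to 8/16 = 1/2 inch OD tubing; -6 AN to 3/8 inch.
print(an_dash_to_od_inches(8))         # 1/2
print(float(an_dash_to_od_inches(6)))  # 0.375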
AN thread
[ "Chemistry", "Engineering" ]
474
[ "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Mechanical engineering", "Piping" ]
863,668
https://en.wikipedia.org/wiki/Backscatter
In physics, backscatter (or backscattering) is the reflection of waves, particles, or signals back to the direction from which they came. It is usually a diffuse reflection due to scattering, as opposed to specular reflection as from a mirror, although specular backscattering can occur at normal incidence with a surface. Backscattering has important applications in astronomy, photography, and medical ultrasonography. The opposite effect is forward scatter, e.g. when a translucent material like a cloud diffuses sunlight, giving soft light. Backscatter of waves in physical space Backscattering can occur in quite different physical situations, where the incoming waves or particles are deflected from their original direction by different mechanisms: Diffuse reflection from large particles and Mie scattering, causing alpenglow and gegenschein, and showing up in weather radar; Inelastic collisions between electromagnetic waves and the transmitting medium (Brillouin scattering and Raman scattering), important in fiber optics, see below; Elastic collisions between accelerated ions and a sample (Rutherford backscattering) Bragg diffraction from crystals, used in inelastic scattering experiments (neutron backscattering, X-ray backscattering spectroscopy); Compton scattering, used in Backscatter X-ray imaging. Stimulated backscatter, observed in non-linear optics, and described by a class of solutions to the three-wave equation. Sometimes, the scattering is more or less isotropic, i.e. the incoming particles are scattered randomly in various directions, with no particular preference for backward scattering. In these cases, the term "backscattering" just designates the detector location chosen for some practical reasons: in X-ray imaging, backscattering means just the opposite of transmission imaging; in inelastic neutron or X-ray spectroscopy, backscattering geometry is chosen because it optimizes the energy resolution; in astronomy, backscattered light is that which is reflected with a phase angle of less than 90°. In other cases, the scattering intensity is enhanced in backward direction. This can have different reasons: In alpenglow, red light prevails because the blue part of the spectrum is depleted by Rayleigh scattering. In gegenschein, constructive interference might play a role. Coherent backscattering is observed in random media; for visible light most typically in suspensions like milk. Due to weak localization, enhanced multiple scattering is observed in back direction. The Back Scattering Alignment (BSA) coordinate system is often used in radar applications The Forward Scattering Alignment (FSA) coordinate system is primarily used in optical applications Backscattering properties of a target are wavelength dependent and can also be polarization dependent. Sensor systems using multiple wavelengths or polarizations can thus be used to infer additional information about target properties. Radar, especially weather radar Backscattering is the principle behind radar systems. In weather radar, backscattering is proportional to the 6th power of the diameter of the target multiplied by its inherent reflective properties, provided the wavelength is larger than the particle diameter (Rayleigh scattering). Water is almost 4 times more reflective than ice but droplets are much smaller than snow flakes or hail stones. So the backscattering is dependent on a mix of these two factors. 
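To make the sixth-power dependence above concrete, the radar reflectivity factor used in weather radar sums the sixth powers of the drop diameters per unit volume and is usually reported on a logarithmic (dBZ) scale. The following minimal Python sketch is illustrative only; the drop sizes and the one-cubic-metre sample volume are made-up numbers chosen to show how a few large particles dominate the return.

import math

def reflectivity_factor(drop_diameters_mm, volume_m3):
    # Radar reflectivity factor Z in mm^6 per m^3 (Rayleigh regime).
    return sum(d ** 6 for d in drop_diameters_mm) / volume_m3

def to_dbz(z):
    # Convert Z (mm^6/m^3) to the logarithmic dBZ scale.
    return 10.0 * math.log10(z)

# A handful of large drops dominate the return even among many small ones.
drops = [0.5] * 1000 + [3.0] * 10      # diameters in mm, in a 1 m^3 sample
z = reflectivity_factor(drops, volume_m3=1.0)
print(f"Z = {z:.0f} mm^6/m^3, {to_dbz(z):.1f} dBZ")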
The strongest backscatter comes from hail and large graupel (solid ice) due to their sizes, but non-Rayleigh (Mie scattering) effects can confuse interpretation. Another strong return is from melting snow or wet sleet, as they combine size and water reflectivity. They often show up as much higher rates of precipitation than actually occurring in what is called a brightband. Rain is a moderate backscatter, being stronger with large drops (such as from a thunderstorm) and much weaker with small droplets (such as mist or drizzle). Snow has rather weak backscatter. Dual polarization weather radars measure backscatter at horizontal and vertical polarizations to infer shape information from the ratio of the vertical and horizontal signals. In waveguides The backscattering method is also employed in fiber optics applications to detect optical faults. Light propagating through a fiber-optic cable gradually attenuates due to Rayleigh scattering. Faults are thus detected by monitoring the variation of part of the Rayleigh backscattered light. Since the backscattered light attenuates exponentially as it travels along the optical fiber cable, the attenuation characteristic is represented in a logarithmic scale graph. If the slope of the graph is steep, then power loss is high. If the slope is gentle, then the optical fiber has a satisfactory loss characteristic. The loss measurement by the backscattering method allows measurement of a fiber-optic cable at one end without cutting the optical fiber; hence it can be conveniently used for the construction and maintenance of optical fibers. In photography The term backscatter in photography refers to light from a flash, strobe, or video light reflecting back from particles in the lens's field of view, causing specks of light to appear in the photo. This gives rise to what are sometimes referred to as orb artifacts. Photographic backscatter can result from snowflakes, rain or mist, or airborne dust. Due to the size limitations of modern compact and ultra-compact cameras, especially digital cameras, the distance between the lens and the built-in flash has decreased, thereby decreasing the angle of light reflection to the lens and increasing the likelihood of light reflection off normally sub-visible particles. Hence, the orb artifact is commonplace with small digital or film camera photographs. See also Backscatter (email) Backscatter X-ray (in security scanning applications, e.g. at airports) Forward scattering Scattering Electron backscatter diffraction References Scattering
Backscatter
[ "Physics", "Chemistry", "Materials_science" ]
1,182
[ "Condensed matter physics", "Scattering", "Particle physics", "Nuclear physics" ]
863,712
https://en.wikipedia.org/wiki/Liquid%20breathing
Liquid breathing is a form of respiration in which a normally air-breathing organism breathes an oxygen-rich liquid which is capable of CO2 gas exchange (such as a perfluorocarbon). The liquid involved requires certain physical properties, such as respiratory gas solubility, density, viscosity, vapor pressure and lipid solubility, which some perfluorochemicals (PFCs) have. Thus, it is critical to choose the appropriate PFC for a specific biomedical application, such as liquid ventilation, drug delivery or blood substitutes. The physical properties of PFC liquids vary substantially; however, the one common property is their high solubility for respiratory gases. In fact, these liquids carry more oxygen and carbon dioxide than blood. In theory, liquid breathing could assist in the treatment of patients with severe pulmonary or cardiac trauma, especially in pediatric cases. Liquid breathing has also been proposed for use in deep diving and space travel. Despite some recent advances in liquid ventilation, a standard mode of application has not yet been established. Approaches As liquid breathing is still a highly experimental technique, there are several proposed approaches. Total liquid ventilation Although total liquid ventilation (TLV) with completely liquid-filled lungs can be beneficial, the complex liquid-filled tube system required is a disadvantage compared to gas ventilation—the system must incorporate a membrane oxygenator, heater, and pumps to deliver to, and remove from the lungs tidal volume aliquots of conditioned perfluorocarbon (PFC). One research group led by Thomas H. Shaffer has maintained that with the use of microprocessors and new technology, it is possible to maintain better control of respiratory variables such as liquid functional residual capacity and tidal volume during TLV than with gas ventilation. Consequently, the total liquid ventilation necessitates a dedicated liquid ventilator similar to a medical ventilator except that it uses a breathable liquid. Many prototypes are used for animal experimentation, but experts recommend continued development of a liquid ventilator toward clinical applications. Specific preclinical liquid ventilator (Inolivent) is currently under joint development in Canada and France. The main application of this liquid ventilator is the ultra-fast induction of therapeutic hypothermia after cardiac arrest. This has been demonstrated to be more protective than slower cooling method after experimental cardiac arrest. Partial liquid ventilation In contrast, partial liquid ventilation (PLV) is a technique in which a PFC is instilled into the lung to a volume approximating functional residual capacity (approximately 40% of total lung capacity). Conventional mechanical ventilation delivers tidal volume breaths on top of it. This mode of liquid ventilation currently seems technologically more feasible than total liquid ventilation, because PLV could utilise technology currently in place in many neonatal intensive-care units (NICU) worldwide. The influence of PLV on oxygenation, carbon dioxide removal and lung mechanics has been investigated in several animal studies using different models of lung injury. Clinical applications of PLV have been reported in patients with acute respiratory distress syndrome (ARDS), meconium aspiration syndrome, congenital diaphragmatic hernia and respiratory distress syndrome (RDS) of neonates. 
In order to correctly and effectively conduct PLV, it is essential to properly dose a patient to a specific lung volume (10–15 ml/kg) to recruit alveolar volume redose the lung with PFC liquid (1–2 ml/kg/h) to oppose PFC evaporation from the lung. If PFC liquid is not maintained in the lung, PLV can not effectively protect the lung from biophysical forces associated with the gas ventilator. New application modes for PFC have been developed. Partial liquid ventilation (PLV) involves filling the lungs with a liquid. This liquid is a perfluorocarbon such as perflubron (brand name Liquivent). The liquid has some unique properties. It has a very low surface tension, similar to the surfactant substances produced in the lungs to prevent the alveoli from collapsing and sticking together during exhalation. It also has a high density, oxygen readily diffuses through it, and it may have some anti-inflammatory properties. In PLV, the lungs are filled with the liquid, the patient is then ventilated with a conventional ventilator using a protective lung ventilation strategy. The hope is that the liquid will help the transport of oxygen to parts of the lung that are flooded and filled with debris, help remove this debris and open up more alveoli improving lung function. The study of PLV involves comparison to protocolized ventilator strategy designed to minimize lung damage. PFC vapor Vaporization of perfluorohexane with two anesthetic vaporizers calibrated for perfluorohexane has been shown to improve gas exchange in oleic acid-induced lung injury in sheep. Predominantly PFCs with high vapor pressure are suitable for vaporization. Aerosol-PFC With aerosolized perfluorooctane, significant improvement of oxygenation and pulmonary mechanics was shown in adult sheep with oleic acid-induced lung injury. In surfactant-depleted piglets, persistent improvement of gas exchange and lung mechanics was demonstrated with Aerosol-PFC. The aerosol device is of decisive importance for the efficacy of PFC aerosolization, as aerosolization of PF5080 (a less purified FC77) has been shown to be ineffective using a different aerosol device in surfactant-depleted rabbits. Partial liquid ventilation and Aerosol-PFC reduced pulmonary inflammatory response. Human usage Medical treatment The most promising area for the use of liquid ventilation is in the field of pediatric medicine. The first medical use of liquid breathing was treatment of premature babies and adults with acute respiratory distress syndrome (ARDS) in the 1990s. Liquid breathing was used in clinical trials after the development by Alliance Pharmaceuticals of the fluorochemical perfluorooctyl bromide, or perflubron for short. Current methods of positive-pressure ventilation can contribute to the development of lung disease in pre-term neonates, leading to diseases such as bronchopulmonary dysplasia. Liquid ventilation removes many of the high pressure gradients responsible for this damage. Furthermore, perfluorocarbons have been demonstrated to reduce lung inflammation, improve ventilation-perfusion mismatch and to provide a novel route for the pulmonary administration of drugs. In order to explore drug delivery techniques that would be useful for both partial and total liquid ventilation, more recent studies have focused on PFC drug delivery using a nanocrystal suspension. The first image is a computer model of a PFC liquid (perflubron) combined with gentamicin molecules. 
The second image shows experimental results comparing both plasma and tissue levels of gentamicin after an intratracheal (IT) and intravenous (IV) dose of 5 mg/kg in a newborn lamb during gas ventilation. Note that the plasma levels of the IV dose greatly exceed the levels of the IT dose over the 4 hour study period; whereas, the lung tissue levels of gentamicin when delivered by an intratracheal (IT) suspension, uniformly exceed the intravenous (IV) delivery approach after 4 hours. Thus, the IT approach allows more effective delivery of the drug to the target organ while maintaining a safer level systemically. Both images represent the in-vivo time course over 4 hours. Numerous studies have now demonstrated the effectiveness of PFC liquids as a delivery vehicle to the lungs. Clinical trials with premature infants and adults have been conducted. Since the safety of the procedure and the effectiveness were apparent from an early stage, the US Food and Drug Administration (FDA) gave the product "fast track" status (meaning an accelerated review of the product, designed to get it to the public as quickly as is safely possible) due to its life-saving potential. Clinical trials showed that using perflubron with ordinary ventilators improved outcomes as much as using high frequency oscillating ventilation (HFOV). But because perflubron was not better than HFOV, the FDA did not approve perflubron, and Alliance is no longer pursuing the partial liquid ventilation application. Whether perflubron would improve outcomes when used with HFOV or has fewer long-term consequences than HFOV remains an open question. In 1996 Mike Darwin and Steven B. Harris proposed using cold liquid ventilation with perfluorocarbon to quickly lower the body temperature of victims of cardiac arrest and other brain trauma to allow the brain to better recover. The technology came to be called gas/liquid ventilation (GLV), and was shown able to achieve a cooling rate of 0.5 °C per minute in large animals. It has not yet been tried in humans. Most recently, hypothermic brain protection has been associated with rapid brain cooling. In this regard, a new therapeutic approach is the use of intranasal perfluorochemical spray for preferential brain cooling. The nasopharyngeal (NP) approach is unique for brain cooling due to anatomic proximity to the cerebral circulation and arteries. Based on preclinical studies in adult sheep, it was shown that independent of region, brain cooling was faster during NP-perfluorochemical versus conventional whole body cooling with cooling blankets. To date, there have been four human studies including a completed randomized intra-arrest study (200 patients). Results clearly demonstrated that prehospital intra-arrest transnasal cooling is safe, feasible and is associated with an improvement in cooling time. Proposed uses Diving Gas pressure increases with depth, rising 1 bar () every 10 meters to over 1,000 bar at the bottom of the Mariana Trench. Diving becomes more dangerous as depth increases, and deep diving presents many hazards. All surface-breathing animals are subject to decompression sickness, including aquatic mammals and free-diving humans. Breathing at depth can cause nitrogen narcosis and oxygen toxicity. Holding the breath while ascending after breathing at depth can cause air embolisms, burst lung, and collapsed lung. Special breathing gas mixes such as trimix or heliox reduce the risk of nitrogen narcosis but do not eliminate it. 
Heliox further eliminates the risk of nitrogen narcosis but introduces the risk of helium tremors below about . Atmospheric diving suits maintain body and breathing pressure at 1 bar, eliminating most of the hazards of descending, ascending, and breathing at depth. However, the rigid suits are bulky, clumsy, and very expensive. Liquid breathing offers a third option, promising the mobility available with flexible dive suits and the reduced risks of rigid suits. With liquid in the lungs, the pressure within the diver's lungs could accommodate changes in the pressure of the surrounding water without the huge partial pressure gas exposures required when the lungs are filled with gas. Liquid breathing would not result in the saturation of body tissues with high pressure nitrogen or helium that occurs with the use of non-liquids, thus would reduce or remove the need for slow decompression. A significant problem, however, arises from the high viscosity of the liquid and the corresponding reduction in its ability to remove CO2. All uses of liquid breathing for diving must involve total liquid ventilation (see above). Total liquid ventilation, however, has difficulty moving enough liquid to carry away CO2, because no matter how great the total pressure is, the amount of partial CO2 gas pressure available to dissolve CO2 into the breathing liquid can never be much more than the pressure at which CO2 exists in the blood (about 40 mm of mercury (Torr)). At these pressures, most fluorocarbon liquids require about 70 mL/kg minute-ventilation volumes of liquid (about 5 L/min for a 70 kg adult) to remove enough CO2 for normal resting metabolism. This is a great deal of fluid to move, particularly as liquids are more viscous and denser than gases, (for example water is about 850 times the density of air). Any increase in the diver's metabolic activity also increases CO2 production and the breathing rate, which is already at the limits of realistic flow rates in liquid breathing. It seems unlikely that a person would move 10 liters/min of fluorocarbon liquid without assistance from a mechanical ventilator, so "free breathing" may be unlikely. However, it has been suggested that a liquid breathing system could be combined with a CO2 scrubber connected to the diver's blood supply; a US patent has been filed for such a method. Space travel Liquid immersion provides a way to reduce the physical stress of G forces. Forces applied to fluids are distributed as omnidirectional pressures. As liquids cannot be practically compressed, they do not change density under high acceleration such as performed in aerial maneuvers or space travel. A person immersed in liquid of the same density as tissue has acceleration forces distributed around the body, rather than applied at a single point such as a seat or harness straps. This principle is used in a new type of G-suit called the Libelle G-suit, which allows aircraft pilots to remain conscious and functioning at more than 10g acceleration by surrounding them with water in a rigid suit. Acceleration protection by liquid immersion is limited by the differential density of body tissues and immersion fluid, limiting the utility of this method to about 15g to 20g. Extending acceleration protection beyond 20g requires filling the lungs with fluid of density similar to water. 
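The minute-ventilation figure quoted above for liquid breathing follows from simple arithmetic, restated here as a small sketch; the exertion factor at the end is an illustrative assumption, not a measured value.

liquid_ventilation_ml_per_kg_min = 70   # figure quoted above for resting metabolism
body_mass_kg = 70

resting_l_per_min = liquid_ventilation_ml_per_kg_min * body_mass_kg / 1000
print("resting:", resting_l_per_min, "L/min of PFC")   # 4.9 L/min, i.e. about 5 L/min

# Any increase in metabolic rate raises CO2 production and hence the required
# liquid flow roughly in proportion (the factor below is illustrative only).
exertion_factor = 2.0
print("moderate exertion:", resting_l_per_min * exertion_factor, "L/min")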
An astronaut totally immersed in liquid, with liquid inside all body cavities, will feel little effect from extreme G forces because the forces on a liquid are distributed equally, and in all directions simultaneously. Effects will still be felt because of density differences between different body tissues, so an upper acceleration limit still exists. However, it can likely be higher than hundreds of G. Liquid breathing for acceleration protection may never be practical because of the difficulty of finding a suitable breathing medium of similar density to water that is compatible with lung tissue. Perfluorocarbon fluids are twice as dense as water, hence unsuitable for this application. Examples in fiction Literary works Alexander Beliaev's 1928 science fiction novel Amphibian Man is based on a scientist and a maverick surgeon, who makes his son, Ichthyander (etymology: "fish" + "man") a life-saving transplant – a set of shark gills. There is a film based on the novel. L. Sprague de Camp's 1938 short story "The Merman" hinges on an experimental process to make lungs function as gills, thus allowing a human being to "breathe" under water. Hal Clement's 1973 novel Ocean on Top portrays a small underwater civilization living in a 'bubble' of oxygenated fluid denser than seawater. Joe Haldeman's 1975 novel The Forever War describes liquid immersion and breathing in great detail as a key technology to allow space travel and combat with acceleration up to 50 G. In the Star Trek: The Next Generation novel The Children of Hamlin (1988) the crew of the Enterprise-D encounter an alien race whose ships contain a breathable liquid environment. Peter Benchley's 1994 novel White Shark centers around a Nazi scientist's experimental attempts to create an amphibious human, whose lungs are surgically modified to breathe underwater, and trained to reflexively do so after being flooded with a fluorocarbon solution. Judith and Garfield Reeves-Stevens' 1994 Star Trek novel Federation explains that before the invention of the inertial dampener, the stresses of high-G acceleration required starship pilots to be immersed in liquid-filled capsules, breathing an oxygen-rich saline solution to prevent their lungs from being crushed. Nicola Griffith's novel Slow River (1995) features a sex scene occurring within a twenty cubic foot silvery pink perflurocarbon pool, with the sensation described as "like breathing a fist". Ben Bova's novel Jupiter (2000) features a craft in which the crew are suspended in a breathable liquid that allows them to survive in the high-pressure environment of Jupiter's atmosphere. In Scott Westerfeld's sci-fi novel The Risen Empire (2003), the lungs of soldiers performing insertion from orbit are filled with an oxygen-rich polymer gel with embedded pseudo-alveoli and a rudimentary artificial intelligence. The novel Mechanicum (2008) by Graham McNeill, Book 9 in the Horus Heresy book series, describes physically crippled (gigantic war machine) pilots encased in nutrient fluid tanks. This allows them to continue operating beyond the limits normally imposed by the body. In Liu Cixin's novel The Dark Forest (2008), the warships of humanity in the 23rd century flood their compartments with an oxygen-rich liquid called 'deep-sea acceleration fluid' to protect the crew against the forces of extreme acceleration that the ships undergo. Ships enter a 'deep-sea state' where the crew are immersed in the fluid and sedated before acceleration can commence. 
In the 2009 novel The Lost Symbol by Dan Brown, Robert Langdon (the protagonist) is completely submerged in breathable liquid mixed with hallucinogenic chemicals and sedatives as a torture and interrogation technique by Mal'akh (the antagonist). He goes through a near death experience when he inhales the liquid and blacks out, losing control over his body, but is soon revived. In Greg van Eekhout's 2014 novel California Bones, two characters are put into tanks filled with liquid: "They were given no breathing apparatus, but the water in the tank was rich with perfluorocarbon, which carried more oxygen than blood." In author A.L. Mengel's science fiction novel The Wandering Star (2016), several characters breathe oxygenated fluid during a dive to explore an underwater city. They submerge in high pressure "bubbles" filled with the perfluorocarbon fluid. In Tiamat's Wrath, a 2019 novel in The Expanse series by James S. A. Corey, The Laconian empire utilizes a ship with full-immersion liquid-breathing pods that allow the crew to undergo significantly increased g-forces. As powerful and fuel-efficient fusion engines in the series have made the only practical limitations of a ships' acceleration the survivability of the crew, this makes the ship the fastest in all of human-colonized space. Films and television The aliens in the Gerry Anderson UFO series (1970-1971) use liquid-breathing spacesuits. The 1989 film The Abyss by James Cameron features a character using liquid breathing to dive thousands of feet without compressing. The Abyss also features a scene with a rat submerged in and breathing fluorocarbon liquid, filmed in real life. In the 1995 anime Neon Genesis Evangelion, the cockpits of the titular mecha are filled with a fictional oxygenated liquid called LCL which is required for the pilot to mentally sync with an Evangelion, as well as providing direct oxygenation of their blood, and dampening the impacts from battle. Once the cockpit is flooded the LCL is ionized, bringing its density, opacity, and viscosity close to that of air. Protagonist Shinji Ikari notices that LCL smells like blood. It is eventually revealed that LCL is the blood of the Evangelions' progenitor, Lilith. In the movie Mission to Mars (2000), a character is depicted as being immersed in apparent breathable fluid before a high-acceleration launch. In season 1, episode 13 of Seven Days (1998-2001) chrononaut Frank Parker is seen breathing a hyper-oxygenated perfluorocarbon liquid that is pumped through a sealed full body suit that he is wearing. This suit and liquid combination allow him to board a Russian submarine through open ocean at a depth of almost 1000 feet. Upon boarding the submarine he removes his helmet, expels the liquid from his lungs and is able to breathe air again. In an episode of the Adult Swim cartoon series Metalocalypse (2006-2013), the other members of the band submerge guitarist Toki in a "liquid oxygen isolation chamber" while recording an album in the Mariana Trench. In a Series 11 episode of Dalziel and Pascoe (1996-2007) entitled Demons on Our Shoulders, magician Lee Knight, played by Richard E Grant, performs an underwater trick using breathable fluid. In an episode of the Syfy Channel show Eureka (2006-2012), Sheriff Jack Carter is submerged in a tank of "oxygen rich plasma" to be cured of the effects of a scientific accident. In the anime series Aldnoah.Zero (2014-2015), episode 5 shows that Slaine Troyard was in a liquid-filled capsule when he crashed. 
Princess Asseylum witnessed the crash, helped him to get out of the capsule, then used CPR on him to draw out the liquid from his lungs. In the 2024 anime Bang Brave Bang Bravern, the titular mecha Bravern fills its cockpit with liquid during underwater combat, telling pilot Ao Isami that it will supply oxygen directly to him while also counteracting the pressure. Bravern directly compares this to the scene from The Abyss, prompting Ao to ask how Bravern knows about the film. Video games In the classic 1995 PC turn-based strategy game X-COM: Terror from the Deep, "Aquanauts" fighting in deep ocean conditions breathe a dense oxygen-carrying fluid. In the EVE Online Universe (2003), pilots in capsules (escape pods that function as the control center for the spacecraft) breathe an oxygen rich, nano-saturated, breathable glucose-based suspension solution. In the game Helldivers 2 (2024), after an upgrade, jet pilots use breathable perfluorocarbons in the cockpit to absorb G-forces and allow more dangerous maneuvers. See also Artificial gills (human) Breathing gas Liquid ventilator Mechanical ventilation References External links Here, Breathe This Liquid, from Discover Magazine Miracle Girl, from Reader's Digest Liquids Underwater diving equipment Respiration University at Buffalo Medical procedures Modes of mechanical ventilation Respiratory system procedures
Liquid breathing
[ "Physics", "Chemistry" ]
4,556
[ "Phases of matter", "Matter", "Liquids" ]
863,813
https://en.wikipedia.org/wiki/Representation%20theory%20of%20the%20Poincar%C3%A9%20group
In mathematics, the representation theory of the Poincaré group is an example of the representation theory of a Lie group that is neither a compact group nor a semisimple group. It is fundamental in theoretical physics. In a physical theory having Minkowski space as the underlying spacetime, the space of physical states is typically a representation of the Poincaré group. (More generally, it may be a projective representation, which amounts to a representation of the double cover of the group.) In a classical field theory, the physical states are sections of a Poincaré-equivariant vector bundle over Minkowski space. The equivariance condition means that the group acts on the total space of the vector bundle, and the projection to Minkowski space is an equivariant map. Therefore, the Poincaré group also acts on the space of sections. Representations arising in this way (and their subquotients) are called covariant field representations, and are not usually unitary. For a discussion of such unitary representations, see Wigner's classification. In quantum mechanics, the state of the system is determined by the Schrödinger equation, which is invariant under Galilean transformations. Quantum field theory is the relativistic extension of quantum mechanics, where relativistic (Lorentz/Poincaré invariant) wave equations are solved, "quantized", and act on a Hilbert space composed of Fock states. There are no finite unitary representations of the full Lorentz (and thus Poincaré) transformations due to the non-compact nature of Lorentz boosts (rotations in Minkowski space along a space and time axis). However, there are finite non-unitary indecomposable representations of the Poincaré algebra, which may be used for modelling of unstable particles. In case of spin 1/2 particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product preserved by this representation by associating a 4-component Dirac spinor with each particle. These spinors transform under Lorentz transformations generated by the gamma matrices (). It can be shown that the scalar product is preserved. It is not, however, positive definite, so the representation is not unitary. References . Notes See also Wigner's classification Representation theory of the Lorentz group Representation theory of the Galilean group Representation theory of diffeomorphism groups Particle physics and representation theory Symmetry in quantum mechanics Representation theory of Lie groups Quantum field theory
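The spin-1/2 construction described above can be made concrete. As a sketch in standard textbook conventions (the notation below is chosen here and is not taken from this article), the finite-dimensional spinor representation and its preserved, non-positive-definite pairing are

S(\Lambda) = \exp\!\left(-\tfrac{i}{4}\,\omega_{\mu\nu}\,\sigma^{\mu\nu}\right), \qquad \sigma^{\mu\nu} = \tfrac{i}{2}\,[\gamma^{\mu},\gamma^{\nu}],

\langle \psi,\phi\rangle = \psi^{\dagger}\gamma^{0}\phi, \qquad \bigl(S(\Lambda)\psi\bigr)^{\dagger}\gamma^{0}\bigl(S(\Lambda)\phi\bigr) = \psi^{\dagger}\gamma^{0}\phi,

which holds because \gamma^{0} S(\Lambda)^{\dagger}\gamma^{0} = S(\Lambda)^{-1}. Since \gamma^{0} has eigenvalues \pm 1, the preserved form is indefinite, so the representation is not unitary.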
Representation theory of the Poincaré group
[ "Physics" ]
523
[ "Quantum field theory", "Quantum mechanics" ]
863,918
https://en.wikipedia.org/wiki/Eternal%20return
Eternal return (or eternal recurrence) is a philosophical concept which states that time repeats itself in an infinite loop, and that exactly the same events will continue to occur in exactly the same way, over and over again, for eternity. In ancient Greece, the concept of eternal return was most prominently associated with Empedocles and with Stoicism, the school of philosophy founded by Zeno of Citium. The Stoics believed that the universe is periodically destroyed and reborn, and that each universe is exactly the same as the one before. This doctrine was fiercely criticised by Christian authors such as Augustine, who saw in it a fundamental denial of free will and of the possibility of salvation. The global spread of Christianity therefore brought an end to classical theories of eternal return. The concept was revived in the 19th century by German philosopher Friedrich Nietzsche. Having briefly presented the idea as a thought experiment in The Gay Science, he explored it more thoroughly in his novel Thus Spoke Zarathustra, in which the protagonist learns to overcome his horror of the thought of eternal return. It is not known whether Nietzsche believed in the literal truth of eternal return, or, if he did not, what he intended to demonstrate by it. Nietzsche's ideas were subsequently taken up and re-interpreted by other writers, such as Russian esotericist P. D. Ouspensky, who argued that it was possible to break the cycle of return. Classical antiquity Pythagoreanism There are hints in ancient writings that the theory of eternal return may have originated with Pythagoras (c. 570 – c. 495 BC). According to Porphyry, it was one of the teachings of Pythagoras that "after certain specified periods, the same events occur again" and that "nothing was entirely new". Eudemus of Rhodes also references this Pythagorean doctrine in his commentary on Aristotle's Physics. In a fragment preserved by Simplicius, Eudemus writes: Stoicism The Stoics, possibly inspired by the Pythagoreans, incorporated the theory of eternal recurrence into their natural philosophy. According to Stoic physics, the universe is periodically destroyed in an immense conflagration (ekpyrosis), and then experiences a rebirth (palingenesis). These cycles continue for eternity, and the same events are exactly repeated in every cycle. The Stoics may have found support for this doctrine in the concept of the Great Year, the oldest known expression of which is found in Plato's Timaeus. Plato hypothesised that one complete cycle of time would be fulfilled when the sun, moon and planets all completed their various circuits and returned to their original positions. Sources differ as to whether the Stoics believed that the contents of each new universe would be one and the same with those of the previous universe, or only so similar as to be indistinguishable. The former point of view was attributed to the Stoic Chrysippus (c. 279 – c. 206 BC) by Alexander of Aphrodisias, who wrote: On the other hand, Origen (c. 185 – c. 253 AD) characterises the Stoics as claiming that the contents of each cycle will not be identical, but only indistinguishable: Origen also records a heterodox version of the doctrine, noting that some Stoics suggest that "there is a slight and very minute difference between one period and the events in the period before it". This was probably not a widely-held belief, as it represents a denial of the deterministic viewpoint which stands at the heart of Stoic philosophy. 
Christian response Christian authors attacked the doctrine of eternal recurrence on various grounds. Origen argued that the theory was incompatible with free will (although he did allow the possibility of diverse and non-identical cycles). Augustine of Hippo (AD 354–430) objected to the fact that salvation was not possible in the Stoic scheme, arguing that even if a temporary happiness was attained, a soul could not be truly blessed if it was doomed to return again to misery. Augustine also mentions "certain philosophers" who cite Ecclesiastes 1:9–10 as evidence of eternal return: "What is that which hath been? It is that which shall be. And what is that which is done? It is that which shall be done: and there is no new thing under the sun. Who can speak and say, See, this is new? It hath been already of old time, which was before us." Augustine denies that this has reference to the recurrence of specific people, objects, and events, instead interpreting the passage in a more general sense. In support of his argument, he appeals to scriptural passages such as Romans 6:9, which affirms that Christ "being raised from the dead dieth no more". Friedrich Nietzsche Eternal recurrence () is one of the central concepts of the philosophy of Friedrich Nietzsche (1844–1900). While the idea itself is not original to Nietzsche, his unique response to it gave new life to the theory, and speculation as to the correct interpretation of Nietzsche's doctrine continues to this day. Precursors The discovery of the laws of thermodynamics in the 19th century restarted the debate among scientists and philosophers about the ultimate fate of the universe, which brought in its train many questions about the nature of time. Eduard von Hartmann argued that the universe's final state would be identical to the state in which it had begun; Eugen Dühring rejected this idea, claiming that it carried with it the necessary consequence that the universe would begin again, and that the same forms would repeat themselves eternally, a doctrine which Dühring viewed as dangerously pessimistic. , on the other hand, argued in favour of a cyclical system, additionally positing the spatial co-existence of an infinite number of identical worlds. Louis Auguste Blanqui similarly claimed that in an infinite universe, every possible combination of forms must repeat itself eternally across both time and space. Nietzsche's formulation Nietzsche may have drawn upon a number of sources in developing his own formulation of the theory. He had studied Pythagorean and Stoic philosophy, was familiar with the works of contemporary philosophers such as Dühring and Vogt, and may have encountered references to Blanqui in a book by Friedrich Albert Lange. He was also a fan of the author Heinrich Heine, one of whose books contains a passage discussing the theory of eternal return. Nevertheless, Nietzsche claimed that the doctrine struck him one day as a sudden revelation, while walking beside Lake Silvaplana in Switzerland. The first published presentation of Nietzsche's version of the theory appears in The Gay Science, section 341, where it is proposed to the reader as a thought experiment. Nietzsche expanded upon this concept in the philosophical novel Thus Spoke Zarathustra, later writing that eternal return was "the fundamental idea of the work". 
In this novel, the titular Zarathustra is initially struck with horror at the thought that all things must recur eternally; ultimately, however, he overcomes his aversion to eternal return and embraces it as his most fervent desire. In the penultimate chapter of the work ("The Drunken Song"), Zarathustra declares: "All things are entangled, ensnared, enamored; if you ever wanted one thing twice, if you ever said, 'You please me, happiness! Abide, moment!' then you wanted all back ... For all joy wants—eternity." Interpretation Martin Heidegger points out that Nietzsche's first mention of eternal recurrence in The Gay Science presents this concept as a hypothetical question rather than postulating it as a fact. Many readings argue that Nietzsche was not attempting to make a cosmological or theoretical claim i.e. saying that eternal recurrence is a true statement about how the world works. Instead, the emotional reaction to the thought experiment serves to reveal whether one is living life to the best. According to Heidegger, the significant point is the burden imposed by the question of eternal recurrence, regardless of whether or not such a thing could possibly be true. The idea is similar to Nietzsche's concept of amor fati, which he describes in Ecce Homo: "My formula for greatness in a human being is amor fati: that one wants nothing to be different, not forward, not backward, not in all eternity. Not merely to bear what is necessary, still less conceal it ... but love it." On the other hand, Nietzsche's posthumously published notebooks contain an attempt at a logical proof of eternal return, which is often adduced in support of the claim that Nietzsche believed in the theory as a real possibility. The proof is based upon the premise that the universe is infinite in duration, but contains a finite quantity of energy. This being the case, all matter in the universe must pass through a finite number of combinations, and each series of combinations must eventually repeat in the same order, thereby creating "a circular movement of absolutely identical series". However, scholars such as Neil Sinhababu and Kuong Un Teng have suggested that the reason this material remained unpublished was because Nietzsche himself was unconvinced that his argument would hold up to scrutiny. A third possibility is that Nietzsche was attempting to create a new ethical standard by which people should judge their own behaviour. In one of his unpublished notes, Nietzsche writes: "The question which thou wilt have to answer before every deed that thou doest: 'is this such a deed as I am prepared to perform an incalculable number of times?' is the best ballast." Taken in this sense, the doctrine has been compared to the categorical imperative of Immanuel Kant. Once again, however, the objection is raised that no such ethical imperative appears in any of Nietzsche's published writings, and this interpretation is therefore rejected by most modern scholars. P. D. Ouspensky Russian esotericist P. D. Ouspensky (1878–1947) believed in the literal truth of eternal recurrence. As a child, he had been prone to vivid sensations of déjà vu, and when he encountered the theory of eternal return in the writings of Nietzsche, it occurred to him that this was a possible explanation for his experiences. He subsequently explored the idea in his semi-autobiographical novel, Strange Life of Ivan Osokin. In this story, Ivan Osokin implores a magician to send him back to his childhood and give him the chance to live his life over again. 
The magician obliges, but warns Ivan that he will be unable to correct any of his mistakes. This turns out to be the case; although Ivan always knows in advance what the outcome of his actions will be, he is unable to keep himself from repeating those actions. Having re-lived his life up to the point of his conversation with the magician, Ivan asks in despair whether there is any way of changing the past. The magician answers that he must first change himself; if he works on improving his character, he may have a chance of making better decisions next time around. The earliest version of the novel, however, did not include the magician, and ended on "a totally pessimistic note". The revolution in Ouspensky's thoughts on recurrence – the idea that change is possible – took place after he became a disciple of the mystic George Gurdjieff, who taught that a person could achieve a higher state of consciousness through a system of strict self-discipline. When Ouspensky asked about eternal recurrence, Gurdjieff told him: Ouspensky incorporated this idea into his later writings. In A New Model of the Universe, he argued against Nietzsche's proof of the mathematical necessity of eternal repetition, claiming that a large enough quantity of matter would be capable of an infinite number of possible combinations. According to Ouspensky, everyone is reborn again into the same life at the moment of their death, and many people will indeed continue to live the exact same lives for eternity, but it is also possible to break the cycle and enter into a new plane of existence. Science and mathematics The Poincaré recurrence theorem states that certain dynamical systems, such as particles of gas in a sealed container, will return infinitely often to a state arbitrarily close to their original state. The theorem, first advanced by Henri Poincaré in 1890, remains influential, and is today the basis of ergodic theory. Attempts have been made to prove or disprove the possibility of Poincaré recurrence in a system the size of a galaxy or a universe. Philosopher Michael Huemer has argued that if this is so, then reincarnation can be proved by a person's current existence, using Bayesian probability theory. See also Notes References Further reading External links Causality Religious cosmologies Religious philosophical concepts Philosophy of time Periodic phenomena Concepts in metaphysics Existentialist concepts
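The Poincaré recurrence theorem mentioned above can be illustrated with a toy, fully discrete system (an illustration chosen here, not an example from the article): Arnold's "cat map" on an N×N grid of pixels is a measure-preserving bijection, so repeated iteration must eventually return any initial picture exactly to its starting state.

def cat_map(x, y, n):
    # Arnold's cat map on an n-by-n integer grid (a measure-preserving bijection).
    return (2 * x + y) % n, (x + y) % n

def recurrence_time(n):
    # Number of iterations after which every grid point has returned home,
    # i.e. the period of the map acting on the whole n-by-n "image".
    points = [(x, y) for x in range(n) for y in range(n)]
    current, steps = points, 0
    while True:
        current = [cat_map(x, y, n) for x, y in current]
        steps += 1
        if current == points:
            return steps

print(recurrence_time(5))   # a 5x5 grid recurs after only a handful of steps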
Eternal return
[ "Physics" ]
2,706
[ "Spacetime", "Philosophy of time", "Physical quantities", "Time" ]
865,107
https://en.wikipedia.org/wiki/Stover
Stover consists of the leaves and stalks of field crops, such as corn (maize), sorghum or soybean, that are commonly left in a field after harvesting the grain. It is similar to straw, the residue left after any cereal grain or grass has been harvested at maturity for its seed. It can be directly grazed by cattle or dried for use as fodder. Stover has attracted some attention as a potential fuel source, and as biomass for fermentation or as a feedstock for cellulosic ethanol production. Stover from various crops can also be used in mushroom compost preparation. The word stover derives from the English legal term estovers, referring to the right of tenants to cut timber. See also Corn stover Crop residue Notes Biodegradable materials Energy crops Fodder

Stover
[ "Physics", "Chemistry" ]
166
[ "Biodegradation", "Biodegradable materials", "Materials", "Matter" ]
865,138
https://en.wikipedia.org/wiki/Euler%27s%20rotation%20theorem
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group. The theorem is named after Leonhard Euler, who proved it in 1775 by means of spherical geometry. The axis of rotation is known as an Euler axis, typically represented by a unit vector ê. Its product by the rotation angle is known as an axis-angle vector. The extension of the theorem to kinematics yields the concept of instant axis of rotation, a line of fixed points. In linear algebra terms, the theorem states that, in 3D space, any two Cartesian coordinate systems with a common origin are related by a rotation about some fixed axis. This also means that the product of two rotation matrices is again a rotation matrix and that for a non-identity rotation matrix one eigenvalue is 1 and the other two are both complex, or both equal to −1. The eigenvector corresponding to this eigenvalue is the axis of rotation connecting the two systems. Euler's theorem (1776) Euler states the theorem as follows: Theorema. Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter, cuius directio in situ translato conueniat cum situ initiali. or (in English): When a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position. Proof Euler's original proof was made using spherical geometry and therefore whenever he speaks about triangles they must be understood as spherical triangles. Previous analysis To arrive at a proof, Euler analyses what the situation would look like if the theorem were true. To that end, suppose the yellow line in Figure 1 goes through the center of the sphere and is the axis of rotation we are looking for, and point is one of the two intersection points of that axis with the sphere. Then he considers an arbitrary great circle that does not contain (the blue circle), and its image after rotation (the red circle), which is another great circle not containing . He labels a point on their intersection as point . (If the circles coincide, then can be taken as any point on either; otherwise is one of the two points of intersection.) Now is on the initial circle (the blue circle), so its image will be on the transported circle (red). He labels that image as point . Since is also on the transported circle (red), it is the image of another point that was on the initial circle (blue) and he labels that preimage as (see Figure 2). Then he considers the two arcs joining and to . These arcs have the same length because arc is mapped onto arc . Also, since is a fixed point, triangle is mapped onto triangle , so these triangles are isosceles, and arc bisects angle . Construction of the best candidate point Let us construct a point that could be invariant using the previous considerations. We start with the blue great circle and its image under the transformation, which is the red great circle as in the Figure 1. Let point be a point of intersection of those circles. 
If ’s image under the transformation is the same point then is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing is the axis of rotation and the theorem is proved. Otherwise we label ’s image as and its preimage as , and connect these two points to with arcs and . These arcs have the same length. Construct the great circle that bisects and locate point on that great circle so that arcs and have the same length, and call the region of the sphere containing and bounded by the blue and red great circles the interior of . (That is, the yellow region in Figure 3.) Then since and is on the bisector of , we also have . Proof of its invariance under the transformation Now let us suppose that is the image of . Then we know and orientation is preserved, so must be interior to . Now is transformed to , so . Since is also the same length as , then and . But , so and . Therefore is the same point as . In other words, is a fixed point of the transformation, and since the center is also a fixed point, the diameter of the sphere containing is the axis of rotation. Final notes about the construction Euler also points out that can be found by intersecting the perpendicular bisector of with the angle bisector of , a construction that might be easier in practice. He also proposed the intersection of two planes: the symmetry plane of the angle (which passes through the center of the sphere), and the symmetry plane of the arc (which also passes through ). Proposition. These two planes intersect in a diameter. This diameter is the one we are looking for. Proof. Let us call either of the endpoints (there are two) of this diameter over the sphere surface. Since is mapped on and the triangles have the same angles, it follows that the triangle is transported onto the triangle . Therefore the point has to remain fixed under the movement. Corollaries. This also shows that the rotation of the sphere can be seen as two consecutive reflections about the two planes described above. Points in a mirror plane are invariant under reflection, and hence the points on their intersection (a line: the axis of rotation) are invariant under both the reflections, and hence under the rotation. Another simple way to find the rotation axis is by considering the plane on which the points , , lie. The rotation axis is obviously orthogonal to this plane, and passes through the center of the sphere. Given that for a rigid body any movement that leaves an axis invariant is a rotation, this also proves that any arbitrary composition of rotations is equivalent to a single rotation around a new axis. Matrix proof A spatial rotation is a linear map in one-to-one correspondence with a rotation matrix that transforms a coordinate vector into , that is . Therefore, another version of Euler's theorem is that for every rotation , there is a nonzero vector for which ; this is exactly the claim that is an eigenvector of associated with the eigenvalue 1. Hence it suffices to prove that 1 is an eigenvalue of ; the rotation axis of will be the line , where is the eigenvector with eigenvalue 1. A rotation matrix has the fundamental property that its inverse is its transpose, that is where is the identity matrix and superscript T indicates the transposed matrix. Compute the determinant of this relation to find that a rotation matrix has determinant ±1. 
In particular, A rotation matrix with determinant +1 is a proper rotation, and one with a negative determinant −1 is an improper rotation, that is a reflection combined with a proper rotation. It will now be shown that a proper rotation matrix has at least one invariant vector , i.e., . Because this requires that , we see that the vector must be an eigenvector of the matrix with eigenvalue . Thus, this is equivalent to showing that . Use the two relations for any matrix A and (since ) to compute This shows that is a root (solution) of the characteristic equation, that is, In other words, the matrix is singular and has a non-zero kernel, that is, there is at least one non-zero vector, say , for which The line for real is invariant under , i.e., is a rotation axis. This proves Euler's theorem. Equivalence of an orthogonal matrix to a rotation matrix Two matrices (representing linear maps) are said to be equivalent if there is a change of basis that makes one equal to the other. A proper orthogonal matrix is always equivalent (in this sense) to either the following matrix or to its vertical reflection: Then, any orthogonal matrix is either a rotation or an improper rotation. A general orthogonal matrix has only one real eigenvalue, either +1 or −1. When it is +1 the matrix is a rotation. When −1, the matrix is an improper rotation. If has more than one invariant vector then and . Any vector is an invariant vector of . Excursion into matrix theory In order to prove the previous equation some facts from matrix theory must be recalled. An matrix has orthogonal eigenvectors if and only if is normal, that is, if . This result is equivalent to stating that normal matrices can be brought to diagonal form by a unitary similarity transformation: and is unitary, that is, The eigenvalues are roots of the characteristic equation. If the matrix happens to be unitary (and note that unitary matrices are normal), then and it follows that the eigenvalues of a unitary matrix are on the unit circle in the complex plane: Also an orthogonal (real unitary) matrix has eigenvalues on the unit circle in the complex plane. Moreover, since its characteristic equation (an th order polynomial in ) has real coefficients, it follows that its roots appear in complex conjugate pairs, that is, if is a root then so is . There are 3 roots, thus at least one of them must be purely real (+1 or −1). After recollection of these general facts from matrix theory, we return to the rotation matrix . It follows from its realness and orthogonality that we can find a such that: If a matrix can be found that gives the above form, and there is only one purely real component and it is −1, then we define to be an improper rotation. Let us only consider the case, then, of matrices R that are proper rotations (the third eigenvalue is just 1). The third column of the matrix will then be equal to the invariant vector . Writing and for the first two columns of , this equation gives If has eigenvalue 1, then and has also eigenvalue 1, which implies that in that case . In general, however, as implies that also holds, so can be chosen for . Similarly, can result in a with real entries only, for a proper rotation matrix . Finally, the matrix equation is transformed by means of a unitary matrix, which gives The columns of are orthonormal as it is a unitary matrix with real-valued entries only, due to its definition above, that is the complex conjugate of and that is a vector with real-valued components. 
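Because the inline formulas of the matrix proof do not render above, the key determinant computation can be restated compactly (this is the standard form of the argument, in notation chosen here): for a proper rotation R with R^{\mathsf T}R = I and \det R = 1,

\det(R - I) = \det\bigl((R - I)^{\mathsf T}\bigr) = \det(R^{\mathsf T} - I) = \det\bigl(R^{-1}(I - R)\bigr) = \det(R^{-1})\,\det(I - R) = (-1)^{3}\det(R - I) = -\det(R - I),

so \det(R - I) = 0, the matrix R - I is singular, and there exists a nonzero vector n with R\,n = n; the line spanned by n is the rotation axis.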
The third column is still , the other two columns of are perpendicular to . We can now see how our definition of improper rotation corresponds with the geometric interpretation: an improper rotation is a rotation around an axis (here, the axis corresponding to the third coordinate) and a reflection on a plane perpendicular to that axis. If we only restrict ourselves to matrices with determinant 1, we can thus see that they must be proper rotations. This result implies that any orthogonal matrix corresponding to a proper rotation is equivalent to a rotation over an angle around an axis . Equivalence classes The trace (sum of diagonal elements) of the real rotation matrix given above is . Since a trace is invariant under an orthogonal matrix similarity transformation, it follows that all matrices that are equivalent to by such orthogonal matrix transformations have the same trace: the trace is a class function. This matrix transformation is clearly an equivalence relation, that is, all such equivalent matrices form an equivalence class. In fact, all proper rotation rotation matrices form a group, usually denoted by SO(3) (the special orthogonal group in 3 dimensions) and all matrices with the same trace form an equivalence class in this group. All elements of such an equivalence class share their rotation angle, but all rotations are around different axes. If is an eigenvector of with eigenvalue 1, then is also an eigenvector of T, also with eigenvalue 1. Unless , and are different. Applications Generators of rotations Suppose we specify an axis of rotation by a unit vector , and suppose we have an infinitely small rotation of angle about that vector. Expanding the rotation matrix as an infinite addition, and taking the first order approach, the rotation matrix is represented as: A finite rotation through angle about this axis may be seen as a succession of small rotations about the same axis. Approximating as where is a large number, a rotation of about the axis may be represented as: It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product is the "generator" of the particular rotation, being the vector associated with the matrix . This shows that the rotation matrix and the axis–angle format are related by the exponential function. One can derive a simple expression for the generator . One starts with an arbitrary plane (in Euclidean space) defined by a pair of perpendicular unit vectors and . In this plane one can choose an arbitrary vector with perpendicular . One then solves for in terms of and substituting into an expression for a rotation in a plane yields the rotation matrix which includes the generator . To include vectors outside the plane in the rotation one needs to modify the above expression for by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function. Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group. Quaternions It follows from Euler's theorem that the relative orientation of any pair of coordinate systems may be specified by a set of three independent numbers. Sometimes a redundant fourth number is added to simplify operations with quaternion algebra. Three of these numbers are the direction cosines that orient the eigenvector. 
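The relations just described can be checked numerically in a short sketch (the axis and angle below are arbitrary example values): build R = exp(θK) from the skew-symmetric generator K of a unit axis via Rodrigues' formula, then verify that the trace equals 1 + 2 cos θ and that the axis is recovered as the eigenvector of R with eigenvalue 1.

import numpy as np

axis = np.array([1.0, 1.0, 0.0])
axis /= np.linalg.norm(axis)   # unit rotation axis (example value)
theta = 0.7                    # rotation angle in radians (example value)

# Skew-symmetric generator K such that K v = axis x v.
K = np.array([[0.0, -axis[2], axis[1]],
              [axis[2], 0.0, -axis[0]],
              [-axis[1], axis[0], 0.0]])

# Rodrigues' formula, equivalent to the matrix exponential exp(theta * K).
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

print(np.isclose(np.linalg.det(R), 1.0))               # proper rotation
print(np.isclose(np.trace(R), 1 + 2 * np.cos(theta)))  # trace is a class function

# Euler's theorem: the eigenvector with eigenvalue 1 gives the rotation axis.
vals, vecs = np.linalg.eig(R)
n = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
print(np.allclose(np.abs(n), np.abs(axis), atol=1e-8))  # recovered up to sign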
The fourth is the angle about the eigenvector that separates the two sets of coordinates. Such a set of four numbers is called a quaternion. While the quaternion as described above, does not involve complex numbers, if quaternions are used to describe two successive rotations, they must be combined using the non-commutative quaternion algebra derived by William Rowan Hamilton through the use of imaginary numbers. Rotation calculation via quaternions has come to replace the use of direction cosines in aerospace applications through their reduction of the required calculations, and their ability to minimize round-off errors. Also, in computer graphics the ability to perform spherical interpolation between quaternions with relative ease is of value. Generalizations In higher dimensions, any rigid motion that preserves a point in dimension or is a composition of at most rotations in orthogonal planes of rotation, though these planes need not be uniquely determined, and a rigid motion may fix multiple axes. Also, any rigid motion that preserves linearly independent points, which span an -dimensional body in dimension or , is a single plane of rotation. To put it another way, if two rigid bodies, with identical geometry, share at least points of 'identical' locations within themselves, the convex hull of which is -dimensional, then a single planar rotation can bring one to cover the other accurately in dimension or . A rigid motion in three dimensions that does not necessarily fix a point is a "screw motion". This is because a composition of a rotation with a translation perpendicular to the axis is a rotation about a parallel axis, while composition with a translation parallel to the axis yields a screw motion; see screw axis. This gives rise to screw theory. See also Euler angles Euler–Rodrigues formula Rotation formalisms in three dimensions Angular velocity Rotation around a fixed axis Matrix exponential Axis–angle representation Chasles' theorem (kinematics), for an extension concerning general rigid body displacements. Notes References Euler's theorem and its proof are contained in paragraphs 24–26 of the appendix (Additamentum. pp. 201–203) of L. Eulero (Leonhard Euler), Formulae generales pro translatione quacunque corporum rigidorum (General formulas for the translation of arbitrary rigid bodies), presented to the St. Petersburg Academy on October 9, 1775, and first published in Novi Commentarii academiae scientiarum Petropolitanae 20, 1776, pp. 189–207 (E478) and was reprinted in Theoria motus corporum rigidorum, ed. nova, 1790, pp. 449–460 (E478a) and later in his collected works Opera Omnia, Series 2, Volume 9, pp. 84–98. External links Euler's original treatise in The Euler Archive: entry on E478, first publication 1776 (pdf) Euler's original text (in Latin) and English translation (by Johan Sten) Wolfram Demonstrations Project for Euler's Rotation Theorem (by Tom Verhoeff) Euclidean symmetries Theorems in geometry Eponymous theorems of geometry Rotation in three dimensions Leonhard Euler
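The quaternion bookkeeping mentioned above can be sketched with a hand-rolled example (an illustration written here, not any particular aerospace library's API): two rotations composed by the Hamilton product yield a single quaternion, i.e. a single equivalent axis and angle, exactly as Euler's theorem guarantees.

import math

def quat_from_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of `angle` about unit `axis`.
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_multiply(a, b):
    # Hamilton product: applying rotation b first, then rotation a.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

q1 = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)   # 90 degrees about z
q2 = quat_from_axis_angle((1.0, 0.0, 0.0), math.pi / 2)   # 90 degrees about x
q = quat_multiply(q1, q2)

# Recover the single equivalent axis and angle of the composed rotation.
angle = 2 * math.acos(q[0])
axis = tuple(c / math.sin(angle / 2) for c in q[1:])
print(angle, axis)   # 120 degrees about (1, 1, 1)/sqrt(3)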
Euler's rotation theorem
[ "Physics", "Mathematics" ]
3,545
[ "Mathematical theorems", "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Eponymous theorems of geometry", "Mathematical relations", "Geometry", "Theorems in geometry", "Mathematical problems", "Symmetry" ]
865,211
https://en.wikipedia.org/wiki/Homes%27s%20law
In superconductivity, Homes's law is an empirical relation that states that a superconductor's critical temperature (Tc) is proportional to the strength of the superconducting state for temperatures well below Tc close to zero temperature (also referred to as the fully formed superfluid density, ) multiplied by the electrical resistivity measured just above the critical temperature. In cuprate high-temperature superconductors the relation follows the form , or alternatively . Many novel superconductors are anisotropic, so the resistivity and the superfluid density are tensor quantities; the superscript denotes the crystallographic direction along which these quantities are measured. Note that this expression assumes that the conductivity and temperature have both been recast in units of cm−1 (or s−1), and that the superfluid density has units of cm−2 (or s−2); the constant is dimensionless. The expected form for a BCS dirty-limit superconductor has slightly larger numerical constant of ~8.1. The law is named for physicist Christopher Homes and was first presented in the July 29, 2004 edition of Nature, and was the subject of a News and Views article by Jan Zaanen in the same issue in which he speculated that the high transition temperatures observed in the cuprate superconductors are because the metallic states in these materials are as viscous as permitted by the laws of quantum physics. A more detailed version of this scaling relation subsequently appeared in Physical Review B in 2005, in which it was argued that any material that falls on the scaling line is likely in the dirty limit (superconducting coherence length ξ0 is much greater than the normal-state mean-free path l, ξ0≫ l); however, a paper by Vladimir Kogan in Physical Review B in 2013 has shown that the scaling relation is valid even when ξ0~ l, suggesting that only materials in the clean limit (ξ0≪ l) will fall off of this scaling line. Francis Pratt and Stephen Blundell have argued that Homes's law is violated in the organic superconductors. This work was first presented in Physical Review Letters in March 2005. On the other hand, it has been recently demonstrated by Sasa Dordevic and coworkers that if the dc conductivity and the superfluid density are measured on the same sample at the same time using either infrared or microwave impedance spectroscopy, then the organic superconductors do indeed fall on the universal scaling line, along with a number of other exotic superconductors. This work was published in Scientific Reports in 2013. References Superconductivity Superfluidity
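As a rough numerical sketch of how the scaling relation is used (the 4.4 prefactor is the dimensionless constant commonly quoted for this law, the 8.1 value is the BCS dirty-limit expectation stated above, and the input values are arbitrary example numbers, not data from the cited papers):

# Homes's law sketch: superfluid density estimated from the normal-state dc
# conductivity just above Tc. Quantities are recast in spectroscopic units as
# described above: sigma_dc and Tc in cm^-1, rho_s in cm^-2.

HOMES_CONSTANT = 4.4              # commonly quoted dimensionless prefactor (assumed here)
BCS_DIRTY_LIMIT_CONSTANT = 8.1    # expected value in the BCS dirty limit (see above)

K_TO_INVERSE_CM = 0.695           # 1 kelvin expressed in cm^-1 (k_B / (h c))

def superfluid_density(sigma_dc_inverse_cm, tc_kelvin, constant=HOMES_CONSTANT):
    tc_inverse_cm = tc_kelvin * K_TO_INVERSE_CM
    return constant * sigma_dc_inverse_cm * tc_inverse_cm   # cm^-2

# Arbitrary example inputs, for illustration only:
print(superfluid_density(sigma_dc_inverse_cm=5000.0, tc_kelvin=90.0))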
Homes's law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
550
[ "Matter", "Physical phenomena", "Phase transitions", "Physical quantities", "Superconductivity", "Phases of matter", "Materials science", "Superfluidity", "Condensed matter physics", "Exotic matter", "Electrical resistance and conductance", "Fluid dynamics" ]
866,104
https://en.wikipedia.org/wiki/Acceleron
An acceleron is a hypothetical subatomic particle postulated to relate the mass of the neutrino to the dark energy conjectured to be responsible for the accelerating expansion of the universe. The acceleron was postulated by researchers at the University of Washington in 2004. References Dark energy Hypothetical elementary particles
Acceleron
[ "Physics", "Astronomy" ]
64
[ "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Energy (physics)", "Particle physics", "Particle physics stubs", "Dark energy", "Hypothetical elementary particles", "Wikipedia categories named after physical quantities", "Physics...
1,331,441
https://en.wikipedia.org/wiki/Document%20classification
Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification. The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied. Documents may be classified according to their subjects or according to other attributes (such as document type, author, printing year etc.). In the rest of this article only subject classification is considered. There are two main philosophies of subject classification of documents: the content-based approach and the request-based approach. "Content-based" versus "request-based" classification Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries, that at least 20% of the content of a book should be about the class to which the book is assigned. In automatic classification it could be the number of times given words appears in a document. Request-oriented classification (or -indexing) is classification in which the anticipated request from users is influencing how documents are being classified. The classifier asks themself: “Under which descriptors should this entity be found?” and “think of all the possible queries and decide for which ones the entity at hand is relevant” (Soergel, 1985, p. 230). Request-oriented classification may be classification that is targeted towards a particular audience or user group. For example, a library or a database for feminist studies may classify/index documents differently when compared to a historical library. It is probably better, however, to understand request-oriented classification as policy-based classification: The classification is done according to some ideals and reflects the purpose of the library or database doing the classification. In this way it is not necessarily a kind of classification or indexing based on user studies. Only if empirical data about use or users are applied should request-oriented classification be regarded as a user-based approach. Classification versus indexing Sometimes a distinction is made between assigning documents to classes ("classification") versus assigning subjects to documents ("subject indexing") but as Frederick Wilfrid Lancaster has argued, this distinction is not fruitful. "These terminological distinctions,” he writes, “are quite meaningless and only serve to cause confusion” (Lancaster, 2003, p. 21). The view that this distinction is purely superficial is also supported by the fact that a classification system may be transformed into a thesaurus and vice versa (cf., Aitchison, 1986, 2004; Broughton, 2008; Riesthuis & Bliedung, 1991). 
Therefore, the act of labeling a document (say by assigning a term from a controlled vocabulary to a document) is at the same time to assign that document to the class of documents indexed by that term (all documents indexed or classified as X belong to the same class of documents). In other words, labeling a document is the same as assigning it to the class of documents indexed under that label. Automatic document classification (ADC) Automatic document classification tasks can be divided into three sorts: supervised document classification where some external mechanism (such as human feedback) provides information on the correct classification for documents, unsupervised document classification (also known as document clustering), where the classification must be done entirely without reference to external information, and semi-supervised document classification, where parts of the documents are labeled by the external mechanism. There are several software products under various license models available. Techniques Automatic document classification techniques include: Artificial neural network Concept Mining Decision trees such as ID3 or C4.5 Expectation maximization (EM) Instantaneously trained neural networks Latent semantic indexing Multiple-instance learning Naive Bayes classifier Natural language processing approaches Rough set-based classifier Soft set-based classifier Support vector machines (SVM) K-nearest neighbour algorithms tf–idf Applications Classification techniques have been applied to spam filtering, a process which tries to discern E-mail spam messages from legitimate emails email routing, sending an email sent to a general address to a specific address or mailbox depending on topic language identification, automatically determining the language of a text genre classification, automatically determining the genre of a text readability assessment, automatically determining the degree of readability of a text, either to find suitable materials for different age groups or reader types or as part of a larger text simplification system sentiment analysis, determining the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document. health-related classification using social media in public health surveillance article triage, selecting articles that are relevant for manual literature curation, for example as is being done as the first step to generate manually curated annotation databases in biology See also Categorization Classification (disambiguation) Compound term processing Concept-based image indexing Content-based image retrieval Decimal section numbering Document Document retrieval Document clustering Information retrieval Knowledge organization Knowledge Organization System Library classification Machine learning Native Language Identification String metrics Subject (documents) Subject indexing Supervised learning, unsupervised learning Text mining, web mining, concept mining References Further reading Fabrizio Sebastiani. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47, 2002. Stefan Büttcher, Charles L. A. Clarke, and Gordon V. Cormack. Information Retrieval: Implementing and Evaluating Search Engines . MIT Press, 2010. External links Introduction to document classification Bibliography on Automated Text Categorization Bibliography on Query Classification Text Classification analysis page Learning to Classify Text - Chap. 
6 of the book Natural Language Processing with Python (available online) TechTC - Technion Repository of Text Categorization Datasets David D. Lewis's Datasets BioCreative III ACT (article classification task) dataset Data mining Information science Knowledge representation Machine learning Natural language processing
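As a concrete, minimal illustration of the supervised, content-based approach and of two of the techniques listed above (tf–idf features with a naive Bayes classifier), the following sketch assumes scikit-learn is available; the toy corpus and labels are invented, and an SVM or other classifier could be substituted in the same pipeline.

```python
# Minimal supervised document classification sketch (tf-idf features + naive Bayes).
# Assumes scikit-learn is installed; the tiny corpus below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "win a free prize, claim your reward now",        # spam
    "limited offer, cheap pills, click here",         # spam
    "meeting moved to 3pm, agenda attached",          # legitimate
    "please review the draft report before Friday",   # legitimate
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

print(model.predict(["free prize, click now"]))        # expected: ['spam']
print(model.predict(["agenda for Friday's meeting"]))  # expected: ['ham']
```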
Document classification
[ "Technology", "Engineering" ]
1,314
[ "Artificial intelligence engineering", "Natural language processing", "Natural language and computing", "Machine learning" ]
1,331,526
https://en.wikipedia.org/wiki/Enflurane
Enflurane (2-chloro-1,1,2-trifluoroethyl difluoromethyl ether) is a halogenated ether. Developed by Ross Terrell in 1963, it was first used clinically in 1966. It was increasingly used for inhalational anesthesia during the 1970s and 1980s but is no longer in common use. Enflurane is a structural isomer of isoflurane. It vaporizes readily, but is a liquid at room temperature. Physical properties Pharmacology The exact mechanism of the action of general anaesthetics has not been delineated. Enflurane acts as a positive allosteric modulator of the GABAA, glycine, and 5-HT3 receptors, and as a negative allosteric modulator of the AMPA, kainate, and NMDA receptors, as well as of nicotinic acetylcholine receptors. Side effects Clinically, enflurane produces a dose-related depression of myocardial contractility with an associated decrease in myocardial oxygen consumption. Between 2% and 5% of the inhaled dose is oxidised in the liver, producing fluoride ions and difluoromethoxy-difluoroacetic acid. This is significantly higher than the metabolism of its structural isomer isoflurane. Enflurane also lowers the threshold for seizures, and should especially not be used on people with epilepsy. Like all potent inhalation anaesthetic agents it is a known trigger of malignant hyperthermia. Like the other potent inhalation agents it relaxes the uterus in pregnant women which is associated with more blood loss at delivery or other procedures on the gravid uterus. The obsolete (as an anaesthetic) agent methoxyflurane had a nephrotoxic effect and caused acute kidney injury, usually attributed to the liberation of fluoride ions from its metabolism. Enflurane is similarly metabolised but the liberation of fluoride results in a lower plasma level and enflurane related kidney failure seemed unusual if seen at all. Occupational safety The U.S. National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) for exposure to waste anaesthetic gas of 2 ppm (15.1 mg/m3) over a 60-minute period. Symptoms of occupational exposure to enflurane include eye irritation, central nervous system depression, analgesia, anesthesia, convulsions, and respiratory depression. References 5-HT3 agonists AMPA receptor antagonists NMDA receptor antagonists Kainate receptor antagonists General anesthetics Ethers Organochlorides Organofluorides GABAA receptor positive allosteric modulators Nicotinic antagonists Glycine receptor agonists Fluranes Difluoromethoxy compounds
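The mass-concentration figure quoted for the NIOSH limit can be checked with the usual ppm-to-mg/m3 conversion. In the sketch below, the molecular formula C3H2ClF5O (read off from the chemical name above) and the 24.45 L/mol molar volume at 25 °C and 1 atm are assumptions supplied for the example rather than values stated in the article.

```python
# Check that 2 ppm of enflurane corresponds to roughly 15.1 mg/m^3.
# Assumptions: molecular formula C3H2ClF5O (from the name above) and an
# ideal-gas molar volume of 24.45 L/mol at 25 degC and 1 atm.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.45, "F": 18.998, "O": 15.999}
FORMULA = {"C": 3, "H": 2, "Cl": 1, "F": 5, "O": 1}

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())  # ~184.5 g/mol

def ppm_to_mg_per_m3(ppm, molar_mass_g, molar_volume_l=24.45):
    # 1 ppm of vapour corresponds to (molar_mass / molar_volume) mg/m^3
    return ppm * molar_mass_g / molar_volume_l

print(round(ppm_to_mg_per_m3(2.0, molar_mass), 1))  # ~15.1, matching the REL above
```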
Enflurane
[ "Chemistry" ]
617
[ "Organic compounds", "Functional groups", "Ethers" ]
1,331,587
https://en.wikipedia.org/wiki/B%20%E2%88%92%20L
In particle physics, B − L (pronounced "bee minus ell") is a quantum number which is the difference between the baryon number () and the lepton number () of a quantum system. Details This quantum number is the charge of a global/gauge U(1) symmetry in some Grand Unified Theory models, called . Unlike baryon number alone or lepton number alone, this hypothetical symmetry would not be broken by chiral anomalies or gravitational anomalies, as long as this symmetry is global, which is why this symmetry is often invoked. If exists as a symmetry, then for the seesaw mechanism to work has to be spontaneously broken to give the neutrinos a nonzero mass. The anomalies that would break baryon number conservation and lepton number conservation individually cancel in such a way that is always conserved. One hypothetical example is proton decay where a proton () would decay into a pion () and positron (). The weak hypercharge is related to via where X charge (not to be confused with the X boson) is the conserved quantum number associated with the global U(1) symmetry Grand Unified Theory. See also Baryogenesis Leptogenesis Majoron Proton decay X and Y bosons X (charge) Leptoquark References Conservation laws Flavour (particle physics)
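The conservation bookkeeping for the proton-decay example can be made explicit with a few lines of code. Taking the pion to be neutral is an assumption made here for illustration (the article does not specify its charge); the point of the sketch is that B and L each change by one unit while B − L is unchanged.

```python
# Bookkeeping sketch for B, L and B - L in the hypothetical decay p -> e+ + pi0.
# The (B, L) assignments below are the standard ones; the neutral pion is an
# illustrative assumption since the text leaves the pion unspecified.
QUANTUM_NUMBERS = {          # particle: (baryon number B, lepton number L)
    "proton":   (1, 0),
    "positron": (0, -1),     # antilepton
    "pi0":      (0, 0),      # quark-antiquark bound state
}

def b_minus_l(particles):
    B = sum(QUANTUM_NUMBERS[p][0] for p in particles)
    L = sum(QUANTUM_NUMBERS[p][1] for p in particles)
    return B, L, B - L

print(b_minus_l(["proton"]))            # (1, 0, 1)
print(b_minus_l(["positron", "pi0"]))   # (0, -1, 1): B and L change, B - L is conserved
```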
B − L
[ "Physics" ]
279
[ "Equations of physics", "Conservation laws", "Particle physics", "Particle physics stubs", "Symmetry", "Physics theorems" ]
1,331,655
https://en.wikipedia.org/wiki/Dew%20pond
A dew pond is an artificial pond usually sited on the top of a hill, intended for watering livestock. Dew ponds are used in areas where a natural supply of surface water may not be readily available. The name dew pond (sometimes cloud pond or mist pond) is first found in the Journal of the Royal Agricultural Society in 1865. Despite the name, their primary source of water is believed to be rainfall rather than dew or mist. Construction They are usually shallow, saucer-shaped and lined with puddled clay, chalk or marl on an insulating straw layer over a bottom layer of chalk or lime. To deter earthworms from their natural tendency of burrowing upwards, which in a short while would make the clay lining porous, a layer of soot would be incorporated or lime mixed with the clay. The clay is usually covered with straw to prevent cracking by the sun and a final layer of chalk rubble or broken stone to protect the lining from the hoofs of sheep or cattle. To retain more of the rainfall, the clay layer could be extended across the catchment area of the pond. If the pond's temperature is kept low, evaporation (a major water loss) may be significantly reduced, thus maintaining the collected rainwater. According to researcher Edward Martin, this may be attained by building the pond in a hollow, where cool air is likely to gather, or by keeping the surrounding grass long to enhance heat radiation. As the water level in the basin falls, a well of cool, moist air tends to form over the surface, restricting evaporation. A method of constructing the base layer using chalk puddle was described in The Field 14 December 1907. A Sussex farmer born in 1850 tells how he and his forefathers made dew ponds: The initial supply of water after construction has to be provided by the builders, using artificial means. A preferred method was to arrange to finish the excavation in winter, so that any fallen snow could be collected and heaped into the centre of the pond to await melting. History The mystery of dew ponds has drawn the interest of many historians and scientists, but until recent times there has been little agreement on their early origins. It was widely believed that the technique for building dew ponds has been understood from the earliest times, as Kipling tells us in Puck of Pook's Hill: "…the Flint Men made the Dewpond under Chanctonbury Ring." The two Chanctonbury Hill dew ponds were dated, from flint tools excavated nearby and similarity to other dated earthworks, to the neolithic period. Landscape archaeology too seemed to demonstrate that they were used by the inhabitants of the nearby hill fort (probably from an earlier date than that of the surviving late Bronze Age structure) for watering cattle. A more prosaic assessment from Maud Cunnington, an archaeologist from Wiltshire, while not ruling out a prehistoric origin, describes such positive interpretations of the available evidence as no more than “flights of fancy”. A strong claim to antiquity may, however, be made for at least one Wiltshire dew pond: A land deed dated 825 CE mentions Oxenmere () at Milk Hill, Wiltshire, showing that dew ponds were in use during the Saxon period. The parliamentary enclosures of the mid eighteenth to mid nineteenth centuries caused many new upland ponds to be made, as access to traditional sources of drinking water for livestock was cut off. The suggestion has also been made that the nursery rhyme about Jack and Jill may refer to collecting water from a dew pond at the top of a hill, rather than from a well. 
The naturalist Gilbert White, writing in 1788, noted that during extended periods of summer drought the artificial ponds on the downs above his native Selborne, Hampshire, retained their water, despite supplying flocks of sheep, while larger ponds in the valley below had dried up. In 1877 H. P. Slade observed that this was because the lower ponds have debris washed into them from surface water drainage, making them shallow, but the higher ones do not: the smaller volume of water is depleted more rapidly. Later observations demonstrated that during a night of favourable dew formation a typical increase in water level of some two or three inches was possible. However, there remains controversy about the means of replenishment of dew ponds. Experiments conducted in 1885 to determine the origin of the water found that dew forms not from dampness in the air but from moisture in the ground directly beneath the site of the condensation: dew, therefore, was ruled out as a source of replenishment. Other scientists have pointed out that the 1885 experiments failed to take into account the insulating effect of the straw and the cooling effect of the damp clay: the combined effect would be to keep the pond at a lower temperature than the surrounding earth and thus able to condense a disproportionate share of moisture. In turn these conclusions were disproved in the 1930s, when it was pointed out that the heat-retaining quality of water (its thermal capacity) was many times greater than that of earth, and therefore the air above a pond in summer would be the last place to attract condensation. The deciding factor, it was concluded, is the extent of the saucer-shaped basin extending beyond the pond itself: the large basin would collect more rainfall than a pond created without such a surrounding feature. Dew ponds are still common on the downlands of southern England, the North Derbyshire and Staffordshire moorlands, and in Nottinghamshire. Measuring dew production The first scientific experiments to measure and correlate the rate of dew deposit with evaporation were made by Harry Pool Slade of Aston Upthorpe, Berkshire, between June 1876 and February 1877, at a dew pond on Aston Upthorpe Downs (). Slade measured overnight dew deposit (by weighing cotton wool when dry and after overnight exposure), evaporation from copper pans beside the pond, the depletion of the pond, and relative humidity. He found that on days with heavy overnight dewfall the level of water in the pond was not replenished but invariably diminished. In situ measurements of evaporation and condensation were taken at the Helmfleeth dew pond in Poppenbüll municipality (Eiderstedt Peninsula in Schleswig-Holstein, Germany) using meteorological measuring instruments and a floating evaporation pan after Brockamp & Werner (1970). These measurements proved the dew formation on the basis of temperature changes and the weather conditions. The Helmfleeth dew pond is part of the water supply for a marsh area and is still in use today. Reproductions of historical dew ponds In 2014, the traditional technique was verified by means of modern building material at reproductions of dew ponds in East Friesland. In this context, various techniques were tried in two terrestrial hollows. Commercially available PVC-film was used for the sealing and foam glass gravel for the insulation. The construction was carried out by craftsmen and the climatological analysis by Werner and Coldewey. 
See also Air well (condenser) Rainwater harvesting References Bibliography (Note: link is 1907 ed.) Journal articles Dewponds in specific locations Further reading External links Articles Building instructions from Popular Science Article about dew ponds in Ascension Island Images Dew pond images at Geograph Appropriate technology Ecological techniques Irrigation Ponds Water supply Water conservation
Dew pond
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
1,475
[ "Ecological techniques", "Water supply", "Hydrology", "Environmental engineering" ]
1,331,789
https://en.wikipedia.org/wiki/Lepton%20number
In particle physics, lepton number (historically also called lepton charge) is a conserved quantum number representing the difference between the number of leptons and the number of antileptons in an elementary particle reaction. Lepton number is an additive quantum number, so its sum is preserved in interactions (as opposed to multiplicative quantum numbers such as parity, where the product is preserved instead). The lepton number is defined by where is the number of leptons and is the number of antileptons. Lepton number was introduced in 1953 to explain the absence of reactions such as in the Cowan–Reines neutrino experiment, which instead observed . This process, inverse beta decay, conserves lepton number, as the incoming antineutrino has lepton number −1, while the outgoing positron (antielectron) also has lepton number −1. Lepton flavor conservation In addition to lepton number, lepton family numbers are defined as the electron number, for the electron and the electron neutrino; the muon number, for the muon and the muon neutrino; and the tau number, for the tauon and the tau neutrino. Prominent examples of lepton flavor conservation are the muon decays and . In these decay reactions, the creation of an electron is accompanied by the creation of an electron antineutrino, and the creation of a positron is accompanied by the creation of an electron neutrino. Likewise, a decaying negative muon results in the creation of a muon neutrino, while a decaying positive muon results in the creation of a muon antineutrino. Finally, the weak decay of a lepton into a lower-mass lepton always results in the production of a neutrino-antineutrino pair: . One neutrino carries through the lepton number of the decaying heavy lepton, (a tauon in this example, whose faint residue is a tau neutrino) and an antineutrino that cancels the lepton number of the newly created, lighter lepton that replaced the original. (In this example, a muon antineutrino with that cancels the muon's . Violations of the lepton number conservation laws Lepton flavor is only approximately conserved, and is notably not conserved in neutrino oscillation. However, both the total lepton number and lepton flavor are still conserved in the Standard Model. Numerous searches for physics beyond the Standard Model incorporate searches for lepton number or lepton flavor violation, such as the hypothetical decay . Experiments such as MEGA and SINDRUM have searched for lepton number violation in muon decays to electrons; MEG set the current branching limit of order and plans to lower to limit to after 2016. Some theories beyond the Standard Model, such as supersymmetry, predict branching ratios of order to . The Mu2e experiment, in construction as of 2017, has a planned sensitivity of order . Because the lepton number conservation law in fact is violated by chiral anomalies, there are problems applying this symmetry universally over all energy scales. However, the quantum number is commonly conserved in Grand Unified Theory models. If neutrinos turn out to be Majorana fermions, neither individual lepton numbers, nor the total lepton number nor would be conserved, e.g. in neutrinoless double beta decay, where two neutrinos colliding head-on might actually annihilate, similar to the (never observed) collision of a neutrino and antineutrino. 
Reversed signs convention Some authors prefer to use lepton numbers that match the signs of the charges of the leptons involved, following the convention in use for the sign of weak isospin and the sign of strangeness quantum number (for quarks), both of which conventionally have the otherwise arbitrary sign of the quantum number match the sign of the particles' electric charges. When following the electric-charge-sign convention, the lepton number (shown with an over-bar here, to reduce confusion) of an electron, muon, tauon, and any neutrino counts as the lepton number of the positron, antimuon, antitauon, and any antineutrino counts as When this reversed-sign convention is observed, the baryon number is left unchanged, but the difference is replaced with a sum: , whose number value remains unchanged, since and See also Baryon number References Conservation laws Particle physics Leptons Flavour (particle physics) he:מספר לפטוני
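A small bookkeeping sketch makes the flavour-conservation statements above concrete for ordinary negative-muon decay into an electron, an electron antineutrino and a muon neutrino. The particle labels and tabulated assignments are illustrative; they follow the usual convention of +1 for leptons and −1 for antileptons.

```python
# Sketch: check total lepton number and the electron/muon family numbers
# in ordinary muon decay, mu- -> e- + anti-nu_e + nu_mu (described above).
FAMILY = {  # particle: (L_e, L_mu)
    "mu-":       (0, +1),
    "e-":        (+1, 0),
    "anti-nu_e": (-1, 0),
    "nu_mu":     (0, +1),
}

def lepton_numbers(particles):
    L_e  = sum(FAMILY[p][0] for p in particles)
    L_mu = sum(FAMILY[p][1] for p in particles)
    return {"L_e": L_e, "L_mu": L_mu, "L_total": L_e + L_mu}

print(lepton_numbers(["mu-"]))                       # {'L_e': 0, 'L_mu': 1, 'L_total': 1}
print(lepton_numbers(["e-", "anti-nu_e", "nu_mu"]))  # same numbers: family and total lepton number conserved
```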
Lepton number
[ "Physics" ]
962
[ "Equations of physics", "Conservation laws", "Particle physics", "Symmetry", "Physics theorems" ]
1,333,167
https://en.wikipedia.org/wiki/VAN%20method
The VAN method – named after P. Varotsos, K. Alexopoulos and K. Nomicos, authors of the 1981 papers describing it – measures low frequency electric signals, termed "seismic electric signals" (SES), by which Varotsos and several colleagues claimed to have successfully predicted earthquakes in Greece. Both the method itself and the manner by which successful predictions were claimed have been severely criticized. Supporters of VAN have responded to the criticism but the critics have not retracted their views. Since 2001, the VAN group has introduced a concept they call "natural time", applied to the analysis of their precursors. Initially it is applied on SES to distinguish them from noise and relate them to a possible impending earthquake. In case of verification (classification as "SES activity"), natural time analysis is additionally applied to the general subsequent seismicity of the area associated with the SES activity, in order to improve the time parameter of the prediction. The method treats earthquake onset as a critical phenomenon. After 2006, VAN say that all alarms related to SES activity have been made public by posting at arxiv.org. One such report was posted on Feb. 1, 2008, two weeks before the strongest earthquake in Greece during the period 1983-2011. This earthquake occurred on February 14, 2008, with magnitude (Mw) 6.9. VAN's report was also described in an article in the newspaper Ethnos on Feb. 10, 2008. However, Gerassimos Papadopoulos complained that the VAN reports were confusing and ambiguous, and that "none of the claims for successful VAN predictions is justified", but this complaint was answered on the same issue. Description of the VAN method Prediction of earthquakes with this method is based on the detection, recording and evaluation of seismic electric signals or SES. These electrical signals have a fundamental frequency component of 1 Hz or less and an amplitude the logarithm of which scales with the magnitude of the earthquake. According to VAN proponents, SES are emitted by rocks under stresses caused by plate-tectonic forces. There are three types of reported electric signal: Electric signals that occur shortly before a major earthquake. Signals of this type were recorded 6.5 hours before the 1995 Kobe earthquake in Japan, for example. Electric signals that occur some time before a major earthquake. A gradual variation in the Earth's electric field some time before an earthquake. Several hypotheses have been proposed to explain SES: Stress-related phenomena: Seismic electric signals are perhaps attributed to the piezoelectric behaviour of some minerals, especially quartz, or to effects related to the behavior of crystallographic defects under stress or strain. Series of SES, termed SES activities (which are recorded before major earthquakes), may appear a few weeks to a few months before an earthquake when the mechanical stress reaches a critical value. The generation of electric signals by minerals under high stress leading to fracture has been confirmed with laboratory experiments. Thermoelectric phenomena: Alternately, Chinese researchers proposed a mechanism which relies on the thermoelectric effect in magnetite. Groundwater phenomena: Three mechanisms have been proposed relying on the presence of groundwater in generating SES. The electrokinetic effect is associated with the motion of groundwater during a change in pore pressure. 
The seismic dynamo effect is associated with the motion of ions in groundwater relative to the geomagnetic field as a seismic wave creates displacement. Circular polarization would be characteristic of the seismic dynamo effect, and this has been observed both for artificial and natural seismic events. A radon ionization effect, caused by radon release and then subsequent ionization of material in groundwater, may also be active. The main isotope of radon is radioactive with a half-life of 3.9 days, and the nuclear decay of radon is known to have an ionizing effect on air. Many publications have reported increased radon concentration in the vicinity of some active tectonic faults a few weeks prior to strong seismic events. However, a strong correlation between radon anomalies and seismic events has not been demonstrated. While the electrokinetic effect may be consistent with signal detection tens or hundreds of kilometers away, the other mechanisms require a second mechanism to account for propagation: Signal transmission along faults: In one model, seismic electric signals propagate with relatively low attenuation along tectonic faults, due to the increased electrical conductivity caused either by the intrusion of ground water into the fault zone(s) or by the ionic characteristics of the minerals. Rock circuit: In the defect model, the presence of charge carriers and holes can be modeled as making an extensive circuit. Seismic electric signals are detected at stations which consist of pairs of electrodes (oriented NS and EW) inserted into the ground, with amplifiers and filters. The signals are then transmitted to the VAN scientists in Athens where they are recorded and evaluated. Currently the VAN team operates 9 stations, while in the past (until 1989) they could afford up to 17. The VAN team claimed that they were able to predict earthquakes of magnitude larger than 5, with an uncertainty of 0.7 units of magnitude, within a radius of 100 km, and in time window ranging from several hours to a few weeks. Several papers confirmed this success rate, leading to statistically significant conclusion. For example, there were eight M ≥ 5.5 earthquakes in Greece from January 1, 1984 through September 10, 1995, and the VAN network forecast six of these. The VAN method has also been used in Japan, but in early attempts success comparable to that achieved in Greece was "difficult" to attain. A preliminary investigation of seismic electric signals in France led to encouraging results. Earthquake prediction using "natural time" analysis Since 2001 the VAN team has attempted to improve the accuracy of the estimation of the time of the forthcoming earthquake. To that end, they introduced the concept of natural time, a time series analysis technique which puts weight on a process based on the ordering of events. Two terms characterize each event, the "natural time" , and the energy . is defined as , where k is an integer (the -th event) and is the total number of events in the time sequence of data. A related term, , is the ratio , which describes the fractional energy released. They introduce a critical term , the "variance in natural time", which puts extra weight on the energy term : where and Their current method deems SES valid when = 0.070. Once the SES are deemed valid, a second analysis is started in which the subsequent seismic (rather than electric) events are noted, and the region is divided up as a Venn diagram with at least two seismic events per overlapping rectangle. 
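The natural-time quantities sketched above can be computed in a few lines. In the example below the explicit formulas — χk = k/N, pk = Qk/ΣQn, and κ1 = ⟨χ²⟩ − ⟨χ⟩² with the pk as weights — are supplied as the definitions commonly used in the natural-time literature rather than quoted from this article, and the event energies are invented for illustration.

```python
# Sketch of the natural-time variance kappa_1 for a sequence of N events.
# chi_k = k/N, p_k = Q_k / sum(Q), kappa_1 = <chi^2> - <chi>^2 with p_k weights.
# These explicit formulas follow the usual natural-time definitions; the
# energies below are made-up numbers used only to illustrate the computation.

def natural_time_kappa1(energies):
    N = len(energies)
    total = sum(energies)
    chi = [k / N for k in range(1, N + 1)]          # natural time of the k-th event
    p = [q / total for q in energies]               # fractional energy released
    mean_chi = sum(pk * ck for pk, ck in zip(p, chi))
    mean_chi2 = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_chi2 - mean_chi ** 2                # "variance in natural time"

example_energies = [2.0, 1.5, 4.0, 0.8, 3.2, 1.1, 2.7]   # arbitrary event energies
kappa1 = natural_time_kappa1(example_energies)
print(f"kappa_1 = {kappa1:.3f}  (the VAN criterion singles out values near 0.070)")
```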
When the distribution of for the rectangular regions has its maximum at = 0.070, a critical seismic event is imminent, i.e. it will occur in a few days to one week or so, and a report is issued. Results The VAN team claim that out of seven mainshocks with magnitude Mw>=6.0 from 2001 through 2010 in the region of latitude N 36° to N 41° and longitude E 19° to E 27°, all but one could be classified with relevant SES activity identified and reported in advance through natural time analysis. Additionally, they assert that the occurrence time of four of these mainshocks with magnitude Mw>=6.4 were identified to within "a narrow range, a few days to around one week or so." These reports are inserted in papers housed in arXiv, and new reports are made and uploaded there. For example, a report preceding the strongest earthquake in Greece during the period 1983-2011, which occurred on February 14, 2008, with magnitude (Mw) 6.9, was publicized in arXiv almost two weeks before, on February 1, 2008. A description of the updated VAN method was collected in a book published by Springer in 2011, titled "Natural Time Analysis: The New View of Time." Natural time analysis also claims that the physical connection of SES activities with earthquakes is as follows: Taking the view that the earthquake occurrence is a phase-change (critical phenomenon), where the new phase is the mainshock occurrence, the above-mentioned variance term κ is the corresponding order parameter. The κ value calculated for a window comprising a number of seismic events comparable to the average number of earthquakes occurring within a few months, fluctuates when the window is sliding through a seismic catalogue. The VAN team claims that these κ fluctuations exhibit a minimum a few months before a mainshock occurrence and in addition this minimum occurs simultaneously with the initiation of the corresponding SES activity, and that this is the first time in the literature that such a simultaneous appearance of two precursory phenomena in independent datasets of different geophysical observables (electrical measurements, seismicity) has been observed. Furthermore, the VAN team claims that their natural time analysis of the seismic catalogue of Japan during the period from January 1, 1984 until the occurrence of the magnitude 9.0 Tohoku earthquake on March 11, 2011, revealed that such clear minima of the κ fluctuations appeared before all major earthquakes with magnitude 7.6 or larger. The deepest of these minima was said to occur on January 5, 2011, i.e., almost two months before the Tohoku earthquake occurrence. Finally, by dividing the Japanese region into small areas, the VAN team states that some small areas show minimum of the κ fluctuations almost simultaneously with the large area covering the whole Japan and such small areas clustered within a few hundred kilometers from the actual epicenter of the impending major earthquake. Criticisms of VAN Historically, the usefulness of the VAN method for prediction of earthquakes had been a matter of debate. Both positive and negative criticism on an older conception of the VAN method is summarized in the 1996 book "A Critical Review of VAN", edited by Sir James Lighthill. A critical review of the statistical methodology was published by Y. Y. Kagan of UCLA in 1997. Note that these criticisms predate the time series analysis methods introduced by the VAN group in 2001. 
The main points of the criticism were: Predictive success Critics say that the VAN method is hindered by a lack of statistical testing of the validity of the hypothesis because the researchers keep changing the parameters (the moving the goalposts) technique). VAN has claimed to have observed at a recording station in Athens a perfect record of a one-to-one correlation between SESs and earthquake of magnitude ≥ 2.9 which occurred 7 hours later in all of Greece. However, Max Wyss said that the list of earthquake used for the correlation was false. Although VAN stated in their article that the list of earthquakes was that of the Bulletin of the National Observatory of Athens (NOA), Wyss found that 37% of the earthquakes actually listed in the bulletin, including the largest one, were not in the list used by VAN for issuing their claim. In addition, 40% of the earthquake which VAN claimed had occurred were not in the NOA bulletin. Examining the probability of chance correlation of another set of 22 claims of successful predictions by VAN of M > 4.0 from January 1, 1987 through November 30, 1989 it was found that 74% were false, 9% correlated by chance, and for 14% the correlation was uncertain. No single event correlated at a probability greater than 85%, whereas the level required in statistics for accepting a hypothesis test as positive would more commonly be 95%. In response to Wyss' analysis of the NOA findings, VAN said that the criticisms were based on misunderstandings. VAN said that the calculations suggested by Wyss would lead to a paradox, i.e., to probability values larger than unity, when applied to an ideal earthquake prediction method. Other independent evaluations said that VAN obtained statistically significant results. Mainstream seismologists remain unconvinced by any of VAN's rebuttals. In 2011 the ICEF concluded that the optimistic prediction capability claimed by VAN could not be validated. Most seismologists consider VAN to have been "resoundingly debunked". Uyeda and others in 2011, however, supported the use of the technique. In 2018, the statistical significance of the method was revisited by the VAN group employing modern techniques, such as event coincidence analysis (ECA) and receiver operating characteristic (ROC), which they interpreted to show that SES exhibit precursory information far beyond chance. Proposed SES propagation mechanism An analysis of the propagation properties of SES in the Earth’s crust showed that it is impossible that signals with the amplitude reported by VAN could have been generated by small earthquakes and transmitted over the several hundred kilometers between the epicenter and the receiving station. In effect, if the mechanism is based on piezoelectricity or electrical charging of crystal deformations with the signal traveling along faults, then none of the earthquakes which VAN claimed were preceded by SES generated an SES themselves. VAN answered that such an analysis of the SES propagation properties is based on a simplified model of horizontally layered Earth and that this differs greatly from the real situation since Earth's crust contains inhomogeneities. When the latter are taken into account, for example by considering that the faults are electrically appreciably more conductive than the surrounding medium, VAN believes that electric signals transmitted at distances of the order of one hundred kilometers between the epicenter and the receiving station have amplitudes comparable to those reported by VAN. 
Electromagnetic compatibility issues VAN’s publications are further weakened by failure to address the problem of eliminating the many and strong sources of change in the magneto-electric field measured by them, such as telluric currents from weather, and electromagnetic interference (EMI) from man-made signals. One critical paper (Pham et al 1998) clearly correlates an SES used by the VAN group with digital radio transmissions made from a military base. In a subsequent paper, VAN said that such noise coming from digital radio transmitters of the military database has been clearly distinguished from true SES by following the criteria developed by VAN. Further work in Greece by Pham et al in 2002 has tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES. In 2003, modern methods of statistical physics, i.e., detrended fluctuation analysis (DFA), multifractal DFA and wavelet transform revealed that SES are clearly distinguished from those produced by human sources, since the former signals exhibit very strong long range correlations, while the latter signals do not. A work published in 2020 examined the statistical significance of the minima of the fluctuations of the order parameter κ1 of seismicity by event coincidence analysis as a possible precursor to strong earthquakes in both regional and global level. The results show that these minima are indeed statistically significant earthquake precursors. In particular, in the regional studies the time lag was found to be fully compatible with the finding that these mimima are simultaneous with the initiation of SES activities, thus the distinction of the latter precursory signals from those produced by human sources is evident. Public policy Finally, one requirement for any earthquake prediction method is that, in order for any prediction to be useful, it must predict a forthcoming earthquake within a reasonable time-frame, epicenter and magnitude. If the prediction is too vague, no feasible decision (such as to evacuate the population of a certain area for a given period of time) can be made. In practice, the VAN group issued a series of telegrams in the 1980s. During the same time frame, the technique also missed major earthquakes, in the sense that "for earthquakes with Mb≥5.0, the ratio of the predicted to the total number of earthquakes is 6/12 (50%) and the success rate of the prediction is also 6/12 (50%) with the probability gain of a factor of 4. With a confidence level of 99.8%, the possibility of this success rate being explained by a random model of earthquake occurrence taking into account the regional factor which includes high seismicity in the prediction area, can be rejected". This study concludes that "the statistical examination of the SES predictions proved high rates of success prediction and predicted events with high probability gain. This suggests a physical connection between SES and subsequent earthquakes, at least for an event of magnitude of Ms≥5". Predictions from the early VAN method led to public criticism and the cost associated with false alarms generated ill will. In 2016 the Union of Greek Physicists honored P. Varotsos for his work on VAN with a prize delivered by the President of Greece. Updated VAN method A review of the updated VAN method in 2020 says that it suffers from an abundance of false positives and is therefore not usable as a prediction protocol. 
VAN group answered by pinpointing misunderstandings in the specific reasoning. See also Earthquake prediction Seismology Seismo-electromagnetics Notes References External links Nature debates. Is the reliable prediction of individual earthquakes a realistic scientific goal? VAN earthquake prediction method. P. Varotsos' bibliography (the principal source of "VAN" papers). Earthquake and seismic risk mitigation
VAN method
[ "Engineering" ]
3,534
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
1,334,123
https://en.wikipedia.org/wiki/Quantum%20Darwinism
Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system; where the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of 25 years, including pointer states, einselection and decoherence. A study in 2010 is claimed to provide preliminary supporting evidence of quantum Darwinism with scars of a quantum dot "becoming a family of mother-daughter states" indicating they could "stabilize into multiple pointer states"; additionally, a similar kind of scene has been suggested with perturbation-induced scarring in disordered quantum dots (see scars). However, the claimed evidence is also subject to the circularity criticism by Ruth Kastner (see Implications below). Basically, the de facto phenomenon of decoherence that underlies the claims of Quantum Darwinism may not really arise in a unitary-only dynamics. Thus, even if there is decoherence, this does not show that macroscopic pointer states naturally emerge without some form of collapse. Implications Along with Zurek's related theory of envariance (invariance due to quantum entanglement), quantum Darwinism seeks to explain how the classical world emerges from the quantum world and proposes to answer the quantum measurement problem, the main interpretational challenge for quantum theory. The measurement problem arises because the quantum state vector, the source of all knowledge concerning quantum systems, evolves according to the Schrödinger equation into a linear superposition of different states, predicting paradoxical situations such as "Schrödinger's cat"; situations never experienced in our classical world. Quantum theory has traditionally treated this problem as being resolved by a non-unitary transformation of the state vector at the time of measurement into a definite state. It provides an extremely accurate means of predicting the value of the definite state that will be measured in the form of a probability for each possible measurement value. The physical nature of the transition from the quantum superposition of states to the definite classical state measured is not explained by the traditional theory but is usually assumed as an axiom and was at the basis of the debate between Niels Bohr and Albert Einstein concerning the completeness of quantum theory. Quantum Darwinism seeks to explain the transition of quantum systems from the vast potentiality of superposed states to the greatly reduced set of pointer states as a selection process, einselection, imposed on the quantum system through its continuous interactions with the environment.
All quantum interactions, including measurements, but much more typically interactions with the environment such as with the sea of photons in which all quantum systems are immersed, lead to decoherence or the manifestation of the quantum system in a particular basis dictated by the nature of the interaction in which the quantum system is involved. In the case of interactions with its environment Zurek and his collaborators have shown that a preferred basis into which a quantum system will decohere is the pointer basis underlying predictable classical states. It is in this sense that the pointer states of classical reality are selected from quantum reality and exist in the macroscopic realm in a state able to undergo further evolution. However, the 'einselection' program depends on assuming a particular division of the universal quantum state into 'system' + 'environment', with the different degrees of freedom of the environment posited as having mutually random phases. This phase randomness does not arise from within the quantum state of the universe on its own, and Ruth Kastner has pointed out that this limits the explanatory power of the Quantum Darwinism program. Zurek replies to Kastner's criticism in Classical selection and quantum Darwinism. As a quantum system's interactions with its environment result in the recording of many redundant copies of information regarding its pointer states, this information is available to numerous observers able to achieve consensual agreement concerning their information of the quantum state. This aspect of einselection, called by Zurek 'Environment as a Witness', results in the potential for objective knowledge. Darwinian significance Perhaps of equal significance to the light this theory shines on quantum explanations is its identification of a Darwinian process operating as the selective mechanism establishing our classical reality. As numerous researchers have made clear, any system employing a Darwinian process will evolve. As argued by the thesis of Universal Darwinism, Darwinian processes are not confined to biology but are all following the simple Darwinian algorithm: Reproduction/Heredity; the ability to make copies and thereby produce descendants. Selection; A process that preferentially selects one trait over another trait, leading to one trait being more numerous after sufficient generations. Variation; differences in heritable traits that affect "Fitness" or the ability to survive and reproduce leading to differential survival. Quantum Darwinism appears to conform to this algorithm and thus is aptly named: Numerous copies are made of pointer states
The analogy to the Variation principle of "simple Darwinism" does not exist since the Pointer states do not mutate and the selection by the environment is among the pointer states preferred by the environment (e.g. location states). From this view quantum Darwinism provides a Darwinian explanation at the basis of our reality, explaining the unfolding or evolution of our classical macroscopic world. Notes References S. Haroche, J.-M. Raimond, Exploring the Quantum: Atoms, Cavities, and Photons, Oxford University Press (2006), , p. 77 M. Schlosshauer, Decoherence and the Quantum-to-Classical Transition, Springer 2007, , Chapter 2.9, p. 85. External links Universal Darwinism: Quantum Darwinism Nature.com: Natural selection acts on the quantum world Quantum Darwinism and the Nature of Reality Zurek's Reply to Kastner's Criticism (2015) Darwinism Quantum measurement Phenomenology
Quantum Darwinism
[ "Physics" ]
1,439
[ "Quantum measurement", "Quantum mechanics" ]
26,994,195
https://en.wikipedia.org/wiki/Secondary%20plot%20%28kinetics%29
In enzyme kinetics, a secondary plot uses the intercept or slope from several Lineweaver–Burk plots to find additional kinetic constants. For example, when a set of v by [S] curves from an enzyme with a ping–pong mechanism (varying substrate A, fixed substrate B) are plotted in a Lineweaver–Burk plot, a set of parallel lines will be produced. The following Michaelis–Menten equation relates the initial reaction rate v0 to the substrate concentrations [A] and [B]: v0 = Vmax[A][B] / (KA[B] + KB[A] + [A][B]), where KA and KB are the Michaelis constants for substrates A and B. The y-intercept of the corresponding Lineweaver–Burk plot (1/v0 versus 1/[A]) is equal to the following: (1/Vmax)(1 + KB/[B]). The y-intercept is determined at several different fixed concentrations of substrate B (and varying substrate A). The y-intercept values are then plotted versus 1/[B] to determine the Michaelis constant for substrate B, KB, as shown in the Figure to the right. The slope of this secondary plot is equal to KB divided by Vmax and the intercept is equal to 1 over Vmax. Secondary plot in inhibition studies A secondary plot may also be used to find a specific inhibition constant, KI. For a competitive enzyme inhibitor, the apparent Michaelis constant is equal to the following: KM,app = KM(1 + [I]/KI). The slope of the Lineweaver-Burk plot is therefore equal to: KM(1 + [I]/KI)/Vmax. If one creates a secondary plot consisting of the slope values from several Lineweaver-Burk plots at varying inhibitor concentration [I], the competitive inhibition constant may be found. The slope of the secondary plot divided by the intercept is equal to 1/KI. This method allows one to find the KI constant, even when the Michaelis constant and Vmax values are not known. References Enzyme kinetics
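The secondary-plot construction for the ping–pong case can be reproduced numerically by fitting a straight line to the Lineweaver–Burk y-intercepts against 1/[B]. The sketch below uses numpy with synthetic data generated from assumed constants (Vmax = 10, KB = 2) so the recovered values can be checked; it is an illustration, not a prescribed fitting procedure.

```python
# Secondary-plot sketch: recover K_B and Vmax from Lineweaver-Burk y-intercepts.
# Synthetic data generated from assumed "true" constants Vmax = 10, K_B = 2.
import numpy as np

VMAX_TRUE, KB_TRUE = 10.0, 2.0
B = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # fixed substrate-B concentrations

# y-intercept of each primary (Lineweaver-Burk) plot: (1/Vmax) * (1 + K_B/[B])
y_intercepts = (1.0 / VMAX_TRUE) * (1.0 + KB_TRUE / B)

# Secondary plot: y-intercepts versus 1/[B]; slope = K_B/Vmax, intercept = 1/Vmax
slope, intercept = np.polyfit(1.0 / B, y_intercepts, 1)

vmax_fit = 1.0 / intercept
kb_fit = slope * vmax_fit
print(f"Vmax ~ {vmax_fit:.2f}, K_B ~ {kb_fit:.2f}")   # should recover 10 and 2
```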
Secondary plot (kinetics)
[ "Chemistry" ]
333
[ "Chemical kinetics", "Enzyme kinetics" ]
26,998,219
https://en.wikipedia.org/wiki/Ginzburg%20criterion
Mean field theory gives sensible results as long as one is able to neglect fluctuations in the system under consideration. The Ginzburg criterion tells quantitatively when mean field theory is valid. It also gives the idea of an upper critical dimension, a dimensionality of the system above which mean field theory gives proper results, and the critical exponents predicted by mean field theory match exactly with those obtained by numerical methods. Example: Ising model If is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point. Quantitatively, this means that Using this in the Landau theory, which is identical to the mean field theory for the Ising model, the value of the upper critical dimension comes out to be 4. If the dimension of the space is greater than 4, the mean-field results are good and self-consistent. But for dimensions less than 4, the predictions are less accurate. For instance, in one dimension, the mean field approximation predicts a phase transition at finite temperatures for the Ising model, whereas the exact analytic solution in one dimension has none (except for and ). Example: Classical Heisenberg model In the classical Heisenberg model of magnetism, the order parameter has a higher symmetry, and it has violent directional fluctuations which are more important than the size fluctuations. They overtake to the Ginzburg temperature interval over which fluctuations modify the mean-field description thus replacing the criterion by another, more relevant one. Footnotes References Statistical mechanics Physical quantities
Ginzburg criterion
[ "Physics", "Mathematics" ]
321
[ "Physical phenomena", "Physical quantities", "Quantity", "Statistical mechanics", "Physical properties" ]
26,998,547
https://en.wikipedia.org/wiki/Degrees%20of%20freedom%20%28physics%20and%20chemistry%29
In physics and chemistry, a degree of freedom is an independent physical parameter in the chosen parameterization of a physical system. More formally, given a parameterization of a physical system, the number of degrees of freedom is the smallest number of parameters whose values need to be known in order to always be possible to determine the values of all parameters in the chosen parameterization. In this case, any set of such parameters are called degrees of freedom. The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. So, if the time evolution of the system is deterministic (where the state at one instant uniquely determines its past and future position and velocity as a function of time), such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions – for example, the particle must move along a wire or on a fixed surface – then the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom. In classical mechanics, the state of a point particle at any given time is often described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system. The specification of all microstates of a system is a point in the system's phase space. In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom. These are degrees of freedom that contribute in a quadratic function to the energy of the system. Depending on what one is counting, there are several different ways that degrees of freedom can be defined, each with a different value. Thermodynamic degrees of freedom for gases By the equipartition theorem, internal energy per mole of gas equals , where is absolute temperature and the specific heat at constant volume is cv = (f)(R/2). R = 8.314 J/(K mol) is the universal gas constant, and "f" is the number of thermodynamic (quadratic) degrees of freedom, counting the number of ways in which energy can occur. Any atom or molecule has three degrees of freedom associated with translational motion (kinetic energy) of the center of mass with respect to the x, y, and z axes. These are the only degrees of freedom for a monoatomic species, such as noble gas atoms. For a structure consisting of two or more atoms, the whole structure also has rotational kinetic energy, where the whole structure turns about an axis. A linear molecule, where all atoms lie along a single axis, such as any diatomic molecule and some other molecules like carbon dioxide (CO2), has two rotational degrees of freedom, because it can rotate about either of two axes perpendicular to the molecular axis. A nonlinear molecule, where the atoms do not lie along a single axis, like water (H2O), has three rotational degrees of freedom, because it can rotate around any of three perpendicular axes. In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one. 
A structure consisting of two or more atoms also has vibrational energy, where the individual atoms move with respect to one another. A diatomic molecule has one molecular vibration mode: the two atoms oscillate back and forth with the chemical bond between them acting as a spring. A molecule with atoms has more complicated modes of molecular vibration, with vibrational modes for a linear molecule and modes for a nonlinear molecule. As specific examples, the linear CO2 molecule has 4 modes of oscillation, and the nonlinear water molecule has 3 modes of oscillation Each vibrational mode has two energy terms: the kinetic energy of the moving atoms and the potential energy of the spring-like chemical bond(s). Therefore, the number of vibrational energy terms is modes for a linear molecule and is modes for a nonlinear molecule. Both the rotational and vibrational modes are quantized, requiring a minimum temperature to be activated. The "rotational temperature" to activate the rotational degrees of freedom is less than 100 K for many gases. For N2 and O2, it is less than 3 K. The "vibrational temperature" necessary for substantial vibration is between 103 K and 104 K, 3521 K for N2 and 2156 K for O2. Typical atmospheric temperatures are not high enough to activate vibration in N2 and O2, which comprise most of the atmosphere. (See the next figure.) However, the much less abundant greenhouse gases keep the troposphere warm by absorbing infrared from the Earth's surface, which excites their vibrational modes. Much of this energy is reradiated back to the surface in the infrared through the "greenhouse effect." Because room temperature (≈298 K) is over the typical rotational temperature but lower than the typical vibrational temperature, only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio. This is why ≈ for monatomic gases and ≈ for diatomic gases at room temperature. Since the air is dominated by diatomic gases (with nitrogen and oxygen contributing about 99%), its molar internal energy is close to = (5/2), determined by the 5 degrees of freedom exhibited by diatomic gases. See the graph at right. For 140 K < < 380 K, cv differs from (5/2) d by less than 1%. Only at temperatures well above temperatures in the troposphere and stratosphere do some molecules have enough energy to activate the vibrational modes of N2 and O2. The specific heat at constant volume, cv, increases slowly toward (7/2) as temperature increases above T = 400 K, where cv is 1.3% above (5/2) d = 717.5 J/(K kg). Counting the minimum number of co-ordinates to specify a position One can also count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows: For a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space. Thus its degree of freedom in a 3-D space is 3. For a body consisting of 2 particles (ex. a diatomic molecule) in a 3-D space with constant distance between them (let's say d) we can show (below) its degrees of freedom to be 5. Let's say one particle in this body has coordinate and the other has coordinate with unknown. Application of the formula for distance between two coordinates results in one equation with one unknown, in which we can solve for . One of , , , , , or can be unknown. 
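A minimal sketch of the mode counting just described, using the standard 3N−5 (linear) and 3N−6 (nonlinear) formulas for vibrational modes and two quadratic energy terms per mode; the helper names are invented for illustration.

```python
def vibrational_modes(n_atoms, linear):
    """Number of vibrational normal modes: 3N-5 for a linear molecule,
    3N-6 for a nonlinear one."""
    return 3 * n_atoms - (5 if linear else 6)

def vibrational_energy_terms(n_atoms, linear):
    """Each vibrational mode carries two quadratic energy terms
    (kinetic energy of the atoms plus potential energy of the bond)."""
    return 2 * vibrational_modes(n_atoms, linear)

for name, n, linear in [("CO2 (linear)", 3, True), ("H2O (nonlinear)", 3, False)]:
    print(name, "modes:", vibrational_modes(n, linear),
          "energy terms:", vibrational_energy_terms(n, linear))
# CO2: 4 modes, 8 energy terms; H2O: 3 modes, 6 energy terms
```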
Contrary to the classical equipartition theorem, at room temperature, the vibrational motion of molecules typically makes negligible contributions to the heat capacity. This is because these degrees of freedom are frozen because the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures (). Independent degrees of freedom The set of degrees of freedom of a system is independent if the energy associated with the set can be written in the following form: where is a function of the sole variable . example: if and are two degrees of freedom, and is the associated energy: If , then the two degrees of freedom are independent. If , then the two degrees of freedom are not independent. The term involving the product of and is a coupling term that describes an interaction between the two degrees of freedom. For from 1 to , the value of the th degree of freedom is distributed according to the Boltzmann distribution. Its probability density function is the following: , In this section, and throughout the article the brackets denote the mean of the quantity they enclose. The internal energy of the system is the sum of the average energies associated with each of the degrees of freedom: Quadratic degrees of freedom A degree of freedom is quadratic if the energy terms associated with this degree of freedom can be written as , where is a linear combination of other quadratic degrees of freedom. example: if and are two degrees of freedom, and is the associated energy: If , then the two degrees of freedom are not independent and non-quadratic. If , then the two degrees of freedom are independent and non-quadratic. If , then the two degrees of freedom are not independent but are quadratic. If , then the two degrees of freedom are independent and quadratic. For example, in Newtonian mechanics, the dynamics of a system of quadratic degrees of freedom are controlled by a set of homogeneous linear differential equations with constant coefficients. Quadratic and independent degree of freedom are quadratic and independent degrees of freedom if the energy associated with a microstate of the system they represent can be written as: Equipartition theorem In the classical limit of statistical mechanics, at thermodynamic equilibrium, the internal energy of a system of quadratic and independent degrees of freedom is: Here, the mean energy associated with a degree of freedom is: Since the degrees of freedom are independent, the internal energy of the system is equal to the sum of the mean energy associated with each degree of freedom, which demonstrates the result. Generalizations The description of a system's state as a point in its phase space, although mathematically convenient, is thought to be fundamentally inaccurate. In quantum mechanics, the motion degrees of freedom are superseded with the concept of wave function, and operators which correspond to other degrees of freedom have discrete spectra. For example, intrinsic angular momentum operator (which corresponds to the rotational freedom) for an electron or photon has only two eigenvalues. This discreteness becomes apparent when action has an order of magnitude of the Planck constant, and individual degrees of freedom can be distinguished. References Physical quantities Dimension
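As a quick numerical sanity check of the equipartition result stated above (my own illustration, not from the article), one can sample a single quadratic degree of freedom E = αx² from its Boltzmann distribution, which is Gaussian in x, and verify that the mean energy comes out close to k_B·T/2 independently of α.

```python
import random, math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_energy_quadratic_dof(T, alpha=1.0, n_samples=200_000):
    """Monte Carlo estimate of <E> for a quadratic degree of freedom
    E = alpha * x**2 sampled from the Boltzmann distribution
    p(x) proportional to exp(-alpha*x**2 / (k_B*T)), i.e. a Gaussian
    with variance k_B*T / (2*alpha). Equipartition predicts <E> = k_B*T/2."""
    sigma = math.sqrt(k_B * T / (2.0 * alpha))
    total = 0.0
    for _ in range(n_samples):
        x = random.gauss(0.0, sigma)
        total += alpha * x * x
    return total / n_samples

T = 300.0
print(mean_energy_quadratic_dof(T), "vs k_B*T/2 =", k_B * T / 2)
```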
Degrees of freedom (physics and chemistry)
[ "Physics", "Mathematics" ]
2,098
[ "Geometric measurement", "Physical phenomena", "Physical quantities", "Quantity", "Dimension", "Theory of relativity", "Physical properties" ]
26,998,617
https://en.wikipedia.org/wiki/Field%20%28physics%29
In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. An example of a scalar field is a weather map, with the surface temperature described by assigning a number to each point on the map. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single-rank 2-tensor field. In the modern framework of the quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law). A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In this theory an equivalent representation of field is a field particle, for instance a boson. History To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. 
This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday became the first to coin the term "magnetic field". And Lord Kelvin provided a formal definition for a field in 1851. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities. In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics). Classical fields There are several examples of classical fields. Classical field theories remain useful wherever quantum properties do not arise, and can be active areas of research. Elasticity of materials, fluid dynamics and Maxwell's equations are cases in point. Some of the simplest physical fields are vector force fields. 
Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation A classical field theory describing gravity is Newtonian gravitation, which describes the gravitational force as a mutual interaction between two masses. Any body with mass M is associated with a gravitational field g which describes its influence on other bodies with mass. The gravitational field of M at a point r in space corresponds to the ratio between force F that M exerts on a small or negligible test mass m located at r and the test mass itself: Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M. According to Newton's law of universal gravitation, F(r) is given by where is a unit vector lying along the line joining M and m and pointing from M to m. Therefore, the gravitational field of M is The experimental observation that inertial mass and gravitational mass are equal to an unprecedented level of accuracy leads to the identity that gravitational field strength is identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity. Because the gravitational force F is conservative, the gravitational field g can be rewritten in terms of the gradient of a scalar function, the gravitational potential Φ(r): Electromagnetism Michael Faraday first realized the importance of a field as a physical quantity, during his investigations into magnetism. He realized that electric and magnetic fields are not only fields of force which dictate the motion of particles, but also have an independent physical reality because they carry energy. These ideas eventually led to the creation, by James Clerk Maxwell, of the first unified field theory in physics with the introduction of equations for the electromagnetic field. The modern versions of these equations are called Maxwell's equations. Electrostatics A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E so that . Using this and Coulomb's law tells us that the electric field due to a single charged particle is The electric field is conservative, and hence can be described by a scalar potential, V(r): Magnetostatics A steady current I flowing along a path ℓ will create a field B, that exerts a force on nearby moving charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is where B(r) is the magnetic field, which is determined from I by the Biot–Savart law: The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r): Electrodynamics In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to ρ and J. Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. 
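A small numerical sketch of the two conservative point-source fields defined in the Newtonian gravitation and Electrostatics passages above, evaluating g(r) = −GM r̂/r² and E(r) = k_e q r̂/r² at a field point. The constants are the usual SI values; the example numbers (Earth's mass and radius, a 1 nC charge 10 cm away) and the function names are illustrative only.

```python
import math

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
k_e = 8.988e9   # Coulomb constant, N m^2 / C^2

def gravitational_field(M, source, point):
    """Newtonian gravitational field g(r) = -G*M/|r|^2 * r_hat, where r is
    the vector from the source mass to the field point."""
    r = [p - s for p, s in zip(point, source)]
    dist = math.sqrt(sum(c * c for c in r))
    return [-G * M * c / dist**3 for c in r]

def electric_field(q, source, point):
    """Electrostatic field of a point charge, E(r) = k_e*q/|r|^2 * r_hat."""
    r = [p - s for p, s in zip(point, source)]
    dist = math.sqrt(sum(c * c for c in r))
    return [k_e * q * c / dist**3 for c in r]

# Field of the Earth at its surface, and of a 1 nC charge 10 cm away.
print(gravitational_field(5.972e24, (0, 0, 0), (6.371e6, 0, 0)))  # ~(-9.8, 0, 0) m/s^2
print(electric_field(1e-9, (0, 0, 0), (0.1, 0, 0)))               # ~(899, 0, 0) V/m
```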
A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations At the end of the 19th century, the electromagnetic field was understood as a collection of two vector fields in space. Nowadays, one recognizes this as a single antisymmetric 2nd-rank tensor field in spacetime. Gravitation in general relativity Einstein's theory of gravity, called general relativity, is another example of a field theory. Here the principal field is the metric tensor, a symmetric 2nd-rank tensor field in spacetime. This replaces Newton's law of universal gravitation. Waves as fields Waves can be constructed as physical fields, due to their finite propagation speed and causal nature when a simplified physical model of an isolated closed system is set . They are also subject to the inverse-square law. For electromagnetic waves, there are optical fields, and terms such as near- and far-field limits for diffraction. In practice though, the field theories of optics are superseded by the electromagnetic field theory of Maxwell Gravity waves are waves in the surface of water, defined by a height field. Fluid dynamics Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid, if the density , pressure , deviatoric stress tensor of the fluid, as well as external body forces b, are all given. The flow velocity u is the vector field to solve for. Elasticity Linear elasticity is defined in terms of constitutive equations between tensor fields, where are the components of the 3x3 Cauchy stress tensor, the components of the 3x3 infinitesimal strain and is the elasticity tensor, a fourth-rank tensor with 81 components (usually 21 independent components). Thermodynamics and transport equations Assuming that the temperature T is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (a scalar field), i.e., that , then the temperature gradient is a vector field defined as . In thermal conduction, the temperature field appears in Fourier's law, where q is the heat flux field and k the thermal conductivity. Temperature and pressure gradients are also important for meteorology. Quantum fields It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory. In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. 
As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges. These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory. In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds. As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so may need calculus for spinor fields), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization. Field theory Field theory usually refers to a construction of the dynamics of a field, i.e. a specification of how a field changes with time or with respect to other independent physical variables on which the field depends. Usually this is done by writing a Lagrangian or a Hamiltonian of the field, and treating it as a classical or quantum mechanical system with an infinite number of degrees of freedom. The resulting field theories are referred to as classical or quantum field theories. The dynamics of a classical field are usually specified by the Lagrangian density in terms of the field components; the dynamics can be obtained by using the action principle. It is possible to construct simple fields without any prior knowledge of physics using only mathematics from multivariable calculus, potential theory and partial differential equations (PDEs). For example, scalar PDEs might consider quantities such as amplitude, density and pressure fields for the wave equation and fluid dynamics; temperature/concentration fields for the heat/diffusion equations. Outside of physics proper (e.g., radiometry and computer graphics), there are even light fields. All these previous examples are scalar fields. Similarly for vectors, there are vector PDEs for displacement, velocity and vorticity fields in (applied mathematical) fluid dynamics, but vector calculus may now be needed in addition, being calculus for vector fields (as are these three quantities, and those for vector PDEs in general). More generally problems in continuum mechanics may involve for example, directional elasticity (from which comes the term tensor, derived from the Latin word for stretch), complex fluid flows or anisotropic diffusion, which are framed as matrix-tensor PDEs, and then require matrices or tensor fields, hence matrix or tensor calculus. The scalars (and hence the vectors, matrices and tensors) can be real or complex as both are fields in the abstract-algebraic/ring-theoretic sense. In a general setting, classical fields are described by sections of fiber bundles and their dynamics is formulated in the terms of jet manifolds (covariant classical field theory). 
In modern physics, the most often studied fields are those that model the four fundamental forces which one day may lead to the Unified Field Theory. Symmetries of fields A convenient way of classifying a field (classical or quantum) is by the symmetries it possesses. Physical symmetries are usually of two types: Spacetime symmetries Fields are often classified by their behaviour under transformations of spacetime. The terms used in this classification are: scalar fields (such as temperature) whose values are given by a single variable at each point of space. This value does not change under transformations of space. vector fields (such as the magnitude and direction of the force at each point in a magnetic field) which are specified by attaching a vector to each point of space. The components of this vector transform between themselves contravariantly under rotations in space. Similarly, a dual (or co-) vector field attaches a dual vector to each point of space, and the components of each dual vector transform covariantly. tensor fields, (such as the stress tensor of a crystal) specified by a tensor at each point of space. Under rotations in space, the components of the tensor transform in a more general way which depends on the number of covariant indices and contravariant indices. spinor fields (such as the Dirac spinor) arise in quantum field theory to describe particles with spin which transform like vectors except for one of their components; in other words, when one rotates a vector field 360 degrees around a specific axis, the vector field turns to itself; however, spinors would turn to their negatives in the same case. Internal symmetries Fields may have internal symmetries in addition to spacetime symmetries. In many situations, one needs fields which are a list of spacetime scalars: (φ1, φ2, ... φN). For example, in weather prediction these may be temperature, pressure, humidity, etc. In particle physics, the color symmetry of the interaction of quarks is an example of an internal symmetry, that of the strong interaction. Other examples are isospin, weak isospin, strangeness and any other flavour symmetry. If there is a symmetry of the problem, not involving spacetime, under which these components transform into each other, then this set of symmetries is called an internal symmetry. One may also make a classification of the charges of the fields under internal symmetries. Statistical field theory Statistical field theory attempts to extend the field-theoretic paradigm toward many-body systems and statistical mechanics. As above, it can be approached by the usual infinite number of degrees of freedom argument. Much like statistical mechanics has some overlap between quantum and classical mechanics, statistical field theory has links to both quantum and classical field theories, especially the former with which it shares many methods. One important example is mean field theory. Continuous random fields Classical fields as above, such as the electromagnetic field, are usually infinitely differentiable functions, but they are in any case almost always twice differentiable. In contrast, generalized functions are not continuous. When dealing carefully with classical fields at finite temperature, the mathematical methods of continuous random fields are used, because thermally fluctuating classical fields are nowhere differentiable. Random fields are indexed sets of random variables; a continuous random field is a random field that has a set of functions as its index set. 
In particular, it is often mathematically convenient to take a continuous random field to have a Schwartz space of functions as its index set, in which case the continuous random field is a tempered distribution. We can think about a continuous random field, in a (very) rough way, as an ordinary function that is almost everywhere, but such that when we take a weighted average of all the infinities over any finite region, we get a finite result. The infinities are not well-defined; but the finite values can be associated with the functions used as the weight functions to get the finite values, and that can be well-defined. We can define a continuous random field well enough as a linear map from a space of functions into the real numbers. See also Conformal field theory Covariant Hamiltonian field theory Field strength Lagrangian and Eulerian specification of a field Scalar field theory Velocity field Notes References Further reading Landau, Lev D. and Lifshitz, Evgeny M. (1971). Classical Theory of Fields (3rd ed.). London: Pergamon. . Vol. 2 of the Course of Theoretical Physics. External links Particle and Polymer Field Theories Mathematical physics Physical quantities
Field (physics)
[ "Physics", "Mathematics" ]
4,419
[ "Physical phenomena", "Physical quantities", "Applied mathematics", "Quantity", "Theoretical physics", "Mathematical physics", "Physical properties" ]
22,606,070
https://en.wikipedia.org/wiki/Chain%20shuttling%20polymerization
Chain shuttling polymerization is a dual-catalyst method for producing block copolymers with alternating or variable tacticity. The desired effect of this method is to generate hybrid polymers that bear the properties of both polymer chains, such as a high melting point accompanied by high elasticity. It is a relatively new method, the first instance of its use being reported by Arriola et al. in May 2006. Olefin polymerization Olefin polymers (such as polypropylene and polyethylene) have seen widespread use in the plastics industry in the past 50 years. A way to enhance the properties of these olefin polymers was first discovered by the scientists Karl Ziegler and Giulio Natta. Ziegler discovered the original Titanium based catalyst essential for olefin polymerization, while Natta used the catalyst to alter and control the stereochemistry (tacticity) of the olefin polymers (hence Ziegler–Natta catalyst). By controlling the tacticity of the polymer, a chain can, for example, either be semi crystalline or amorphous, rigid or elastic, heat resistant or have a low glass transition temperature. Much research since has been dedicated to predicting and creating polymers based on this work. Living polymerization is the term coined to describe the use of specially made catalysts (often involving transition metal centers) in olefin polymerization, since the polymer chains self-propagate in the presence of the catalyst until intentionally terminated. Living polymerization, however, produces only one type of tacticity per catalyst. While the specific tacticity can be controlled by altering the type of catalyst used, creating a block copolymer requires that the polymerization be terminated, the catalyst destroyed, and that the chain re-propagate using another catalyst that produces the desired stereochemistry. Such manipulations are usually difficult, however. Method Chain shuttling polymerization makes use of two catalysts and a chain shuttling agent (CSA) to generate copolymers of alternating tacticity. Catalyst 1 (Cat1) propagates a polyolefin of a desired tacticity. Catalyst 2 (Cat2) generates another chain of a different tacticity. The two chains are allowed to co-propagate in a single reactor in the same living polymer fashion as before. To alternate the tacticity, a CSA will transfer the polymer chain from its respective catalyst. The CSA can then bind to Cat2 and attach the chain to Cat2. When the chain attaches to Cat2, the polymerization of that chain continues, except it now propagates with the tacticity dictated by Cat2, not Cat1. The general result is that the chain will alternate between two different tacticities. As the forward and reverse reactions occur, the polymer chain is “shuttled” back and forth between the two catalysts and a block copolymer is formed. The shuttling of chains back and forth from catalysts via a CSA can be viewed as a competing chemical equilibrium. Note that the forward and reverse reactions of CSA binding and leaving either Cat1 or Cat2 are possible. This competition means that a chain can leave Cat1 via a CSA and the reattach to Cat1, polymerizing the same tacticity. The rate at which the reattachment of Cat1 occurs can be controlled by altering the relative concentrations of Cat1, Cat2 and CSA. For example, if one wanted to produce a polymer with the properties mainly resulting from the use of Cat1 and only wanted to influence its properties slightly by the presence of Cat2, a greater concentration of Cat1 would be used than for Cat2. 
The rate of alternation between the two tacticities can be controlled by altering the concentration of CSA relative to Cat1 and Cat2; having a higher concentration of CSA means that the chains will shuttle back and forth more rapidly, creating shorter units of alternating tacticity. Advantages The first clear advantage of chain shuttling is that one can design copolymers with more desirable traits. A polymer that is normally semicrystalline and rigid can be altered so that it has a lower glass transition temperature. An amorphous, elastic polymer membrane can be altered to have a higher melting point. The technique opens the door for tailor-made polymers to be widely accessible and simple and inexpensive to make. References Polymer chemistry
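The shuttling picture described above can be caricatured with a toy stochastic model (my own sketch, not the published kinetic treatment): a growing chain adds monomers with the tacticity of whichever catalyst currently holds it, and after each addition it shuttles to the other catalyst with a probability standing in for the relative CSA concentration. Raising that probability shortens the average block length, as the text describes; the tacticity labels and parameter values are arbitrary.

```python
import random

def grow_chain(n_monomers, p_shuttle, seed=0):
    """Toy chain-shuttling model: each step adds one monomer with the
    tacticity of the catalyst currently holding the chain ('i' for Cat1,
    'a' for Cat2), then the chain shuttles to the other catalyst with
    probability p_shuttle (a stand-in for the relative CSA concentration)."""
    rng = random.Random(seed)
    catalyst = "Cat1"
    chain = []
    for _ in range(n_monomers):
        chain.append("i" if catalyst == "Cat1" else "a")
        if rng.random() < p_shuttle:
            catalyst = "Cat2" if catalyst == "Cat1" else "Cat1"
    return "".join(chain)

def mean_block_length(chain):
    """Average length of runs of identical tacticity."""
    lengths = [1]
    for prev, nxt in zip(chain, chain[1:]):
        if nxt == prev:
            lengths[-1] += 1
        else:
            lengths.append(1)
    return sum(lengths) / len(lengths)

for p in (0.02, 0.2):
    c = grow_chain(2000, p)
    print(f"p_shuttle={p}: mean block length ~ {mean_block_length(c):.1f}")
```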
Chain shuttling polymerization
[ "Chemistry", "Materials_science", "Engineering" ]
879
[ "Materials science", "Polymer chemistry" ]
22,610,613
https://en.wikipedia.org/wiki/Cross-coupling%20reaction
In organic chemistry, a cross-coupling reaction is a reaction where two different fragments are joined. Cross-couplings are a subset of the more general coupling reactions. Often cross-coupling reactions require metal catalysts. One important reaction type is this: (R, R' = organic fragments, usually aryl; M = main group center such as Li or MgX; X = halide) These reactions are used to form carbon–carbon bonds but also carbon–heteroatom bonds. Richard F. Heck, Ei-ichi Negishi, and Akira Suzuki were awarded the 2010 Nobel Prize in Chemistry for developing palladium-catalyzed coupling reactions. Mechanism Many mechanisms exist reflecting the myriad types of cross-couplings, including those that do not require metal catalysts. Often, however, cross-coupling refers to a metal-catalyzed reaction of a nucleophilic partner with an electrophilic partner. In such cases, the mechanism generally involves reductive elimination of R-R' from LnMR(R') (L = spectator ligand). This intermediate LnMR(R') is formed in a two-step process from a low-valence precursor LnM. The oxidative addition of an organic halide (RX) to LnM gives LnMR(X). Subsequently, the second partner undergoes transmetallation with a source of R'−. The final step is reductive elimination of the two coupling fragments to regenerate the catalyst and give the organic product. Unsaturated substrates, such as C(sp)−X and C(sp2)−X bonds, couple more easily, in part because they add readily to the catalyst. Catalysts Catalysts are often based on palladium, which is frequently selected due to high functional group tolerance. Organopalladium compounds are generally stable towards water and air. Palladium catalysts can be problematic for the pharmaceutical industry, which faces extensive regulation regarding heavy metals. Many pharmaceutical chemists attempt to use coupling reactions early in production to minimize metal traces in the product. Heterogeneous catalysts based on Pd are also well developed. Copper-based catalysts are also common, especially for coupling involving heteroatom–C bonds. Iron-, cobalt-, and nickel-based catalysts have been investigated. Leaving groups The leaving group X in the organic partner is usually a halide, although triflate, tosylate, pivalate esters, and other pseudohalides have been used. Chloride is an ideal group due to the low cost of organochlorine compounds. Frequently, however, C–Cl bonds are too inert, and bromide or iodide leaving groups are required for acceptable rates. The main group metal in the organometallic partner usually is an electropositive element such as tin, zinc, silicon, or boron. Carbon–carbon cross-coupling Many cross-couplings entail forming carbon–carbon bonds. The restrictions on carbon atom geometry mainly inhibit β-hydride elimination when complexed to the catalyst. Carbon–heteroatom coupling Many cross-couplings entail forming carbon–heteroatom bonds (heteroatom = S, N, O). A popular method is the Buchwald–Hartwig reaction: Miscellaneous reactions Palladium catalyzes the cross-coupling of aryl halides with fluorinated arenes. The process is unusual in that it involves C–H functionalisation at an electron-deficient arene. Applications Cross-coupling reactions are important for the production of pharmaceuticals, examples being montelukast, eletriptan, naproxen, varenicline, and resveratrol, with Suzuki coupling being the most widely used. Some polymers and monomers are also prepared in this way.
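A crude way to see the structure of the catalytic cycle described in the Mechanism passage above is to integrate pseudo-first-order kinetics for the three steps, with the concentrations of the organic partners folded into the rate constants. All numbers here are arbitrary and the species names are shorthand; this is only a sketch of the cycle's bookkeeping (catalyst regenerated, product accumulating), not a model of any real coupling.

```python
def catalytic_cycle(steps, k_oa=0.8, k_tm=0.6, k_re=0.9, dt=0.1):
    """Fixed-step kinetics of the generic cycle described above:
      Pd(0) + R-X      -> Pd(II)(R)(X)   (oxidative addition,    rate k_oa)
      Pd(II)(R)(X) + R'-M -> Pd(R)(R')   (transmetallation,      rate k_tm)
      Pd(R)(R')        -> Pd(0) + R-R'   (reductive elimination, rate k_re)
    The organic partners are folded into pseudo-first-order rate constants;
    all values are illustrative only."""
    pd0, pd_rx, pd_rr, product = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        oa = k_oa * pd0 * dt
        tm = k_tm * pd_rx * dt
        re = k_re * pd_rr * dt
        pd0 += re - oa
        pd_rx += oa - tm
        pd_rr += tm - re
        product += re
    return pd0, pd_rx, pd_rr, product

# Catalyst species settle into a steady cycle while product keeps growing.
print(catalytic_cycle(1000))
```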
Reviews References Organometallic chemistry Carbon-carbon bond forming reactions Catalysis
Cross-coupling reaction
[ "Chemistry" ]
828
[ "Catalysis", "Carbon-carbon bond forming reactions", "Coupling reactions", "Organic reactions", "Chemical kinetics", "Organometallic chemistry" ]
22,615,327
https://en.wikipedia.org/wiki/Structural%20engineering%20theory
Structural engineering depends upon a detailed knowledge of loads, physics and materials to understand and predict how structures support and resist self-weight and imposed loads. To apply the knowledge successfully structural engineers will need a detailed knowledge of mathematics and of relevant empirical and theoretical design codes. They will also need to know about the corrosion resistance of the materials and structures, especially when those structures are exposed to the external environment. The criteria which govern the design of a structure are either serviceability (criteria which define whether the structure is able to adequately fulfill its function) or strength (criteria which define whether a structure is able to safely support and resist its design loads). A structural engineer designs a structure to have sufficient strength and stiffness to meet these criteria. Loads imposed on structures are supported by means of forces transmitted through structural elements. These forces can manifest themselves as tension (axial force), compression (axial force), shear, and bending, or flexure (a bending moment is a force multiplied by a distance, or lever arm, hence producing a turning effect or torque). Strength Strength depends upon material properties. The strength of a material depends on its capacity to withstand axial stress, shear stress, bending, and torsion. The strength of a material is measured in force per unit area (newtons per square millimetre or N/mm², or the equivalent megapascals or MPa in the SI system and often pounds per square inch psi in the United States Customary Units system). A structure fails the strength criterion when the stress (force divided by area of material) induced by the loading is greater than the capacity of the structural material to resist the load without breaking, or when the strain (percentage extension) is so great that the element no longer fulfills its function (yield). See also: Stiffness Stiffness depends upon material properties and geometry. The stiffness of a structural element of a given material is the product of the material's Young's modulus and the element's second moment of area. Stiffness is measured in force per unit length (newtons per millimetre or N/mm), and is equivalent to the 'force constant' in Hooke's Law. The deflection of a structure under loading is dependent on its stiffness. The dynamic response of a structure to dynamic loads (the natural frequency of a structure) is also dependent on its stiffness. In a structure made up of multiple structural elements where the surface distributing the forces to the elements is rigid, the elements will carry loads in proportion to their relative stiffness - the stiffer an element, the more load it will attract. This means that load/stiffness ratio, which is deflection, remains same in two connected (jointed) elements. In a structure where the surface distributing the forces to the elements is flexible (like a wood-framed structure), the elements will carry loads in proportion to their relative tributary areas. A structure is considered to fail the chosen serviceability criteria if it is insufficiently stiff to have acceptably small deflection or dynamic response under loading. The inverse of stiffness is flexibility. Safety factors The safe design of structures requires a design approach which takes account of the statistical likelihood of the failure of the structure. 
Structural design codes are based upon the assumption that both the loads and the material strengths vary with a normal distribution. The job of the structural engineer is to ensure that the chance of overlap between the distribution of loads on a structure and the distribution of material strength of a structure is acceptably small (it is impossible to reduce that chance to zero). It is normal to apply a partial safety factor to the loads and to the material strengths, to design using 95th percentiles (two standard deviations from the mean). The safety factor applied to the load will typically ensure that in 95% of times the actual load will be smaller than the design load, while the factor applied to the strength ensures that 95% of times the actual strength will be higher than the design strength. The safety factors for material strength vary depending on the material and the use it is being put to and on the design codes applicable in the country or region. A more sophisticated approach of modeling structural safety is to rely on structural reliability, in which both loads and resistances are modeled as probabilistic variables. However, using this approach requires detailed modeling of the distribution of loads and resistances. Furthermore, its calculations are more computation intensive. Load cases A load case is a combination of different types of loads with safety factors applied to them. A structure is checked for strength and serviceability against all the load cases it is likely to experience during its lifetime. Typical load cases for design for strength (ultimate load cases; ULS) are: 1.2 x Dead Load + 1.6 x Live Load 1.2 x Dead Load + 1.2 x Live Load + 1.2 x Wind Load A typical load case for design for serviceability (characteristic load cases; SLS) is: 1.0 x Dead Load + 1.0 x Live Load Different load cases would be used for different loading conditions. For example, in the case of design for fire a load case of 1.0 x Dead Load + 0.8 x Live Load may be used, as it is reasonable to assume everyone has left the building if there is a fire. In multi-story buildings it is normal to reduce the total live load depending on the number of stories being supported, as the probability of maximum load being applied to all floors simultaneously is negligibly small. It is not uncommon for large buildings to require hundreds of different load cases to be considered in the design. Newton's laws of motion The most important natural laws for structural engineering are Newton's Laws of Motion Newton's first law states that every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed. Newton's second law states that the rate of change of momentum of a body is proportional to the resultant force acting on the body and is in the same direction. Mathematically, F=ma (force = mass x acceleration). Newton's third law states that all forces occur in pairs, and these two forces are equal in magnitude and opposite in direction. With these laws it is possible to understand the forces on a structure and how that structure will resist them. The Third Law requires that for a structure to be stable all the internal and external forces must be in equilibrium. This means that the sum of all internal and external forces on a free-body diagram must be zero: : the vectorial sum of the forces acting on the body equals zero. 
This translates to Σ H = 0: the sum of the horizontal components of the forces equals zero; Σ V = 0: the sum of the vertical components of forces equals zero; : the sum of the moments (about an arbitrary point) of all forces equals zero. Statical determinacy A structural engineer must understand the internal and external forces of a structural system consisting of structural elements and nodes at their intersections. A statically determinate structure can be fully analysed using only consideration of equilibrium, from Newton's Laws of Motion. A statically indeterminate structure has more unknowns than equilibrium considerations can supply equations for (see simultaneous equations). Such a system can be solved using consideration of equations of compatibility between geometry and deflections in addition to equilibrium equations, or by using virtual work. If a system is made up of bars, pin joints and support reactions, then it cannot be statically determinate if the following relationship does not hold: Even if this relationship does hold, a structure can be arranged in such a way as to be statically indeterminate. Elasticity Much engineering design is based on the assumption that materials behave elastically. For most materials this assumption is incorrect, but empirical evidence has shown that design using this assumption can be safe. Materials that are elastic obey Hooke's Law, and plasticity does not occur. For systems that obey Hooke's Law, the extension produced is directly proportional to the load: where x is the distance that the spring has been stretched or compressed away from the equilibrium position, which is the position where the spring would naturally come to rest [usually in meters], F is the restoring force exerted by the material [usually in newtons], and k is the force constant (or spring constant). This is the stiffness of the spring. The constant has units of force per unit length (usually in newtons per metre) Plasticity Some design is based on the assumption that materials will behave plastically. A plastic material is one which does not obey Hooke's Law, and therefore deformation is not proportional to the applied load. Plastic materials are ductile materials. Plasticity theory can be used for some reinforced concrete structures assuming they are underreinforced, meaning that the steel reinforcement fails before the concrete does. Plasticity theory states that the point at which a structure collapses (reaches yield) lies between an upper and a lower bound on the load, defined as follows: If, for a given external load, it is possible to find a distribution of moments that satisfies equilibrium requirements, with the moment not exceeding the yield moment at any location, and if the boundary conditions are satisfied, then the given load is a lower bound on the collapse load. If, for a small increment of displacement the internal work done by the structure, assuming that the moment at every plastic hinge is equal to the yield moment and that the boundary conditions are satisfied, is equal to the external work done by the given load for that same small increment of displacement, then that load is an upper bound on the collapse load. If the correct collapse load is found, the two methods will give the same result for the collapse load. Plasticity theory depends upon a correct understanding of when yield will occur. 
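The ULS and SLS load combinations listed earlier in this section are straightforward to mechanize. In the sketch below the load values (in kN/m²) are invented and the helper names are mine; the partial factors are the ones quoted in the text.

```python
def factored_load(dead, live, wind=0.0, factors=(1.2, 1.6, 0.0)):
    """Apply partial safety factors to each load type and sum them."""
    fd, fl, fw = factors
    return fd * dead + fl * live + fw * wind

def governing_uls(dead, live, wind):
    """Evaluate the two ultimate load cases quoted in the text and return
    the larger (governing) factored load."""
    case1 = factored_load(dead, live, factors=(1.2, 1.6, 0.0))
    case2 = factored_load(dead, live, wind, factors=(1.2, 1.2, 1.2))
    return max(case1, case2)

# e.g. a floor with 3.0 kN/m^2 dead load, 2.5 kN/m^2 live load, 1.0 kN/m^2 wind
print(governing_uls(3.0, 2.5, 1.0))                        # ULS design value
print(factored_load(3.0, 2.5, factors=(1.0, 1.0, 0.0)))    # SLS: 1.0 D + 1.0 L
```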
A number of different models for stress distribution and approximations to the yield surface of plastic materials exist: Mohr's circle Von Mises yield criterion Henri Tresca Euler–Bernoulli beam equation The Euler–Bernoulli beam equation defines the behaviour of a beam element (see below). It is based on five assumptions: Continuum mechanics is valid for a bending beam. The stress at a cross section varies linearly in the direction of bending, and is zero at the centroid of every cross section. The bending moment at a particular cross section varies linearly with the second derivative of the deflected shape at that location. The beam is composed of an isotropic material. The applied load is orthogonal to the beam's neutral axis and acts in a unique plane. A simplified version of Euler–Bernoulli beam equation is: Here is the deflection and is a load per unit length. is the elastic modulus and is the second moment of area, the product of these giving the flexural rigidity of the beam. This equation is very common in engineering practice: it describes the deflection of a uniform, static beam. Successive derivatives of have important meanings: is the deflection. is the slope of the beam. is the bending moment in the beam. is the shear force in the beam. A bending moment manifests itself as a tension force and a compression force, acting as a couple in a beam. The stresses caused by these forces can be represented by: where is the stress, is the bending moment, is the distance from the neutral axis of the beam to the point under consideration and is the second moment of area. Often the equation is simplified to the moment divided by the section modulus , which is . This equation allows a structural engineer to assess the stress in a structural element when subjected to a bending moment. Buckling When subjected to compressive forces it is possible for structural elements to deform significantly due to the destabilising effect of that load. The effect can be initiated or exacerbated by possible inaccuracies in manufacture or construction. The Euler buckling formula defines the axial compression force which will cause a strut (or column) to fail in buckling. where = maximum or critical force (vertical load on column), = modulus of elasticity, = area moment of inertia, or second moment of area = unsupported length of column, = column effective length factor, whose value depends on the conditions of end support of the column, as follows. For both ends pinned (hinged, free to rotate), = 1.0. For both ends fixed, = 0.50. For one end fixed and the other end pinned, 0.70. For one end fixed and the other end free to move laterally, = 2.0. This value is sometimes expressed for design purposes as a critical buckling stress. where = maximum or critical stress = the least radius of gyration of the cross section Other forms of buckling include lateral torsional buckling, where the compression flange of a beam in bending will buckle, and buckling of plate elements in plate girders due to compression in the plane of the plate. See also Structural analysis Structural engineering software References Castigliano, Carlo Alberto (translator: Andrews, Ewart S.) (1966). The Theory of Equilibrium of Elastic Systems and Its Applications. Dover Publications. Dym, Clive L. (1997). Structural Modeling and Analysis. Cambridge University Press. . Dugas, René (1988). A History of Mechanics. Courier Dover Publications. . Hewson, Nigel R. (2003). Prestressed Concrete Bridges: Design and Construction. Thomas Telford. . 
Heyman, Jacques (1998). Structural Analysis: A Historical Approach. Cambridge University Press. . Heyman, Jacques (1999). The Science of Structural Engineering. Imperial College Press. . Hognestad, E. A Study of Combined Bending and Axial Load in Reinforced Concrete Members. University of Illinois, Engineering Experiment Station, Bulletin Series N. 399. Jennings, Alan (2004) Structures: From Theory to Practice. Taylor & Francis. . Leonhardt, A. (1964). Vom Caementum zum Spannbeton, Band III (From Cement to Prestressed Concrete). Bauverlag GmbH. MacNeal, Richard H. (1994). Finite Elements: Their Design and Performance. Marcel Dekker. . Mörsch, E. (Stuttgart, 1908). Der Eisenbetonbau, seine Theorie und Anwendung, (Reinforced Concrete Construction, its Theory and Application). Konrad Wittwer, 3rd edition. Nedwell, P.J.; Swamy, R.N.(ed) (1994). Ferrocement:Proceedings of the Fifth International Symposium. Taylor & Francis. . Newton, Isaac; Leseur, Thomas; Jacquier, François (1822). Philosophiæ Naturalis Principia Mathematica. Oxford University. Nilson, Arthur H.; Darwin, David; Dolan, Charles W. (2004). Design of Concrete Structures. McGraw-Hill Professional. . Rozhanskaya, Mariam; Levinova, I. S. (1996). "Statics" in Morelon, Régis & Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science, vol. 2-3, Routledge. Schlaich, J., K. Schäfer, M. Jennewein (1987). "Toward a Consistent Design of Structural Concrete". PCI Journal, Special Report, Vol. 32, No. 3. Scott, Richard (2001). In the Wake of Tacoma: Suspension Bridges and the Quest for Aerodynamic Stability. ASCE Publications. . Turner, J.; Clough, R.W.; Martin, H.C.; Topp, L.J. (1956). "Stiffness and Deflection of Complex Structures". Journal of Aeronautical Science Issue 23. Virdi, K.S. (2000). Abnormal Loading on Structures: Experimental and Numerical Modelling. Taylor & Francis. . Structural engineering
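Referring back to the Euler buckling formula described in words in the Buckling passage above, a minimal worked sketch using its standard form P_cr = π²EI/(KL)² is given below; the column dimensions and material values are invented for illustration.

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler buckling load of a column, P_cr = pi^2 * E * I / (K*L)^2
    (the standard form of the formula quoted in the text).
    E: modulus of elasticity [Pa], I: second moment of area [m^4],
    L: unsupported length [m], K: effective length factor."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative pinned-pinned steel column: E = 200 GPa, I = 8e-6 m^4, L = 3 m
print(f"P_cr (pinned-pinned) ~ {euler_critical_load(200e9, 8e-6, 3.0, K=1.0)/1e3:.0f} kN")
# Fixing both ends (K = 0.5) quadruples the critical load:
print(f"P_cr (fixed-fixed)   ~ {euler_critical_load(200e9, 8e-6, 3.0, K=0.5)/1e3:.0f} kN")
```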
Structural engineering theory
[ "Engineering" ]
3,319
[ "Structural engineering", "Civil engineering", "Construction" ]
22,616,176
https://en.wikipedia.org/wiki/Analytical%20regularization
In physics and applied mathematics, analytical regularization is a technique used to convert boundary value problems which can be written as Fredholm integral equations of the first kind involving singular operators into equivalent Fredholm integral equations of the second kind. The latter may be easier to solve analytically and can be studied with discretization schemes like the finite element method or the finite difference method because they are pointwise convergent. In computational electromagnetics, it is known as the method of analytical regularization. It was first used in mathematics during the development of operator theory before acquiring a name. Method Analytical regularization proceeds as follows. First, the boundary value problem is formulated as an integral equation. Written as an operator equation, this will take the form A X = Y, with Y representing boundary conditions and inhomogeneities, X representing the field of interest, and A the integral operator describing how Y is given from X based on the physics of the problem. Next, A is split into A = A0 + A1, where A0 is invertible and contains all the singularities of A and A1 is regular. After splitting the operator and multiplying by the inverse of A0, the equation becomes X + A0⁻¹A1 X = A0⁻¹Y, or (I + A0⁻¹A1) X = A0⁻¹Y, which is now a Fredholm equation of the second kind, because by construction A0⁻¹A1 is compact on the Hilbert space of which X is a member. In general, several choices for A0 will be possible for each problem. References , Paperpack (also available online). Read Chapter 8 for Analytic Regularization. External links E-Polarized Wave Scattering from Infinitely Thin and Finitely Width Strip Systems Diffraction Electromagnetism Applied mathematics Computational electromagnetics Fredholm theory
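A finite-dimensional toy version of the splitting described above shows the mechanics: matrices stand in for the operators (all values made up), A = A0 + A1 with A0 easily invertible, and the first-kind equation A X = Y becomes the second-kind equation (I + A0⁻¹A1) X = A0⁻¹Y, which a Neumann series solves when the remainder is small. This is only an analogue of the method, not the actual electromagnetic setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the operator equation A X = Y, with A split into an
# easily invertible part A0 and a small regular remainder A1.
n = 5
A0 = np.diag(rng.uniform(1.0, 2.0, n))            # invertible part
A1 = 0.05 * rng.standard_normal((n, n))           # small "regular" perturbation
A = A0 + A1
y = rng.standard_normal(n)

# Second-kind form: (I + A0^{-1} A1) x = A0^{-1} y
K = np.linalg.solve(A0, A1)
rhs = np.linalg.solve(A0, y)

# Because the norm of K is well below 1 here, the Neumann series
# x = sum_k (-K)^k rhs converges to the solution.
x = np.zeros(n)
term = rhs.copy()
for _ in range(50):
    x += term
    term = -K @ term

print(np.allclose(x, np.linalg.solve(A, y)))  # True: matches the direct solution
```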
Analytical regularization
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
312
[ "Electromagnetism", "Physical phenomena", "Computational electromagnetics", "Spectrum (physical sciences)", "Applied mathematics", "Computational physics", "Diffraction", "Crystallography", "Fundamental interactions", "Spectroscopy" ]
6,538,017
https://en.wikipedia.org/wiki/Restriction%20map
A restriction map is a map of known restriction sites within a sequence of DNA. Restriction mapping requires the use of restriction enzymes. In molecular biology, restriction maps are used as a reference to engineer plasmids or other relatively short pieces of DNA, and sometimes for longer genomic DNA. There are other ways of mapping features on DNA for longer length DNA molecules, such as mapping by transduction. One approach in constructing a restriction map of a DNA molecule is to sequence the whole molecule and to run the sequence through a computer program that will find the recognition sites that are present for every restriction enzyme known. Before sequencing was automated, it would have been prohibitively expensive to sequence an entire DNA strand. To find the relative positions of restriction sites on a plasmid, a technique involving single and double restriction digests is used. Based on the sizes of the resultant DNA fragments the positions of the sites can be inferred. Restriction mapping is a very useful technique when used for determining the orientation of an insert in a cloning vector, by mapping the position of an off-center restriction site in the insert. Method The experimental procedure first requires a sample of purified plasmid DNA for each digest to be run. Digestion is then performed with each enzyme(s) chosen. The resulting samples are subsequently run on an electrophoresis gel, typically on agarose gel. The first step following the completion of electrophoresis is to add up the sizes of the fragments in each lane. The sum of the individual fragments should equal the size of the original fragment, and each digest's fragments should also sum up to be the same size as each other. If fragment sizes do not properly add up, there are two likely problems. In one case, some of the smaller fragments may have run off the end of the gel. This frequently occurs if the gel is run too long. A second possible source of error is that the gel was not dense enough and therefore was unable to resolve fragments close in size. This leads to a lack of separation of fragments which were close in size. If all of the digests produce fragments that add up one may infer the position of the REN (restriction endonuclease) sites by placing them in spots on the original DNA fragment that would satisfy the fragment sizes produced by all three digests Rapid Denaturation and Renaturation of a crude DNA preparation by alkaline lysis of the cells and subsequent neutralization In this technique the cells are lysed in alkaline conditions. The DNA in the mixture is denatured (strands separated) by disrupting the hydrogen bonds between the two strands. The large genomic DNA is subject to tangling and staying denatured when the pH is lowered during the neutralization. In other words, the strands come back together in a disordered fashion, basepairing randomly. The circular supercoiled plasmids' strands will stay relatively closely aligned and will renature correctly. Therefore, the genomic DNA will form an insoluble aggregate and the supercoiled plasmids will be left in solution. This can be followed by phenol extraction to remove proteins and other molecules. Then the DNA can be subjected to ethanol precipitation to concentrate the sample. See also Vector NTI, bioinformatics software used among other things to predict restriction sites on a DNA vector RFLP, method used to differentiate exceedingly similar genomes, among other things References Genetics Molecular biology
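The fragment-sum check described above is simple to script. In this sketch the plasmid size, enzyme names, and fragment lengths are hypothetical, and the tolerance stands in for gel-reading error.

```python
def digests_consistent(total_size, digests, tolerance=50):
    """Check that the fragment sizes (in bp) from each digest sum to the
    same total, within a tolerance allowing for gel-reading error.
    Returns a dict of digest name -> (sum of fragments, ok?)."""
    report = {}
    for name, fragments in digests.items():
        s = sum(fragments)
        report[name] = (s, abs(s - total_size) <= tolerance)
    return report

# Hypothetical 4,000 bp plasmid cut in single and double digests
digests = {
    "EcoRI":         [4000],
    "BamHI":         [2500, 1500],
    "EcoRI + BamHI": [2500, 1000, 500],
}
for name, (total, ok) in digests_consistent(4000, digests).items():
    print(f"{name}: fragments sum to {total} bp -> {'OK' if ok else 'check gel'}")
```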
Restriction map
[ "Chemistry", "Biology" ]
712
[ "Biochemistry", "Genetics", "Molecular biology" ]
6,543,509
https://en.wikipedia.org/wiki/Symposium%20on%20Logic%20in%20Computer%20Science
The ACM–IEEE Symposium on Logic in Computer Science (LICS) is an annual academic conference on the theory and practice of computer science in relation to mathematical logic. Extended versions of selected papers of each year's conference appear in renowned international journals such as Logical Methods in Computer Science and ACM Transactions on Computational Logic. History LICS was originally sponsored solely by the IEEE, but as of the 2014 founding of the ACM Special Interest Group on Logic and Computation, LICS has become the flagship conference of SIGLOG, under the joint sponsorship of ACM and IEEE. From the third installment in 1988 until 2013, the cover page of the conference proceedings featured an artwork entitled Irrational Tiling by Logical Quantifiers, by Alvy Ray Smith. Since 1995, the Kleene award has been given each year to the best student paper. In addition, since 2006 the LICS Test-of-Time Award has been given annually to one or more of the twenty-year-old LICS papers that have best stood the test of time. LICS Awards Test-of-Time Award Each year since 2006, the LICS Test-of-Time Award has recognized articles from the LICS proceedings of 20 years earlier that have become influential. 2006 Leo Bachmair, Nachum Dershowitz, Jieh Hsiang, "Orderings for Equational Proofs" E. Allen Emerson, Chin-Laung Lei, "Efficient Model Checking in Fragments of the Propositional Mu-Calculus (Extended Abstract)" Moshe Y. Vardi, Pierre Wolper, "An Automata-Theoretic Approach to Automatic Program Verification (Preliminary Report)" 2007 Samson Abramsky, "Domain theory in Logical Form" Robert Harper, Furio Honsell, Gordon D. Plotkin, "A Framework for Defining Logics" 2008 Martin Abadi, Leslie Lamport, "The existence of refinement mappings" 2009 Eugenio Moggi, "Computational lambda-calculus and monads" 2010 Rajeev Alur, Costas Courcoubetis, David L. Dill, "Model-checking for real-time systems" Jerry R. Burch, Edmund Clarke, Kenneth L. McMillan, David L. Dill, James Hwang, "Symbolic model checking: 10^20 states and beyond" Max Dauchet, Sophie Tison, "The theory of ground rewrite systems is decidable" Peter Freyd, "Recursive types reduced to inductive types" 2011 Patrice Godefroid, Pierre Wolper, "A partial approach to model checking" Joshua Hodas, Dale Miller, "Logic programming in a fragment of intuitionistic linear logic" Dexter Kozen, "A completeness theorem for Kleene algebras and the algebra of regular events" 2012 Thomas Henzinger, Xavier Nicollin, Joseph Sifakis, Sergio Yovine, "Symbolic model checking for real-time systems" Jean-Pierre Talpin, Pierre Jouvelot, "The type and effect discipline" 2013 Leo Bachmair, Harald Ganzinger, Uwe Waldmann, "Set constraints are the monadic class" André Joyal, Mogens Nielsen, Glynn Winskel, "Bisimulation and open maps" Benjamin C. Pierce, Davide Sangiorgi, "Typing and subtyping for mobile processes" 2014 Martin Hofmann, Thomas Streicher, "The groupoid model refutes uniqueness of identity proofs" Dale Miller, "A multiple-conclusion meta-logic" 2015 Igor Walukiewicz, "Completeness of Kozen's Axiomatisation of the Propositional Mu-Calculus" 2016 Parosh A. Abdulla, Karlis Cerans, Bengt Jonsson, Yih-Kuen Tsay, "General decidability theorems for infinite-state systems" Iliano Cervesato, Frank Pfenning, "A Linear Logical Framework" 2017 Richard Blute, Josée Desharnais, Abbas Edalat, Prakash Panangaden, "Bisimulation for Labelled Markov Processes" Daniele Turi, Gordon D. 
Plotkin, "Towards a Mathematical Operational Semantics" 2018 Martín Abadi, Cédric Fournet, Georges Gonthier, "Secure Implementation of Channel Abstractions" Samson Abramsky, Kohei Honda, Guy McCusker, "A Fully Abstract Game Semantics for General References" 2019 Marcelo P. Fiore, Gordon D. Plotkin, Daniele Turi, "Abstract Syntax and Variable Binding" Murdoch Gabbay, Andrew M. Pitts, "A New Approach to Abstract Syntax Involving Binders" 2020 Luca de Alfaro, Thomas A. Henzinger, "Concurrent Omega-Regular Games" Hiroshi Nakano, "A Modality for Recursion" 2021 Aaron Stump;, Clark W. Barrett, David L. Dill, Jeremy R. Levitt, "A Decision Procedure for an Extensional Theory of Arrays" Hongwei Xi, "Dependent Types for Program Termination Verification" Kleene award At each conference the Kleene award, in honour of S.C. Kleene, is given for the best student paper. See also The list of computer science conferences contains other academic conferences in computer science. Notes External links LICS home page Theoretical computer science conferences Logic conferences Logic in computer science IEEE conferences
Symposium on Logic in Computer Science
[ "Mathematics" ]
1,097
[ "Mathematical logic", "Logic in computer science" ]
6,545,531
https://en.wikipedia.org/wiki/Piper%20Bravo
Piper Bravo is a North Sea oil production platform originally operated by Occidental Petroleum (Caledonia) Ltd, and now owned by Repsol Sinopec Energy UK. Piper Bravo is an eight-legged fixed steel jacket supported platform, located 193 kilometres northeast of Aberdeen in the Piper oilfield in the central North Sea. It stands in 145 metres of water. It was installed in 1992, and commenced production in February 1993. It replaced the Piper Alpha platform, which exploded in July 1988, killing 167 men. It is located approximately 120 metres from the wreck buoy marking the remains of its predecessor. References Oil platforms off Scotland Natural gas platforms Oil and gas industry in Scotland North Sea energy 1992 establishments in Scotland
Piper Bravo
[ "Chemistry", "Engineering" ]
144
[ "Structural engineering", "Petroleum", "Natural gas platforms", "Petroleum stubs" ]
6,547,312
https://en.wikipedia.org/wiki/Sonophoresis
Sonophoresis, also known as phonophoresis, is a method that utilizes ultrasound to enhance the delivery of topical medications through the stratum corneum, to the epidermis and dermis. Sonophoresis allows for the enhancement of the permeability of the skin along with other modalities, such as iontophoresis, to deliver drugs with fewer side effects. Currently, sonophoresis is used widely in transdermal drug delivery, but has potential applications in other sectors of drug delivery, such as the delivery of drugs to the eye and brain. Historical advancements Sonophoresis, also known as phonophoresis, dates back to the 1950s, when it was first mentioned in a published report. This report showed that a hydrocortisone injection yielded better outcomes for bursitis when combined with an ultrasound massage. Following this, a series of publications from several investigators showed the increased therapeutic effect when combining ultrasound with hydrocortisone injections for various other disease states, further demonstrating the promise of sonophoresis. However, while some researchers provided evidence that ultrasound had a positive effect on the transdermal permeation of drugs, others contradicted this information by presenting research that showed no quantitative effect of ultrasound. These early studies mainly investigated the combination of therapeutics with high-frequency sonophoresis (HFS), which is categorized as frequencies greater than 0.7 MHz. High-frequency sonophoresis usually covers a range between 0.7 and 16 MHz. HFS continued to be studied for four decades until a key mechanism of action, cavitation, became better understood. Cavitational effects are inversely proportional to the frequency of the ultrasound applied, which led to further studies of low-frequency sonophoresis (LFS) for use in transdermal drug delivery, as studies showed greater efficacy in enhancing skin permeability in comparison to HFS. Low-frequency sonophoresis usually covers a range between 20 and 100 kHz. For this reason, HFS currently focuses on topical applications for penetration through the stratum corneum, whereas LFS focuses on transdermal drug delivery applications. Background Ultrasonic sonicators generate ultrasound waves, which are longitudinal compression waves, by converting electrical energy into mechanical energy through deformation of piezoelectric crystals in response to an electric field. The frequency of the waves generated by this method can range from 20 kHz up to 3 MHz. The ultrasound waves generated by such a device penetrate biological tissue by inducing molecular oscillation in the tissue they travel through. The amplitude of the wave can be modified by manipulating the displacement of the ultrasound horn for each half cycle, since the two are proportional. The primary purpose of phonophoresis is to assist in transdermal drug delivery, usually with the help of a coupling agent or medium. Transdermally delivered drugs sometimes fail to permeate the skin and reach a targeted area within the body because of the stratum corneum, a layer that prevents foreign substances from penetrating the body. Transdermal drug delivery promotes patient compliance, usually avoids degradation in the digestive system, and can be used for drugs with short half-lives. Mechanisms of action While increased skin permeability is seen with sonophoresis, the precise mechanisms behind it are yet to be fully understood. 
However, several important mechanisms have been identified that contribute to the phenomenon of sonophoresis. Cavitation Cavitation is generally determined to be the dominant mechanism that drives sonophoresis. It can be described as the distortion, expansion, and contraction of gas bubbles in a liquid medium. The frequency of the ultrasound waves helps determine the bubble parameters, such as size and shape. There are two types of cavitation: stable and transient. Stable cavitation is when cavitation bubbles persist over many acoustic pressure cycles without collapsing. On the other hand, transient cavitation is where these cavitation bubbles grow and decay rapidly and uncontrollably over the acoustic pressure cycles. However, while cavitation is considered the primary mechanism for sonophoresis, the gas bubbles that contribute to cavitation are generated by a process termed rectified diffusion. Rectified diffusion Rectified diffusion is the process by which cavitation bubbles grow. Growth occurs when a bubble encounters the negative-pressure half cycle, which expands the gas inside the bubble. Conversely, the gas bubble will shrink dramatically when encountering the positive half of the pressure cycle. Further factors influence the oscillation of the bubbles' size, such as temperature and the composition of the gas and liquid phases. Depending on how extreme this oscillation is, as set by the previously mentioned factors, stable or transient cavitation occurs. A rapid process will lead to transient cavitation bubbles, whereas a slower process will lead to stable cavitation bubbles. Thermal effects An important consideration when transferring energy to a patient is the thermal energy generated by heating of the biological tissue due to energy losses from the ultrasound waves. It has been shown that increases in temperature can increase skin permeability through several factors. Two such factors are increased kinetic energy and diffusivity of drugs, which allow compounds to pass through the stratum corneum. Moreover, hair follicles and sweat glands are dilated, allowing for more points of entry for compounds. The enhanced blood circulation that results from ultrasound-induced temperature increases also allows for better diffusion of compounds. While the intensity and duty cycle of the ultrasound are directly proportional to the corresponding thermal effects, thermal effects are, perhaps surprisingly, not a considerable mechanism for HFS when temperature rises remain in the range of 1–2 degrees Celsius. However, once larger temperature changes are observed, such as increases in excess of 10 degrees Celsius, permeant transport increases. For LFS, thermal effects are an important safety consideration. Thermal effects need to be minimized at higher amplitudes, as burns and necrosis of tissues can occur due to exposure to high, sustained temperatures. A simple way to counteract sustained exposure to high temperatures is to replace the coupling agent periodically. Synergistic combination with other enhancement techniques While sonophoresis alone is able to increase the permeability of skin severalfold, depending on the procedure and the drug being delivered, a synergistic combination of sonophoresis with other enhancers, such as iontophoresis and electroporation, has shown greater enhancement as well as increased safety, because the intensity of each individual enhancer can be reduced. 
Iontophoresis Iontophoresis is similar to sonophoresis in that it is a method for transdermal drug delivery, but it does so by applying a voltage gradient across the skin. Since there are differences in pathways between iontophoresis and sonophoresis, a combination of these two methods allows for greater enhancement. For example, Le et al. showed, in the case of heparin, that a combination of iontophoresis and sonophoresis resulted in a 56-fold enhancement of heparin flux, compared with a 3-fold enhancement for sonophoresis alone and a 15-fold enhancement for iontophoresis alone. Electroporation Electroporation opens up the cell membrane by applying an electric field. Applying short, high-voltage pulses to the stratum corneum disorganizes its lipid structure and thereby enhances drug delivery. There are currently very few reports of the combination of these modalities being used together. However, these reports note that the transdermal enhancement created by the combination was greater than the sum of the individual enhancers, suggesting that electroporation and sonophoresis work together synergistically. Treatment Treatment methods Phonophoresis can be performed using two main methods. The first is simultaneous treatment, where the drug is applied at the same time as the ultrasound. The second method is pretreatment, where the ultrasound is used briefly before drug delivery. This is to ensure that the skin is permeable prior to the drug being applied. When ultrasound is applied, cavities develop due to the pressure changes. Stable cavitation describes the repetitive oscillations of a cavity bubble, while inertial cavitation describes the collapse of a cavity bubble. If the developed cavities collapse, the resulting disruption of the stratum corneum lipids increases the permeability of the skin. These areas of increased permeability are often called localized transport regions, where there is lower electrical resistivity. One potential method is to use cavitation seeds at the surface of the skin. Another potential method is to use ultrasound-responsive liquid-core nuclei (URLN). Frequency Low-frequency ultrasound is seen as the optimal frequency range. This is typically characterized as 20 to 100 kHz (sometimes 18 to 100 kHz). Low frequency makes cavitation more likely. For reference, high-frequency ultrasound is typically in the range of 1 to 3 MHz. Coupling agents The drug should be able to work together with the coupling agent. In a 2019 study, the drug diclofenac was used in combination with thiocolchicoside gel to treat patients suffering from acute lower back pain. An application of a drug serving as a coupling agent is the use of piroxicam gel mixtures and dexamethasone sodium phosphate gel mixtures to treat patients suffering from carpal tunnel syndrome. Applications Physical conditions Various conditions that can be addressed include cervical spine pain, acute lower back pain, carpal tunnel syndrome, muscle injury, rheumatoid arthritis, and venous thrombosis. Examples of drugs that have been used with sonophoresis include hydrocortisone, mannitol, dexamethasone, and lidocaine. Several products have been marketed to use phonophoresis for transdermal drug delivery. Other uses A potential future application of phonophoresis is to use it with vaccines, as phonophoresis is considered a less painful alternative to needles. 
Another potential use is in cancer therapeutics; one such application that has been explored is the delivery of cisplatin for patients who have cervical cancer. Genetic skin diseases and wound healing may be assisted by phonophoresis. Future potential and other applications Regarding high-frequency sonophoresis (HFS), its future potential is very similar to its usage in the past. Many of the treatments involving HFS are topical and regional. Commonly used drugs in these topical applications include anti-inflammatory medications such as cortisol and dexamethasone. However, there has been a notable shift towards using non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen and ketoprofen. NSAIDs commonly cause gastrointestinal side effects such as nausea and heartburn, which can all be bypassed by delivering NSAIDs using sonophoresis. With its established safety and its ability to penetrate the stratum corneum, HFS remains a highly versatile option for delivering drugs topically. Low-frequency sonophoresis (LFS), on the other hand, has a variety of applications that can be built upon in the future. Since LFS is not restricted in the size of the molecules it can deliver, drugs such as proteins, nanoparticles, and vaccines are all possible targets. Ocular delivery In previous literature, it has been demonstrated that ocular delivery of drugs can be achieved with high efficacy and minimal invasiveness. With 20 kHz ultrasound waves at an average temporal intensity of 2 W/cm^2 applied every second, the permeability of drugs with varying lipophilicity, such as atenolol and carteolol, increased by 2.6- and 2.8-fold respectively. Topical gene therapy Topical gene therapy is another area for investigation in combination with sonophoresis. Since there exists a need to enhance gene transfer into cells, sonophoresis has the ability to achieve higher transfection rates through acoustic cavitation. Additionally, microbubbles used with a contrast agent can be employed to image the brain diagnostically, as LFS and cavitation allow for disruption of the blood-brain barrier. Gene therapy using ultrasound and microbubbles is also being investigated for ocular disease. In cardiovascular disease, for example, the efficiency of gene therapy can be improved by ultrasound-targeted microbubble destruction, where a gene-loaded microbubble can be burst to release its contents. Challenges Research on sonophoresis is poorly standardized. For example, the emission of ultrasound waves further away from the source results in a greater beam area, which drastically changes the ultrasound energy at the targeted area. Further challenges surround the cost of the ultrasound devices used for sonophoresis in clinical settings. A low-cost device with high efficacy does not yet exist. Additionally, the precise mechanisms by which sonophoresis works are yet to be fully understood. Further research into the mechanisms, and which of them dominate, can allow for better optimization of sonophoresis parameters, which will increase the efficacy of treatments. Areas of sonophoretic research include the application of various drugs, dual-frequency sonophoresis, combined transdermal drug delivery techniques, and the use of nanoparticles to carry drugs. At an optimal frequency, phonophoresis will be painless and have minimal to no risk. 
The heat that is emitted from ultrasound use can also be damaging to the surface of the skin, and cavitation can potentially lead to tissue damage. Nanoparticle toxicity is another potential risk. References Medical ultrasonography Drug delivery devices
Sonophoresis
[ "Chemistry" ]
2,849
[ "Pharmacology", "Drug delivery devices" ]
6,548,232
https://en.wikipedia.org/wiki/Sabin%20%28unit%29
In acoustics, the sabin (or more precisely the square foot sabin) is a unit of sound absorption, used for expressing the total effective absorption for the interior of a room. Sound absorption can be expressed in terms of the percentage of energy absorbed compared with the percentage reflected. It can also be expressed as a coefficient, with a value of 1.00 representing a material which absorbs 100% of the energy, and a value of 0.00 meaning all the sound is reflected. The concept of a unit for absorption was first suggested by American physicist Wallace Clement Sabine, the founder of the field of architectural acoustics. He defined the "open-window unit" as the absorption of one square metre of open window. The unit was renamed the sabin after Sabine, and it is now defined as "the absorption due to unit area of a totally absorbent surface". Sabins may be calculated with either imperial or metric units. One square foot of 100% absorbing material has a value of one imperial sabin, and 1 square metre of 100% absorbing material has a value of one metric sabin. The total absorption in metric sabins for a room containing many types of surface is given by $A = S_1\alpha_1 + S_2\alpha_2 + \cdots + S_n\alpha_n = \sum_{i=1}^{n} S_i\alpha_i$, where $S_1, \ldots, S_n$ are the areas of the surfaces in the room (in m2), and $\alpha_1, \ldots, \alpha_n$ are the absorption coefficients of the surfaces. Sabins are used in calculating the reverberation time of concert halls, lecture theatres, and recording studios. References Sources External links Understanding sabins from NetWell Noise Control Units of measurement Sound measurements
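As a small worked example of the summation above, the Python sketch below totals the absorption of a hypothetical room and then applies Sabine's classic reverberation-time estimate (RT60 ≈ 0.161·V/A, with V in cubic metres and A in metric sabins). The surface areas, coefficients, and room volume are invented for illustration, and the 0.161 constant is the standard metric form of Sabine's formula rather than something stated in the article.

```python
# Assumed example room: each entry is (description, area in m^2, absorption coefficient).
surfaces = [
    ("plaster walls",          80.0, 0.03),
    ("carpeted floor",         30.0, 0.30),
    ("acoustic tile ceiling",  30.0, 0.70),
]

# Total absorption A = sum of S_i * alpha_i, in metric sabins.
total_absorption = sum(area * alpha for _, area, alpha in surfaces)

room_volume = 90.0                                   # m^3, assumed
rt60 = 0.161 * room_volume / total_absorption        # seconds, Sabine's estimate

print(f"Total absorption: {total_absorption:.1f} metric sabins")
print(f"Estimated reverberation time: {rt60:.2f} s")
```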
Sabin (unit)
[ "Physics", "Mathematics" ]
309
[ "Quantity", "Sound measurements", "Physical quantities", "Units of measurement" ]
6,548,283
https://en.wikipedia.org/wiki/Antigen%20presentation
Antigen presentation is a vital immune process that is essential for triggering T cell immune responses. Because T cells recognize only fragmented antigens displayed on cell surfaces, antigen processing must occur before the antigen fragment can be recognized by a T-cell receptor. Specifically, the fragment, bound to the major histocompatibility complex (MHC), is transported to the surface of the antigen-presenting cell, a process known as presentation. If there has been an infection with viruses or bacteria, the antigen-presenting cell will present an endogenous or exogenous peptide fragment derived from the antigen by MHC molecules. There are two types of MHC molecules, which differ in the origin of the antigens they present: MHC class I molecules (MHC-I) bind peptides from the cell cytosol, while peptides generated in the endocytic vesicles after internalisation are bound to MHC class II (MHC-II). Cellular membranes separate these two cellular environments - intracellular and extracellular. Each T cell can only recognize tens to hundreds of copies of a unique sequence of a single peptide among thousands of other peptides presented on the same cell, because an MHC molecule in one cell can bind to quite a large range of peptides. Predicting which (fragments of) antigens will be presented to the immune system by a certain MHC/HLA type is difficult, but the technology involved is improving. Presentation of intracellular antigens: Class I Cytotoxic T cells (also known as Tc, killer T cell, or cytotoxic T-lymphocyte (CTL)) express CD8 co-receptors and are a population of T cells that are specialized for inducing programmed cell death of other cells. Cytotoxic T cells regularly patrol all body cells to maintain organismal homeostasis. Whenever they encounter signs of disease, caused, for example, by the presence of viruses, intracellular bacteria, or a transformed tumor cell, they initiate processes to destroy the potentially harmful cell. All nucleated cells in the body (along with platelets) display class I major histocompatibility complex (MHC-I) molecules. Antigens generated endogenously within these cells are bound to MHC-I molecules and presented on the cell surface. This antigen presentation pathway enables the immune system to detect transformed or infected cells displaying peptides from modified-self (mutated) or foreign proteins. In the presentation process, these proteins are mainly degraded into small peptides by cytosolic proteases in the proteasome, but there are also other cytoplasmic proteolytic pathways. Then, peptides are distributed to the endoplasmic reticulum (ER) via the action of heat shock proteins and the transporter associated with antigen processing (TAP), which translocates the cytosolic peptides into the ER lumen in an ATP-dependent transport mechanism. There are several ER chaperones involved in MHC-I assembly, such as calnexin, calreticulin, ERp57, protein disulfide isomerase (PDI), and tapasin. Specifically, the complex of TAP, tapasin, MHC class I, ERp57, and calreticulin is called the peptide-loading complex (PLC). Peptides are loaded onto the MHC-I peptide-binding groove between two alpha helices at the bottom of the α1 and α2 domains of the MHC class I molecule. After release from tapasin, peptide-MHC-I complexes (pMHC-I) exit the ER and are transported to the cell surface by exocytic vesicles. Naïve anti-viral T cells (CD8+) cannot directly eliminate transformed or infected cells. They have to be activated by the pMHC-I complexes of antigen-presenting cells (APCs). 
Here, antigen can be presented directly (as described above) or indirectly (cross-presentation) from virus-infected and non-infected cells. After the interaction between pMHC-I and TCR, in the presence of co-stimulatory signals and/or cytokines, T cells are activated, migrate to the peripheral tissues and kill the target cells (infected or damaged cells) by inducing cytotoxicity. Cross-presentation is a special case in which MHC-I molecules are able to present extracellular antigens, usually displayed only by MHC-II molecules. This ability appears in several APCs, mainly plasmacytoid dendritic cells in tissues that stimulate CD8+ T cells directly. This process is essential when APCs are not directly infected, triggering local antiviral and anti-tumor immune responses immediately without trafficking of the APCs to the local lymph nodes. Presentation of extracellular antigens: Class II Antigens from the extracellular space, and sometimes also endogenous ones, are enclosed in endocytic vesicles and presented on the cell surface by MHC-II molecules to helper T cells expressing the CD4 molecule. Only APCs such as dendritic cells, B cells or macrophages express MHC-II molecules on their surface in substantial quantity, so expression of MHC-II molecules is more cell-specific than MHC-I. APCs usually internalise exogenous antigens by endocytosis, but also by pinocytosis, macroautophagy, endosomal microautophagy or chaperone-mediated autophagy. In the first case, after internalisation, the antigens are enclosed in vesicles called endosomes. There are three compartments involved in this antigen presentation pathway: early endosomes, late endosomes or endolysosomes, and lysosomes, where antigens are hydrolysed by lysosome-associated enzymes (acid-dependent hydrolases, glycosidases, proteases, lipases). This process is favored by gradual reduction of the pH. The main proteases in endosomes are cathepsins, and the result is the degradation of the antigens into oligopeptides. MHC-II molecules are transported from the ER to the MHC class II loading compartment together with the protein invariant chain (Ii, CD74). Non-classical MHC-II molecules (HLA-DO and HLA-DM) catalyse the exchange of part of the CD74 (CLIP peptide) with the peptide antigen. Peptide-MHC-II complexes (pMHC-II) are transported to the plasma membrane and the processed antigen is presented to the helper T cells in the lymph nodes. APCs undergo a process of maturation while migrating, via chemotactic signals, to lymphoid tissues, during which they lose phagocytic capacity and develop an increased ability to communicate with T cells by antigen presentation. As with CD8+ cytotoxic T cells, APCs need pMHC-II and additional costimulatory signals to fully activate naive T helper cells. An alternative pathway of endogenous antigen processing and presentation on MHC-II molecules exists in medullary thymic epithelial cells (mTECs) via the process of autophagy. It is important for the process of central tolerance of T cells, in particular the negative selection of autoreactive clones. Random gene expression across the whole genome is achieved via the action of AIRE, and self-digestion of the expressed molecules allows them to be presented on both MHC-I and MHC-II molecules. Presentation of native intact antigens to B cells B-cell receptors on the surface of B cells bind to intact native and undigested antigens of a structural nature, rather than to a linear sequence of a peptide which has been digested into small fragments and presented by MHC molecules. 
Large complexes of intact antigen are presented in lymph nodes to B cells by follicular dendritic cells in the form of immune complexes. Some APCs expressing comparatively lower levels of lysosomal enzymes are thus less likely to digest the antigen they have captured before presenting it to B cells. See also Immune system Immunology Immunological synapse Trogocytosis References External links ImmPort - Gene summaries, ontologies, pathways, protein/protein interactions and more for genes involved in antigen processing and presentation Immune system HIV/AIDS
Antigen presentation
[ "Biology" ]
1,742
[ "Immune system", "Organ systems" ]
21,121,213
https://en.wikipedia.org/wiki/Desmond%20%28software%29
Desmond is a software package developed at D. E. Shaw Research to perform high-speed molecular dynamics simulations of biological systems on conventional computer clusters. The code uses novel parallel algorithms and numerical methods to achieve high performance on platforms containing multiple processors, but may also be executed on a single computer. The core and source code are available at no cost for non-commercial use by universities and other not-for-profit research institutions, and have been used in the Folding@home distributed computing project. Desmond is available as commercial software through Schrödinger, Inc. Molecular dynamics program Desmond supports algorithms typically used to perform fast and accurate molecular dynamics. Long-range electrostatic energy and forces can be calculated using particle mesh Ewald-based methods. Constraints can be enforced using the M-SHAKE algorithm. These methods can be used together with time-scale splitting (RESPA-based) integration schemes. Desmond can compute energies and forces for many standard fixed-charge force fields used in biomolecular simulations, and is also compatible with polarizable force fields based on the Drude formalism. A variety of integrators and support for various ensembles have been implemented in the code, including methods for temperature control (Andersen thermostat, Nosé-Hoover, and Langevin) and pressure control (Berendsen, Martyna-Tobias-Klein, and Langevin). The code also supports methods for restraining atomic positions and molecular configurations; allows simulations to be carried out using a variety of periodic cell configurations; and has facilities for accurate checkpointing and restart. Desmond can also be used to perform absolute and relative free energy calculations (e.g., free energy perturbation). Other simulation methods (such as replica exchange) are supported through a plug-in-based infrastructure, which also allows users to develop their own simulation algorithms and models. Desmond is also available in a graphics processing unit (GPU) accelerated version that is about 60-80 times faster than the central processing unit (CPU) version. Related software tools Along with the molecular dynamics program, the Desmond software also includes tools for energy minimization and energy analysis, both of which can be run efficiently in a parallel environment. Force field parameters can be assigned using a template-based parameter assignment tool called Viparr. It currently supports several versions of the CHARMM, Amber and OPLS force fields, and a range of different water models. Desmond is integrated with a molecular modeling environment (Maestro, developed by Schrödinger, Inc.) for setting up simulations of biological and chemical systems, and is compatible with Visual Molecular Dynamics (VMD) for trajectory viewing and analysis. See also Comparison of software for molecular mechanics modeling References External links Desmond Users Group (deleted) Schrödinger Desmond Product Page Molecular dynamics software Force fields (chemistry)
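To make the integrator/thermostat terminology above concrete, here is a minimal, illustrative Python sketch of a Langevin-thermostatted velocity-Verlet step (the BAOAB splitting) applied to a single 1-D harmonic oscillator. This is emphatically not Desmond's implementation or API; it only shows, in toy form, the kind of temperature-controlled integration an MD engine performs. Units are arbitrary and the Boltzmann constant is set to 1.

```python
import math
import random

def harmonic_force(x, k=1.0):
    """Force of a 1-D harmonic potential U(x) = 0.5*k*x^2."""
    return -k * x

def baoab_step(x, v, dt, mass=1.0, gamma=1.0, kT=1.0, force=harmonic_force):
    """One BAOAB step: kick, drift, Ornstein-Uhlenbeck velocity update, drift, kick."""
    v += 0.5 * dt * force(x) / mass                 # B: half kick
    x += 0.5 * dt * v                               # A: half drift
    c = math.exp(-gamma * dt)                       # O: exact OU update of the velocity
    v = c * v + math.sqrt((1.0 - c * c) * kT / mass) * random.gauss(0.0, 1.0)
    x += 0.5 * dt * v                               # A: half drift
    v += 0.5 * dt * force(x) / mass                 # B: half kick
    return x, v

random.seed(0)
x, v = 1.0, 0.0
steps = 200_000
kinetic = 0.0
for _ in range(steps):
    x, v = baoab_step(x, v, dt=0.05)
    kinetic += 0.5 * v * v

# For kT = 1, the long-run average kinetic energy of a 1-D oscillator should approach 0.5.
print("mean kinetic energy:", kinetic / steps)
```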
Desmond (software)
[ "Chemistry" ]
571
[ "Molecular dynamics software", "Computational chemistry software", "Molecular dynamics", "Computational chemistry", "Force fields (chemistry)" ]
21,121,900
https://en.wikipedia.org/wiki/Caterpillar%20777
The Caterpillar 777 is a 100-ton haul truck, typically used in open pit mining, manufactured by Caterpillar Inc. The first model of Caterpillar 777 was introduced in 1974. Its diesel engine is capable of putting out . The 777D, introduced in 1996, was powered by a diesel. References Dump trucks Off-road vehicles Mining equipment Caterpillar Inc. trucks Articles containing video clips Vehicles introduced in 1974
Caterpillar 777
[ "Engineering" ]
91
[ "Engineering vehicles", "Dump trucks", "Mining equipment" ]
21,122,915
https://en.wikipedia.org/wiki/Caterpillar%20789
The Caterpillar 789 dump truck is a model of haul truck, typically used in open pit mining, manufactured by Caterpillar Inc. The 789 has a capacity of 177 tonnes, and its engine can produce 1770 horsepower. While some competing products use hybrid drive, the Caterpillar 789 has an entirely mechanical drive-train. The first Caterpillar 789 trucks were introduced in 1986. The vehicle's controls are designed to work much like those of an ordinary truck. However, they have multiple mechanisms for the driver to slow or stop the vehicle. Next to the traditional brake pedal there is a second foot pedal to activate a secondary braking system. There is a lever to the right of the steering wheel, called the "retarding brake", used for less intense braking. Finally, the vehicle's drive-train will automatically change gears, while descending inclines, to help the driver keep its speed under control. Because the vehicle provides the driver with a limited view, with many blind spots, the trucks are equipped with multiple proximity sensors and closed-circuit TV cameras. The 789 averages . It has six tires, each of which are in diameter and cost $50,000 to replace. Malcolm Nance, the author of Defeating ISIS: Who They Are, How They Fight, What They Believe, described ISIS suicide bombers driving Caterpillar 789 trucks loaded with explosives into fortifications. References Dump trucks Caterpillar Inc. trucks Vehicles introduced in 1986
Caterpillar 789
[ "Engineering" ]
300
[ "Engineering vehicles", "Dump trucks" ]
21,123,070
https://en.wikipedia.org/wiki/Aryl%20radical
An aryl radical in organic chemistry is a reactive intermediate and an arene compound incorporating one free radical carbon atom as part of the ring structure. As such it is the radical counterpart of the arenium ion. The parent compound is the phenyl radical. Aryl radicals are intermediates in certain organic reactions. Synthesis Aryl radicals can be obtained via aryl diazonium salts. Alternatives for these salts are certain aryl triazenes and aryl hydrazines. Aryl bromides and iodides can be converted to aryl radicals via tributyltin hydride and related compounds and silyl hydrides. Aryl halides can also be converted via electrochemical cathodic reduction. The mushroom Stephanospora caroticolor is suspected to generate an aryl radical as part of its biological chemical defence mechanism. Spectroscopy The parent phenyl radical has been identified by electron paramagnetic resonance and UV spectroscopy. Reactions Aryl radicals are very reactive and are found in many different reactions. Hydrogen-atom abstraction is considered a side reaction. Several reactions of synthetic utility in which aryl radicals feature are: halogen transfer reaction with electron-deficient alkenes in the Meerwein arylation; biaryl couplings; Sandmeyer reactions; addition to iminium ions; and addition to sulfur dioxide. References Free radicals
Aryl radical
[ "Chemistry", "Biology" ]
277
[ "Senescence", "Free radicals", "Biomolecules" ]
21,127,428
https://en.wikipedia.org/wiki/Zooflagellate
In some older systems of classification, Zoomastigophora is a phylum (more commonly known as zooflagellates) within the kingdom Protista. Organisms within this group have a spherical, elongated body with a single central nucleus. They are single-celled, heterotrophic eukaryotes and may form symbiotic relationships with other organisms, including Trichomonas. Some species are parasitic, causing diseases such as the African Sleeping Sickness, caused by the zooflagellate Trypanosoma brucei. Zooflagellates have one or more flagella but do not have plastids or cell walls. A few are mutualistic, such as those that live in the guts of termites and aid the bacteria present in breaking down wood. References Eukaryote phyla Obsolete eukaryote taxa
Zooflagellate
[ "Biology" ]
177
[ "Eukaryotes", "Tree of life (biology)", "Protists", "Eukaryote stubs" ]
143,357
https://en.wikipedia.org/wiki/Medical%20ultrasound
Medical ultrasound includes diagnostic techniques (mainly imaging techniques) using ultrasound, as well as therapeutic applications of ultrasound. In diagnosis, it is used to create an image of internal body structures such as tendons, muscles, joints, blood vessels, and internal organs, to measure some characteristics (e.g., distances and velocities) or to generate an informative audible sound. The usage of ultrasound to produce visual images for medicine is called medical ultrasonography or simply sonography, or echography. The practice of examining pregnant women using ultrasound is called obstetric ultrasonography, and was an early development of clinical ultrasonography. The machine used is called an ultrasound machine, a sonograph or an echograph. The visual image formed using this technique is called an ultrasonogram, a sonogram or an echogram. Ultrasound is composed of sound waves with frequencies greater than 20,000 Hz, which is the approximate upper threshold of human hearing. Ultrasonic images, also known as sonograms, are created by sending pulses of ultrasound into tissue using a probe. The ultrasound pulses echo off tissues with different reflection properties and are returned to the probe, which records and displays them as an image. A general-purpose ultrasonic transducer may be used for most imaging purposes but some situations may require the use of a specialized transducer. Most ultrasound examinations are done using a transducer on the surface of the body, but improved visualization is often possible if a transducer can be placed inside the body. For this purpose, special-use transducers, including transvaginal, endorectal, and transesophageal transducers, are commonly employed. At the extreme, very small transducers can be mounted on small-diameter catheters and placed within blood vessels to image the walls and disease of those vessels. Types The imaging mode refers to probe and machine settings that result in specific dimensions of the ultrasound image. Several modes of ultrasound are used in medical imaging: A-mode: Amplitude mode refers to the mode in which the amplitude of the transducer voltage is recorded as a function of two-way travel time of an ultrasound pulse. A single pulse is transmitted through the body and scatters back to the same transducer element. The voltage amplitudes recorded correlate linearly to acoustic pressure amplitudes. A-mode is one-dimensional. B-mode: In brightness mode, an array of transducer elements scans a plane through the body resulting in a two-dimensional image. Each pixel value of the image correlates to voltage amplitude registered from the backscattered signal. The dimensions of B-mode images are voltage as a function of angle and two-way time. M-mode: In motion mode, A-mode pulses are emitted in succession. The backscattered signal is converted to lines of bright pixels, whose brightness linearly correlates to backscattered voltage amplitudes. Each successive line is plotted adjacent to the previous, resulting in an image that looks like a B-mode image. The M-mode image dimensions are, however, voltage as a function of two-way time and recording time. This mode is an ultrasound analogy to streak video recording in high-speed photography. As moving tissue transitions produce backscattering, this can be used to determine the displacement of specific organ structures, most commonly the heart. Most machines convert two-way time to imaging depth using an assumed speed of sound of 1540 m/s. 
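As a small worked illustration of that conversion (not part of the original article; the echo time below is invented), the depth is simply the assumed speed of sound times the two-way travel time, halved because the pulse travels to the reflector and back:

```python
ASSUMED_SPEED_OF_SOUND = 1540.0   # m/s, the standard soft-tissue assumption mentioned above

def depth_from_echo_time(two_way_time_s, c=ASSUMED_SPEED_OF_SOUND):
    """Imaging depth in metres from a two-way echo travel time in seconds."""
    return c * two_way_time_s / 2.0

echo_time = 65e-6   # 65 microseconds, hypothetical example value
print(f"Estimated depth: {100 * depth_from_echo_time(echo_time):.1f} cm")   # about 5.0 cm
```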
As the actual speed of sound varies greatly in different tissue types, an ultrasound image is not a true tomographic representation of the body. Three-dimensional imaging is done by combining B-mode images, using dedicated rotating or stationary probes. This has also been referred to as C-mode. An imaging technique refers to a method of signal generation and processing that results in a specific application. Most imaging techniques operate in B-mode. Doppler sonography: This imaging technique makes use of the Doppler effect in detecting and measuring moving targets, typically blood. Harmonic imaging: the backscattered signal from tissue is filtered to comprise only frequency content of at least twice the centre frequency of the transmitted ultrasound. Harmonic imaging is used for perfusion detection when using ultrasound contrast agents and for the detection of tissue harmonics. Common pulse schemes for the creation of harmonic response without the need for real-time Fourier analysis are pulse inversion and power modulation. B-flow is an imaging technique that digitally highlights moving reflectors (mainly red blood cells) while suppressing the signals from the surrounding stationary tissue. It aims to visualize flowing blood and surrounding stationary tissues simultaneously. It is thus an alternative or complement to Doppler ultrasonography in visualizing blood flow. Therapeutic ultrasound aimed at a specific tumor or calculus is not an imaging mode. However, for positioning a treatment probe to focus on a specific region of interest, A-mode and B-mode are typically used, often during treatment. Advantages and drawbacks Compared to other medical imaging modalities, ultrasound has several advantages. It provides images in real time, is portable, and can consequently be brought to the bedside. It is substantially lower in cost than other imaging strategies. Drawbacks include various limits on its field of view, the need for patient cooperation, dependence on patient physique, difficulty imaging structures obscured by bone, air or gases, and the necessity of a skilled operator, usually with professional training. Uses Sonography (ultrasonography) is widely used in medicine. It is possible to perform both diagnostic and therapeutic procedures, using ultrasound to guide interventional procedures such as biopsies or to drain collections of fluid, which can be both diagnostic and therapeutic. Sonographers are medical professionals who perform scans which are traditionally interpreted by radiologists, physicians who specialize in the application and interpretation of medical imaging modalities, or by cardiologists in the case of cardiac ultrasonography (echocardiography). Sonography is effective for imaging soft tissues of the body. Superficial structures such as muscle, tendon, testis, breast, thyroid and parathyroid glands, and the neonatal brain are imaged at higher frequencies (7–18 MHz), which provide better linear (axial) and horizontal (lateral) resolution. Deeper structures such as liver and kidney are imaged at lower frequencies (1–6 MHz) with lower axial and lateral resolution as the price of deeper tissue penetration. Anesthesiology In anesthesiology, ultrasound is commonly used to guide the placement of needles when injecting local anesthetic solutions in the proximity of nerves identified within the ultrasound image (nerve block). It is also used for vascular access such as cannulation of large central veins and for difficult arterial cannulation. 
Transcranial Doppler is frequently used by neuro-anesthesiologists for obtaining information about flow velocity in the basal cerebral vessels. Angiology (vascular) In angiology or vascular medicine, duplex ultrasound (B-mode imaging combined with Doppler flow measurement) is used to diagnose arterial and venous disease. This is particularly important in potential neurologic problems, where carotid ultrasound is commonly used for assessing blood flow and potential or suspected stenosis in the carotid arteries, while transcranial Doppler is used for imaging flow in the intracerebral arteries. Intravascular ultrasound (IVUS) uses a specially designed catheter with a miniaturized ultrasound probe attached to its distal end, which is then threaded inside a blood vessel. The proximal end of the catheter is attached to computerized ultrasound equipment and allows the application of ultrasound technology, such as a piezoelectric transducer or capacitive micromachined ultrasonic transducer, to visualize the endothelium of blood vessels in living individuals. In the case of the common and potentially serious problem of blood clots in the deep veins of the leg, ultrasound plays a key diagnostic role, while ultrasonography of chronic venous insufficiency of the legs focuses on more superficial veins to assist with planning of suitable interventions to relieve symptoms or improve cosmetics. Cardiology (heart) Echocardiography is an essential tool in cardiology, assisting in evaluation of heart valve function, such as stenosis or insufficiency, strength of cardiac muscle contraction, and hypertrophy or dilatation of the main chambers (ventricle and atrium). Emergency medicine Point-of-care ultrasound has many applications in emergency medicine. These include differentiating cardiac from pulmonary causes of acute breathlessness, and the Focused Assessment with Sonography for Trauma (FAST) exam, extended to include assessment for significant hemoperitoneum or pericardial tamponade after trauma (EFAST). Other uses include assisting with differentiating causes of abdominal pain such as gallstones and kidney stones. Emergency Medicine Residency Programs have a substantial history of promoting the use of bedside ultrasound during physician training. Gastroenterology/Colorectal surgery Both abdominal and endoanal ultrasound are frequently used in gastroenterology and colorectal surgery. In abdominal sonography, the major organs of the abdomen such as the pancreas, aorta, inferior vena cava, liver, gall bladder, bile ducts, kidneys, and spleen may be imaged. However, sound waves may be blocked by gas in the bowel and attenuated to differing degrees by fat, sometimes limiting diagnostic capabilities. The appendix can sometimes be seen when inflamed (e.g., appendicitis) and ultrasound is the initial imaging choice, avoiding radiation if possible, although it frequently needs to be followed by other imaging methods such as CT. Endoanal ultrasound is used particularly in the investigation of anorectal symptoms such as fecal incontinence or obstructed defecation. It images the immediate perianal anatomy and is able to detect occult defects such as tearing of the anal sphincter. Hepatology Ultrasonography of liver tumors allows for both detection and characterization. Ultrasound imaging studies are often obtained during the evaluation process of fatty liver disease. Ultrasonography reveals a "bright" liver with increased echogenicity. 
Pocket-sized ultrasound devices might be used as point-of-care screening tools to diagnose liver steatosis. Gynecology and obstetrics Gynecologic ultrasonography examines female pelvic organs (specifically the uterus, ovaries, and fallopian tubes) as well as the bladder, adnexa, and pouch of Douglas. It uses transducers designed for approaches through the lower abdominal wall (curvilinear and sector), as well as specialty transducers such as those used for transvaginal ultrasound. Obstetrical sonography was originally developed in the late 1950s and 1960s by Sir Ian Donald and is commonly used during pregnancy to check the development and presentation of the fetus. It can be used to identify many conditions harmful to the mother and/or baby that might otherwise remain undiagnosed, or be diagnosed late, in the absence of sonography. It is currently believed that the risk of delayed diagnosis is greater than the small risk, if any, associated with undergoing an ultrasound scan. However, its use for non-medical purposes such as fetal "keepsake" videos and photos is discouraged. Obstetric ultrasound is primarily used to: Date the pregnancy (gestational age) Confirm fetal viability Determine location of fetus, intrauterine vs ectopic Check the location of the placenta in relation to the cervix Check for the number of fetuses (multiple pregnancy) Check for major physical abnormalities. Assess fetal growth (for evidence of intrauterine growth restriction (IUGR)) Check for fetal movement and heartbeat. Determine the sex of the baby According to the European Committee of Medical Ultrasound Safety (ECMUS), care should be taken to use low power settings and to avoid pulsed wave scanning of the fetal brain unless specifically indicated in high risk pregnancies. Figures released for the period 2005–2006 by the UK Government (Department of Health) show that non-obstetric ultrasound examinations constituted more than 65% of the total number of ultrasound scans conducted. Hemodynamics (blood circulation) Blood velocity can be measured in various blood vessels, such as the middle cerebral artery or the descending aorta, by relatively inexpensive and low-risk ultrasound Doppler probes attached to portable monitors. These provide non-invasive or transcutaneous (non-piercing), minimally invasive blood flow assessment. Common examples are transcranial Doppler, esophageal Doppler and suprasternal Doppler. Otolaryngology (head and neck) Most structures of the neck, including the thyroid and parathyroid glands, lymph nodes, and salivary glands, are well-visualized by high-frequency ultrasound with exceptional anatomic detail. Ultrasound is the preferred imaging modality for thyroid tumors and lesions, and its use is important in the evaluation, preoperative planning, and postoperative surveillance of patients with thyroid cancer. Many other benign and malignant conditions in the head and neck can be differentiated, evaluated, and managed with the help of diagnostic ultrasound and ultrasound-guided procedures. Neonatology In neonatology, transcranial Doppler can be used for basic assessment of intracerebral structural abnormalities, suspected hemorrhage, ventriculomegaly or hydrocephalus, and anoxic insults (periventricular leukomalacia). It can be performed through the soft spots in the skull of a newborn infant (fontanelles) until these completely close at about 1 year of age, by which time they have formed a virtually impenetrable acoustic barrier to ultrasound. 
The most common site for cranial ultrasound is the anterior fontanelle. The smaller the fontanelle, the more the image is compromised. Lung ultrasound has been found to be useful in diagnosing common neonatal respiratory diseases such as transient tachypnea of the newborn, respiratory distress syndrome, congenital pneumonia, meconium aspiration syndrome, and pneumothorax. A neonatal lung ultrasound score, first described by Brat et al., has been found to highly correlate with oxygenation in the newborn. Ophthalmology In ophthalmology and optometry, there are two major forms of eye exam using ultrasound: A-scan ultrasound biometry is commonly referred to as an A-scan (amplitude scan). A-mode provides data on the length of the eye, which is a major determinant in common sight disorders, especially for determining the power of an intraocular lens after cataract extraction. B-scan ultrasonography, or brightness scan, is a B-mode scan that produces a cross-sectional view of the eye and the orbit. It is an essential tool in ophthalmology for diagnosing and managing a wide array of conditions affecting the posterior segment of the eye. It is non-invasive and uses frequencies of 10–15 MHz. It is often used in conjunction with other imaging techniques (like OCT or fluorescein angiography) for a more comprehensive evaluation of ocular conditions. Pulmonology (lungs) Ultrasound is used to assess the lungs in a variety of settings including critical care, emergency medicine, and trauma surgery, as well as general medicine. This imaging modality is used at the bedside or examination table to evaluate a number of different lung abnormalities as well as to guide procedures such as thoracentesis (drainage of pleural fluid (effusion)), needle aspiration biopsy, and catheter placement. Although air present in the lungs does not allow good penetration of ultrasound waves, interpretation of specific artifacts created on the lung surface can be used to detect abnormalities. Lung ultrasound basics The Normal Lung Surface: The lung surface is composed of visceral and parietal pleura. These two surfaces are typically pushed together and make up the pleural line, which is the basis of lung (or pleural) ultrasound. This line is visible less than a centimeter below the rib line in most adults. On ultrasound, it is visualized as a hyperechoic (bright white) horizontal line if the ultrasound probe is applied perpendicularly to the skin. Artifacts: Lung ultrasound relies on artifacts, which would otherwise be considered a hindrance in imaging. Air blocks the ultrasound beam, and thus visualizing healthy lung tissue itself with this mode of imaging is not practical. Consequently, physicians and sonographers have learned to recognize patterns that ultrasound beams create when imaging healthy versus diseased lung tissue. Three commonly seen and utilized artifacts in lung ultrasound include lung sliding, A-lines, and B-lines. Lung sliding: The presence of lung sliding, which indicates the shimmering of the pleural line that occurs with movement of the visceral and parietal pleura against one another with respiration (sometimes described as 'ants marching'), is the most important finding in normal aerated lung. Lung sliding indicates both that the lung is present at the chest wall and that the lung is functioning. A-lines: When the ultrasound beam makes contact with the pleural line, it is reflected back, creating a bright white horizontal line. 
The subsequent reverberation artifacts that appear as equally spaced horizontal lines deep to the pleura are A-lines. Ultimately, A-lines are a reflection of the ultrasound beam from the pleura, with the space between A-lines corresponding to the distance between the parietal pleura and the skin surface. A-lines indicate the presence of air, which means that these artifacts can be present in normal healthy lung (and also in patients with pneumothorax). B-lines: B-lines are also reverberation artifacts. They are visualized as hyperechoic vertical lines extending from the pleura to the edge of the ultrasound screen. These lines are sharply defined and laser-like and typically do not fade as they progress down the screen. A few B-lines that move along with the sliding pleura can be seen in normal lung due to acoustic impedance differences between water and air. However, excessive B-lines (three or more) are abnormal and are typically indicative of underlying lung pathology. Lung pathology assessed with ultrasound Pulmonary edema: Lung ultrasound has been shown to be very sensitive for the detection of pulmonary edema. It allows for improvement in diagnosis and management of critically ill patients, particularly when used in combination with echocardiography. The sonographic feature that is present in pulmonary edema is multiple B-lines. B-lines can occur in a healthy lung; however, the presence of 3 or more in the anterior or lateral lung regions is always abnormal. In pulmonary edema, B-lines indicate an increase in the amount of water contained in the lungs outside of the pulmonary vasculature. B-lines can also be present in a number of other conditions including pneumonia, pulmonary contusion, and lung infarction. Additionally, it is important to note that there are multiple types of interactions between the pleural surface and the ultrasound wave that can generate artifacts with some similarity to B-lines but which do not have pathologic significance. Pneumothorax: In clinical settings where pneumothorax is suspected, lung ultrasound can aid in diagnosis. In pneumothorax, air is present between the two layers of the pleura, and lung sliding on ultrasound is therefore absent. The negative predictive value for lung sliding on ultrasound is reported as 99.2–100%; briefly, if lung sliding is present, a pneumothorax is effectively ruled out. The absence of lung sliding, however, is not necessarily specific for pneumothorax, as there are other conditions that also cause this finding, including acute respiratory distress syndrome, lung consolidations, pleural adhesions, and pulmonary fibrosis. Pleural effusion: Lung ultrasound is a cost-effective, safe, and non-invasive imaging method that can aid in the prompt visualization and diagnosis of pleural effusions. Effusions can be diagnosed by a combination of physical exam, percussion, and auscultation of the chest. However, these exam techniques can be complicated by a variety of factors including the presence of mechanical ventilation, obesity, or patient positioning, all of which reduce the sensitivity of the physical exam. Consequently, lung ultrasound can be an additional tool to augment plain chest X-ray and chest CT. Pleural effusions on ultrasound appear as structural images within the thorax rather than as artifacts. They will typically have four distinct borders: the pleural line, two rib shadows, and a deep border. 
In critically ill patients with pleural effusion, ultrasound may guide procedures including needle insertion, thoracentesis, and chest-tube insertion. Lung cancer staging: In pulmonology, endobronchial ultrasound (EBUS) probes are applied to standard flexible endoscopic probes and used by pulmonologists to allow for direct visualization of endobronchial lesions and lymph nodes prior to transbronchial needle aspiration. Among its many uses, EBUS aids in lung cancer staging by allowing for lymph node sampling without the need for major surgery. COVID-19: Lung ultrasound has proved useful in the diagnosis of COVID-19, especially in cases where other investigations are not available. Urinary tract Ultrasound is routinely used in urology to determine the amount of fluid retained in a patient's bladder. In a pelvic sonogram, images include the uterus and ovaries or urinary bladder in females. In males, a sonogram will provide information about the bladder, prostate, or testicles (for example, to urgently distinguish epididymitis from testicular torsion). In young males, it is used to distinguish more benign testicular masses (varicocele or hydrocele) from testicular cancer, which is curable but must be treated to preserve health and fertility. There are two methods of performing pelvic sonography – externally or internally. The internal pelvic sonogram is performed either transvaginally (in a woman) or transrectally (in a man). Sonographic imaging of the pelvic floor can produce important diagnostic information regarding the precise relationship of abnormal structures with other pelvic organs, and it provides useful guidance for treating patients with symptoms related to pelvic prolapse, double incontinence and obstructed defecation. It is also used to diagnose and, at higher frequencies, to treat (break up) kidney stones or kidney crystals (nephrolithiasis). Penis and scrotum Scrotal ultrasonography is used in the evaluation of testicular pain, and can help identify solid masses. Ultrasound is an excellent method for the study of the penis, as indicated in trauma, priapism, erectile dysfunction or suspected Peyronie's disease. Musculoskeletal Musculoskeletal ultrasound is used to examine tendons, muscles, nerves, ligaments, soft tissue masses, and bone surfaces. It is helpful in diagnosing ligament sprains, muscle strains and joint pathology. It is an alternative or supplement to x-ray imaging in detecting fractures of the wrist, elbow and shoulder for patients up to 12 years (fracture sonography). Quantitative ultrasound is an adjunct musculoskeletal test for myopathic disease in children; it also provides estimates of lean body mass in adults and proxy measures of muscle quality (i.e., tissue composition) in older adults with sarcopenia. Ultrasound can also be used for needle guidance in muscle or joint injections, as in ultrasound-guided hip joint injection. Kidneys In nephrology, ultrasonography of the kidneys is essential in the diagnosis and management of kidney-related diseases. The kidneys are easily examined, and most pathological changes are distinguishable with ultrasound. It is an accessible, versatile, relatively economical, and fast aid for decision-making in patients with renal symptoms and for guidance in renal intervention. Using B-mode imaging, assessment of renal anatomy is easily performed, and US is often used as image guidance for renal interventions. Furthermore, novel applications in renal US have been introduced with contrast-enhanced ultrasound (CEUS), elastography and fusion imaging.
However, renal US has certain limitations, and other modalities, such as CT (CECT) and MRI, should be considered for supplementary imaging in assessing renal disease. Venous access Intravenous access, for the collection of blood samples to assist in diagnosis or laboratory investigation including blood culture, or for administration of intravenous fluids for fluid maintenance or replacement or blood transfusion in sicker patients, is a common medical procedure. The need for intravenous access occurs in the outpatient laboratory, in the inpatient hospital units, and most critically in the Emergency Room and Intensive Care Unit. In many situations, intravenous access may be required repeatedly or over a significant time period. In these latter circumstances, a needle with an overlying catheter is introduced into the vein and the catheter is then inserted securely into the vein while the needle is withdrawn. The chosen veins are most frequently selected from the arm, but in challenging situations, a deeper vein from the neck (external jugular vein) or upper arm (subclavian vein) may need to be used. There are many reasons why the selection of a suitable vein may be problematic. These include, but are not limited to, obesity, previous injury to veins from inflammatory reaction to previous 'blood draws', and previous injury to veins from recreational drug use. In these challenging situations, the insertion of a catheter into a vein has been greatly assisted by the use of ultrasound. The ultrasound unit may be 'cart-based' or 'handheld', using a linear transducer with a frequency of 10 to 15 megahertz. In most circumstances, choice of vein will be limited by the requirement that the vein is within 1.5 cm of the skin surface. The transducer may be placed longitudinally or transversely over the chosen vein. Ultrasound training for intravenous cannulation is offered in most ultrasound training programs. Mechanism The creation of an image from sound has three steps: transmitting a sound wave, receiving echoes, and interpreting those echoes. Producing a sound wave A sound wave is typically produced by a piezoelectric transducer encased in a plastic housing. Strong, short electrical pulses from the ultrasound machine drive the transducer at the desired frequency. The frequencies can vary between 1 and 18 MHz, though frequencies up to 50–100 megahertz have been used experimentally in a technique known as biomicroscopy in special regions, such as the anterior chamber of the eye. Older technology transducers focused their beam with physical lenses. Contemporary technology transducers use digital antenna array techniques (the piezoelectric elements in the transducer are pulsed with different time delays) to enable the ultrasound machine to change the direction and depth of focus. Near the transducer, the width of the ultrasound beam is almost equal to the width of the transducer; after a certain distance from the transducer (the near zone length, or Fresnel zone), the beam width narrows to half of the transducer width, and beyond that it widens again (the far zone, or Fraunhofer zone), where lateral resolution decreases. Therefore, the wider the transducer and the higher the frequency of ultrasound, the longer the Fresnel zone, and the lateral resolution can be maintained at a greater depth from the transducer. Ultrasound waves travel in pulses. Therefore, a shorter pulse length requires higher bandwidth (a greater number of frequencies) to constitute the ultrasound pulse.
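The relationship stated above between transducer width, frequency, and Fresnel zone length can be made concrete with the standard near-field formula N = D²/(4λ), where D is the aperture width and λ = c/f is the wavelength. The sketch below is an illustration only, assuming the conventional soft-tissue sound speed of 1540 m/s and an arbitrary 10 mm aperture; it is not a description of any particular scanner.

```python
# Illustrative sketch: wavelength and near-zone (Fresnel) length for a single
# unfocused aperture. N = D^2 / (4 * wavelength) is the standard near-field
# formula; the sound speed and aperture below are assumed example values.
C_TISSUE = 1540.0  # assumed speed of sound in soft tissue, m/s

def wavelength(frequency_hz: float, c: float = C_TISSUE) -> float:
    """Wavelength in metres for a given centre frequency."""
    return c / frequency_hz

def near_zone_length(aperture_m: float, frequency_hz: float, c: float = C_TISSUE) -> float:
    """Near-field (Fresnel zone) length in metres: N = D^2 / (4 * lambda)."""
    return aperture_m ** 2 / (4.0 * wavelength(frequency_hz, c))

if __name__ == "__main__":
    aperture = 0.01  # 10 mm aperture, an arbitrary example value
    for f_mhz in (3.5, 7.5, 15.0):
        f = f_mhz * 1e6
        print(f"{f_mhz:5.1f} MHz: wavelength = {wavelength(f)*1e3:.3f} mm, "
              f"near zone = {near_zone_length(aperture, f)*1e2:.1f} cm")
```

Higher frequencies give shorter wavelengths and, for the same aperture, a longer near zone, consistent with the statement above that lateral resolution is maintained deeper for wider, higher-frequency transducers.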
As stated, the sound is focused either by the shape of the transducer, a lens in front of the transducer, or a complex set of control pulses from the ultrasound scanner, in the beamforming or spatial filtering technique. This focusing produces an arc-shaped sound wave from the face of the transducer. The wave travels into the body and comes into focus at a desired depth. Materials on the face of the transducer enable the sound to be transmitted efficiently into the body (often a rubbery coating, a form of impedance matching). In addition, a water-based gel is placed between the patient's skin and the probe to facilitate ultrasound transmission into the body. This is because air causes total reflection of ultrasound; impeding the transmission of ultrasound into the body. The sound wave is partially reflected from the layers between different tissues or scattered from smaller structures. Specifically, sound is reflected anywhere where there are acoustic impedance changes in the body: e.g. blood cells in blood plasma, small structures in organs, etc. Some of the reflections return to the transducer. Receiving the echoes The return of the sound wave to the transducer results in the same process as sending the sound wave, in reverse. The returned sound wave vibrates the transducer and the transducer turns the vibrations into electrical pulses that travel to the ultrasonic scanner where they are processed and transformed into a digital image. Forming the image To make an image, the ultrasound scanner must determine two characteristics from each received echo: How long it took the echo to be received from when the sound was transmitted. (Time and distance are equivalent.) How strong the echo was. Once the ultrasonic scanner determines these two, it can locate which pixel in the image to illuminate and with what intensity. Transforming the received signal into a digital image may be explained by using a blank spreadsheet as an analogy. First picture a long, flat transducer at the top of the sheet. Send pulses down the 'columns' of the spreadsheet (A, B, C, etc.). Listen at each column for any return echoes. When an echo is heard, note how long it took for the echo to return. The longer the wait, the deeper the row (1,2,3, etc.). The strength of the echo determines the brightness setting for that cell (white for a strong echo, black for a weak echo, and varying shades of grey for everything in between.) When all the echoes are recorded on the sheet, a greyscale image has been accomplished. In modern ultrasound systems, images are derived from the combined reception of echoes by multiple elements, rather than a single one. These elements in the transducer array work together to receive signals, a process essential for optimizing the ultrasonic beam's focus and producing detailed images. One predominant method for this is "delay-and-sum" beamforming. The time delay applied to each element is calculated based on the geometrical relationship between the imaging point, the transducer, and receiver positions. By integrating these time-adjusted signals, the system pinpoints focus onto specific tissue regions, enhancing image resolution and clarity. The utilization of multiple element reception combined with the delay-and-sum principles underpins the high-quality images characteristic of contemporary ultrasound scans. Displaying the image Images from the ultrasound scanner are transferred and displayed using the DICOM standard. Normally, very little post processing is applied. 
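The delay-and-sum idea described above can be sketched in a few lines of NumPy. This is a toy illustration of the principle, not production beamforming code: the element positions, sampling rate, and assumed sound speed are example values, and real systems add apodization, interpolation, and dynamic focusing.

```python
# Toy delay-and-sum beamformer: for one image point, compute per-element
# round-trip delays from geometry, pick the matching sample from each
# element's received signal, and sum. All parameters are example values.
import numpy as np

C = 1540.0   # assumed speed of sound, m/s
FS = 40e6    # sampling rate of the received RF data, Hz (example value)

def delay_and_sum(rf: np.ndarray, element_x: np.ndarray, point: tuple) -> float:
    """rf: (n_elements, n_samples) received data; element_x: element positions (m)
    along the array; point: (x, z) image point in metres, z being depth."""
    px, pz = point
    t_tx = pz / C                                        # transmit path ~ depth
    t_rx = np.sqrt((element_x - px) ** 2 + pz ** 2) / C  # element-to-point return path
    samples = np.round((t_tx + t_rx) * FS).astype(int)
    samples = np.clip(samples, 0, rf.shape[1] - 1)
    # Sum the time-aligned samples across elements (no apodization here).
    return float(rf[np.arange(rf.shape[0]), samples].sum())

# Example: 64 elements with 0.3 mm pitch; random data stands in for RF echoes.
elements = np.arange(64) * 0.3e-3
rf_data = np.random.randn(64, 4096)
pixel_value = delay_and_sum(rf_data, elements, point=(4.8e-3, 0.03))
```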
Sound in the body Ultrasonography (sonography) uses a probe containing multiple acoustic transducers to send pulses of sound into a material. Whenever a sound wave encounters a material with a different density (acoustical impedance), some of the sound wave is scattered but part is reflected back to the probe and is detected as an echo. The time it takes for the echo to travel back to the probe is measured and used to calculate the depth of the tissue interface causing the echo. The greater the difference between acoustic impedances, the larger the echo is. If the pulse hits gases or solids, the density difference is so great that most of the acoustic energy is reflected and it becomes impossible for the beam to progress further. The frequencies used for medical imaging are generally in the range of 1 to 18 MHz. Higher frequencies have a correspondingly smaller wavelength, and can be used to make more detailed sonograms. However, the attenuation of the sound wave is increased at higher frequencies, so penetration of deeper tissues necessitates a lower frequency (3–5 MHz). Penetrating deep into the body with sonography is difficult. Some acoustic energy is lost each time an echo is formed, but most of it (approximately ) is lost from acoustic absorption. (See Acoustic attenuation for further details on modeling of acoustic attenuation and absorption.) The speed of sound varies as it travels through different materials, and is dependent on the acoustical impedance of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s. An effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused and image resolution is reduced. To generate a 2-D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging, or a 1-D phased array transducer may be used to sweep the beam electronically. The received data is processed and used to construct the image. The image is then a 2-D representation of the slice into the body. 3-D images can be generated by acquiring a series of adjacent 2-D images. Commonly a specialized probe that mechanically scans a conventional 2-D image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues. Recently, 2-D phased array transducers that can sweep the beam in 3-D have been developed. These can image faster and can even be used to make live 3-D images of a beating heart. Doppler ultrasonography is used to study blood flow and muscle motion. The different detected speeds are represented in color for ease of interpretation; for example, with leaky heart valves the leak shows up as a flash of unique color. Colors may alternatively be used to represent the amplitudes of the received echoes. Expansions An additional expansion of ultrasound is bi-planar ultrasound, in which the probe has two 2D planes perpendicular to each other, providing more efficient localization and detection. Furthermore, an omniplane probe can rotate 180° to obtain multiple images. In 3D ultrasound, many 2D planes are digitally added together to create a 3-dimensional image of the object. Doppler ultrasonography Doppler ultrasonography employs the Doppler effect to assess whether structures (usually blood) are moving towards or away from the probe, and their relative velocity.
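Two of the conversions implied by this section and the Doppler discussion that follows can be written down directly: reflector depth from the round-trip echo time (using the instrument's assumed constant 1540 m/s), and flow speed from the Doppler frequency shift via the classical relation v = c·Δf / (2·f₀·cos θ). The numbers in the sketch below are made-up example values.

```python
# Two textbook conversions used in ultrasound imaging. The 1540 m/s constant is
# the instrument's standard assumption; all other numbers are example values.
import math

ASSUMED_C = 1540.0  # assumed constant speed of sound in tissue, m/s

def depth_from_echo(round_trip_time_s: float) -> float:
    """Reflector depth in metres: halve the round-trip distance."""
    return ASSUMED_C * round_trip_time_s / 2.0

def doppler_velocity(f_transmit_hz: float, f_shift_hz: float, angle_deg: float) -> float:
    """Flow speed in m/s from the measured shift: v = c * df / (2 * f0 * cos(theta)),
    where theta is the angle between the beam and the flow direction."""
    return ASSUMED_C * f_shift_hz / (2.0 * f_transmit_hz * math.cos(math.radians(angle_deg)))

print(depth_from_echo(65e-6))            # ~0.05 m for a 65 microsecond round trip
print(doppler_velocity(5e6, 1.3e3, 60))  # ~0.4 m/s for a 1.3 kHz shift at 5 MHz
```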
By calculating the frequency shift of a particular sample volume (for example, flow in an artery or a jet of blood flow over a heart valve), its speed and direction can be determined and visualized. Color Doppler is the measurement of velocity by color scale. Color Doppler images are generally combined with gray scale (B-mode) images to display duplex ultrasonography images. Uses include: Doppler echocardiography is the use of Doppler ultrasonography to examine the heart. An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output and E/A ratio (a measure of diastolic dysfunction). Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related measurements of interest. Transcranial Doppler (TCD) and transcranial color Doppler (TCCD) measure the velocity of blood flow through the brain's blood vessels through the cranium. They are useful in the diagnosis of emboli, stenosis, vasospasm from a subarachnoid hemorrhage (bleeding from a ruptured aneurysm), and other problems. Doppler fetal monitors use the Doppler effect to detect the fetal heartbeat during prenatal care. These are hand-held, and some models also display the heart rate in beats per minute (BPM). Use of this monitor is sometimes known as Doppler auscultation. The Doppler fetal monitor is commonly referred to simply as a Doppler or fetal Doppler and provides information similar to that provided by a fetal stethoscope. Contrast ultrasonography (ultrasound contrast imaging) A contrast medium for medical ultrasonography is a formulation of encapsulated gaseous microbubbles used to increase the echogenicity of blood; the approach was discovered by Dr. Raymond Gramiak in 1968 and named contrast-enhanced ultrasound. This contrast-enhanced imaging modality is used throughout the world, particularly for echocardiography in the United States and for ultrasound radiology in Europe and Asia. Microbubble-based contrast media are administered intravenously into the patient's bloodstream during the ultrasonography examination. Due to their size, the microbubbles remain confined in blood vessels without extravasating towards the interstitial fluid. An ultrasound contrast medium is therefore purely intravascular, making it an ideal agent to image organ microvasculature for diagnostic purposes. A typical clinical use of contrast ultrasonography is detection of a hypervascular metastatic tumor, which exhibits a contrast uptake (kinetics of microbubble concentration in the blood circulation) faster than the healthy biological tissue surrounding the tumor. Other clinical applications using contrast exist, as in echocardiography to improve delineation of the left ventricle for visualizing contractility of the heart muscle after a myocardial infarction. Finally, applications in quantitative perfusion (relative measurement of blood flow) have emerged for identifying early patient response to anticancer drug treatment (methodology and clinical study by Dr. Nathalie Lassau in 2011), enabling the best oncological therapeutic options to be determined.
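The notion of contrast uptake being "faster" in a hypervascular tumor than in surrounding tissue is usually made quantitative from a time-intensity curve of the bolus. The sketch below extracts two generic descriptors (peak enhancement and time to peak) from a sampled curve; it is only an illustration of the idea, not the published perfusion-quantification methodology mentioned above, and the curve values are invented.

```python
# Generic descriptors of a contrast time-intensity curve: peak enhancement and
# time to peak. Illustrative only; the sample values below are invented and this
# is not the specific methodology referenced in the text.
def uptake_descriptors(times_s, intensities):
    """Return (peak enhancement above baseline, time to peak) for a sampled curve."""
    baseline = intensities[0]
    peak_index = max(range(len(intensities)), key=lambda i: intensities[i])
    return intensities[peak_index] - baseline, times_s[peak_index]

tumour  = uptake_descriptors([0, 5, 10, 15, 20, 25], [2, 9, 20, 18, 15, 12])
healthy = uptake_descriptors([0, 5, 10, 15, 20, 25], [2, 4, 8, 12, 14, 13])
print(tumour, healthy)  # the tumour curve peaks earlier: (18, 10) vs (12, 20)
```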
In the oncological practice of medical contrast ultrasonography, clinicians use 'parametric imaging of vascular signatures' invented by Dr. Nicolas Rognin in 2010. This method is conceived as a tool to aid cancer diagnosis, facilitating characterization of a suspicious tumor (malignant versus benign) in an organ. The method is based on computational analysis of a time sequence of ultrasound contrast images, a digital video recorded in real-time during the patient examination. Two consecutive signal processing steps are applied to each pixel of the tumor: calculation of a vascular signature (contrast uptake difference with respect to the healthy tissue surrounding the tumor); and automatic classification of the vascular signature into a unique parameter, the latter coded in one of the four following colors: green for continuous hyper-enhancement (contrast uptake higher than that of healthy tissue), blue for continuous hypo-enhancement (contrast uptake lower than that of healthy tissue), red for fast hyper-enhancement (contrast uptake before that of healthy tissue) or yellow for fast hypo-enhancement (contrast uptake after that of healthy tissue). Once signal processing in each pixel is completed, a color spatial map of the parameter is displayed on a computer monitor, summarizing all vascular information of the tumor in a single image called a parametric image (see the last figure of the press article for clinical examples). This parametric image is interpreted by clinicians based on the predominant colorization of the tumor: red indicates a suspicion of malignancy (risk of cancer), while green or yellow indicates a high probability of benignity. In the first case (suspicion of malignant tumor), the clinician typically prescribes a biopsy to confirm the diagnosis or a CT scan examination as a second opinion. In the second case (high probability of a benign tumor), only a follow-up is needed, with a contrast ultrasonography examination a few months later. The main clinical benefits are to avoid a systematic biopsy (with the inherent risks of invasive procedures) of benign tumors or a CT scan examination exposing the patient to X-ray radiation. The parametric imaging of vascular signatures method proved to be effective in humans for characterization of tumors in the liver. In a cancer screening context, this method might be potentially applicable to other organs such as the breast or prostate. Molecular ultrasonography (ultrasound molecular imaging) The future of contrast ultrasonography lies in molecular imaging, with potential clinical applications expected in cancer screening to detect malignant tumors at their earliest stage of appearance. Molecular ultrasonography (or ultrasound molecular imaging) uses targeted microbubbles originally designed by Dr Alexander Klibanov in 1997; such targeted microbubbles specifically bind or adhere to tumoral microvessels by targeting biomolecular cancer expression (overexpression of certain biomolecules that occurs during neo-angiogenesis or inflammation in malignant tumors). As a result, a few minutes after their injection into the blood circulation, the targeted microbubbles accumulate in the malignant tumor, facilitating its localization in a unique ultrasound contrast image. In 2013, the very first exploratory clinical trial in humans for prostate cancer was completed in Amsterdam in the Netherlands by Dr. Hessel Wijkstra.
In molecular ultrasonography, the technique of acoustic radiation force (also used for shear wave elastography) is applied in order to literally push the targeted microbubbles towards the microvessel wall; this was first demonstrated by Dr. Paul Dayton in 1999. It allows maximization of binding to the malignant tumor, with the targeted microbubbles in more direct contact with cancerous biomolecules expressed at the inner surface of tumoral microvessels. At the stage of scientific preclinical research, the technique of acoustic radiation force was implemented as a prototype in clinical ultrasound systems and validated in vivo in 2D and 3D imaging modes. Elastography (ultrasound elasticity imaging) Ultrasound is also used for elastography, which is a relatively new imaging modality that maps the elastic properties of soft tissue. This modality emerged in the last two decades. Elastography is useful in medical diagnoses as it can discern healthy from unhealthy tissue for specific organs/growths. For example, cancerous tumors will often be harder than the surrounding tissue, and diseased livers are stiffer than healthy ones. There are many ultrasound elastography techniques. Interventional ultrasonography Interventional ultrasonography involves biopsy, drainage of fluid collections, and intrauterine blood transfusion (for example, in hemolytic disease of the newborn). Thyroid cysts: High frequency thyroid ultrasound (HFUS) can be used to treat several gland conditions. The recurrent thyroid cyst, which in the past was usually treated with surgery, can be treated effectively by a new procedure called percutaneous ethanol injection, or PEI. With ultrasound guided placement of a 25 gauge needle within the cyst, and after evacuation of the cyst fluid, ethanol amounting to about 50% of the cyst volume is injected into the cavity, under strict operator visualization of the needle tip. The procedure is 80% successful in reducing the cyst to minute size. Metastatic thyroid cancer neck lymph nodes: HFUS may also be used to treat metastatic thyroid cancer neck lymph nodes that occur in patients who either refuse surgery or are no longer candidates for it. Small amounts of ethanol are injected under ultrasound guided needle placement. A power Doppler blood flow study is done prior to injection. The blood flow can be destroyed and the node rendered inactive. Blood flow visualized on power Doppler can be eradicated, and there may be a drop in the cancer blood marker thyroglobulin (TG) as the node becomes non-functional. Another interventional use for HFUS is to mark a cancer node prior to surgery to help locate the node cluster during surgery. A minute amount of methylene blue dye is injected, under careful ultrasound guided placement of the needle on the anterior surface, but not in the node. The dye will be evident to the thyroid surgeon when opening the neck. A similar localization procedure with methylene blue can be done to locate parathyroid adenomas. Joint injections can be guided by medical ultrasound, such as in ultrasound-guided hip joint injections. Compression ultrasonography In compression ultrasonography, the probe is pressed against the skin. This can bring the target structure closer to the probe, increasing its spatial resolution. Comparison of the shape of the target structure before and after compression can aid in diagnosis. It is used in ultrasonography of deep venous thrombosis, wherein absence of vein compressibility is a strong indicator of thrombosis.
Compression ultrasonography has both high sensitivity and specificity for detecting proximal deep vein thrombosis in symptomatic patients. Results are not reliable when the patient is asymptomatic, for example in high risk postoperative orthopedic patients. Panoramic ultrasonography Panoramic ultrasonography is the digital stitching of multiple ultrasound images into a broader one. It can display an entire abnormality and show its relationship to nearby structures on a single image. Multiparametric ultrasonography Multiparametric ultrasonography (mpUSS) combines multiple ultrasound techniques to produce a composite result. For example, one study combined B-mode, colour Doppler, real-time elastography, and contrast-enhanced ultrasound, achieving an accuracy similar to that of multiparametric MRI. Speed-of-Sound Imaging Speed-of-sound (SoS) imaging aims to find the spatial distribution of the SoS within the tissue. The idea is to find relative delay measurements for different transmission events and solve the limited-angle tomographic reconstruction problem using delay measurements and transmission geometry. Compared to shear-wave elastography, SoS imaging has better ex-vivo tissue differentiation for benign and malignant tumors. Attributes As with all imaging modalities, ultrasonography has positive and negative attributes. Strengths Muscle, soft tissue, and bone surfaces are imaged very well including the delineation of interfaces between solid and fluid-filled spaces. "Live" images can be dynamically selected, permitting diagnosis and documentation often rapidly. Live images also permit ultrasound-guided biopsies or injections, which can be cumbersome with other imaging modalities. Organ structure can be demonstrated. There are no known long-term side effects when used according to guidelines, and discomfort is minimal. Ability to image local variations in the mechanical properties of soft tissue. Equipment is widely available and comparatively flexible. Small, easily carried scanners are available which permit bedside examinations. Transducers have become relatively inexpensive compared to other modes of investigation, such as computed X-ray tomography, DEXA or magnetic resonance imaging. Spatial resolution is better in high frequency ultrasound transducers than most other imaging modalities. Use of an ultrasound research interface can offer a relatively inexpensive, real-time, and flexible method for capturing data required for specific research purposes of tissue characterization and development of new image processing techniques. Weaknesses Sonographic devices have trouble penetrating bone. For example, sonography of the adult brain is currently very limited. Sonography performs very poorly when there is gas between the transducer and the organ of interest, due to the extreme differences in acoustic impedance. For example, overlying gas in the gastrointestinal tract often makes ultrasound scanning of the pancreas difficult. Lung imaging however can be useful in demarcating pleural effusions, detecting heart failure and pneumonia. Even in the absence of bone or air, the depth penetration of ultrasound may be limited depending on the frequency of imaging. Consequently, there might be difficulties imaging structures deep in the body, especially in obese patients. Image quality and accuracy of diagnosis is limited with obese patients and overlying subcutaneous fat attenuates the sound beam. A lower frequency transducer is required with subsequent lower resolution. 
The method is operator-dependent. Skill and experience are needed to acquire good-quality images and make accurate diagnoses. There is no scout image as there is with CT and MRI. Once an image has been acquired there is no exact way to tell which part of the body was imaged. 80% of sonographers experience Repetitive Strain Injuries (RSI) or so-called Work-Related Musculoskeletal Disorders (WMSD) because of poor ergonomic positioning. Risks and side-effects Ultrasonography is generally considered a safe imaging modality, with the World Health Organization stating: "Diagnostic ultrasound is recognized as a safe, effective, and highly flexible imaging modality capable of providing clinically relevant information about most parts of the body in a rapid and cost-effective fashion". Diagnostic ultrasound studies of the fetus are generally considered to be safe during pregnancy. However, this diagnostic procedure should be performed only when there is a valid medical indication, and the lowest possible ultrasonic exposure setting should be used to gain the necessary diagnostic information under the "as low as reasonably practicable" or ALARP principle. Although there is no evidence that ultrasound could be harmful to the fetus, medical authorities typically strongly discourage the promotion, selling, or leasing of ultrasound equipment for making "keepsake fetal videos". Studies on the safety of ultrasound A meta-analysis of several ultrasonography studies published in 2000 found no statistically significant harmful effects from ultrasonography. It was noted that there is a lack of data on long-term substantive outcomes such as neurodevelopment. A study at the Yale School of Medicine published in 2006 found a small but significant correlation between prolonged and frequent use of ultrasound and abnormal neuronal migration in mice. A study performed in Sweden in 2001 suggested subtle neurological effects linked to ultrasound, implicated by an increased incidence of left-handedness in boys (a marker for brain problems when not hereditary) and speech delays. The above findings, however, were not confirmed in a follow-up study. A later study, however, performed on a larger sample of 8865 children, established a statistically significant, albeit weak, association between ultrasonography exposure and being non-right-handed later in life. Regulation Diagnostic and therapeutic ultrasound equipment is regulated in the US by the Food and Drug Administration, and worldwide by other national regulatory agencies. The FDA limits acoustic output using several metrics; generally, other agencies accept the FDA-established guidelines. Currently, New Mexico, Oregon, and North Dakota are the only US states that regulate diagnostic medical sonographers. Certification examinations for sonographers are available in the US from three organizations: the American Registry for Diagnostic Medical Sonography, Cardiovascular Credentialing International and the American Registry of Radiologic Technologists. The primary regulated metrics are the Mechanical Index (MI), a metric associated with the cavitation bio-effect, and the Thermal Index (TI), a metric associated with the tissue heating bio-effect. The FDA requires that the machine not exceed established limits, which are reasonably conservative in an effort to maintain diagnostic ultrasound as a safe imaging modality. This requires self-regulation on the part of the manufacturer in terms of machine calibration.
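The two regulated metrics mentioned above can be illustrated numerically. The mechanical index is commonly given as the derated peak rarefactional pressure in MPa divided by the square root of the centre frequency in MHz, with an often-quoted guideline ceiling of 1.9; both the formula and the limit are stated here as common conventions rather than a restatement of any specific regulation, and the pressures used are invented example values.

```python
# Illustrative mechanical index (MI) calculation. The formula and the 1.9
# ceiling are the commonly quoted convention, stated here as assumptions.
import math

MI_GUIDELINE_LIMIT = 1.9  # often-cited guideline ceiling (assumption)

def mechanical_index(peak_rarefactional_pressure_mpa: float, centre_freq_mhz: float) -> float:
    """MI = derated peak negative pressure (MPa) / sqrt(centre frequency (MHz))."""
    return peak_rarefactional_pressure_mpa / math.sqrt(centre_freq_mhz)

# Example values only: 2.0 MPa peak negative pressure at 3.5 MHz.
mi = mechanical_index(2.0, 3.5)
print(round(mi, 2), mi <= MI_GUIDELINE_LIMIT)  # ~1.07, within the assumed limit
```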
Ultrasound-based pre-natal care and sex screening technologies were launched in India in the 1980s. Amid concerns about their misuse for sex-selective abortion, the Government of India passed the Pre-natal Diagnostic Techniques Act (PNDT) in 1994 to distinguish and regulate legal and illegal uses of ultrasound equipment. The law was further amended as the Pre-Conception and Pre-natal Diagnostic Techniques (Regulation and Prevention of Misuse) (PCPNDT) Act in 2004 to deter and punish prenatal sex screening and sex-selective abortion. It is currently illegal and a punishable crime in India to determine or disclose the sex of a fetus using ultrasound equipment. Use in other animals Ultrasound is also a valuable tool in veterinary medicine, offering the same non-invasive imaging that helps in the diagnosis and monitoring of conditions in animals. History After the French physicist Pierre Curie's discovery of piezoelectricity in 1880, ultrasonic waves could be deliberately generated for industry. In 1940, the American acoustical physicist Floyd Firestone devised the first ultrasonic echo imaging device, the Supersonic Reflectoscope, to detect internal flaws in metal castings. In 1941, Austrian neurologist Karl Theo Dussik, in collaboration with his brother, Friedrich, a physicist, was likely the first person to image the human body ultrasonically, outlining the ventricles of a human brain. Ultrasonic energy was first applied to the human body for medical purposes by Dr George Ludwig at the Naval Medical Research Institute, Bethesda, Maryland, in the late 1940s. English-born physicist John Wild (1914–2009) first used ultrasound to assess the thickness of bowel tissue as early as 1949; he has been described as the "father of medical ultrasound". Subsequent advances took place concurrently in several countries but it was not until 1961 that David Robinson and George Kossoff's work at the Australian Department of Health resulted in the first commercially practical water bath ultrasonic scanner. In 1963 Meyerdirk & Wright launched production of the first commercial, hand-held, articulated arm, compound contact B-mode scanner, which made ultrasound generally available for medical use. France Léandre Pourcelot, a researcher and teacher at INSA (Institut National des Sciences Appliquées), Lyon, co-published a report in 1965 at the Académie des sciences, "Effet Doppler et mesure du débit sanguin" ("Doppler effect and measure of the blood flow"), the basis of his design of a Doppler flow meter in 1967. Scotland Parallel developments in Glasgow, Scotland by Professor Ian Donald and colleagues at the Glasgow Royal Maternity Hospital (GRMH) led to the first diagnostic applications of the technique. Donald was an obstetrician with a self-confessed "childish interest in machines, electronic and otherwise", who, having treated the wife of one of the directors of the boilermakers Babcock & Wilcox, was invited to visit the company's Research Department at Renfrew. He adapted their industrial ultrasound equipment to conduct experiments on various anatomical specimens and assess their ultrasonic characteristics. Together with the medical physicist Tom Brown and fellow obstetrician John MacVicar, Donald refined the equipment to enable differentiation of pathology in live volunteer patients. These findings were reported in The Lancet on 7 June 1958 as "Investigation of Abdominal Masses by Pulsed Ultrasound" – possibly one of the most important papers published in the field of diagnostic medical imaging.
At GRMH, Professor Donald and James Willocks then refined their techniques to obstetric applications including fetal head measurement to assess the size and growth of the fetus. With the opening of the new Queen Mother's Hospital in Yorkhill in 1964, it became possible to improve these methods even further. Stuart Campbell's pioneering work on fetal cephalometry led to it acquiring long-term status as the definitive method of study of foetal growth. As the technical quality of the scans was further developed, it soon became possible to study pregnancy from start to finish and diagnose its many complications such as multiple pregnancy, fetal abnormality and placenta praevia. Diagnostic ultrasound has since been imported into practically every other area of medicine. Sweden Medical ultrasonography was used in 1953 at Lund University by cardiologist Inge Edler and Gustav Ludwig Hertz's son Carl Hellmuth Hertz, who was then a graduate student at the university's department of nuclear physics. Edler had asked Hertz if it was possible to use radar to look into the body, but Hertz said this was impossible. However, he said, it might be possible to use ultrasonography. Hertz was familiar with using ultrasonic reflectoscopes of the American acoustical physicist Floyd Firestone's invention for nondestructive materials testing, and together Edler and Hertz developed the idea of applying this methodology in medicine. The first successful measurement of heart activity was made on October 29, 1953, using a device borrowed from the ship construction company Kockums in Malmö. On December 16 the same year, the method was applied to generate an echo-encephalogram (ultrasonic probe of the brain). Edler and Hertz published their findings in 1954. United States In 1962, after about two years of work, Joseph Holmes, William Wright, and Ralph Meyerdirk developed the first compound contact B-mode scanner. Their work had been supported by U.S. Public Health Services and the University of Colorado. Wright and Meyerdirk left the university to form Physionic Engineering Inc., which launched the first commercial hand-held articulated arm compound contact B-mode scanner in 1963. This was the start of the most popular design in the history of ultrasound scanners. In the late 1960s Gene Strandness and the bio-engineering group at the University of Washington conducted research on Doppler ultrasound as a diagnostic tool for vascular disease. Eventually, they developed technologies to use duplex imaging, or Doppler in conjunction with B-mode scanning, to view vascular structures in real time while also providing hemodynamic information. The first demonstration of color Doppler was by Geoff Stevenson, who was involved in the early developments and medical use of Doppler shifted ultrasonic energy. Manufacturers Major manufacturers of Medical Ultrasound Devices and Equipment are: Canon Medical Systems Corporation Esaote GE Healthcare Fujifilm Mindray Medical International Limited Koninklijke Philips N.V. Samsung Medison Siemens Healthineers See also 3D ultrasound Doppler fetal monitor Elastography Polybiography Radiographer Ultrasound computer tomography Ultrasound gel Ultrasound transmission tomography Voluson 730 Explanatory notes References External links About the discovery of medical ultrasonography on ob-ultrasound.net History of medical sonography (ultrasound) on ob-ultrasound.net Medical imaging Acoustics Medical equipment Physical therapy Australian inventions
Medical ultrasound
[ "Physics", "Biology" ]
12,026
[ "Medical equipment", "Classical mechanics", "Acoustics", "Medical technology" ]
143,431
https://en.wikipedia.org/wiki/Stereographic%20projection
In mathematics, a stereographic projection is a perspective projection of the sphere, through a specific point on the sphere (the pole or center of projection), onto a plane (the projection plane) perpendicular to the diameter through the point. It is a smooth, bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal, meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes. It is neither isometric (distance preserving) nor equiareal (area preserving). The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates. This is the spherical analog of the Poincaré disk model of the hyperbolic plane. Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net. History The origin of the stereographic projection is not known, but it is believed to have been discovered by Ancient Greek astronomers and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry. Its earliest extant description is found in Ptolemy's Planisphere (2nd century AD), but it was ambiguously attributed to Hipparchus (2nd century BC) by Synesius (), and Apollonius's Conics () contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Hipparchus, Apollonius, Archimedes, and even Eudoxus (4th century BC) have sometimes been speculatively credited with inventing or knowing of the stereographic projection, but some experts consider these attributions unjustified. Ptolemy refers to the use of the stereographic projection in a "horoscopic instrument", perhaps the described by Vitruvius (1st century BC). By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by medieval Islamic astronomers. It was transmitted to Western Europe during the 11th–12th century, with Arabic texts translated into Latin. In the 16th and 17th century, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres. It is believed that already the map created in 1507 by Gualterius Lud was in stereographic projection, as were later the maps of Jean Roze (1542), Rumold Mercator (1595), and many others. 
In star charts, even this equatorial aspect had been utilised already by the ancient astronomers like Ptolemy. François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles (Six Books of Optics, useful for philosophers and mathematicians alike). In the late 16th century, Thomas Harriot proved that the stereographic projection is conformal; however, this proof was never published and sat among his papers in a box for more than three centuries. In 1695, Edmond Halley, motivated by his interest in star charts, was the first to publish a proof. He used the recently established tools of calculus, invented by his friend Isaac Newton. Definition First formulation The unit sphere in three-dimensional space is the set of points such that . Let be the "north pole", and let be the rest of the sphere. The plane runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane. For any point on , there is a unique line through and , and this line intersects the plane in exactly one point , known as the stereographic projection of onto the plane. In Cartesian coordinates on the sphere and on the plane, the projection and its inverse are given by the formulas In spherical coordinates on the sphere (with the zenith angle, , and the azimuth, ) and polar coordinates on the plane, the projection and its inverse are Here, is understood to have value when = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. In cylindrical coordinates on the sphere and polar coordinates on the plane, the projection and its inverse are Other conventions Some authors define stereographic projection from the north pole (0, 0, 1) onto the plane , which is tangent to the unit sphere at the south pole (0, 0, −1). This can be described as a composition of a projection onto the equatorial plane described above, and a homothety from it to the polar plane. The homothety scales the image by a factor of 2 (a ratio of a diameter to a radius of the sphere), hence the values and produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section. For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole. Other authors use a sphere of radius and the plane . In this case the formulae become In general, one can define a stereographic projection from any point on the sphere onto any plane such that is perpendicular to the diameter through , and does not contain . As long as meets these conditions, then for any point other than the line through and meets in exactly one point , which is defined to be the stereographic projection of P onto E. Generalizations More generally, stereographic projection may be applied to the unit -sphere in ()-dimensional Euclidean space . If is a point of and a hyperplane in , then the stereographic projection of a point is the point of intersection of the line with . In Cartesian coordinates (, from 0 to ) on and (, from 1 to n) on , the projection from is given by Defining the inverse is given by Still more generally, suppose that is a (nonsingular) quadric hypersurface in the projective space . 
In other words, is the locus of zeros of a non-singular quadratic form in the homogeneous coordinates . Fix any point on and a hyperplane in not containing . Then the stereographic projection of a point in is the unique point of intersection of with . As before, the stereographic projection is conformal and invertible on a non-empty Zariski open set. The stereographic projection presents the quadric hypersurface as a rational hypersurface. This construction plays a role in algebraic geometry and conformal geometry. Properties The first stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle, the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle. The projection is not defined at the projection point = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a point at infinity. This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one-point compactification of the plane. In Cartesian coordinates a point on the sphere and its image on the plane either both are rational points or none of them: Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in coordinates by Along the unit circle, where , there is no inflation of area in the limit, giving a scale factor of 1. Near (0, 0) areas are inflated by a factor of 4, and near infinity areas are inflated by arbitrarily small factors. The metric is given in coordinates by and is the unique formula found in Bernhard Riemann's Habilitationsschrift on the foundations of geometry, delivered at Göttingen in 1854, and entitled Über die Hypothesen welche der Geometrie zu Grunde liegen. No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature. The sphere and the plane have different Gaussian curvatures, so this is impossible. Circles on the sphere that do not pass through the point of projection are projected to circles on the plane. Circles on the sphere that do pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. These properties can be verified by using the expressions of in terms of given in : using these expressions for a substitution in the equation of the plane containing a circle on the sphere, and clearing denominators, one gets the equation of a circle, that is, a second-degree equation with as its quadratic part. The equation becomes linear if that is, if the plane passes through the point of projection. All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, meet at the projection point. 
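A minimal numerical sketch of the first formulation described in the Definition section above, assuming the standard convention of projecting from the north pole (0, 0, 1) of the unit sphere onto the equatorial plane z = 0; the code also checks the round trip and the mapping of the south pole and the equator noted in the Properties discussion.

```python
# Stereographic projection from the north pole N = (0, 0, 1) of the unit sphere
# onto the equatorial plane z = 0, together with its inverse (textbook formulas).
import math

def project(x: float, y: float, z: float) -> tuple:
    """Map a sphere point (other than the north pole) to the plane:
    (X, Y) = (x / (1 - z), y / (1 - z))."""
    return x / (1.0 - z), y / (1.0 - z)

def inverse(X: float, Y: float) -> tuple:
    """Map a plane point back to the sphere:
    (x, y, z) = (2X, 2Y, X^2 + Y^2 - 1) / (X^2 + Y^2 + 1)."""
    s = X * X + Y * Y
    return 2.0 * X / (s + 1.0), 2.0 * Y / (s + 1.0), (s - 1.0) / (s + 1.0)

# Round-trip check on an arbitrary sphere point (given in spherical coordinates).
theta, phi = 2.0, 0.7
p = (math.sin(theta) * math.cos(phi), math.sin(theta) * math.sin(phi), math.cos(theta))
assert all(abs(a - b) < 1e-12 for a, b in zip(p, inverse(*project(*p))))

# The south pole (0, 0, -1) maps to the origin and the equator to the unit circle.
print(project(0.0, 0.0, -1.0))   # (0.0, 0.0)
print(project(1.0, 0.0, 0.0))    # (1.0, 0.0)
```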
Parallel lines, which do not intersect in the plane, are transformed to circles tangent at projection point. Intersecting lines are transformed to circles that intersect transversally at two points in the sphere, one of which is the projection point. (Similar remarks hold about the real projective plane, but the intersection relationships are different there.) The loxodromes of the sphere map to curves on the plane of the form where the parameter measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals. These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles. The stereographic projection relates to the plane inversion in a simple way. Let and be two points on the sphere with projections and on the plane. Then and are inversive images of each other in the image of the equatorial circle if and only if and are reflections of each other in the equatorial plane. In other words, if: is a point on the sphere, but not a 'north pole' and not its antipode, the 'south pole' , is the image of in a stereographic projection with the projection point and is the image of in a stereographic projection with the projection point , then and are inversive images of each other in the unit circle. Wulff net Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy. Instead, it is common to use graph paper designed specifically for the task. This special graph paper is called a stereonet or Wulff net, after the Russian mineralogist George (Yuri Viktorovich) Wulff. The Wulff net shown here is the stereographic projection of the grid of parallels and meridians of a hemisphere centred at a point on the equator (such as the Eastern or Western hemisphere of a planet). In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right or left. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area of the former. If the grid is made finer, this ratio approaches exactly 4. On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property. Not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.) For an example of the use of the Wulff net, imagine two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Let be the point on the lower unit hemisphere whose spherical coordinates are (140°, 60°) and whose Cartesian coordinates are (0.321, 0.557, −0.766). This point lies on a line oriented 60° counterclockwise from the positive -axis (or 30° clockwise from the positive -axis) and 50° below the horizontal plane . Once these angles are known, there are four steps to plotting : Using the grid lines, which are spaced 10° apart in the figures here, mark the point on the edge of the net that is 60° counterclockwise from the point (1, 0) (or 30° clockwise from the point (0, 1)). Rotate the top net until this point is aligned with (1, 0) on the bottom net. Using the grid lines on the bottom net, mark the point that is 50° toward the center from that point. 
Rotate the top net oppositely to how it was oriented before, to bring it back into alignment with the bottom net. The point marked in step 3 is then the projection that we wanted. To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°. Spacings of 2° are common. To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian. Applications within mathematics Complex analysis Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold). This construction has special significance in complex analysis. The point in the real plane can be identified with the complex number . The stereographic projection from the north pole onto the equatorial plane is then Similarly, letting be another complex coordinate, the functions define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the - and -coordinates are then and , with approaching 0 as goes to infinity, and vice versa. This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere. The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere. Visualization of lines and planes The set of all lines through the origin in three-dimensional space forms a space called the real projective plane. This plane is difficult to visualize, because it cannot be embedded in three-dimensional space. However, one can visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere  ≤ 0 in a point, which can then be stereographically projected to a point on a disk in the XY plane. Horizontal lines through the origin intersect the southern hemisphere in two antipodal points along the equator, which project to the boundary of the disk. Either of the two projected points can be considered part of the disk; it is understood that antipodal points on the equator represent a single line in 3 space and a single point on the boundary of the projected disk (see quotient topology). So any set of lines through the origin can be pictured as a set of points in the projected disk. But the boundary points behave differently from the boundary points of an ordinary 2-dimensional disk, in that any one of them is simultaneously close to interior points on opposite sides of the disk (just as two nearly horizontal lines through the origin can project to points on opposite sides of the disk). Also, every plane through the origin intersects the unit sphere in a great circle, called the trace of the plane. This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. 
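The disk picture of lines through the origin described above is straightforward to compute: intersect the line with the southern hemisphere and project stereographically, which keeps every image point inside the closed unit disk. The sketch below assumes the intersection point is projected from the north pole, a convention consistent with the construction described above but stated here as an assumption.

```python
# Plot a line through the origin as a point in the closed unit disk:
# take its intersection with the southern hemisphere (z <= 0) and apply the
# stereographic projection from the north pole, (X, Y) = (x/(1-z), y/(1-z)).
import math

def line_to_disk(vx: float, vy: float, vz: float) -> tuple:
    """Map a direction vector (a line through the origin) to a disk point."""
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    x, y, z = vx / norm, vy / norm, vz / norm
    if z > 0:                      # use the antipodal point in the lower hemisphere
        x, y, z = -x, -y, -z
    return x / (1.0 - z), y / (1.0 - z)

# A vertical line maps to the centre of the disk; horizontal lines map to its boundary.
print(line_to_disk(0.0, 0.0, 1.0))   # (essentially) the origin
print(line_to_disk(1.0, 0.0, 0.0))   # (1.0, 0.0), on the boundary circle
```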
Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass. Computers now make this task much easier. Further associated with each plane is a unique line, called the plane's pole, that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces. This construction is used to visualize directional data in crystallography and geology, as described below. Other visualization Stereographic projection is also applied to the visualization of polytopes. In a Schlegel diagram, an -dimensional polytope in is projected onto an -dimensional sphere, which is then stereographically projected onto . The reduction from to can make the polytope easier to visualize and understand. Arithmetic geometry In elementary arithmetic geometry, stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples. Specifically, stereographic projection from the north pole (0,1) onto the -axis gives a one-to-one correspondence between the rational number points on the unit circle (with ) and the rational points of the -axis. If is a rational point on the -axis, then its inverse stereographic projection is the point which gives Euclid's formula for a Pythagorean triple. Tangent half-angle substitution The pair of trigonometric functions can be thought of as parametrizing the unit circle. The stereographic projection gives an alternative parametrization of the unit circle: Under this reparametrization, the length element of the unit circle goes over to This substitution can sometimes simplify integrals involving trigonometric functions. Applications to other disciplines Cartography The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation. Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin. Planetary science The stereographic is the only projection that maps all circles on a sphere to circles on a plane. This property is valuable in planetary mapping where craters are typical features. The set of circles passing through the point of projection have unbounded radius, and therefore degenerate into lines. Crystallography In crystallography, the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure. 
In electron diffraction, Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere thus providing experimental access to a crystal's stereographic projection. Model Kikuchi maps in reciprocal space, and fringe visibility maps for use with bend contours in direct space, thus act as road maps for exploring orientation space with crystals in the transmission electron microscope. Geology Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation. Similarly, a fault plane is a planar feature that may contain linear features such as slickensides. These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection. The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring. Rock mechanics The stereographic projection is one of the most widely used methods for evaluating rock slope stability. It allows for the representation and analysis of three-dimensional orientation data in two dimensions. Kinematic analysis within stereographic projection is used to assess the potential for various modes of rock slope failures—such as plane, wedge, and toppling failures—which occur due to the presence of unfavorably oriented discontinuities. This technique is particularly useful for visualizing the orientation of rock slopes in relation to discontinuity sets, facilitating the assessment of the most likely failure type. For instance, plane failure is more likely when the strike of a discontinuity set is parallel to the slope, and the discontinuities dip towards the slope at an angle steep enough to allow sliding, but not steeper than the slope itself. Additionally, some authors have developed graphical methods based on stereographic projection to easily calculate geometrical correction parameters—such as those related to the parallelism between the slope and discontinuities, the dip of the discontinuity, and the relative angle between the discontinuity and the slope—for rock mass classifications in slopes, including slope mass rating (SMR) and rock mass rating. Photography Some fisheye lenses use a stereographic projection to capture a wide-angle view. Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. However, stereographic fisheye lenses are typically more expensive to manufacture. Image remapping software, such as Panotools, allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection. The stereographic projection has been used to map spherical panoramas, starting with Horace Bénédict de Saussure's in 1779. This results in effects known as a little planet (when the center of projection is the nadir) and a tube (when the center of projection is the zenith). 
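For the crystallographic and geological plotting described above (poles of planes on a lower-hemisphere net), a minimal sketch may help. The example plane (dip direction 110°, dip 40°) and the unit net radius are assumptions chosen only for illustration; the two radial formulas are the standard equal-angle (Wulff) and equal-area (Schmidt) projections.

```python
import numpy as np

def pole_of_plane(dip_direction_deg, dip_deg):
    """Pole (normal) of a plane as trend/plunge, lower hemisphere."""
    trend = (dip_direction_deg + 180.0) % 360.0
    plunge = 90.0 - dip_deg
    return trend, plunge

def stereonet_xy(trend_deg, plunge_deg, kind="equal-angle"):
    """Project a line (trend/plunge) onto a unit-radius lower-hemisphere net.
    'equal-angle' is the Wulff (stereographic) net, 'equal-area' the Schmidt net."""
    t, p = np.radians(trend_deg), np.radians(plunge_deg)
    if kind == "equal-angle":
        r = np.tan(np.pi / 4 - p / 2)                 # stereographic radius
    else:
        r = np.sqrt(2) * np.sin(np.pi / 4 - p / 2)    # Lambert equal-area radius
    return r * np.sin(t), r * np.cos(t)               # x = east, y = north

# Example: a fault plane dipping 40 degrees toward azimuth 110.
trend, plunge = pole_of_plane(110.0, 40.0)
print(trend, plunge)                # 290.0, 50.0
print(stereonet_xy(trend, plunge))  # pole position on the Wulff net
```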
The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection. See also List of map projections Astrolabe Astronomical clock Poincaré disk model, the analogous mapping of the hyperbolic plane Stereographic projection in cartography Curvilinear perspective Fisheye lens References Sources External links Stereographic Projection and Inversion from Cut-the-Knot DoITPoMS Teaching and Learning Package - "The Stereographic Projection" Videos Proof about Stereographic Projection taking circles in the sphere to circles in the plane Software Stereonet, a software tool for structural geology by Rick Allmendinger. PTCLab, the phase transformation crystallography lab Sphaerica, software tool for straightedge and compass construction on the sphere, including a stereographic projection display option Estereografica Web, a web application for stereographic projection in structural geology and fault kinematics by Ernesto Cristallini. Conformal mappings Conformal projections Crystallography Projective geometry
Stereographic projection
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,229
[ "Crystallography", "Condensed matter physics", "Materials science" ]
143,532
https://en.wikipedia.org/wiki/Pardaxin
Pardaxin is a peptide produced by the Red Sea sole (P4, P5) and the Pacific Peacock sole (P1, P2, P3) that is used as a shark repellent. It causes lysis of mammalian and bacterial cells, similar to melittin. Synthesis In the lab, pardaxin is synthesized using an automated peptide synthesizer. Alternatively, the secretions of the Red Sea sole can be collected and purified. Functions Antibacterial peptide Pardaxin has a helix-hinge-helix structure. This structure is common in peptides that act selectively on bacterial membranes and cytotoxic peptides that lyse mammalian and bacterial cells. Pardaxin shows a significantly lower hemolytic activity towards human red blood cells compared to melittin. The C-terminal tail of pardaxin is responsible for this non-selective activity against the erythrocytes and bacteria. The amphiphilic C-terminal helix is the ion-channel lining segment of the peptide. The N-terminal α-helix is important for the insertion of the peptide to the lipid bilayer of the cell. The mechanism of pardaxin is dependent on the membrane composition. Pardaxin significantly disrupts lipid bilayers composed of zwitterionic lipids, especially those composed of 1-palmitoyl-2-oleoyl-phosphatidylcholine (POPC). This suggests a carpet mechanism for cell lysis. The carpet mechanism is when a high density of peptides accumulates on the target membrane surface. The phospholipid displacement changes in fluidity, and the cellular contents leak out. The presence of anionic lipids or cholesterol was found to reduce the peptide's ability to disrupt bilayers. Shark repellent P. marmoratas and P. pavoninus release pardaxin when threatened by sharks. Pardaxin targets the gills and pharyngeal cavity of the sharks. It results in severe struggling, mouth paralysis, and temporary increase of urea leakage in the gills. This distress is caused by the attack of the cellular membrane of the gills, which causes a large influx of salt ions. Research into creating a commercial shark repellent using pardaxin was discontinued because it dilutes in the water too quickly. It is only effective if sprayed almost directly into a shark's mouth. Cancer treatment Pardaxin inhibits proliferation and induces apoptosis of human cancer cell lines. Its 33-amino acid structure contains many cationic and amphipathic amino acids. This makes it easier for it to interact with anionic membranes, such as those in tumor cells, which are inherently more acidic because of the acidic environment created by more glycolysis. Pardaxin initiates caspase-dependent and caspase-independent apoptosis in human cervical carcinoma cells. Pardaxin triggers reactive oxygen species (ROS). ROS production disrupts protein folding and induces the unfolded protein response (UPR). This causes stress on the endoplasmic reticulum, which releases calcium. This leads to an increase in mitochondrial calcium, dropping its membrane potential. The pore permeability changes, and Cytochrome c (Cyt c) is released. Cyt c activates the caspase chain that leads to apoptosis. ROS also activates the JNK pathway. JNK is phosphorylated, which leads to the phosphorylation of AP-1 (transcription factor consisting of cFOS and Cjun). This results in the activation of caspases as well. ROS also causes a caspase independent pathway that results in apoptosis. When the mitochondrial membrane potential changes, apoptosis-inducing factors (AIFs) are also released. These trigger apoptosis when they enter the nucleus, not needing to involve caspases. 
References Protein families Antimicrobial peptides
Pardaxin
[ "Biology" ]
827
[ "Protein families", "Protein classification" ]
144,143
https://en.wikipedia.org/wiki/Gallium%20arsenide
Gallium arsenide (GaAs) is a III-V direct band gap semiconductor with a zinc blende crystal structure. Gallium arsenide is used in the manufacture of devices such as microwave frequency integrated circuits, monolithic microwave integrated circuits, infrared light-emitting diodes, laser diodes, solar cells and optical windows. GaAs is often used as a substrate material for the epitaxial growth of other III-V semiconductors, including indium gallium arsenide, aluminum gallium arsenide and others. History Gallium arsenide was first synthesized and studied by Victor Goldschmidt in 1926 by passing arsenic vapors mixed with hydrogen over gallium(III) oxide at 600 °C. The semiconductor properties of GaAs and other III-V compounds were patented by Heinrich Welker at Siemens-Schuckert in 1951 and described in a 1952 publication. Commercial production of its monocrystals commenced in 1954, and more studies followed in the 1950s. The first infrared LEDs were made in 1962. Preparation and chemistry In the compound, gallium has a +3 oxidation state. Gallium arsenide single crystals can be prepared by three industrial processes: The vertical gradient freeze (VGF) process. Crystal growth using a horizontal zone furnace in the Bridgman-Stockbarger technique, in which gallium and arsenic vapors react, and free molecules deposit on a seed crystal at the cooler end of the furnace. Liquid encapsulated Czochralski (LEC) growth is used for producing high-purity single crystals that can exhibit semi-insulating characteristics (see below). Most GaAs wafers are produced using this process. Alternative methods for producing films of GaAs include: VPE reaction of gaseous gallium metal and arsenic trichloride: 2 Ga + 2 AsCl3 → 2 GaAs + 3 Cl2 MOCVD reaction of trimethylgallium and arsine: Ga(CH3)3 + AsH3 → GaAs + 3 CH4 Molecular beam epitaxy (MBE) of gallium and arsenic: 4 Ga + As4 → 4 GaAs or 2 Ga + As2 → 2 GaAs Oxidation of GaAs occurs in air, degrading performance of the semiconductor. The surface can be passivated by depositing a cubic gallium(II) sulfide layer using a tert-butyl gallium sulfide compound such as (tBuGaS)7. Semi-insulating crystals In the presence of excess arsenic, GaAs boules grow with crystallographic defects; specifically, arsenic antisite defects (an arsenic atom at a gallium atom site within the crystal lattice). The electronic properties of these defects (interacting with others) cause the Fermi level to be pinned to near the center of the band gap, so that this GaAs crystal has a very low concentration of electrons and holes. This low carrier concentration is similar to an intrinsic (perfectly undoped) crystal, but much easier to achieve in practice. These crystals are called "semi-insulating", reflecting their high resistivity of 10^7–10^9 Ω·cm (which is quite high for a semiconductor, but still much lower than a true insulator like glass). Etching Wet etching of GaAs industrially uses an oxidizing agent such as hydrogen peroxide or bromine water, and the same strategy has been described in a patent relating to processing scrap components containing GaAs, where the gallium is complexed with a hydroxamic acid ("HA"), for example: GaAs + 4 H2O2 + "HA" → "GaA" complex + H3AsO4 + 4 H2O This reaction produces arsenic acid.
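As a worked-arithmetic aside on the MOCVD reaction written above, the sketch below computes the 1:1 stoichiometric mass ratio of arsine to trimethylgallium. This reflects only the written stoichiometry; real reactors are typically run with a large excess of the group-V precursor, so the figure is illustrative rather than a process recipe.

```python
# Standard atomic weights in g/mol (rounded values).
A = {"Ga": 69.723, "As": 74.922, "C": 12.011, "H": 1.008}

M_TMGa = A["Ga"] + 3 * A["C"] + 9 * A["H"]   # Ga(CH3)3, trimethylgallium
M_AsH3 = A["As"] + 3 * A["H"]                # AsH3, arsine

# The written reaction consumes the two precursors 1:1 on a molar basis,
# so the stoichiometric mass of arsine per gram of trimethylgallium is:
print(M_TMGa, M_AsH3)          # ~114.83 and ~77.95 g/mol
print(M_AsH3 / M_TMGa)         # ~0.68 g of AsH3 per g of Ga(CH3)3
```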
Electronics GaAs digital logic GaAs can be used for various transistor types: Metal–semiconductor field-effect transistor (MESFET) High-electron-mobility transistor (HEMT) Junction field-effect transistor (JFET) Heterojunction bipolar transistor (HBT) Metal–oxide–semiconductor field-effect transistor (MOSFET) The HBT can be used in integrated injection logic (I2L). The earliest GaAs logic gate used Buffered FET Logic (BFL). From to 1995 the main logic families used were: Source-coupled FET logic (SCFL) fastest and most complex, (used by TriQuint & Vitesse) Capacitor–diode FET logic (CDFL) (used by Cray for Cray-3) Direct-coupled FET logic (DCFL) simplest and lowest power (used by Vitesse for VLSI gate arrays) Comparison with silicon for electronics GaAs advantages Some electronic properties of gallium arsenide are superior to those of silicon. It has a higher saturated electron velocity and higher electron mobility, allowing gallium arsenide transistors to function at frequencies in excess of 250 GHz. GaAs devices are relatively insensitive to overheating, owing to their wider energy band gap, and they also tend to create less noise (disturbance in an electrical signal) in electronic circuits than silicon devices, especially at high frequencies. This is a result of higher carrier mobilities and lower resistive device parasitics. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. It is also used in the manufacture of Gunn diodes for the generation of microwaves. Another advantage of GaAs is that it has a direct band gap, which means that it can be used to absorb and emit light efficiently. Silicon has an indirect band gap and so is relatively poor at emitting light. As a wide direct band gap material with resulting resistance to radiation damage, GaAs is an excellent material for outer space electronics and optical windows in high power applications. Because of its wide band gap, pure GaAs is highly resistive. Combined with a high dielectric constant, this property makes GaAs a very good substrate for integrated circuits and unlike Si provides natural isolation between devices and circuits. This has made it an ideal material for monolithic microwave integrated circuits (MMICs), where active and essential passive components can readily be produced on a single slice of GaAs. One of the first GaAs microprocessors was developed in the early 1980s by the RCA Corporation and was considered for the Star Wars program of the United States Department of Defense. These processors were several times faster and several orders of magnitude more radiation resistant than their silicon counterparts, but were more expensive. Other GaAs processors were implemented by the supercomputer vendors Cray Computer Corporation, Convex, and Alliant in an attempt to stay ahead of the ever-improving CMOS microprocessor. Cray eventually built one GaAs-based machine in the early 1990s, the Cray-3, but the effort was not adequately capitalized, and the company filed for bankruptcy in 1995. Complex layered structures of gallium arsenide in combination with aluminium arsenide (AlAs) or the alloy AlxGa1−xAs can be grown using molecular-beam epitaxy (MBE) or using metalorganic vapor-phase epitaxy (MOVPE). Because GaAs and AlAs have almost the same lattice constant, the layers have very little induced strain, which allows them to be grown almost arbitrarily thick. 
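The point made above that pure GaAs is highly resistive because of its wider band gap can be illustrated with a rough calculation. The sketch below keeps only the exponential factor exp(−Eg/2kT) of the intrinsic carrier density and ignores the materials' different effective densities of states, so the ratio is indicative only; the band-gap values are assumed room-temperature figures.

```python
import numpy as np

k_B = 8.617e-5                      # Boltzmann constant, eV/K
T = 300.0                           # K

# Representative room-temperature band gaps (assumed values, eV).
E_g = {"Si": 1.12, "GaAs": 1.42}

# Intrinsic carrier density scales as n_i ~ exp(-Eg / 2kT); only this
# exponential part is compared here.
boltz = {m: np.exp(-Eg / (2 * k_B * T)) for m, Eg in E_g.items()}
print("exponential factor ratio Si/GaAs:", boltz["Si"] / boltz["GaAs"])  # ~3e2
```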
This allows extremely high performance and high electron mobility HEMT transistors and other quantum well devices. GaAs is used for monolithic radar power amplifiers (but GaN can be less susceptible to heat damage). Silicon advantages Silicon has three major advantages over GaAs for integrated circuit manufacture. First, silicon is abundant and cheap to process in the form of silicate minerals. The economies of scale available to the silicon industry has also hindered the adoption of GaAs. In addition, a Si crystal has a very stable structure and can be grown to very large diameter boules and processed with very good yields. It is also a fairly good thermal conductor, thus enabling very dense packing of transistors that need to get rid of their heat of operation, all very desirable for design and manufacturing of very large ICs. Such good mechanical characteristics also make it a suitable material for the rapidly developing field of nanoelectronics. Naturally, a GaAs surface cannot withstand the high temperatures needed for diffusion; however a viable and actively pursued alternative as of the 1980s was ion implantation. The second major advantage of Si is the existence of a native oxide (silicon dioxide, SiO2), which is used as an insulator. Silicon dioxide can be incorporated onto silicon circuits easily, and such layers are adherent to the underlying silicon. SiO2 is not only a good insulator (with a band gap of 8.9 eV), but the Si-SiO2 interface can be easily engineered to have excellent electrical properties, most importantly low density of interface states. GaAs does not have a native oxide, does not easily support a stable adherent insulating layer, and does not possess the dielectric strength or surface passivating qualities of the Si-SiO2. Aluminum oxide (Al2O3) has been extensively studied as a possible gate oxide for GaAs (as well as InGaAs). The third advantage of silicon is that it possesses a higher hole mobility compared to GaAs (500 versus 400 cm2V−1s−1). This high mobility allows the fabrication of higher-speed P-channel field-effect transistors, which are required for CMOS logic. Because they lack a fast CMOS structure, GaAs circuits must use logic styles which have much higher power consumption; this has made GaAs logic circuits unable to compete with silicon logic circuits. For manufacturing solar cells, silicon has relatively low absorptivity for sunlight, meaning about 100 micrometers of Si is needed to absorb most sunlight. Such a layer is relatively robust and easy to handle. In contrast, the absorptivity of GaAs is so high that only a few micrometers of thickness are needed to absorb all of the light. Consequently, GaAs thin films must be supported on a substrate material. Silicon is a pure element, avoiding the problems of stoichiometric imbalance and thermal unmixing of GaAs. Silicon has a nearly perfect lattice; impurity density is very low and allows very small structures to be built (down to 5 nm in commercial production as of 2020). In contrast, GaAs has a very high impurity density, which makes it difficult to build integrated circuits with small structures, so the 500 nm process is a common process for GaAs. Silicon has about three times the thermal conductivity of GaAs, with less risk of local overheating in high power devices. Other applications Transistor uses Gallium arsenide (GaAs) transistors are used in the RF power amplifiers for cell phones and wireless communicating. 
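The absorption-thickness comparison above (about 100 micrometers of Si versus a few micrometers of GaAs) can be checked with a single-pass Beer–Lambert estimate. The absorption coefficients below are assumed round numbers representative of the near-infrared part of the solar spectrum, and reflection and light trapping are ignored, so this is only a sketch of the argument.

```python
import numpy as np

def absorbed_fraction(alpha_per_cm, thickness_um):
    """Single-pass Beer-Lambert absorption, ignoring reflection and light trapping."""
    d_cm = thickness_um * 1e-4
    return 1.0 - np.exp(-alpha_per_cm * d_cm)

# Representative absorption coefficients near 800 nm (assumed round numbers).
alpha = {"Si": 1e3, "GaAs": 1e4}   # cm^-1

for material, a in alpha.items():
    for d in (2, 10, 100):          # film thickness in micrometers
        print(material, f"{d:>4} um ->", round(absorbed_fraction(a, d), 3))
```

With these assumed coefficients, a 2 micrometer GaAs film already absorbs most of the incident light, whereas silicon only approaches full absorption near 100 micrometers, consistent with the statement above.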
GaAs wafers are used in laser diodes, photodetectors, and radio frequency (RF) amplifiers for mobile phones and base stations. GaAs transistors are also integral to monolithic microwave integrated circuits (MMICs), utilized in satellite communication and radar systems, as well as in low-noise amplifiers (LNAs) that enhance weak signals. Solar cells and detectors Gallium arsenide is an important semiconductor material for high-cost, high-efficiency solar cells and is used for single-crystalline thin-film solar cells and for multi-junction solar cells. The first known operational use of GaAs solar cells in space was for the Venera 3 mission, launched in 1965. The GaAs solar cells, manufactured by Kvant, were chosen because of their higher performance in high temperature environments. GaAs cells were then used for the Lunokhod rovers for the same reason. In 1970, the GaAs heterostructure solar cells were developed by the team led by Zhores Alferov in the USSR, achieving much higher efficiencies. In the early 1980s, the efficiency of the best GaAs solar cells surpassed that of conventional, crystalline silicon-based solar cells. In the 1990s, GaAs solar cells took over from silicon as the cell type most commonly used for photovoltaic arrays for satellite applications. Later, dual- and triple-junction solar cells based on GaAs with germanium and indium gallium phosphide layers were developed as the basis of a triple-junction solar cell, which held a record efficiency of over 32% and can operate also with light as concentrated as 2,000 suns. This kind of solar cell powered the Mars Exploration Rovers Spirit and Opportunity, which explored Mars' surface. Also many solar cars utilize GaAs in solar arrays, as did the Hubble Telescope. GaAs-based devices hold the world record for the highest-efficiency single-junction solar cell at 29.1% (as of 2019). This high efficiency is attributed to the extreme high quality GaAs epitaxial growth, surface passivation by the AlGaAs, and the promotion of photon recycling by the thin film design. GaAs-based photovoltaics are also responsible for the highest efficiency (as of 2022) of conversion of light to electricity, as researchers from the Fraunhofer Institute for Solar Energy Systems achieved a 68.9% efficiency when exposing a GaAs thin film photovoltaic cell to monochromatic laser light with a wavelength of 858 nanometers. Today, multi-junction GaAs cells have the highest efficiencies of existing photovoltaic cells and trajectories show that this is likely to continue to be the case for the foreseeable future. In 2022, Rocket Lab unveiled a solar cell with 33.3% efficiency based on inverted metamorphic multi-junction (IMM) technology. In IMM, the lattice-matched (same lattice parameters) materials are grown first, followed by mismatched materials. The top cell, GaInP, is grown first and lattice matched to the GaAs substrate, followed by a layer of either GaAs or GaInAs with a minimal mismatch, and the last layer has the greatest lattice mismatch. After growth, the cell is mounted to a secondary handle and the GaAs substrate is removed. A main advantage of the IMM process is that the inverted growth according to lattice mismatch allows a path to higher cell efficiency. Complex designs of AlxGa1−xAs-GaAs devices using quantum wells can be sensitive to infrared radiation (QWIP). GaAs diodes can be used for the detection of X-rays. 
Future outlook of GaAs solar cells Despite GaAs-based photovoltaics being the clear champions of efficiency for solar cells, they have relatively limited use in today's market. In both world electricity generation and world electricity generating capacity, solar electricity is growing faster than any other source of fuel (wind, hydro, biomass, and so on) for the last decade. However, GaAs solar cells have not currently been adopted for widespread solar electricity generation. This is largely due to the cost of GaAs solar cells - in space applications, high performance is required and the corresponding high cost of the existing GaAs technologies is accepted. For example, GaAs-based photovoltaics show the best resistance to gamma radiation and high temperature fluctuations, which are of great importance for spacecraft. But in comparison to other solar cells, III-V solar cells are two to three orders of magnitude more expensive than other technologies such as silicon-based solar cells. The primary sources of this cost are the epitaxial growth costs and the substrate the cell is deposited on. GaAs solar cells are most commonly fabricated utilizing epitaxial growth techniques such as metal-organic chemical vapor deposition (MOCVD) and hydride vapor phase epitaxy (HVPE). A significant reduction in costs for these methods would require improvements in tool costs, throughput, material costs, and manufacturing efficiency. Increasing the deposition rate could reduce costs, but this cost reduction would be limited by the fixed times in other parts of the process such as cooling and heating. The substrate used to grow these solar cells is usually germanium or gallium arsenide which are notably expensive materials. One of the main pathways to reduce substrate costs is to reuse the substrate. An early method proposed to accomplish this is epitaxial lift-off (ELO), but this method is time-consuming, somewhat dangerous (with its use of hydrofluoric acid), and requires multiple post-processing steps. However, other methods have been proposed that use phosphide-based materials and hydrochloric acid to achieve ELO with surface passivation and minimal post-etching residues and allows for direct reuse of the GaAs substrate. There is also preliminary evidence that spalling could be used to remove the substrate for reuse. An alternative path to reduce substrate cost is to use cheaper materials, although materials for this application are not currently commercially available or developed. Yet another consideration to lower GaAs solar cell costs could be concentrator photovoltaics. Concentrators use lenses or parabolic mirrors to focus light onto a solar cell, and thus a smaller (and therefore less expensive) GaAs solar cell is needed to achieve the same results. Concentrator systems have the highest efficiency of existing photovoltaics. So, technologies such as concentrator photovoltaics and methods in development to lower epitaxial growth and substrate costs could lead to a reduction in the cost of GaAs solar cells and forge a path for use in terrestrial applications. Light-emission devices GaAs has been used to produce near-infrared laser diodes since 1962. It is often used in alloys with other semiconductor compounds for these applications. N-type GaAs doped with silicon donor atoms (on Ga sites) and boron acceptor atoms (on As sites) responds to ionizing radiation by emitting scintillation photons. 
At cryogenic temperatures it is among the brightest scintillators known and is a promising candidate for detecting rare electronic excitations from interacting dark matter, due to the following six essential factors: Silicon donor electrons in GaAs have a binding energy that is among the lowest of all known n-type semiconductors. Free electrons above per cm3 are not “frozen out" and remain delocalized at cryogenic temperatures. Boron and gallium are group III elements, so boron as an impurity primarily occupies the gallium site. However, a sufficient number occupy the arsenic site and act as acceptors that efficiently trap ionization event holes from the valence band. After trapping an ionization event hole from the valence band, the boron acceptors can combine radiatively with delocalized donor electrons to produce photons 0.2 eV below the cryogenic band-gap energy (1.52 eV). This is an efficient radiative process that produces scintillation photons that are not absorbed by the GaAs crystal. There is no afterglow, because metastable radiative centers are quickly annihilated by the delocalized electrons. This is evidenced by the lack of thermally induced luminescence. N-type GaAs has a high refractive index (~3.5) and the narrow-beam absorption coefficient is proportional to the free electron density and typically several per cm. One would expect that almost all of the scintillation photons should be trapped and absorbed in the crystal, but this is not the case. Recent Monte Carlo and Feynman path integral calculations have shown that the high luminosity could be explained if most of the narrow beam absorption is not absolute absorption but a novel type of optical scattering from the conduction electrons with a cross section of about 5 x 10−18 cm2 that allows scintillation photons to escape total internal reflection. This cross section is about 107 times larger than Thomson scattering but comparable to the optical cross section of the conduction electrons in a metal mirror. N-type GaAs(Si,B) is commercially grown as 10 kg crystal ingots and sliced into thin wafers as substrates for electronic circuits. Boron oxide is used as an encapsulant to prevent the loss of arsenic during crystal growth, but also has the benefit of providing boron acceptors for scintillation. Fiber optic temperature measurement For this purpose an optical fiber tip of an optical fiber temperature sensor is equipped with a gallium arsenide crystal. Starting at a light wavelength of 850 nm GaAs becomes optically translucent. Since the spectral position of the band gap is temperature dependent, it shifts about 0.4 nm/K. The measurement device contains a light source and a device for the spectral detection of the band gap. With the changing of the band gap, (0.4 nm/K) an algorithm calculates the temperature (all 250 ms). Spin-charge converters GaAs may have applications in spintronics as it can be used instead of platinum in spin-charge converters and may be more tunable. Safety The environment, health and safety aspects of gallium arsenide sources (such as trimethylgallium and arsine) and industrial hygiene monitoring studies of metalorganic precursors have been reported. California lists gallium arsenide as a carcinogen, as do IARC and ECA, and it is considered a known carcinogen in animals. 
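Returning to the fiber-optic temperature measurement described above: the algorithm amounts to inverting a linear band-edge shift of about 0.4 nm/K. A minimal sketch follows, with an assumed calibration point (850 nm at 25 °C) that is illustrative rather than an instrument constant.

```python
# Assumed calibration point: band-edge wavelength of 850 nm at 25 degC
# (both numbers are illustrative, not manufacturer data).
LAMBDA_REF_NM = 850.0
T_REF_C = 25.0
SHIFT_NM_PER_K = 0.4        # band-edge shift quoted above

def temperature_from_band_edge(lambda_measured_nm):
    """Invert the linear band-edge shift to recover the sensor temperature."""
    return T_REF_C + (lambda_measured_nm - LAMBDA_REF_NM) / SHIFT_NM_PER_K

# A 4 nm red-shift of the detected absorption edge corresponds to ~10 K of warming.
print(temperature_from_band_edge(854.0))   # ~35.0 degC
```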
On the other hand, a 2013 review (funded by industry) argued against these classifications, saying that when rats or mice inhale fine GaAs powders (as in previous studies), they get cancer from the resulting lung irritation and inflammation, rather than from a primary carcinogenic effect of the GaAs itself—and that, moreover, fine GaAs powders are unlikely to be created in the production or use of GaAs. See also Aluminium arsenide Aluminium gallium arsenide Arsine Cadmium telluride Gallium antimonide Gallium arsenide phosphide Gallium manganese arsenide Gallium nitride Gallium phosphide Heterostructure emitter bipolar transistor Indium arsenide Indium gallium arsenide Indium phosphide Light-emitting diode MESFET (metal–semiconductor field-effect transistor) MOVPE Multijunction solar cell Photomixing to generate THz Trimethylgallium References Cited sources External links Case Studies in Environmental Medicine: Arsenic Toxicity Physical properties of gallium arsenide (Ioffe Institute) Facts and figures on processing gallium arsenide Arsenides Inorganic compounds Gallium compounds IARC Group 1 carcinogens Optoelectronics III-V semiconductors III-V compounds Solar cells Light-emitting diode materials Zincblende crystal structure
Gallium arsenide
[ "Chemistry" ]
4,679
[ "Inorganic compounds", "Semiconductor materials", "III-V semiconductors", "Light-emitting diode materials", "III-V compounds" ]
144,241
https://en.wikipedia.org/wiki/Molar%20mass
In chemistry, the molar mass () (sometimes called molecular weight or formula weight, but see related quantities for usage) of a chemical compound is defined as the ratio between the mass and the amount of substance (measured in moles) of any sample of the compound. The molar mass is a bulk, not molecular, property of a substance. The molar mass is an average of many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The molar mass is appropriate for converting between the mass of a substance and the amount of a substance for bulk quantities. The molecular mass (for molecular compounds) and formula mass (for non-molecular compounds, such as ionic salts) are commonly used as synonyms of molar mass, differing only in units (daltons vs g/mol); however, the most authoritative sources define it differently. The difference is that molecular mass is the mass of one specific particle or molecule, while the molar mass is an average over many particles or molecules. The molar mass is an intensive property of the substance, that does not depend on the size of the sample. In the International System of Units (SI), the coherent unit of molar mass is kg/mol. However, for historical reasons, molar masses are almost always expressed in g/mol. The mole was defined in such a way that the molar mass of a compound, in g/mol, is numerically equal to the average mass of one molecule or formula unit, in daltons. It was exactly equal before the redefinition of the mole in 2019, and is now only approximately equal, but the difference is negligible for all practical purposes. Thus, for example, the average mass of a molecule of water is about 18.0153 daltons, and the molar mass of water is about 18.0153 g/mol. For chemical elements without isolated molecules, such as carbon and metals, the molar mass is computed dividing by the number of moles of atoms instead. Thus, for example, the molar mass of iron is about 55.845 g/mol. Since 1971, SI defined the "amount of substance" as a separate dimension of measurement. Until 2019, the mole was defined as the amount of substance that has as many constituent particles as there are atoms in 12 grams of carbon-12. During that period, the molar mass of carbon-12 was thus exactly 12 g/mol, by definition. Since 2019, a mole of any substance has been redefined in the SI as the amount of that substance containing an exactly defined number of particles, . The molar mass of a compound in g/mol thus is equal to the mass of this number of molecules of the compound in grams. Molar masses of elements The molar mass of atoms of an element is given by the relative atomic mass of the element multiplied by the molar mass constant, ≈ 1 g/mol. For normal samples from Earth with typical isotope composition, the atomic weight can be approximated by the standard atomic weight or the conventional atomic weight. Multiplying by the molar mass constant ensures that the calculation is dimensionally correct: standard relative atomic masses are dimensionless quantities (i.e., pure numbers) whereas molar masses have units (in this case, grams per mole). Some elements are usually encountered as molecules, e.g. hydrogen (), sulfur (), chlorine (). 
The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule: Molar masses of compounds The molar mass of a compound is given by the sum of the relative atomic mass of the atoms which form the compound multiplied by the molar mass constant : Here, is the relative molar mass, also called formula weight. For normal samples from earth with typical isotope composition, the standard atomic weight or the conventional atomic weight can be used as an approximation of the relative atomic mass of the sample. Examples are: An average molar mass may be defined for mixtures of compounds. This is particularly important in polymer science, where there is usually a molar mass distribution of non-uniform polymers so that different polymer molecules contain different numbers of monomer units. Average molar mass of mixtures The average molar mass of mixtures can be calculated from the mole fractions of the components and their molar masses : It can also be calculated from the mass fractions of the components: As an example, the average molar mass of dry air is 28.96 g/mol. Related quantities Molar mass is closely related to the relative molar mass () of a compound and to the standard atomic weights of its constituent elements. However, it should be distinguished from the molecular mass (which is confusingly also sometimes known as molecular weight), which is the mass of one molecule (of any single isotopic composition), and to the atomic mass, which is the mass of one atom (of any single isotope). The dalton, symbol Da, is also sometimes used as a unit of molar mass, especially in biochemistry, with the definition 1 Da = 1 g/mol, despite the fact that it is strictly a unit of mass (1 Da = 1 u = , as of 2022 CODATA recommended values). Obsolete terms for molar mass include gram atomic mass for the mass, in grams, of one mole of atoms of an element, and gram molecular mass for the mass, in grams, of one mole of molecules of a compound. The gram-atom is a former term for a mole of atoms, and gram-molecule for a mole of molecules. Molecular weight (M.W.) (for molecular compounds) and formula weight (F.W.) (for non-molecular compounds), are older terms for what is now more correctly called the relative molar mass (). This is a dimensionless quantity (i.e., a pure number, without units) equal to the molar mass divided by the molar mass constant. Molecular mass The molecular mass () is the mass of a given molecule: it is usually measured in daltons (Da or u). Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. This is distinct but related to the molar mass, which is a measure of the average molecular mass of all the molecules in a sample and is usually the more appropriate measure when dealing with macroscopic (weigh-able) quantities of a substance. Molecular masses are calculated from the atomic masses of each nuclide, while molar masses are calculated from the standard atomic weights of each element. The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a molar mass of , but individual water molecules have molecular masses which range between () and (). The distinction between molar mass and molecular mass is important because relative molecular masses can be measured directly by mass spectrometry, often to a precision of a few parts per million. 
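The sum-over-atomic-weights rule and the mole-fraction average described above can be reproduced in a few lines. The atomic weights and the simplified dry-air composition below are rounded values assumed for illustration.

```python
# Standard atomic weights in g/mol (rounded values).
A = {"H": 1.008, "O": 15.999, "N": 14.007, "Ar": 39.948}

def molar_mass(formula_counts):
    """Sum of atomic weights times their counts, e.g. {'H': 2, 'O': 1} for water."""
    return sum(A[el] * n for el, n in formula_counts.items())

M_H2O = molar_mass({"H": 2, "O": 1})
print(M_H2O)                                  # ~18.015 g/mol

# Mole-fraction-weighted average for a mixture, using a simplified dry-air
# composition (rounded textbook fractions).
air = {"N2": (0.781, molar_mass({"N": 2})),
       "O2": (0.209, molar_mass({"O": 2})),
       "Ar": (0.010, molar_mass({"Ar": 1}))}
M_air = sum(x * M for x, M in air.values())
print(M_air)                                  # ~28.96 g/mol
```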
This is accurate enough to directly determine the chemical formula of a molecule. DNA synthesis usage The term formula weight has a specific meaning when used in the context of DNA synthesis: whereas an individual phosphoramidite nucleobase to be added to a DNA polymer has protecting groups and has its molecular weight quoted including these groups, the amount of molecular weight that is ultimately added by this nucleobase to a DNA polymer is referred to as the nucleobase's formula weight (i.e., the molecular weight of this nucleobase within the DNA polymer, minus protecting groups). Precision and uncertainties The precision to which a molar mass is known depends on the precision of the atomic masses from which it was calculated (and very slightly on the value of the molar mass constant, which depends on the measured value of the dalton). Most atomic masses are known to a precision of at least one part in ten-thousand, often much better (the atomic mass of lithium is a notable, and serious, exception). This is adequate for almost all normal uses in chemistry: it is more precise than most chemical analyses, and exceeds the purity of most laboratory reagents. The precision of atomic masses, and hence of molar masses, is limited by the knowledge of the isotopic distribution of the element. If a more accurate value of the molar mass is required, it is necessary to determine the isotopic distribution of the sample in question, which may be different from the standard distribution used to calculate the standard atomic mass. The isotopic distributions of the different elements in a sample are not necessarily independent of one another: for example, a sample which has been distilled will be enriched in the lighter isotopes of all the elements present. This complicates the calculation of the standard uncertainty in the molar mass. A useful convention for normal laboratory work is to quote molar masses to two decimal places for all calculations. This is more accurate than is usually required, but avoids rounding errors during calculations. When the molar mass is greater than 1000 g/mol, it is rarely appropriate to use more than one decimal place. These conventions are followed in most tabulated values of molar masses. Measurement Molar masses are almost never measured directly. They may be calculated from standard atomic masses, and are often listed in chemical catalogues and on safety data sheets (SDS). Molar masses typically vary between: 1–238 g/mol for atoms of naturally occurring elements; for simple chemical compounds; for polymers, proteins, DNA fragments, etc. While molar masses are almost always, in practice, calculated from atomic weights, they can also be measured in certain cases. Such measurements are much less precise than modern mass spectrometric measurements of atomic weights and molecular masses, and are of mostly historical interest. All of the procedures rely on colligative properties, and any dissociation of the compound must be taken into account. Vapour density The measurement of molar mass by vapour density relies on the principle, first enunciated by Amedeo Avogadro, that equal volumes of gases under identical conditions contain equal numbers of particles. This principle is included in the ideal gas equation: where is the amount of substance. 
The vapour density (ρ) is given by ρ = nM/V, where n is the amount of substance contained in the volume V. Combining this with the ideal gas equation pV = nRT gives an expression for the molar mass in terms of the vapour density for conditions of known pressure and temperature: M = ρRT/p. Freezing-point depression The freezing point of a solution is lower than that of the pure solvent, and the freezing-point depression (ΔT) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the cryoscopic constant (Kf) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by M = Kf·w / ((1 − w)·ΔT). Boiling-point elevation The boiling point of a solution of an involatile solute is higher than that of the pure solvent, and the boiling-point elevation (ΔT) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the ebullioscopic constant (Kb) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by M = Kb·w / ((1 − w)·ΔT). See also Mole map (chemistry) References Notes External links HTML5 Molar Mass Calculator web and mobile application. Online Molar Mass Calculator with the uncertainty of M and all the calculations shown Molar Mass Calculator Online Molar Mass and Elemental Composition Calculator Stoichiometry Add-In for Microsoft Excel for calculation of molecular weights, reaction coefficients and stoichiometry. It includes both average atomic weights and isotopic weights. Molar mass: chemistry second-level course. Mass
Molar mass
[ "Physics", "Mathematics" ]
2,538
[ "Scalar physical quantities", "Chemical reaction engineering", "Stoichiometry", "Physical quantities", "Quantity", "Mass", "Intensive quantities", "Chemical quantities", "Size", "nan", "Wikipedia categories named after physical quantities", "Molar quantities", "Matter" ]
144,417
https://en.wikipedia.org/wiki/Comoving%20and%20proper%20distances
In standard cosmology, comoving distance and proper distance (or physical distance) are two closely related distance measures used by cosmologists to define distances between objects. Comoving distance factors out the expansion of the universe, giving a distance that does not change in time except due to local factors, such as the motion of a galaxy within a cluster. Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance and proper distance are defined to be equal at the present time. At other times, the Universe's expansion results in the proper distance changing, while the comoving distance remains constant. Comoving coordinates Although general relativity allows the formulation of the laws of physics using arbitrary coordinates, some coordinate choices are more natural or easier to work with. Comoving coordinates are an example of such a natural coordinate choice. They assign constant spatial coordinate values to observers who perceive the universe as isotropic. Such observers are called "comoving" observers because they move along with the Hubble flow. A comoving observer is the only observer who will perceive the universe, including the cosmic microwave background radiation, to be isotropic. Non-comoving observers will see regions of the sky systematically blue-shifted or red-shifted. Thus isotropy, particularly isotropy of the cosmic microwave background radiation, defines a special local frame of reference called the comoving frame. The velocity of an observer relative to the local comoving frame is called the peculiar velocity of the observer. Most large lumps of matter, such as galaxies, are nearly comoving, so that their peculiar velocities (owing to gravitational attraction) are small compared to their Hubble-flow velocity seen by observers in moderately nearby galaxies, (i.e. as seen from galaxies just outside the group local to the observed "lump of matter"). The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell where an event occurs while cosmological time tells when an event occurs. Together, they form a complete coordinate system, giving both the location and time of an event. Space in comoving coordinates is usually referred to as being "static", as most bodies on the scale of galaxies or larger are approximately comoving, and comoving bodies have static, unchanging comoving coordinates. So for a given pair of comoving galaxies, while the proper distance between them would have been smaller in the past and will become larger in the future due to the expansion of the universe, the comoving distance between them remains constant at all times. The expanding Universe has an increasing scale factor which explains how constant comoving distances are reconciled with proper distances that increase with time. Comoving distance and proper distance Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time. The comoving distance from an observer to a distant object (e.g. 
galaxy) can be computed by the following formula (derived using the Friedmann–Lemaître–Robertson–Walker metric): χ = c ∫ dt′/a(t′), integrated from the emission time te to the present time t, where a(t′) is the scale factor, te is the time of emission of the photons detected by the observer, t is the present time, and c is the speed of light in vacuum. Despite being an integral over time, this expression gives the correct distance that would be measured by a set of comoving local rulers at fixed time t, i.e. the "proper distance" (as defined below) after accounting for the time-dependent comoving speed of light via the inverse scale factor term in the integrand. By "comoving speed of light", we mean the velocity of light through comoving coordinates [c/a(t)], which is time-dependent even though locally, at any point along the null geodesic of the light particles, an observer in an inertial frame always measures the speed of light as c, in accordance with special relativity. For a derivation see "Appendix A: Standard general relativistic definitions of expansion and horizons" from Davis & Lineweaver 2004. In particular, see eqs. 16–22 in the referenced 2004 paper [note: in that paper the scale factor is defined as a quantity with the dimension of distance while the radial coordinate is dimensionless.] Definitions Many textbooks use the symbol χ for the comoving distance. However, this must be distinguished from the coordinate distance r in the commonly used comoving coordinate system for a FLRW universe where the metric takes the form (in reduced-circumference polar coordinates, which only works half-way around a spherical universe): ds² = −c² dt² + a(t)² [dr²/(1 − κr²) + r²(dθ² + sin²θ dφ²)]. In this case the comoving coordinate distance r is related to χ by: χ = ∫ dr′/√(1 − κr′²), integrated from 0 to r. Most textbooks and research papers define the comoving distance between comoving observers to be a fixed unchanging quantity independent of time, while calling the dynamic, changing distance between them "proper distance". On this usage, comoving and proper distances are numerically equal at the current age of the universe, but will differ in the past and in the future; if the comoving distance to a galaxy is denoted χ, the proper distance d(t) at an arbitrary time t is simply given by d(t) = a(t)χ, where a(t) is the scale factor (e.g. Davis & Lineweaver 2004). The proper distance between two galaxies at time t is just the distance that would be measured by rulers between them at that time. Uses of the proper distance Cosmological time is identical to locally measured time for an observer at a fixed comoving spatial position, that is, in the local comoving frame. Proper distance is also equal to the locally measured distance in the comoving frame for nearby objects. To measure the proper distance between two distant objects, one imagines that one has many comoving observers in a straight line between the two objects, so that all of the observers are close to each other, and form a chain between the two distant objects. All of these observers must have the same cosmological time. Each observer measures their distance to the nearest observer in the chain, and the length of the chain, the sum of distances between nearby observers, is the total proper distance. It is important to the definition of both comoving distance and proper distance in the cosmological sense (as opposed to proper length in special relativity) that all observers have the same cosmological age.
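The comoving-distance integral above is usually evaluated numerically. Below is a minimal sketch for a flat ΛCDM model, rewriting the time integral as an integral over redshift via a = 1/(1 + z); the cosmological parameters are assumed round values, not figures asserted by this article. The z = 3 entry also shows the present-day Hubble-flow recession speed exceeding c, a point taken up in the discussion that follows.

```python
import numpy as np

# Assumed flat Lambda-CDM parameters (illustrative round numbers).
H0 = 70.0                      # km/s/Mpc
Omega_m, Omega_L = 0.3, 0.7
c = 299792.458                 # km/s

def comoving_distance_mpc(z, n=200_000):
    """chi = c * integral_0^z dz'/H(z'), the redshift form of the time
    integral chi = c * integral dt'/a(t') for a flat FLRW model."""
    dz = z / n
    zp = (np.arange(n) + 0.5) * dz                        # midpoint rule
    Hz = H0 * np.sqrt(Omega_m * (1 + zp) ** 3 + Omega_L)  # H(z)
    return c * np.sum(dz / Hz)

print("Hubble distance c/H0 =", round(c / H0), "Mpc")
for z in (0.1, 1.0, 3.0):
    chi = comoving_distance_mpc(z)
    # Today a(t) = 1, so chi also equals the present proper distance, and the
    # present recession velocity of the Hubble flow there is H0 * chi.
    print(f"z = {z}: chi ~ {chi:.0f} Mpc, v_rec/c ~ {H0 * chi / c:.2f}")
```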
For instance, if one measured the distance along a straight line or spacelike geodesic between the two points, observers situated between the two points would have different cosmological ages when the geodesic path crossed their own world lines, so in calculating the distance along this geodesic one would not be correctly measuring comoving distance or cosmological proper distance. Comoving and proper distances are not the same concept of distance as the concept of distance in special relativity. This can be seen by considering the hypothetical case of a universe empty of mass, where both sorts of distance can be measured. When the density of mass in the FLRW metric is set to zero (an empty 'Milne universe'), then the cosmological coordinate system used to write this metric becomes a non-inertial coordinate system in the Minkowski spacetime of special relativity where surfaces of constant Minkowski proper-time τ appear as hyperbolas in the Minkowski diagram from the perspective of an inertial frame of reference. In this case, for two events which are simultaneous according to the cosmological time coordinate, the value of the cosmological proper distance is not equal to the value of the proper length between these same events, which would just be the distance along a straight line between the events in a Minkowski diagram (and a straight line is a geodesic in flat Minkowski spacetime), or the coordinate distance between the events in the inertial frame where they are simultaneous. If one divides a change in proper distance by the interval of cosmological time where the change was measured (or takes the derivative of proper distance with respect to cosmological time) and calls this a "velocity", then the resulting "velocities" of galaxies or quasars can be above the speed of light, c. Such superluminal expansion is not in conflict with special or general relativity nor the definitions used in physical cosmology. Even light itself does not have a "velocity" of c in this sense; the total velocity of any object can be expressed as the sum where is the recession velocity due to the expansion of the universe (the velocity given by Hubble's law) and is the "peculiar velocity" measured by local observers (with and , the dots indicating a first derivative), so for light is equal to c (−c if the light is emitted towards our position at the origin and +c if emitted away from us) but the total velocity is generally different from c. Even in special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c. In general relativity no coordinate system on a large region of curved spacetime is "inertial", but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" in which the local speed of light is c and in which massive objects such as stars and galaxies always have a local speed smaller than c. The cosmological definitions used to define the velocities of distant objects are coordinate-dependent – there is no general coordinate-independent definition of velocity between distant objects in general relativity. How best to describe and popularize that expansion of the universe is (or at least was) very likely proceeding – at the greatest scale – at above the speed of light, has caused a minor amount of controversy. One viewpoint is presented in Davis and Lineweaver, 2004. Short distances vs. 
long distances Within small distances and short trips, the expansion of the universe during the trip can be ignored. This is because the travel time between any two points for a non-relativistic moving particle will just be the proper distance (that is, the comoving distance measured using the scale factor of the universe at the time of the trip rather than the scale factor "now") between those points divided by the velocity of the particle. If the particle is moving at a relativistic velocity, the usual relativistic corrections for time dilation must be made. See also Distance measure for comparison with other distance measures. Expansion of the universe , for the apparent faster-than-light movement of distant galaxies. Friedmann–Lemaître–Robertson–Walker metric Proper length Redshift, for the link between comoving distance to redshift. Shape of the universe References Further reading Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. Steven Weinberg. Publisher:Wiley-VCH (July 1972). . Principles of Physical Cosmology. P. J. E. Peebles. Publisher:Princeton University Press (1993). . External links Distance measures in cosmology Ned Wright's cosmology tutorial iCosmos: Cosmology Calculator (With Graph Generation ) General method, including locally inhomogeneous case and Fortran 77 software An explanation from the Atlas of the Universe website of distance. Physical cosmology Coordinate charts in general relativity Physical quantities
Comoving and proper distances
[ "Physics", "Astronomy", "Mathematics" ]
2,366
[ "Physical phenomena", "Astronomical sub-disciplines", "Physical quantities", "Quantity", "Theoretical physics", "Astrophysics", "Coordinate charts in general relativity", "Coordinate systems", "Physical properties", "Physical cosmology" ]
144,428
https://en.wikipedia.org/wiki/Degenerate%20matter
Degenerate matter occurs when the Pauli exclusion principle significantly alters a state of matter at low temperature. The term is used in astrophysics to refer to dense stellar objects such as white dwarfs and neutron stars, where thermal pressure alone is not enough to prevent gravitational collapse. The term also applies to metals in the Fermi gas approximation. Degenerate matter is usually modelled as an ideal Fermi gas, an ensemble of non-interacting fermions. In a quantum mechanical description, particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled. This state is referred to as full degeneracy. This degeneracy pressure remains non-zero even at absolute zero temperature. Adding particles or reducing the volume forces the particles into higher-energy quantum states. In this situation, a compression force is required, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure does not depend on the temperature but only on the density of the fermions. Degeneracy pressure keeps dense stars in equilibrium, independent of the thermal structure of the star. A degenerate mass whose fermions have velocities close to the speed of light (particle kinetic energy larger than its rest mass energy) is called relativistic degenerate matter. The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne. Eddington had suggested that the atoms in Sirius B were almost completely ionised and closely packed. Fowler described white dwarfs as composed of a gas of particles that became degenerate at low temperature; he also pointed out that ordinary atoms are broadly similar in regards to the filling of energy levels by fermions. Milne proposed that degenerate matter is found in most of the nuclei of stars, not only in compact stars. Concept Degenerate matter exhibits quantum mechanical properties when a fermion system temperature approaches absolute zero. These properties result from a combination of the Pauli exclusion principle and quantum confinement. The Pauli principle allows only one fermion in each quantum state and the confinement ensures that energy of these states increases as they are filled. The lowest states fill up and fermions are forced to occupy high energy states even at low temperature. While the Pauli principle and Fermi-Dirac distribution applies to all matter, the interesting cases for degenerate matter involve systems of many fermions. These cases can be understood with the help of the Fermi gas model. Examples include electrons in metals and in white dwarf stars and neutrons in neutron stars. The electrons are confined by Coulomb attraction to positive ion cores; the neutrons are confined by gravitation attraction. The fermions, forced in to higher levels by the Pauli principle, exert pressure preventing further compression. The allocation or distribution of fermions into quantum states ranked by energy is called the Fermi-Dirac distribution. Degenerate matter exhibits the results of Fermi-Dirac distribution. 
Degeneracy pressure

Unlike a classical ideal gas, whose pressure is proportional to its temperature,
$P = \frac{N k_\mathrm{B} T}{V},$
where P is pressure, $k_\mathrm{B}$ is the Boltzmann constant, N is the number of particles (typically atoms or molecules), T is temperature, and V is the volume, the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature. At relatively low densities, the pressure of a fully degenerate gas can be derived by treating the system as an ideal Fermi gas, which gives
$P = \frac{(3\pi^2)^{2/3}\,\hbar^2}{5m}\left(\frac{N}{V}\right)^{5/3},$
where $\hbar$ is the reduced Planck constant and m is the mass of the individual particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by
$P = K\left(\frac{N}{V}\right)^{4/3},$
where K is another proportionality constant depending on the properties of the particles making up the gas.

All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure; the degeneracy pressure dominates to the point that temperature has a negligible effect on the total pressure. In a plot of thermal pressure and total pressure in a Fermi gas, the difference between the two is the degeneracy pressure. As the temperature falls, the density and the degeneracy pressure increase, until the degeneracy pressure contributes most of the total pressure.

While degeneracy pressure usually dominates at extremely high densities, it is the ratio between degeneracy pressure and thermal pressure that determines degeneracy. Given a sufficiently drastic increase in temperature (such as during a red giant star's helium flash), matter can become non-degenerate without reducing its density.

Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. The free electron model of metals derives their physical properties by considering the conduction electrons alone as a degenerate gas, while the majority of the electrons are regarded as occupying bound quantum states. This solid state contrasts with the degenerate matter that forms the body of a white dwarf, where most of the electrons would be treated as occupying free particle momentum states.

Exotic examples of degenerate matter include neutron degenerate matter, strange matter, metallic hydrogen and white dwarf matter.

Degenerate gases

Degenerate gases are gases composed of fermions such as electrons, protons, and neutrons rather than molecules of ordinary matter. The electron gas in ordinary metals and the electron gas in the interior of white dwarfs are two examples. Following the Pauli exclusion principle, there can be only one fermion occupying each quantum state. In a degenerate gas, all quantum states are filled up to the Fermi energy. Most stars are supported against their own gravitation by normal thermal gas pressure, while in white dwarf stars the supporting force comes from the degeneracy pressure of the electron gas in their interior. In neutron stars, the degenerate particles are neutrons.
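The two limiting formulas above can be compared numerically. The following Python sketch is a minimal illustration: the density of 10^9 kg/m^3, the temperature of 10^7 K and the composition of two baryons per electron are assumed, white-dwarf-like values chosen for illustration rather than figures from the article. It evaluates the classical thermal pressure and the non-relativistic degeneracy pressure of the electron gas; at this density the electrons are in fact mildly relativistic, so the non-relativistic formula only indicates the rough scale.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34    # reduced Planck constant, J*s
k_B  = 1.380649e-23       # Boltzmann constant, J/K
m_e  = 9.1093837015e-31   # electron mass, kg
m_u  = 1.66053906660e-27  # atomic mass unit, kg

# Illustrative white-dwarf-like conditions (assumed, not from the article):
rho  = 1.0e9   # mass density, kg/m^3
T    = 1.0e7   # temperature, K
mu_e = 2.0     # baryons per electron (carbon/oxygen composition)

# Electron number density n = N/V
n = rho / (mu_e * m_u)

# Classical ideal-gas (thermal) pressure of the electrons: P = n * k_B * T
P_thermal = n * k_B * T

# Non-relativistic degeneracy pressure of an ideal Fermi gas:
# P = (3*pi^2)^(2/3) * hbar^2 / (5*m) * n^(5/3)
P_degeneracy = (3.0 * math.pi**2) ** (2.0 / 3.0) * hbar**2 / (5.0 * m_e) * n ** (5.0 / 3.0)

print(f"electron number density   : {n:.3e} m^-3")
print(f"thermal pressure          : {P_thermal:.3e} Pa")
print(f"degeneracy pressure       : {P_degeneracy:.3e} Pa")
print(f"ratio degeneracy/thermal  : {P_degeneracy / P_thermal:.1f}")
# At these conditions the degeneracy pressure exceeds the thermal pressure by
# a large factor, which is why temperature has little effect on the total
# pressure in a white dwarf interior.
```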
A fermion gas in which all quantum states below a given energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy.

Electron degeneracy

In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed "degeneracy pressure".

Under high densities, matter becomes a degenerate gas when all electrons are stripped from their parent atoms. The core of a star, once hydrogen-burning nuclear fusion reactions stop, becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons which have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey ordinary gas laws. White dwarfs are luminous not because they are generating energy but rather because they have trapped a large amount of heat which is gradually radiated away. Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles position right up against each other to produce degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of electrons are quite high and the rate of collision between electrons and other particles is quite low; therefore, degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed of most of the electrons, because they are stuck in fully occupied quantum states. Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter, where if the mass of the matter is increased, the object becomes bigger. In degenerate gas, when the mass is increased, the particles become spaced closer together due to gravity (and the pressure is increased), so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter.

There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with typical compositions expected for white dwarf stars (carbon and oxygen with two baryons per electron). This mass cut-off is appropriate only for a star supported by ideal electron degeneracy pressure under Newtonian gravity; in general relativity and with realistic Coulomb corrections, the corresponding mass limit is around 1.38 solar masses. The limit may also change with the chemical composition of the object, as it affects the ratio of mass to number of electrons present.
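The quoted limit of about 1.44 solar masses, and its dependence on composition through the number of baryons per electron, can be checked numerically. The Python sketch below assumes the standard Chandrasekhar mass formula for an ideal, fully relativistic electron-degenerate star, $M_\mathrm{Ch} = \frac{\omega_3\sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_\mathrm{H})^2}$ with $\omega_3 \approx 2.018$ a constant from the Lane–Emden equation; the formula itself is not stated in the article above and is included here as a standard result.

```python
import math

# Physical constants (SI units)
hbar  = 1.054571817e-34    # reduced Planck constant, J*s
c     = 2.99792458e8       # speed of light, m/s
G     = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.67262192369e-27  # hydrogen (proton) mass, kg
M_sun = 1.98892e30         # solar mass, kg

# Constant from the Lane-Emden equation for the n = 3 polytrope (standard value)
omega_3 = 2.018236

def chandrasekhar_mass(mu_e):
    """Ideal-degeneracy Chandrasekhar mass for mu_e baryons per electron."""
    return (omega_3 * math.sqrt(3.0 * math.pi) / 2.0
            * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2)

# Carbon/oxygen composition: two baryons per electron, as in the text above.
M_ch = chandrasekhar_mass(mu_e=2.0)
print(f"Chandrasekhar mass: {M_ch:.3e} kg = {M_ch / M_sun:.2f} solar masses")
# Expected output is roughly 1.4 solar masses, consistent with the quoted
# limit of about 1.44 solar masses for carbon-oxygen white dwarfs.
```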
The object's rotation, which counteracts the gravitational force, also changes the limit for any particular object. Celestial objects below this limit are white dwarf stars, formed by the gradual shrinking of the cores of stars that run out of fuel. During this shrinking, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (primarily supported by neutron degeneracy pressure) or a black hole may be formed instead.

Neutron degeneracy

Neutron degeneracy is analogous to electron degeneracy and exists in neutron stars, which are partially supported by the pressure from a degenerate neutron gas. Neutron stars are formed either directly from the supernova of stars with masses between 10 and 25 M☉ (solar masses), or by white dwarfs acquiring a mass in excess of the Chandrasekhar limit of 1.44 M☉, usually either as a result of a merger or by accreting matter from a close binary partner. Above the Chandrasekhar limit, the gravitational pressure at the core exceeds the electron degeneracy pressure, and electrons begin to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture). The result is an extremely compact star composed of "nuclear matter", which is predominantly a degenerate neutron gas with a small admixture of degenerate proton and electron gases.

Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas because the more massive neutron has a much shorter wavelength at a given energy. This phenomenon is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure is higher because the compactness of a neutron star makes gravitational forces much stronger than in a less compact body of similar mass. The result is a star with a diameter on the order of a thousandth that of a white dwarf.

The properties of neutron matter set an upper limit to the mass of a neutron star, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for white dwarf stars.

Proton degeneracy

Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. However, because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modelled as a correction to the equations of state of electron-degenerate matter.

Quark degeneracy

At densities greater than those supported by neutron degeneracy, quark matter is expected to occur. Several variations of this hypothesis have been proposed that represent quark-degenerate states. Strange matter is a degenerate gas of quarks that is often assumed to contain strange quarks in addition to the usual up and down quarks. Color superconductor materials are degenerate gases of quarks in which quarks pair up in a manner similar to Cooper pairing in electrical superconductors.
The equations of state for the various proposed forms of quark-degenerate matter vary widely, and are usually also poorly defined, due to the difficulty of modelling strong force interactions. Quark-degenerate matter may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. It may also occur in hypothetical quark stars, formed by the collapse of objects above the Tolman–Oppenheimer–Volkoff mass limit for neutron-degenerate objects. Whether quark-degenerate matter forms at all in these situations depends on the equations of state of both neutron-degenerate matter and quark-degenerate matter, both of which are poorly known. Quark stars are considered to be an intermediate category between neutron stars and black holes.

History

Quantum mechanics uses the word 'degenerate' in two ways: for degenerate energy levels and as the low-temperature ground-state limit for states of matter. Electron degeneracy pressure occurs in ground-state systems which are non-degenerate in energy levels. The term "degeneracy" derives from work on the specific heat of gases that pre-dates the use of the term in quantum mechanics. In 1914 Walther Nernst described the reduction of the specific heat of gases at very low temperature as "degeneration"; he attributed this to quantum effects. In subsequent work in various papers on quantum thermodynamics by Albert Einstein, by Max Planck, and by Erwin Schrödinger, the effect at low temperatures came to be called "gas degeneracy". A fully degenerate gas has no volume dependence on pressure when temperature approaches absolute zero.

Early in 1927 Enrico Fermi and separately Llewellyn Thomas developed a semi-classical model for electrons in a metal. The model treated the electrons as a gas. Later in 1927, Arnold Sommerfeld applied the Pauli principle via Fermi–Dirac statistics to this electron gas model, computing the specific heat of metals; the result became the Fermi gas model for metals. Sommerfeld called the low-temperature region with quantum effects a "wholly degenerate gas". Also in 1927 Ralph H. Fowler applied Fermi's model to the puzzle of the stability of white dwarf stars. This approach was extended to relativistic models by later studies and, with the work of Subrahmanyan Chandrasekhar, became the accepted model for star stability.

See also
Bose–Einstein condensate – Degenerate bosonic gas
Metallic hydrogen – High-pressure phase of hydrogen

Citations

References

External links
Lecture 17: Stellar Evolution. Discusses degenerate gases in models of stars

Concepts in astrophysics Degenerate stars Exotic matter Phases of matter
Degenerate matter
[ "Physics", "Chemistry" ]
3,340
[ "Concepts in astrophysics", "Phases of matter", "Astrophysics", "Exotic matter", "Matter" ]