id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
9,697,733 | https://en.wikipedia.org/wiki/Contact%20mechanics | Contact mechanics is the study of the deformation of solids that touch each other at one or more points. A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as normal stress) and frictional stresses acting tangentially between the surfaces (shear stress). Normal contact mechanics or frictionless contact mechanics focuses on normal stresses caused by applied normal forces and by the adhesion present on surfaces in close contact, even if they are clean and dry.
Frictional contact mechanics emphasizes the effect of friction forces.
Contact mechanics is part of mechanical engineering. The physical and mathematical formulation of the subject is built upon the mechanics of materials and continuum mechanics and focuses on computations involving elastic, viscoelastic, and plastic bodies in static or dynamic contact. Contact mechanics provides necessary information for the safe and energy efficient design of technical systems and for the study of tribology, contact stiffness, electrical contact resistance and indentation hardness. Principles of contact mechanics are applied in applications such as locomotive wheel-rail contact, coupling devices, braking systems, tires, bearings, combustion engines, mechanical linkages, gasket seals, metalworking, metal forming, ultrasonic welding, electrical contacts, and many others. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm.
The original work in contact mechanics dates back to 1882 with the publication of the paper "Über die Berührung fester elastischer Körper" ("On the contact of elastic solids") by Heinrich Hertz. Hertz attempted to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the localized stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the modulus of elasticity of the materials in contact. It gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the modulus of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load bearing capabilities and fatigue life in bearings, gears, and any other bodies where two surfaces are in contact.
History
Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics. For example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii.
It was not until nearly one hundred years later that Kenneth L. Johnson, Kevin Kendall, and Alan D. Roberts found a similar solution for the case of adhesive contact. This theory was rejected by Boris Derjaguin and co-workers who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the Derjaguin–Muller–Toporov (DMT) model (after Derjaguin, M. V. Muller and Yu. P. Toporov), and the Johnson et al. model came to be known as the Johnson–Kendall–Roberts (JKR) model for adhesive elastic contact. This rejection proved to be instrumental in the development of the David Tabor and later Daniel Maugis parameters that quantify which contact model (of the JKR and DMT models) represents adhesive contact better for specific materials.
Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Frank Philip Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact. Through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces.
The contributions of J. F. Archard (1957) must also be mentioned in discussion of pioneering works in this field. Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force. Further important insights along these lines were provided by John A. Greenwood and J. B. P. Williamson (1966), A. W. Bush (1975), and Bo N. J. Persson (2002). The main findings of these works were that the true contact surface in rough materials is generally proportional to the normal force, while the parameters of individual micro-contacts (pressure and size of the micro-contact) are only weakly dependent upon the load.
Classical solutions for non-adhesive elastic contact
The theory of contact between elastic bodies can be used to find contact areas and indentation depths for simple geometries. Some commonly used solutions are listed below. The theory used to compute these solutions is discussed later in the article. Solutions for a multitude of other technically relevant shapes, e.g. the truncated cone, the worn sphere, rough profiles, hollow cylinders, etc., can be found in the literature.
Contact between a sphere and a half-space
An elastic sphere of radius indents an elastic half-space where total deformation is , causing a contact area of radius
The applied force is related to the displacement by
where
and , are the elastic moduli and , the Poisson's ratios associated with each body.
The distribution of normal pressure in the contact area as a function of distance from the center of the circle is
where is the maximum contact pressure given by
The radius of the circle is related to the applied load by the equation
The total deformation is related to the maximum contact pressure by
The maximum shear stress occurs in the interior at for .
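Because the equations themselves are not reproduced in this extract, the following minimal numerical sketch uses the commonly quoted textbook form of the Hertz sphere-on-half-space relations; the symbol names (F, R, E*, ν, a, d, p0) and the material values are illustrative assumptions, not taken from the source.

```python
import math

def hertz_sphere_on_halfspace(F, R, E1, nu1, E2, nu2):
    """Return (contact radius a, indentation d, peak pressure p0) for a sphere
    of radius R pressed with force F against a flat elastic half-space."""
    # Effective modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)   # contact radius
    d = a**2 / R                                        # total indentation
    p0 = 3.0 * F / (2.0 * math.pi * a**2)               # maximum contact pressure
    return a, d, p0

# Illustrative numbers: a 10 mm diameter steel sphere pressed onto a steel flat with 100 N
a, d, p0 = hertz_sphere_on_halfspace(F=100.0, R=5e-3, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(f"a = {a*1e6:.0f} um, d = {d*1e6:.1f} um, p0 = {p0/1e9:.1f} GPa")
```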
Contact between two spheres
For contact between two spheres of radii and , the area of contact is a circle of radius . The equations are the same as for a sphere in contact with a half-space except that the effective radius is defined as
Contact between two crossed cylinders of equal radius
This is equivalent to contact between a sphere of radius and a plane.
Contact between a rigid cylinder with flat end and an elastic half-space
If a rigid cylinder is pressed into an elastic half-space, it creates a pressure distribution described by
where is the radius of the cylinder and
The relationship between the indentation depth and the normal force is given by
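As a hedged reconstruction (the article's own symbols are not preserved in this extract), the commonly quoted results for a rigid flat-ended cylindrical punch of radius a pressed to depth d into a half-space of effective modulus E* are:

```latex
% Standard textbook form; notation assumed, not taken verbatim from the source.
\[
  p(r) \;=\; \frac{F}{2\pi a \sqrt{a^{2}-r^{2}}},
  \qquad
  F \;=\; 2\,a\,E^{*}\,d .
\]
```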
Contact between a rigid conical indenter and an elastic half-space
In the case of indentation of an elastic half-space of Young's modulus using a rigid conical indenter, the depth of the contact region and contact radius are related by
with defined as the angle between the plane and the side surface of the cone. The total indentation depth is given by:
The total force is
The pressure distribution is given by
The stress has a logarithmic singularity at the tip of the cone.
Contact between two cylinders with parallel axes
In contact between two cylinders with parallel axes, the force is linearly proportional to the length of cylinders L and to the indentation depth d:
The radii of curvature are entirely absent from this relationship. The contact radius is described through the usual relationship
with
as in contact between two spheres. The maximum pressure is equal to
Bearing contact
The contact in the case of bearings is often a contact between a convex surface (male cylinder or sphere) and a concave surface (female cylinder or sphere: bore or hemispherical cup).
Method of dimensionality reduction
Some contact problems can be solved with the method of dimensionality reduction (MDR). In this method, the initial three-dimensional system is replaced with a contact of a body with a linear elastic or viscoelastic foundation (see fig.). The properties of one-dimensional systems coincide exactly with those of the original three-dimensional system, if the form of the bodies is modified and the elements of the foundation are defined according to the rules of the MDR. MDR is based on the solution to axisymmetric contact problems first obtained by Ludwig Föppl (1941) and Gerhard Schubert (1942).
However, for exact analytical results, it is required that the contact problem is axisymmetric and the contacts are compact.
Hertzian theory of non-adhesive elastic contact
The classical theory of contact focused primarily on non-adhesive contact where no tension force is allowed to occur within the contact area, i.e., contacting bodies can be separated without adhesion forces. Several analytical and numerical approaches have been used to solve contact problems that satisfy the no-adhesion condition. Complex forces and moments are transmitted between the bodies where they touch, so problems in contact mechanics can become quite sophisticated. In addition, the contact stresses are usually a nonlinear function of the deformation. To simplify the solution procedure, a frame of reference is usually defined in which the objects (possibly in motion relative to one another) are static. They interact through surface tractions (or pressures/stresses) at their interface.
As an example, consider two objects which meet at some surface in the (x,y)-plane with the z-axis assumed normal to the surface. One of the bodies will experience a normally-directed pressure distribution and in-plane surface traction distributions and over the region . In terms of a Newtonian force balance, the forces:
must be equal and opposite to the forces established in the other body. The moments corresponding to these forces:
are also required to cancel between bodies so that they are kinematically immobile.
Assumptions in Hertzian theory
The following assumptions are made in determining the solutions of Hertzian contact problems:
The strains are small and within the elastic limit.
The surfaces are continuous and non-conforming (implying that the area of contact is much smaller than the characteristic dimensions of the contacting bodies).
Each body can be considered an elastic half-space.
The surfaces are frictionless.
Additional complications arise when some or all these assumptions are violated and such contact problems are usually called non-Hertzian.
Analytical solution techniques
Analytical solution methods for non-adhesive contact problem can be classified into two types based on the geometry of the area of contact. A conforming contact is one in which the two bodies touch at multiple points before any deformation takes place (i.e., they just "fit together"). A non-conforming contact is one in which the shapes of the bodies are dissimilar enough that, under zero load, they only touch at a point (or possibly along a line). In the non-conforming case, the contact area is small compared to the sizes of the objects and the stresses are highly concentrated in this area. Such a contact is called concentrated, otherwise it is called diversified.
A common approach in linear elasticity is to superpose a number of solutions each of which corresponds to a point load acting over the area of contact. For example, in the case of loading of a half-plane, the Flamant solution is often used as a starting point and then generalized to various shapes of the area of contact. The force and moment balances between the two bodies in contact act as additional constraints to the solution.
Point contact on a (2D) half-plane
A starting point for solving contact problems is to understand the effect of a "point-load" applied to an isotropic, homogeneous, and linear elastic half-plane, shown in the figure to the right. The problem may be either plane stress or plane strain. This is a boundary value problem of linear elasticity subject to the traction boundary conditions:
where is the Dirac delta function. The boundary conditions state that there are no shear stresses on the surface and a singular normal force P is applied at (0, 0). Applying these conditions to the governing equations of elasticity produces the result
for some point, , in the half-plane. The circle shown in the figure indicates a surface on which the maximum shear stress is constant. From this stress field, the strain components and thus the displacements of all material points may be determined.
Line contact on a (2D) half-plane
Normal loading over a region
Suppose, rather than a point load , a distributed load is applied to the surface instead, over the range . The principle of linear superposition can be applied to determine the resulting stress field as the solution to the integral equations:
Shear loading over a region
The same principle applies for loading on the surface in the plane of the surface. These kinds of tractions would tend to arise as a result of friction. The solution is similar to the above (for both singular loads and distributed loads ) but altered slightly:
These results may themselves be superposed onto those given above for normal loading to deal with more complex loads.
Point contact on a (3D) half-space
Analogously to the Flamant solution for the 2D half-plane, fundamental solutions are known for the linearly elastic 3D half-space as well. These were found by Boussinesq for a concentrated normal load and by Cerruti for a tangential load. See the section on this in Linear elasticity.
Numerical solution techniques
Distinctions between conforming and non-conforming contact do not have to be made when numerical solution schemes are employed to solve contact problems. These methods do not rely on further assumptions within the solution process since they are based solely on the general formulation of the underlying equations. Besides the standard equations describing the deformation and motion of bodies, two additional inequalities can be formulated. The first simply restricts the motion and deformation of the bodies by the assumption that no penetration can occur. Hence the gap between two bodies can only be positive or zero
where denotes contact. The second assumption in contact mechanics is related to the fact that no tension force is allowed to occur within the contact area (contacting bodies can be lifted up without adhesion forces). This leads to an inequality which the stresses have to obey at the contact interface. It is formulated for the normal stress .
At locations where there is contact between the surfaces the gap is zero, i.e. , and there the normal stress is different from zero, indeed, . At locations where the surfaces are not in contact the normal stress is identical to zero; , while the gap is positive; i.e., . This type of complementarity formulation can be expressed in the so-called Kuhn–Tucker form, viz.
These conditions are valid in a general way. The mathematical formulation of the gap depends upon the kinematics of the underlying theory of the solid (e.g., linear or nonlinear solid in two or three dimensions, beam or shell model). By restating the normal stress in terms of the contact pressure, ; i.e., the Kuhn–Tucker problem can be restated in standard complementarity form, i.e. In the linear elastic case the gap can be formulated as where is the rigid body separation, is the geometry/topography of the contact (cylinder and roughness) and is the elastic deformation/deflection. If the contacting bodies are approximated as linear elastic half-spaces, the Boussinesq–Cerruti integral equation solution can be applied to express the deformation () as a function of the contact pressure (); i.e., where for line loading of an elastic half-space and for point loading of an elastic half-space.
After discretization the linear elastic contact mechanics problem can be stated in standard Linear Complementarity Problem (LCP) form.
where is a matrix, whose elements are the so-called influence coefficients relating the contact pressure and the deformation. The strict LCP formulation of the CM problem presented above allows for direct application of well-established numerical solution techniques such as Lemke's pivoting algorithm. The Lemke algorithm has the advantage that it finds the numerically exact solution within a finite number of iterations. The MATLAB implementation presented by Almqvist et al. is one example that can be employed to solve the problem numerically. In addition, an example code for an LCP solution of a 2D linear elastic contact mechanics problem has also been made public at MATLAB file exchange by Almqvist et al.
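As a minimal sketch of the complementarity formulation described above (not the MATLAB code by Almqvist et al.), the following solves g = h + Bp ≥ 0, p ≥ 0, pᵀg = 0 with a projected Gauss–Seidel sweep. The influence matrix used here is a toy kernel with a Boussinesq-like 1/distance decay, chosen only to make the example self-contained; a real analysis would assemble the coefficients from the Boussinesq–Cerruti solution.

```python
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
h = 0.5 * x**2 - 0.05                      # initial gap: parabolic profile with 0.05 overlap
idx = np.arange(n)
B = 1.0 / (np.abs(idx[:, None] - idx[None, :]) + 1.0)   # toy compliance matrix (1/distance decay)

p = np.zeros(n)                            # contact pressures (unknowns)
for sweep in range(2000):                  # projected Gauss-Seidel iterations
    for i in range(n):
        r = h[i] + B[i] @ p - B[i, i] * p[i]   # gap at node i excluding its own pressure
        p[i] = max(0.0, -r / B[i, i])          # enforce non-negative pressure
g = h + B @ p                              # final gap: g >= 0 and p*g ~ 0 hold node-wise
print(f"min gap = {g.min():.2e}, total load = {p.sum():.3f}")
```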
Contact between rough surfaces
When two bodies with rough surfaces are pressed against each other, the true contact area formed between the two bodies, , is much smaller than the apparent or nominal contact area . The mechanics of contacting rough surfaces are discussed in terms of normal contact mechanics and static frictional interactions. Natural and engineering surfaces typically exhibit roughness features, known as asperities, across a broad range of length scales down to the molecular level, with surface structures exhibiting self-affinity, also known as surface fractality. It is recognized that the self-affine structure of surfaces is the origin of the linear scaling of true contact area with applied pressure. Assuming a model of shearing welded contacts in tribological interactions, this ubiquitously observed linearity between contact area and pressure can also be considered the origin of the linearity of the relationship between static friction and applied normal force.
In contact between a "random rough" surface and an elastic half-space, the true contact area is related to the normal force by
with equal to the root mean square (also known as the quadratic mean) of the surface slope and . The median pressure in the true contact surface
can be reasonably estimated as half of the effective elastic modulus multiplied with the root mean square of the surface slope .
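Written out in symbols (assumed notation, since the formula itself is not reproduced here: A true contact area, F_N normal force, E* effective elastic modulus, h' rms surface slope), the two statements above read:

```latex
\[
  \langle p \rangle \;=\; \frac{F_{N}}{A} \;\approx\; \tfrac{1}{2}\,E^{*}\,h'
  \qquad\Longleftrightarrow\qquad
  A \;\approx\; \frac{2\,F_{N}}{E^{*}\,h'} .
\]
```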
An overview of the GW model
Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). They considered the contact between a smooth rigid plane and a nominally flat deformable rough surface covered with round-tipped asperities of the same radius R. Their theory assumes that the deformation of each asperity is independent of that of its neighbours and is described by the Hertz model. The heights of asperities have a random distribution. The probability that asperity height is between and is . The authors calculated the number of contact spots n, the total contact area and the total load P in the general case. They gave those formulas in two forms: in basic form and using standardized variables. If one assumes that N asperities cover a rough surface, then the expected number of contacts is
The expected total area of contact can be calculated from the formula
and the expected total force is given by
where:
R, radius of curvature of the microasperity,
z, height of the microasperity measured from the profile line,
d, the separation of the surfaces,
, composite Young's modulus of elasticity,
, moduli of elasticity of the surfaces,
, Poisson's ratios of the surfaces.
Greenwood and Williamson introduced standardized separation and standardized height distribution whose standard deviation is equal to one. Below are presented the formulas in the standardized form.
where:
d is the separation,
is the nominal contact area,
is the surface density of asperities,
is the effective Young modulus.
and can be determined when the terms are calculated for the given surfaces using the convolution of the surface roughness . Several studies have followed the suggested curve fits for assuming a Gaussian surface height distribution, with curve fits presented by Arcoumanis et al. and Jedynak among others. It has been repeatedly observed that engineering surfaces do not demonstrate Gaussian surface height distributions, e.g. Peklenik. Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining the terms for any measured surfaces. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.
Recently the exact approximants to and were published by Jedynak. They are given by the following rational formulas, which are approximants to the integrals . They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
For the coefficients are
The maximum relative error is .
For the coefficients are
The maximum relative error is . The paper also contains the exact expressions for
where erfc(z) means the complementary error function and is the modified Bessel function of the second kind.
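The standardized GW integrals can also be evaluated numerically. The sketch below uses the commonly quoted form F_n(h) = ∫_h^∞ (s − h)^n φ(s) ds with a standard-normal height density and the usual prefactors for the number of contact spots, contact area, and load; the symbols (η asperity density, R tip radius, σ height standard deviation, A_n nominal area, E* effective modulus) and the numerical values are illustrative assumptions rather than values from the source.

```python
import numpy as np
from scipy.integrate import quad

phi = lambda s: np.exp(-0.5 * s**2) / np.sqrt(2.0 * np.pi)   # standard-normal height density

def F(n, h):
    """Standardized GW integral F_n(h) = integral over s in [h, inf) of (s - h)^n * phi(s)."""
    val, _ = quad(lambda s: (s - h)**n * phi(s), h, np.inf)
    return val

# Illustrative surface parameters (assumed, for demonstration only)
eta, R, sigma, An, E_star = 1e9, 50e-6, 0.5e-6, 1e-4, 100e9

for h in (0.5, 1.0, 2.0):                                     # separation in units of sigma
    n_spots = eta * An * F(0, h)
    area    = np.pi * eta * An * R * sigma * F(1, h)
    load    = (4.0 / 3.0) * eta * An * E_star * np.sqrt(R) * sigma**1.5 * F(1.5, h)
    print(f"h = {h}: spots = {n_spots:.3g}, area = {area:.3g} m^2, load = {load:.3g} N")
```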
For the situation where the asperities on the two surfaces have a Gaussian height distribution and the peaks can be assumed to be spherical, the average contact pressure is sufficient to cause yield when where is the uniaxial yield stress and is the indentation hardness. Greenwood and Williamson defined a dimensionless parameter called the plasticity index that could be used to determine whether contact would be elastic or plastic.
The Greenwood-Williamson model requires knowledge of two statistically dependent quantities; the standard deviation of the surface roughness and the curvature of the asperity peaks. An alternative definition of the plasticity index has been given by Mikic. Yield occurs when the pressure is greater than the uniaxial yield stress. Since the yield stress is proportional to the indentation hardness , Mikic defined the plasticity index for elastic-plastic contact to be
In this definition represents the micro-roughness in a state of complete plasticity and only one statistical quantity, the rms slope, is needed which can be calculated from surface measurements. For , the surface behaves elastically during contact.
In both the Greenwood-Williamson and Mikic models the load is assumed to be proportional to the deformed area. Hence, whether the system behaves plastically or elastically is independent of the applied normal force.
An overview of the GT model
The model proposed by John A. Greenwood and John H. Tripp (GT) extended the GW model to contact between two rough surfaces. The GT model is widely used in the field of elastohydrodynamic analysis.
The most frequently cited equations given by the GT model are for the asperity contact area
and load carried by asperities
where:
, roughness parameter,
, nominal contact area,
, Stribeck oil film parameter, first defined by Stribeck as ,
, effective elastic modulus,
, statistical functions introduced to match the assumed Gaussian distribution of asperities.
Matthew Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining the terms for any measured surfaces. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction.
The exact solutions for and were first presented by Jedynak. They are expressed as follows. They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
where erfc(z) means the complementary error function and is the modified Bessel function of the second kind.
The paper also contains a comprehensive review of existing approximants to . The new proposals give the most accurate approximants to and reported in the literature. They are given by the following rational formulas, which are very accurate approximants to the integrals . They are calculated for the Gaussian distribution of asperities
For the coefficients are
The maximum relative error is .
For the coefficients are
The maximum relative error is .
Adhesive contact between elastic bodies
When two solid surfaces are brought into close proximity, they experience attractive van der Waals forces. R. S. Bradley's van der Waals model provides a means of calculating the tensile force between two rigid spheres with perfectly smooth surfaces. The Hertzian model of contact does not consider adhesion to be possible. However, in the late 1960s, several contradictions were observed when the Hertz theory was compared with experiments involving contact between rubber and glass spheres.
It was observed that, though Hertz theory applied at large loads, at low loads
the area of contact was larger than that predicted by Hertz theory,
the area of contact had a non-zero value even when the load was removed, and
there was even strong adhesion if the contacting surfaces were clean and dry.
This indicated that adhesive forces were at work. The Johnson-Kendall-Roberts (JKR) model and the Derjaguin-Muller-Toporov (DMT) models were the first to incorporate adhesion into Hertzian contact.
Bradley model of rigid contact
It is commonly assumed that the surface force between two atomic planes at a distance from each other can be derived from the Lennard-Jones potential. With this assumption
where is the force (positive in compression), is the total surface energy of both surfaces per unit area, and is the equilibrium separation of the two atomic planes.
The Bradley model applied the Lennard-Jones potential to find the force of adhesion between two rigid spheres. The total force between the spheres is found to be
where are the radii of the two spheres.
The two spheres separate completely when the pull-off force is achieved at , at which point
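The commonly quoted result, in assumed notation since the expression is not preserved in this extract (w: total surface energy per unit area, R = R1R2/(R1+R2): reduced radius, z0: equilibrium separation), is:

```latex
\[
  F_{\text{pull-off}} \;=\; 2\pi w R ,
  \qquad \text{reached at } z = z_{0}.
\]
```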
JKR model of elastic contact
To incorporate the effect of adhesion in Hertzian contact, Johnson, Kendall, and Roberts formulated the JKR theory of adhesive contact using a balance between the stored elastic energy and the loss in surface energy. The JKR model considers the effect of contact pressure and adhesion only inside the area of contact. The general solution for the pressure distribution in the contact area in the JKR model is
Note that in the original Hertz theory, the term containing was neglected on the grounds that tension could not be sustained in the contact zone. For contact between two spheres
where is the radius of the area of contact, is the applied force, is the total surface energy of both surfaces per unit contact area,
are the radii, Young's moduli, and Poisson's ratios of the two spheres, and
The approach distance between the two spheres is given by
The Hertz equation for the area of contact between two spheres, modified to take into account the surface energy, has the form
When the surface energy is zero, , the Hertz equation for contact between two spheres is recovered. When the applied load is zero, the contact radius is
The tensile load at which the spheres are separated (i.e., ) is predicted to be
This force is also called the pull-off force. Note that this force is independent of the moduli of the two spheres. However, there is another possible solution for the value of at this load. This is the critical contact area , given by
If we define the work of adhesion as
where are the adhesive energies of the two surfaces and is an interaction term, we can write the JKR contact radius as
The tensile load at separation is
and the critical contact radius is given by
The critical depth of penetration is
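A minimal numerical sketch of the JKR relations in their commonly quoted form follows; the symbols (w work of adhesion, R reduced radius, E* effective modulus) and the sample values are assumptions for illustration, and the expressions are quoted from standard references rather than reproduced from this extract.

```python
import numpy as np

def jkr_contact_radius(F, R, E_star, w):
    """JKR contact radius for applied load F (valid for F >= -1.5*pi*w*R)."""
    K = 4.0 * E_star / 3.0
    disc = 6.0 * np.pi * w * R * F + (3.0 * np.pi * w * R) ** 2
    return ((R / K) * (F + 3.0 * np.pi * w * R + np.sqrt(disc))) ** (1.0 / 3.0)

# Illustrative values: a 10 nm tip on a stiff substrate with w = 50 mJ/m^2
R, E_star, w = 10e-9, 50e9, 0.05
F_pull_off = -1.5 * np.pi * w * R            # JKR pull-off (separation) force
a_zero_load = jkr_contact_radius(0.0, R, E_star, w)
print(f"pull-off force = {F_pull_off:.2e} N, contact radius at zero load = {a_zero_load:.2e} m")
```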
DMT model of elastic contact
The Derjaguin–Muller–Toporov (DMT) model is an alternative model for adhesive contact which assumes that the contact profile remains the same as in Hertzian contact but with additional attractive interactions outside the area of contact.
The radius of contact between two spheres from DMT theory is
and the pull-off force is
When the pull-off force is achieved the contact area becomes zero and there is no singularity in the contact stresses at the edge of the contact area.
In terms of the work of adhesion
and
Tabor parameter
In 1977, Tabor showed that the apparent contradiction between the JKR and DMT theories could be resolved by noting that the two theories were the extreme limits of a single theory parametrized by the Tabor parameter () defined as
where is the equilibrium separation between the two surfaces in contact. The JKR theory applies to large, compliant spheres for which is large. The DMT theory applies for small, stiff spheres with small values of .
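A hedged reconstruction of the usual definition follows (conventions in the literature differ by an order-one prefactor; z0 is the equilibrium separation and w the work of adhesion):

```latex
\[
  \mu \;=\; \left( \frac{R\,w^{2}}{E^{*2}\,z_{0}^{3}} \right)^{1/3} .
\]
```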
Subsequently, Derjaguin and his collaborators, by applying Bradley's surface force law to an elastic half-space, confirmed that as the Tabor parameter increases, the pull-off force falls from the Bradley value to the JKR value . More detailed calculations were later done by Greenwood, revealing the S-shaped load/approach curve which explains the jumping-on effect. A more efficient method of doing the calculations and additional results were given by Feng.
Maugis–Dugdale model of elastic contact
Further improvement to the Tabor idea was provided by Maugis who represented the surface force in terms of a Dugdale cohesive zone approximation such that the work of adhesion is given by
where is the maximum force predicted by the Lennard-Jones potential and is the maximum separation obtained by matching the areas under the Dugdale and Lennard-Jones curves (see adjacent figure). This means that the attractive force is constant for . There is no further penetration in compression. Perfect contact occurs in an area of radius and adhesive forces of magnitude extend to an area of radius . In the region , the two surfaces are separated by a distance with and . The ratio is defined as
.
In the Maugis–Dugdale theory, the surface traction distribution is divided into two parts - one due to the Hertz contact pressure and the other from the Dugdale adhesive stress. Hertz contact is assumed in the region . The contribution to the surface traction from the Hertz pressure is given by
where the Hertz contact force is given by
The penetration due to elastic compression is
The vertical displacement at is
and the separation between the two surfaces at is
The surface traction distribution due to the adhesive Dugdale stress is
The total adhesive force is then given by
The compression due to Dugdale adhesion is
and the gap at is
The net traction on the contact area is then given by and the net contact force is . When the adhesive traction drops to zero.
Non-dimensionalized values of are introduced at this stage that are defined as
In addition, Maugis proposed a parameter which is equivalent to the Tabor parameter . This parameter is defined as
where the step cohesive stress equals the theoretical stress of the Lennard-Jones potential
Zheng and Yu suggested another value for the step cohesive stress
to match the Lennard-Jones potential, which leads to
Then the net contact force may be expressed as
and the elastic compression as
The equation for the cohesive gap between the two bodies takes the form
This equation can be solved to obtain values of for various values of and . For large values of , and the JKR model is obtained. For small values of the DMT model is retrieved.
Carpick–Ogletree-Salmeron (COS) model
The Maugis–Dugdale model can only be solved iteratively if the value of is not known a priori. The Carpick–Ogletree–Salmeron (COS) approximate solution (after Robert Carpick, D. Frank Ogletree and Miquel Salmeron) simplifies the process by using the following relation to determine the contact radius :
where is the contact area at zero load, and is a transition parameter that is related to by
The case corresponds exactly to JKR theory while corresponds to DMT theory. For intermediate cases the COS model corresponds closely to the Maugis–Dugdale solution for .
Influence of contact shape
Even in the presence of perfectly smooth surfaces, geometry can come into play in the form of the macroscopic shape of the contacting region. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, detachment occurs not instantaneously but by detachment fronts that start at pointed corners and travel inwards, until the final configuration is reached, which for macroscopically isotropic shapes is almost circular. The main parameter determining the adhesive strength of flat contacts turns out to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film.
See also
Electrical contact resistance (ECR)
References
External links
A MATLAB routine to solve the linear elastic contact mechanics problem entitled "An LCP solution of the linear elastic contact mechanics problem" is provided at the file exchange at MATLAB Central.
Contact mechanics calculator.
Detailed calculations and formulae of JKR theory for two spheres.
A Matlab code for Hertz contact analysis (includes line, point and elliptical cases).
JKR, MD, and DMT models of adhesion (Matlab routines).
Bearings (mechanical)
Mechanical engineering
Solid mechanics | Contact mechanics | [
"Physics",
"Engineering"
] | 6,732 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Mechanical engineering",
"Mechanics"
] |
7,470,561 | https://en.wikipedia.org/wiki/Fractography | Fractography is the study of the fracture surfaces of materials. Fractographic methods are routinely used to determine the cause of failure in engineering structures, especially in product failure and the practice of forensic engineering or failure analysis. In material science research, fractography is used to develop and evaluate theoretical models of crack growth behavior.
One of the aims of fractographic examination is to determine the cause of failure by studying the characteristics of a fractured surface. Different types of crack growth (e.g. fatigue, stress corrosion cracking, hydrogen embrittlement) produce characteristic features on the surface, which can be used to help identify the failure mode. The overall pattern of cracking can be more important than a single crack, however, especially in the case of brittle materials like ceramics and glasses.
Usage
Fractography is a widely used technique in forensic engineering, forensic materials engineering and fracture mechanics to understand the causes of failures and also to verify theoretical failure predictions with real life failures. It is of use in forensic science for analysing broken products which have been used as weapons, such as broken bottles for example. Thus a defendant might claim that a bottle was faulty and broke accidentally when it impacted a victim of an assault. Fractography could show the allegation to be false, and that considerable force was needed to smash the bottle before using the broken end as a weapon to deliberately attack the victim. Bullet holes in glass windscreens or windows can also indicate the direction of impact and the energy of the projectile. In these cases, the overall pattern of cracking is vital to reconstructing the sequence of events, rather than the specific characteristics of a single crack. Fractography can determine whether a cause of train derailment was a faulty rail, or if a wing of a plane had fatigue cracks before a crash.
Fractography is used also in materials research, since fracture properties can correlate with other properties and with structure of materials.
Feature identification
Origin
An important aim of fractography is to establish and examine the origin of cracking, as examination at the origin may reveal the cause of crack initiation. Initial fractographic examination is commonly carried out on a macro scale utilising low power optical microscopy and oblique lighting techniques to identify the extent of cracking, possible modes and likely origins. Optical microscopy or macrophotography are often enough to pinpoint the nature of the failure and the causes of crack initiation and growth if the loading pattern is known.
Common features that may cause crack initiation are inclusions, voids or empty holes in the material, contamination, and stress concentrations.
Fatigue crack growth
The image of a broken crankshaft shows the component failed from a surface defect near the bulb at lower centre. The semi-circular marks near the origin indicate a crack growing up into the bulk material by a process known as fatigue. The crankshaft also shows hachures, which are the lines on fracture surfaces that can be traced back to the origin of the fracture. Some modes of crack growth can leave characteristic marks on the surface that identify the mode of crack growth and origin on a macro scale, e.g. beachmarks or striations on fatigue cracks.
Microscopy
Microscopes can be used to determine the initiation point and the mechanism that caused crack growth. The information can be obtained from images of the fracture surface known as fractographs and used in constructing diagrams. A schematic fracture surface map can be used to isolate and identify the features on the surface which show how the product failed. Such a map can be a valuable way of presenting information, showing clearly how a crack was initiated and how it grew with time.
USB Microscopy
USB microscopes are especially useful for examining fracture surface features since they are small enough to be hand-held. A variety of camera sizes and resolutions are available commercially at low cost. The camera cable plugs into the computer via a USB plug and most such devices come with illumination at the camera supplied by LED lights.
Scanning electron microscopy
In many cases, fractography requires examination at a finer scale, which is usually carried out in a scanning electron microscope or SEM. The resolution is much higher than the optical microscope, although samples are examined in a partial vacuum and colour is absent. Improved SEMs now allow examination at near atmospheric pressures, allowing examination of sensitive materials such as those of biological origin.
The SEM is especially useful when combined with Energy dispersive X-ray spectroscopy or EDX, which can be performed in the microscope, so very small areas of the sample can be analysed for their elemental composition.
Example
Breast implant
A cusp is formed where brittle cracks meet, as shown on the picture of a failed catheter (Cp). The cusp was formed by brittle failure of the catheter on a breast implant in silicone rubber. The origin of the cracks is at the shoulder at the left-hand side. Identifying such features will allow a fracture surface map to be made of the surface being studied. The implant failed because of overload, all the imposed loads being concentrated at the connection between the catheter and the bag holding salt solution. As a result, the patient reported loss of fluid from the implant, and it was extracted surgically and replaced.
In the case of the failed breast implant catheter, the crack path was very simple, but the cause more subtle. Further scanning electron microscopy showed numerous microcracks between the bag and the catheter, indicating that the adhesive bond between the two components had failed prematurely, perhaps through faulty manufacture. The material of construction of both bag and catheter, silicone rubber is a physically weak elastomer, and product design must allow for the low tear or shear strength of the material.
Maritime Patrol Aircraft
A non-critical crack occurred in the fastener hole of a lower wing plank. The plank was made from a 3.2 mm thick AA7075-T6 aluminium alloy. The time of detection of the crack and the aircraft's counting g-meter allowed investigators to determine the loads the aircraft had experienced in service. The cracks viewed in an SEM showed evidence and patterns of fatigue. The cyclic loading and fatigue appeared to have progressively worsened, with some cracks being large and others small in length and width, indicating occasional forces stronger than 2 g. The g-meter showed that the aircraft had flown 2,500 flights, with the g-force occasionally exceeding 2 g. This was more than the maximum advertised by the manufacturer. The conclusion was that fatigue and cracks should be inspected regularly on old or heavily used aircraft. The study also found novel ways for quantitative fractography to be used on aircraft, comparing load history (in this case from the g-meter) with records of the alloy experiencing fatigue in a laboratory setting under different pressures, cycles, and temperatures. The study used the database of cracks to create a model that predicts forces and crack progression.
See also
Conchoidal fracture
Fatigue (material)
Failure analysis
Forensic engineering
Forensic materials engineering
Fracture
Forensic polymer engineering
Forensic science
References
Lewis, Peter Rhys, Reynolds, K, and Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004).
Mills, Kathleen Fractography, American Society of Metals (ASM) handbook, volume 12 (1991).
N.T. Goldsmith, R.J.H. Wanhill, L. Molent, Quantitative fractography of fatigue and an illustrative case study, Engineering Failure Analysis, volume 96 (February 2019) Pages 426–435.
Fracture mechanics
Mechanical failure
Forensic disciplines
Materials degradation | Fractography | [
"Materials_science",
"Engineering"
] | 1,541 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
7,473,698 | https://en.wikipedia.org/wiki/Multipoint%20ground | A multipoint ground is an alternate type of electrical installation that attempts to solve the ground loop and mains hum problem by creating many alternate paths for electrical energy to find its way back to ground. The distinguishing characteristic of a multipoint ground is the use of many interconnected grounding conductors into a loose grid configuration. There will be many paths between any two points in a multipoint grounding system, rather than the single path found in a star topology ground. This type of ground may also be known as a Signal Reference Grid or Ground (SRG) or an Equipotential Ground.
Advantages
If installed correctly, it can maintain reference ground potential much better than a star topology in a similar application across a wider range of frequencies and currents.
Disadvantages
A multipoint ground system is more complicated to install and maintain over the long term, and can be more expensive to install.
Star topology systems can be converted to multipoint systems by installing new conductors between old existing ones. However, this should be done with care as it can inadvertently introduce noise onto signal lines during the conversion process. The noise can be diminished over time as noisy and failed components are removed and repaired, but some isolation of high current (e.g. motors and lighting) and sensitive low current (e.g. amplifiers and radios) equipment may always be necessary.
Design considerations
A multipoint grounding system can solve several problems, but they must all be addressed in turn. The size of the conductors must be designed to meet the expected load in operations and in lightning protection. The amount of cross bonding, and the topology of the grids, is determined by the expected frequencies in the signals to be carried and the uses the installation will be put to.
A ground grid is provided primarily for safety, and the size of the conductors is probably governed by local building or electrical code. One factor to keep in mind is that since the final grid will have multiple paths to ground, the final system resistance to ground will likely be lower than for a typical star ground. But this does not change the need for adequate conductor size to any given piece of equipment in case of a fault.
Lightning protection is provided by bonding the multipoint ground grid to one or more grounding rods under or at the perimeter of the building, and then up to the lightning rods. If the building has significant metal framing elements, these should be bonded to the lightning rods and grounding rods as well.
If the building has large motors, driving such things as fans, pumps, elevators, etc., these should also be on the multipoint grid. However, they should not be on segments of the grid that will service equipment such as audio amplifiers, small signal radio circuits, computer networks, sensitive electrical instrumentation, etc. Since building two grids into the same building may be prohibitively expensive, a good compromise is to connect the low frequency, high current equipment to the grid at or near the ground rods and entrance transformers, in such a way that their load will not flow across the segment of the grid connected to the low current equipment. Thus the system is still an electrically continuous unit, but motor noise does not impinge directly into signal paths.
The cross bonding is governed by the frequencies and wavelengths to be protected against. A multipoint ground is at its best when it allows currents of many different frequencies to find a path to ground. If the system is expected to always have no more than mains-frequency current present, the wavelengths involved at 50 or 60 Hz will cause the system design to become a star topology. But if higher frequencies are present, the nodes need to be closer. In general, the spacing between nodes should be less than 1/8 of the shortest wavelength present. This will guarantee that current can always flow no matter which path it tries to take. If node spacing of less than 1/8 wavelength cannot be achieved, then at least include as many cross connects as possible, as closely spaced as possible.
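As a rough sizing aid for the 1/8-wavelength rule above, the following sketch estimates the maximum node spacing for a given highest frequency; the 0.8 velocity factor is an assumption (a representative value for conductors in an installation) and should be replaced with the actual figure for the grid being designed.

```python
C = 299_792_458.0                      # free-space speed of light, m/s

def max_node_spacing(f_hz, velocity_factor=0.8):
    """Largest grid node spacing (m) satisfying spacing < wavelength / 8."""
    wavelength = velocity_factor * C / f_hz
    return wavelength / 8.0

for f in (60.0, 1e6, 100e6):
    print(f"{f:>13,.0f} Hz -> spacing below ~ {max_node_spacing(f):,.2f} m")
```

At mains frequency the permitted spacing is so large that the grid effectively reduces to a star topology, as noted above, while at radio frequencies the required spacing shrinks to well under a metre.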
External links
Mil-HDBK-419A Grounding, Bonding & Shielding for Electronic Equipment & Facilities, Volume 1 & 2
Mil-HDBK-188-124 Grounding, Bonding and Shielding For Common Long Haul/Tactical Communication Systems Including Ground Based Communications-Electronics Facilities and Equipments
Electrical circuits
Electric power
Electrical safety
Electrical wiring | Multipoint ground | [
"Physics",
"Engineering"
] | 863 | [
"Physical quantities",
"Electrical systems",
"Building engineering",
"Physical systems",
"Power (physics)",
"Electronic engineering",
"Electric power",
"Electrical engineering",
"Electrical wiring",
"Electrical circuits"
] |
7,473,734 | https://en.wikipedia.org/wiki/Annual%20Review%20of%20Biomedical%20Engineering | Annual Review of Biomedical Engineering is an academic journal published by Annual Reviews. In publication since 1999, this journal covers the significant developments in the broad field of biomedical engineering with an annual volume of review articles. It is edited by Martin L. Yarmush and Mehmet Toner. As of 2024, Journal Citation Reports gave the journal has an impact factor of 12.8 ranking it fourth out of 122 journals in the category "Biomedical Engineering". As of 2021, Annual Review of Biomedical Engineering is being published as open access, under the Subscribe to Open model.
History
The Annual Review of Biomedical Engineering was first published in 1999 by the nonprofit publisher Annual Reviews. The inaugural editor was Martin L. Yarmush; Yarmush remained editor until 2021, at which point he was co-editor along with Mehmet Toner. Though it began with a physical edition, it is now only published electronically.
Scope and indexing
The Annual Review of Biomedical Engineering defines its scope as covering significant developments relevant to biomedical engineering. Included subfields are biomechanics; biomaterials; computational genomics; proteomics; healthcare, biochemical, and tissue engineering; biomonitoring; and medical imaging. As of 2022, Journal Citation Reports lists the journal's impact factor as 11.324, ranking it seventh of 98 journal titles in the category "Biomedical Engineering". It is abstracted and indexed in Scopus, Science Citation Index Expanded, MEDLINE, EMBASE, Inspec and Academic Search, among others.
Editorial processes
The Annual Review of Biomedical Engineering is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
Current editorial board
As of 2022, the editorial committee consists of the co-editors and the following members:
James S. Duncan
Martha L. Gray
Todd P. Coleman
Frances Ligler
Wendy M. Murray
Yaakov Nahmias
Eleftherios Terry Papoutsakis
Erkin Şeker
Marjolein C. H. van der Meulen
Jennifer L. West
George R. Wodicka
See also
List of engineering journals and magazines
References
Biomedical engineering journals
Biomedical engineering
Biomedical Engineering
Academic journals established in 1999
Annual journals
English-language journals | Annual Review of Biomedical Engineering | [
"Engineering",
"Biology"
] | 548 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
7,474,102 | https://en.wikipedia.org/wiki/Hit%20to%20lead | Hit to lead (H2L) also known as lead generation is a stage in early drug discovery where small molecule hits from a high throughput screen (HTS) are evaluated and undergo limited optimization to identify promising lead compounds. These lead compounds undergo more extensive optimization in a subsequent step of drug discovery called lead optimization (LO). The drug discovery process generally follows the following path that includes a hit to lead stage:
Target validation (TV) → Assay development → High-throughput screening (HTS) → Hit to lead (H2L) → Lead optimization (LO) → Preclinical development → Clinical development
The hit to lead stage starts with confirmation and evaluation of the initial screening hits and is followed by synthesis of analogs (hit expansion). Typically the initial screening hits display binding affinities for their biological target in the micromolar (10−6 molar concentration) range. Through limited H2L optimization, the affinities of the hits are often improved by several orders of magnitude to the nanomolar (10−9 M) range. The hits also undergo limited optimization to improve metabolic half-life, so that the compounds can be tested in animal models of disease, and to improve selectivity against binding to other biological targets that may result in undesirable side effects.
On average, only one in every 5,000 compounds that enter drug discovery and reach the stage of preclinical development becomes an approved drug.
Hit confirmation
After hits are identified from a high throughput screen, the hits are confirmed and evaluated using the following methods:
Confirmatory testing: compounds that were found active against the selected target are re-tested using the same assay conditions used during the HTS to make sure that the activity is reproducible.
Dose response curve: the compound is tested over a range of concentrations to determine the concentration that results in half maximal binding or activity (IC50 or EC50 value respectively); a minimal curve-fitting sketch is given after this list.
Orthogonal testing: confirmed hits are assayed using a different assay which is usually closer to the target physiological condition or using a different technology.
Secondary screening: confirmed hits are tested in a functional cellular assay to determine efficacy.
Synthetic tractability: medicinal chemists evaluate compounds according to their synthesis feasibility and other parameters such as up-scaling or cost of goods.
Biophysical testing: nuclear magnetic resonance (NMR), isothermal titration calorimetry (ITC), dynamic light scattering (DLS), surface plasmon resonance (SPR), dual polarisation interferometry (DPI), microscale thermophoresis (MST) are commonly used to assess whether the compound binds effectively to the target, the kinetics, thermodynamics, and stoichiometry of binding, any associated conformational change and to rule out promiscuous binding.
Hit ranking and clustering: Confirmed hit compounds are then ranked according to the various hit confirmation experiments.
Freedom to operate evaluation: hit structures are checked in specialized databases to determine if they are patentable.
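The following is a minimal sketch of how an IC50 can be extracted from a dose–response series like the one mentioned in the list above, using a four-parameter logistic (Hill) fit; the data points, parameter names, and starting guesses are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model for an inhibition dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])    # molar concentrations (made up)
resp = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 5.0])       # % activity remaining (made up)

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 1e-6, 1.0])
print(f"fitted IC50 = {popt[2]:.2e} M, Hill slope = {popt[3]:.2f}")
```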
Hit expansion
Following hit confirmation, several compound clusters will be chosen according to their characteristics in the previously defined tests. An ideal compound cluster will contain members that possess:
high affinity towards the target (less than 1 μM)
selectivity versus other targets
significant efficacy in a cellular assay
druglikeness (moderate molecular weight and lipophilicity, usually estimated as ClogP). Affinity, molecular weight and lipophilicity can be linked in a single parameter such as ligand efficiency and lipophilic efficiency.
low to moderate binding to human serum albumin
low interference with P450 enzymes and P-glycoproteins
low cytotoxicity
metabolic stability
high cell membrane permeability
sufficient water solubility (above 10 μM)
chemical stability
synthetic tractability
patentability
The project team will usually select between three and six compound series to be further explored. The next step will allow the testing of analogous compounds to determine a quantitative structure-activity relationship (QSAR). Analogs can be quickly selected from an internal library or purchased from commercially available sources ("SAR by catalog" or "SAR by purchase"). Medicinal chemists will also start synthesizing related compounds using different methods such as combinatorial chemistry, high-throughput chemistry, or more classical organic chemistry synthesis.
Lead optimization phase
The objective of this drug discovery phase is to synthesize lead compounds, new analogs with improved potency, reduced off-target activities, and physicochemical/metabolic properties suggestive of reasonable in vivo pharmacokinetics. This optimization is accomplished through chemical modification of the hit structure, with modifications chosen by employing knowledge of the structure–activity relationship (SAR) as well as structure-based design if structural information about the target is available.
Lead optimization is concerned with experimental testing and confirmation of the compound based on animal efficacy models and ADMET (in vitro and in situ) tools that may be followed by target identification and target validation.
Best Practices for Hit Finding
For educational purposes the European Federation for Medicinal Chemistry and Chemical Biology (EFMC) shared a series of webinars including 'Best Practices for Hit Finding' as well as 'Hit Generation Case Studies'.
See also
References
Drug discovery | Hit to lead | [
"Chemistry",
"Biology"
] | 1,074 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
7,474,152 | https://en.wikipedia.org/wiki/Hybrid%20speciation | Hybrid speciation is a form of speciation where hybridization between two different species leads to a new species, reproductively isolated from the parent species. Previously, reproductive isolation between two species and their parents was thought to be particularly difficult to achieve, and thus hybrid species were thought to be very rare. With DNA analysis becoming more accessible in the 1990s, hybrid speciation has been shown to be a somewhat common phenomenon, particularly in plants. In botanical nomenclature, a hybrid species is also called a nothospecies. Hybrid species are by their nature polyphyletic.
Ecology
A hybrid may occasionally be better fitted to the local environment than the parental lineage, and as such, natural selection may favor these individuals. If reproductive isolation is subsequently achieved, a separate species may arise. Reproductive isolation may be genetic, ecological, behavioral, spatial, or a combination of these.
If reproductive isolation fails to establish, the hybrid population may merge with either or both parent species. This will lead to an influx of foreign genes into the parent population, a situation called introgression. Introgression is a source of genetic variation and can in itself facilitate speciation. There is evidence that introgression is a ubiquitous phenomenon in plants and animals, even in humans, where genetic material from Neanderthals and Denisovans is responsible for many of the immune genes in non-African populations.
Ecological constraints
For a hybrid form to persist, it must be able to exploit the available resources better than either parent species, with which, in most cases, it will have to compete. For example, while grizzly bears and polar bears may be able to mate and produce offspring, a grizzly–polar bear hybrid is apparently less suited to either of the parents' ecological niches than the original parent species themselves. So although the hybrid is fertile (i.e. capable of reproduction and thus theoretically able to propagate), this poor adaptation would be unlikely to support the establishment of a permanent population.
Likewise, lions and tigers have historically overlapped in a portion of their range and can theoretically produce wild hybrids: ligers, which are a cross between a male lion and female tiger, and tigons, which are a cross between a male tiger and a female lion; however, tigers and lions have thus far only hybridized in captivity. In both ligers and tigons, the females are fertile and the males are sterile. One of these hybrids (the tigon) carries growth-inhibitor genes from both parents and thus is smaller than either parent species and might in the wild come into competition with smaller carnivores, e.g. the leopard. The other hybrid, the liger, ends up larger than either of its parents: about a thousand pounds (450 kilograms) fully grown. No tiger-lion hybrids are known from the wild, and the ranges of the two species no longer overlap (tigers are not found in Africa, and while there was formerly overlap in the distribution of the two species in Asia, both have been extirpated from much of their respective historic ranges, and the Asiatic lion is now restricted to the Gir Forest National Park, where tigers are mostly absent).
Some situations may favor hybrid populations. One example is rapid turnover of available environment types, like the historical fluctuation of water level in Lake Malawi, a situation that generally favors speciation. A similar situation can be found where closely related species occupy a chain of islands. This will allow any present hybrid population to move into new, unoccupied habitats, avoiding direct competition with parent species and giving a hybrid population time and space to establish. Genetics, too, can occasionally favor hybrids. In the Amboseli National Park in Kenya, yellow baboons and anubis baboons regularly interbreed. The hybrid males reach maturity earlier than their pure-bred cousins, setting up a situation where the hybrid population may over time replace one or both of the parent species in the area.
Genetics of hybridization
Genetics are more variable and malleable in plants than in animals, probably reflecting the higher activity level in animals. Hybrids' genetics will necessarily be less stable than those of species evolving through isolation, which explains why hybrid species appear more common in plants than in animals. Many agricultural crops are hybrids with double or even triple chromosome sets. Having multiple sets of chromosomes is called polyploidy. Polyploidy is usually fatal in animals where extra chromosome sets upset fetal development, but is often found in plants. A form of hybrid speciation that is relatively common in plants occurs when an infertile hybrid becomes fertile after doubling of the chromosome number.
Hybridization without change in chromosome number is called homoploid hybrid speciation. This is the situation found in most animal hybrids. For a hybrid to be viable, the chromosomes of the two organisms will have to be very similar, i.e., the parent species must be closely related, or else the difference in chromosome arrangement will make mitosis problematic. With polyploid hybridization, this constraint is less acute.
Supernumerary chromosome numbers can be unstable, which can lead to instability in the genetics of the hybrid. The European edible frog appears to be a species, but is actually a triploid semi-permanent hybrid between pool frogs and marsh frogs. In most populations, the edible frog population is dependent on the presence of at least one of the parent species to be maintained, as each individual needs two gene sets from one parent species and one from the other. Also, the male sex determination gene in the hybrids is only found in the genome of the pool frog, further undermining stability. Such instability can also lead to rapid reduction of chromosome numbers, creating reproductive barriers and thus allowing speciation.
Hybrid speciation in animals
Homoploid hybrid speciation
Hybrid speciation in animals is primarily homoploid. While thought not to be very common, a few animal species are the result of hybridization, mostly insects such as tephritid fruitflies that inhabit Lonicera plants and Heliconius butterflies, as well as some fish, one marine mammal (the clymene dolphin), a few birds, and certain Bufotes toads.
One bird is an unnamed form of Darwin's finch from the Galapagos island of Daphne Major, described in 2017 and likely founded in the early 1980s by a male Española cactus finch from Española Island and a female medium ground finch from Daphne Major. Another is the great skua, which has a surprising genetic similarity to the physically very different pomarine skua; most ornithologists now assume it to be a hybrid between the pomarine skua and one of the southern skuas. The golden-crowned manakin was formed 180,000 years ago by hybridization between snow-capped and opal-crowned manakins.
A 2021 DNA study determined that the Columbian mammoth of North America was a hybrid species between woolly mammoths and another lineage, discovered in Krestovka, descended from steppe mammoths. The two populations had diverged from the ancestral steppe mammoth earlier in the Pleistocene. Analysis of genetic material recovered from their remains showed that half of the ancestry of the Columbian mammoths originated from the Krestovka lineage and the other half from woolly mammoths, with the hybridization happening more than 420,000 years ago, during the Middle Pleistocene. This is the first evidence of hybrid speciation obtained from prehistoric DNA.
Multiple hybrids during rapid divergence
Rapidly diverging species can sometimes form multiple hybrid species, giving rise to a species complex, like several physically divergent but closely related genera of cichlid fishes in Lake Malawi. The duck genus Anas (mallards and teals) has a very recent divergence history, many of the species are inter-fertile, and quite a few of them are thought to be hybrids. While hybrid species generally appear rare in mammals, the American red wolf appears to be a hybrid species of the Canis species complex, between gray wolf and coyote. Hybridization may have led to the species-rich Heliconius butterflies, though this conclusion has been criticized.
Hybrid speciation in plants
Hybrid speciation occurs when two divergent lineages (e.g., species) with independent evolutionary histories come into contact and interbreed. Hybridization can result in speciation when hybrid populations become isolated from the parental lineages, leading to divergence from the parent populations.
Polyploid hybrid speciation
In cases where the first-generation hybrids are viable but infertile, fertility can be restored by whole genome duplication (polyploidy), resulting in reproductive isolation and polyploid speciation. Polyploid speciation is commonly observed in plants because their nature allows them to support genome duplications. Polyploids are considered a new species because the occurrence of a whole genome duplication imposes post-zygotic barriers, which enable reproductive isolation between parent populations and hybrid offspring. Polyploids can arise through single step mutations or through triploid bridges. In single step mutations, allopolyploids are the result of unreduced gametes in crosses between divergent lineages. The F1 hybrids produced from these mutations are infertile due to failure of bivalent pairing of chromosomes and segregation into gametes which leads to the production of unreduced gametes by single division meiosis, which results in unreduced, diploid (2N) gametes. Triploid bridges occur in low frequencies in populations and are produced when unreduced gametes combine with haploid (1N) gametes to produce a triploid offspring that can function as a bridge to the formation of tetraploids. In both paths, the polyploid hybrids are reproductively isolated from the parents due to the difference in ploidy. Polyploids manage to remain in populations because they generally experience less inbreeding depression and have higher self-fertility.
Homoploid hybrid speciation
Homoploid (diploid) speciation is another result of hybridization, but the hybrids remain diploid. It is less common in plants than polyploid speciation because, without genome duplication, genetic isolation must develop through other mechanisms. Studies on diploid hybrid populations of Louisiana irises show how these populations occur in hybrid zones created by disturbances and ecotones (Anderson 1949). Novel niches can allow for the persistence of hybrid lineages. For example, established sunflower (Helianthus) hybrid species show transgressive phenotypes and display genomic divergence separating them from the parent species.
See also
Clymene dolphin
Eastern coyote
Coywolf
Genetic pollution
Hybrid name
New Mexico whiptail
Secondary contact
Ring species
Chimera (genetics)
References
Genetics
Speciation
Evolutionary biology terminology
Interspecific hybrids | Hybrid speciation | [
"Biology"
] | 2,201 | [
"Evolutionary processes",
"Speciation",
"Genetics",
"Evolutionary biology terminology"
] |
7,474,528 | https://en.wikipedia.org/wiki/Pandora%20FMS | Pandora FMS (for Pandora Flexible Monitoring System) is software for monitoring computer networks. Pandora FMS allows monitoring in a visual way the status and performance of several parameters from different operating systems, servers, applications and hardware systems such as firewalls, proxies, databases, web servers or routers.
Pandora FMS can be deployed in almost any operating system. It features remote monitoring (WMI, SNMP, TCP, UDP, ICMP, HTTP...) and it can also use agents. An agent is available for each platform. It can also monitor hardware systems with a TCP/IP stack, such as load balancers, routers, network switches, printers or firewalls.
Pandora FMS has several servers that process and get information from different sources, using WMI for gathering remote Windows information, a predictive server, a plug-in server which makes complex user-defined network tests, an advanced export server to replicate data between different sites of Pandora FMS, a network discovery server, and an SNMP Trap console.
Released under the terms of the GNU General Public License, Pandora FMS is free software. At first the project was hosted on SourceForge.net, from where it has been downloaded over one million times; it was selected as the "Staff Pick" Project of the Month in June 2016 and the "Community Choice" Project of the Month in November 2017.
Components
Pandora Server
In Pandora FMS architecture, servers are the core of the system because they are the recipients of bundles of information. They also generate monitoring alerts. It is possible to have different modular configurations for the servers: several servers for very big systems, or just a single server. Servers are also responsible for inserting the gathered data into Pandora's database. It is possible to have several Pandora servers connected to a single database. Different servers are used for different kinds of monitoring: remote monitoring, WMI monitoring, SNMP and other network monitoring, inventory collection, etc. The system is highly scalable (up to 2,000 nodes with a single server), completely web-driven, and provides a multitenant interface, a very flexible ACL system, and many graphical reports and user-defined control screens.
Servers are developed in Perl and work on any platform that has the required modules. Pandora was originally developed for
Web console
Pandora's user interface allows people to operate and manage the monitoring system. It is developed in PHP and depends on a database and a web server. It can run on a wide range of platforms: Linux, Solaris, Windows, AIX and others. Several web consoles can be deployed on the same system if required. The web console offers multiple options, for example SNMP monitoring.
Agents
Agents are daemons or services that can monitor any numeric parameter, Boolean status, string or numerical incremental data and/or condition. They can be developed in any language (such as shell script, WSH, Perl or C). They run on any type of platform (Microsoft Windows, AIX, Solaris, Linux, IPSO, Mac OS or FreeBSD), and also on SAP, because the agents communicate with the Pandora FMS servers by sending data in XML using SSH, FTP, NFS, Tentacle (protocol) or any other data transfer means.
Database
The database module is the core module of Pandora. All the information of the system resides here. For example, all data gathered by agents, configuration defined by the administrator, events, incidents, audit info, etc. are stored in the database. At present, the MySQL and MariaDB databases are supported. Oracle support was added in the 6.0 release.
Software appliances
Pandora FMS has a software appliance based on a customized CentOS Linux, installable from CD, which comes ready to use (including a live CD) or ready to install to hard disk.
An AMI appliance based on Amazon AWS is also available.
A Docker image is also available at Docker Hub.
See also
Comparison of network monitoring systems
Data logging
References
External links
Free network-related software
Free software programmed in Perl
Multi-agent systems
Free network management software
System monitors | Pandora FMS | [
"Engineering"
] | 855 | [
"Artificial intelligence engineering",
"Multi-agent systems"
] |
7,475,753 | https://en.wikipedia.org/wiki/C15H24O | The molecular formula C15H24O may refer to:
Butylated hydroxytoluene, a food additive
Khusimol
Nonylphenol
1-Nonyl-4-phenol
α-Santalol
β-Santalol
Spathulenol
Molecular formulas | C15H24O | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
4,297,420 | https://en.wikipedia.org/wiki/Respiratory%20quotient | The respiratory quotient (RQ or respiratory coefficient) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body, when the body is in a steady state. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry. It is measured using a respirometer. The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7, for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity.
It can be used in the alveolar gas equation.
Respiratory exchange ratio
The respiratory exchange ratio (RER) is the ratio between the metabolic production of carbon dioxide (CO2) and the uptake of oxygen (O2).
The ratio is determined by comparing exhaled gases to room air. The measured ratio is equal to the RQ only at rest or during mild to moderate aerobic exercise without the accumulation of lactate. The loss of accuracy during more intense anaerobic exercise is due, among other factors, to the bicarbonate buffer system. The body tries to compensate for the accumulation of lactate and minimize the acidification of the blood by expelling more CO2 through the respiratory system.
The RER can exceed 1.0 during intense exercise. A value above 1.0 cannot be attributed to the substrate metabolism, but rather to the aforementioned factors regarding bicarbonate buffering. Calculation of RER is commonly done in conjunction with exercise tests such as the VO2 max test. This can be used as an indicator that the participants are nearing exhaustion and the limits of their cardio-respiratory system. An RER greater than or equal to 1.0 is often used as a secondary endpoint criterion of a VO2 max test.
Calculation
The respiratory quotient (RQ) is the ratio:
RQ = CO2 eliminated / O2 consumed
where the term "eliminated" refers to carbon dioxide (CO2) removed from the body in a steady state.
In this calculation, the CO2 and O2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles, or else volumes of gas at standard temperature and pressure.
Many metabolized substances are compounds containing only the elements carbon, hydrogen, and oxygen. Examples include fatty acids, glycerol, carbohydrates, deamination products, and ethanol. For complete oxidation of such compounds, the chemical equation is
CxHyOz + (x + y/4 − z/2) O2 → x CO2 + (y/2) H2O
and thus metabolism of this compound gives an RQ of x/(x + y/4 − z/2).
For glucose, with the molecular formula C6H12O6, the complete oxidation equation is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. Thus, RQ = 6 CO2 / 6 O2 = 1.
For oxidation of a fatty acid molecule, namely palmitic acid (C16H32O2), the equation is C16H32O2 + 23 O2 → 16 CO2 + 16 H2O, giving RQ = 16/23 ≈ 0.7.
An RQ near 0.7 indicates that fat is the predominant fuel source, a value of 1.0 is indicative of carbohydrate being the predominant fuel source, and a value between 0.7 and 1.0 suggests a mix of both fat and carbohydrate. In general, a mixed diet corresponds to an RER of approximately 0.8. For fats, the RQ depends on the specific fatty acids present. Among the fatty acids commonly stored by vertebrates, the RQ varies from 0.692 (stearic acid) to as high as 0.759 (docosahexaenoic acid). Historically, it was assumed that 'average fat' had an RQ of about 0.71, and this holds true for most mammals including humans. However, a recent survey showed that aquatic animals, especially fish, have fat that should yield higher RQs on oxidation, reaching as high as 0.73 due to high amounts of docosahexaenoic acid.
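The stoichiometric relation above can be turned into a short calculation. The following Python sketch is illustrative (the function name is an assumption, not from the article); it simply evaluates RQ = x / (x + y/4 − z/2) for complete oxidation of a compound CxHyOz:

def respiratory_quotient(c, h, o):
    # moles of CO2 produced per mole of O2 consumed for complete oxidation of CcHhOo
    o2_consumed = c + h / 4.0 - o / 2.0
    return c / o2_consumed

print(round(respiratory_quotient(6, 12, 6), 3))   # glucose C6H12O6 -> 1.0
print(round(respiratory_quotient(16, 32, 2), 3))  # palmitic acid C16H32O2 -> 0.696
print(round(respiratory_quotient(18, 36, 2), 3))  # stearic acid C18H36O2 -> 0.692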
The range of respiratory coefficients for organisms in metabolic balance usually ranges from 1.0 (representing the value expected for pure carbohydrate oxidation) to ~0.7 (the value expected for pure fat oxidation). In general, molecules that are more oxidized (e.g., glucose) require less oxygen to be fully metabolized and, therefore, have higher respiratory quotients. Conversely, molecules that are less oxidized (e.g., fatty acids) require more oxygen for their complete metabolism and have lower respiratory quotients. See BMR for a discussion of how these numbers are derived. A mixed diet of fat and carbohydrate results in an average value between these numbers.
Each RQ value corresponds to a caloric value per liter (L) of CO2 produced. If O2 consumption numbers are available, they are usually used directly, since they are more direct and reliable estimates of energy production.
RQ as measured includes a contribution from the energy produced from protein. However, due to the complexity of the various ways in which different amino acids can be metabolized, no single RQ can be assigned to the oxidation of protein in the diet.
Insulin, which increases lipid storage and decreases fat oxidation, is positively associated with increases in the respiratory quotient. A positive energy balance will also lead to an increased respiratory quotient.
Applications
Practical applications of the respiratory quotient can be found in severe cases of chronic obstructive pulmonary disease, in which patients spend a significant amount of energy on respiratory effort. By increasing the proportion of fats in the diet, the respiratory quotient is driven down, causing a relative decrease in the amount of CO2 produced. This reduces the respiratory burden to eliminate CO2, thereby reducing the amount of energy spent on respirations.
Respiratory Quotient can be used as an indicator of over or underfeeding. Underfeeding, which forces the body to utilize fat stores, will lower the respiratory quotient, while overfeeding, which causes lipogenesis, will increase it. Underfeeding is marked by a respiratory quotient below 0.85, while a respiratory quotient greater than 1.0 indicates overfeeding. This is particularly important in patients with compromised respiratory systems, as an increased respiratory quotient significantly corresponds to increased respiratory rate and decreased tidal volume, placing compromised patients at a significant risk.
Because of its role in metabolism, respiratory quotient can be used in analysis of liver function and diagnosis of liver disease. In patients with liver cirrhosis, non-protein respiratory quotient (npRQ) values act as good indicators in the prediction of overall survival rate. Patients having a npRQ < 0.85 show considerably lower survival rates as compared to patients with a npRQ > 0.85. A decrease in npRQ corresponds to a decrease in glycogen storage by the liver. Similar research indicates that non-alcoholic fatty liver diseases are also accompanied by a low respiratory quotient value, and the non protein respiratory quotient value was a good indication of disease severity.
Recently, the respiratory quotient has also been used by aquatic scientists to explore its environmental applications. Experimental studies with natural bacterioplankton using different single substrates suggested that RQ is linked to the elemental composition of the respired compounds. In this way, it has been demonstrated that bacterioplankton RQ is not only a practical aspect of bacterioplankton respiration determination, but also a major ecosystem state variable that provides unique information about aquatic ecosystem functioning. Based on the stoichiometry of the different metabolized substrates, scientists can predict that dissolved oxygen (O2) and carbon dioxide (CO2) in aquatic ecosystems should covary inversely due to the processes of photosynthesis and respiration. Using this quotient, the metabolic behavior and the simultaneous roles of chemical and physical forcing that shape the biogeochemistry of aquatic ecosystems can be examined.
Moving from a molecular and cellular level to an ecosystem level, various processes account for the exchange of O2 and CO2 between the biosphere and atmosphere. Field measurements of the concurrent consumption of oxygen (-ΔO2) and production of carbon dioxide (ΔCO2) can be used to derive an apparent respiratory quotient (ARQ). This value reflects a cumulative effect of not only the aerobic respiration of all organisms (microorganisms and higher consumers) in the sample, but also all the other biogeochemical processes which consume O2 without a corresponding CO2 production and vice versa influencing the observed RQ.
Respiratory quotients of some substances
See also
References
External links
Biochemistry methods
Energy conversion
Metabolism
Respiratory physiology
Underwater diving physiology | Respiratory quotient | [
"Chemistry",
"Biology"
] | 1,934 | [
"Biochemistry methods",
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
4,303,674 | https://en.wikipedia.org/wiki/Deposition%20%28phase%20transition%29 | Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation.
Applications
Examples
One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces. Another example is when frost forms on a leaf. For deposition to occur, thermal energy must be removed from a gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense, but by this point is already past the freezing point. This causes the water vapour to change directly into a solid.
Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.
Industrial applications
There is an industrial coating process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various materials onto various surfaces.
Deposition releases energy and is an exothermic phase change.
See also
References
Jacobson, Mark Z., Fundamentals of Atmospheric Modeling, Cambridge University Press, 2nd ed., 2005, p. 525
Moore, John W., et al., Principles of Chemistry: The Molecular Science, Brooks Cole, 2009, p. 387
Whitten, Kenneth W., et al., Chemistry, Brooks-Cole, 9th ed., 2009, p. 7
Phase transitions | Deposition (phase transition) | [
"Physics",
"Chemistry"
] | 505 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Statistical mechanics",
"Matter"
] |
4,304,272 | https://en.wikipedia.org/wiki/Vasopressin%20receptor%201A | Vasopressin receptor 1A (V1AR), or arginine vasopressin receptor 1A (officially called AVPR1A) is one of the three major receptor types for vasopressin (AVPR1B and AVPR2 being the others), and is present throughout the brain, as well as in the periphery in the liver, kidney, and vasculature.
AVPR1A is also known as:
V1a vasopressin receptor
antidiuretic hormone receptor 1A
SCCL vasopressin subtype 1a receptor
V1-vascular vasopressin receptor AVPR1A
vascular/hepatic-type arginine vasopressin receptor
Structure and function
Human AVPR1A cDNA is 1472 bp long and encodes a 418 amino-acid long polypeptide which shares 72%, 36%, 37%, and 45% sequence identity with rat AVPR1A, human AVPR2, rat AVPR2, and human oxytocin receptor (OXTR), respectively. AVPR1A is a G-protein coupled receptor (GPCR) with 7 transmembrane domains that couples to Gaq/11 guanosine triphosphate (GTP) binding proteins, which along with Gbl, activate phospholipase C activity. Clinically, the V1A receptor is related to vasoconstriction compared to the V1B receptor that is more related to adrenocorticotropic hormone (ACTH) release or the V2 receptor that is linked to the antidiuretic function of antidiuretic hormone (ADH).
Ligand binding
In the N-terminal juxtamembrane segment of the AVPR1A, the glutamate residue at position 54 (E54) and the arginine residue at position 46 (R46) are critical for binding with arginine vasopressin (AVP) and AVP agonists, with E54 likely to interact with AVP and R46 to contribute to a conformational switch.
Competitors of [125I]Tyr-Phaa-specific binding to AVPR1A include:
Linear V1a antagonist phenylacetyl-D-Tyr(Et)-Phe-Gln-Asn-Lys-Pro-Arg-NH2 (Ki = 1.2 ± 0.2 nM)
Relcovaptan (SR-49059) (Ki = 1.3 ± 0.2 nM)
AVP (Ki = 1.8 ± 0.4 nM)
Linear V1a antagonist phenylacetyl-D-Tyr(Et)-Phe-Val-Asn-Lys-Pro-Tyr-NH2 (Ki = 3.0 ± 0.5 nM)
V2 antagonist d(CH2)5-[D-Ile2, Ile4, Ala-NH2]AVP (Ki = 68 ± 17 nM)
Oxytocin (Ki = 129 ± 22 nM)
The AVPR1A is endocytosed by binding to beta-arrestin, which dissociates rapidly from AVPR1A to allow it to return to the plasma membrane; however, upon activation, AVPR1A can heterodimerize with AVPR2 to increase beta-arrestin-mediated endocytosis (and intracellular accumulation) of AVPR1A, since AVPR2 is far less likely to dissociate from beta-arrestin.
Role in behavior
The activity of genetic variants of the AVPR1A gene might be related to generosity and altruistic behavior. Nature News has referred to AVPR1A as the "ruthlessness gene".
Prairie vs. montane voles
The injection of oxytocin (OXT) vs. oxytocin antagonist (OTA) at birth has sexually dimorphic effects in prairie voles later on in life in various areas of the brain.
Males treated with OXT showed increases in AVPR1A in the ventral pallidum, lateral septum, and cingulate cortex, while females showed decreases; males treated with an OTA showed decreases in AVPR1A in the bed nucleus of the stria terminalis, medial preoptic area of the hypothalamus, and lateral septum.
Although the AVPR1A coding region is 99% identical between prairie and montane voles, and binding and second messenger activity does not differ, patterns of distribution of AVPR1A differ drastically.
Mice
Male knockout mice in AVPR1A have reduced anxiety-like behavior and greatly impaired social recognition abilities, without any defects in spatial and nonsocial olfactory learning and memory tasks, as measured by the elevated plus maze, light/dark box, Morris water maze, forced swim, baseline acoustic startle and prepulse inhibition (PPI), and olfactory habituation tests. Some studies have shown AVPR1A knockout mice to have deficits in their circadian rhythms and olfaction.
AVPR1A's role in social recognition is particularly important in the lateral septum, as using viral vectors to replace inactivated AVPR1A expression rescues social recognition and increases anxiety-related behavior. However, conflicting results have been found in another study. Also, unlike vasopressin 1b receptor and oxytocin knockout mice, AVPR1A knockout mice have a normal Bruce effect (appropriate failure of pregnancy in presence of novel male).
Although activation of AVPR1A is a major mediator of anxiogenesis in males, it is not in females.
Rats
AVPR1A transcripts are diurnally expressed 12 hours out of phase from vasopressin expression in vasopressin and vasoactive intestinal polypeptide neurons of the suprachiasmatic nucleus in both vasopressin-normal Sprague-Dawley rats, as well as vasopressin-deficient Brattleboro rats.
Rats with reduced AVPR1A in the bed nucleus of the stria terminalis have increased incidences of the isolation potentiated startle, a measure of isolation-induced anxiety.
Subchronic phencyclidine (PCP) treatment (which induces symptoms similar to those of schizophrenia) reduces AVPR1A density in many brain regions, implying there might be a role for AVPR1A in schizophrenia.
AVPR1A is present in the lateral septum, neocortical layer IV, hippocampal formation, amygdalostriatal area, bed nucleus of the stria terminalis, suprachiasmatic nucleus, ventral tegmental area, substantia nigra, superior colliculus, dorsal raphe, nucleus of the solitary tract, spinal cord, and inferior olive, while mRNA transcripts for AVPR1A are found in the olfactory bulb, hippocampal formation, lateral septum, suprachiasmatic nucleus, paraventricular nucleus, anterior hypothalamic area, arcuate nucleus, lateral habenula, ventral tegmental area, substantia nigra (pars compacta), superior colliculus, raphe nuclei, locus coeruleus, inferior olive, choroid plexus, endothelial cells, area postrema and nucleus of the solitary tract.
Humans
Although vasopressin cell and fiber distribution patterns are highly conserved across species (with centrally projecting systems being sexually dimorphic), the vasopressin receptor AVPR1A distribution differs both between and within species; vasopressin production occurs in the hypothalamus, bed nucleus of the stria terminalis, and the medial amygdala (projecting to the lateral septum and ventral pallidum), while vasopressin binding sites in humans are in the lateral septum, thalamus, basal amygdaloid nucleus, and brainstem, but not cortex.
Human AVPR1A is situated on chromosome 12q14-15, and the promoter region does not have repeat sequences homologous to those found in prairie voles. Three polymorphic repetitive sequences have been found in humans in the 5’ flanking region: RS3, RS1, and a (GT)25 dinucleotide repeat.
A 2015 study found a correlation between AVPR1A expression and predisposition to extra-pair mating in women but not in men.
Polymorphisms
RS3
The AVPR1A repeat polymorphism RS3 is a complex (CT)4-TT-(CT)8-(GT)24 repeat that is 3625 bp upstream of the transcription start site.
Homozygosity in allele 334 of RS3 is associated in men (but not women) with problems with pair-bonding behavior, measured by traits such as partner bonding, perceived marital problems, marital status, as well as spousal perception of marital quality.
In a study of 203 male and female university students, participants with short (308–325 bp) vs. long (327–343) versions of RS3 were less generous, as measured by lower scores on both money allocations in the dictator game, as well as by self-report with the Bardi-Schwartz Universalism and Benevolence Value-expressive Behavior Scales; although the precise functional significance of longer AVPR1A RS3 repeats is not known, they are associated with higher AVPR1A postmortem hippocampal mRNA levels.
Relative to all other alleles, the 334 allele of RS3 shows overactivation of left amygdala (in response to fearful face stimuli), with longer variants of RS3 additionally associated with stronger amygdala activation.
RS1
The AVPR1A repeat polymorphism RS1 is a (GATA)14 tetranucleotide repeat that is 553 bp upstream from the transcription start site. Allele 320 in RS1 is associated with increased novelty seeking and decreased harm avoidance; additionally, relative to all other alleles, the 320 allele of RS1 showed significantly less activity in the left amygdala, with shorter variants showing a trend of stronger activity.
Other microsatellites
The AGAT polymorphism is associated with age of first intercourse in females, with those homozygous for long repeats more likely to have sex before age 15 than any other genotype. However, there is no evidence of preferential transmission of AVPR1A microsatellite repeats to hypersexual or uninhibited people-seeking.
Polymorphisms in AVPR1A have also been shown to be associated with social interaction skills, and have been linked to such diverse traits as dancing and musical ability, altruism and autism.
Chimpanzee populations have individuals with single (only (GT)25 microsatellite) and duplicated (the (GT)25 microsatellite as well as the RS3) alleles, with allele frequencies of 0.795 and 0.205, respectively.
References
Further reading
External links
G protein-coupled receptors
Biology of bipolar disorder | Vasopressin receptor 1A | [
"Chemistry"
] | 2,309 | [
"G protein-coupled receptors",
"Signal transduction"
] |
4,304,500 | https://en.wikipedia.org/wiki/Singular%20measure | In mathematics, two positive (or signed or complex) measures and defined on a measurable space are called singular if there exist two disjoint measurable sets whose union is such that is zero on all measurable subsets of while is zero on all measurable subsets of This is denoted by
A refined form of Lebesgue's decomposition theorem decomposes a singular measure into a singular continuous measure and a discrete measure. See below for examples.
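As an illustration in standard measure-theoretic notation (a sketch, not notation taken from this article), the refined decomposition of a measure ν relative to a reference measure μ can be written in LaTeX as:

\nu = \nu_{\mathrm{ac}} + \nu_{\mathrm{sc}} + \nu_{\mathrm{pp}},
\qquad \nu_{\mathrm{ac}} \ll \mu,
\qquad \nu_{\mathrm{sc}} \perp \mu \ \text{(singular continuous)},
\qquad \nu_{\mathrm{pp}} \perp \mu \ \text{(discrete, a countable sum of point masses)}.

For the Dirac example below, taking μ to be Lebesgue measure and ν = δ0 gives ν_ac = ν_sc = 0 and ν_pp = δ0.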
Examples on Rn
As a particular case, a measure defined on the Euclidean space Rn is called singular if it is singular with respect to the Lebesgue measure on this space. For example, the Dirac delta function is a singular measure.
Example. A discrete measure.
The Heaviside step function on the real line, H(x), has the Dirac delta distribution δ0 as its distributional derivative. This is a measure on the real line, a "point mass" at 0. However, the Dirac measure δ0 is not absolutely continuous with respect to Lebesgue measure λ, nor is λ absolutely continuous with respect to δ0: λ({0}) = 0 but δ0({0}) = 1, and if U is any non-empty open set not containing 0, then λ(U) > 0 but δ0(U) = 0.
Example. A singular continuous measure.
The Cantor distribution has a cumulative distribution function that is continuous but not absolutely continuous, and indeed its absolutely continuous part is zero: it is singular continuous.
Example. A singular continuous measure on R2.
The upper and lower Fréchet–Hoeffding bounds are singular distributions in two dimensions.
See also
References
Eric W Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 2002. .
J Taylor, An Introduction to Measure and Probability, Springer, 1996. .
Integral calculus
Measures (measure theory) | Singular measure | [
"Physics",
"Mathematics"
] | 335 | [
"Physical quantities",
"Calculus",
"Measures (measure theory)",
"Quantity",
"Size",
"Integral calculus"
] |
16,465,219 | https://en.wikipedia.org/wiki/Mass%20spectral%20interpretation | Mass spectral interpretation is the method employed to identify the chemical formula, characteristic fragment patterns and possible fragment ions from mass spectra. A mass spectrum is a plot of relative abundance against mass-to-charge ratio. It is commonly used for the identification of organic compounds from electron ionization mass spectrometry. Organic chemists obtain mass spectra of chemical compounds as part of structure elucidation and the analysis is part of many organic chemistry curricula.
Mass spectra generation
Electron ionization (EI) is a type of mass spectrometer ion source in which a beam of electrons interacts with a gas phase molecule M to form an ion according to
M + e− → M+• + 2e−
with a molecular ion M+•. The superscript "+" indicates the ion charge and the superscript "•" indicates an unpaired electron of the radical ion. The energy of the electron beam is typically 70 electronvolts and the ionization process typically produces extensive fragmentation of the chemical bonds of the molecule.
Because of the low pressure (high vacuum) in the ionization chamber, the mean free path of the molecules ranges from about 10 cm to 1 km, so fragmentations are unimolecular processes. Once fragmentation is initiated, an electron is first removed from the site with the lowest ionization energy. Since the order of electron energies is non-bonding electrons > pi bond electrons > sigma bond electrons, the order of ionization preference is non-bonding electrons > pi bond electrons > sigma bond electrons.
The peak in the mass spectrum with the greatest intensity is called the base peak. The peak corresponding to the molecular ion is often, but not always, the base peak. Identification of the molecular ion can be difficult. Examining organic compounds, the relative intensity of the molecular ion peak diminishes with branching and with increasing mass in a homologous series. In the spectrum for toluene for example, the molecular ion peak is located at 92 m/z corresponding to its molecular mass. Molecular ion peaks are also often preceded by an M-1 or M-2 peak resulting from loss of a hydrogen radical or dihydrogen, respectively. Here, M refers to the molecular mass of the compound. In the spectrum for toluene, a hydrogen radical (proton-electron pair) is lost, forming the M-1 (91) peak.
Peaks with mass less than the molecular ion are the result of fragmentation of the molecule. Many reaction pathways exist for fragmentation, but only newly formed cations will show up in the mass spectrum, not radical fragments or neutral fragments. Metastable peaks are broad peaks with low intensity at non-integer mass values. These peaks result from ions with lifetimes shorter than the time needed to traverse the distance between ionization chamber and the detector.
Molecular formula determination
Nitrogen rule
The nitrogen rule states that organic molecules containing only hydrogen, carbon, nitrogen, oxygen, silicon, phosphorus, sulfur, and the halogens have an odd nominal mass if they contain an odd number of nitrogen atoms, and an even nominal mass if they contain an even number of nitrogen atoms (including zero). The nitrogen rule holds for structures in which all of the atoms in the molecule have a number of covalent bonds equal to their standard valency, counting each sigma bond and pi bond as a separate covalent bond.
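A minimal Python sketch of applying the rule (the function name and example compounds are illustrative, not from the article):

def nitrogen_rule_consistent(nominal_mass, n_nitrogens):
    # an odd nominal mass should go with an odd nitrogen count, an even mass with an even count
    return nominal_mass % 2 == n_nitrogens % 2

print(nitrogen_rule_consistent(93, 1))  # e.g. aniline, C6H7N: odd mass, one nitrogen -> True
print(nitrogen_rule_consistent(92, 0))  # e.g. toluene, C7H8: even mass, no nitrogen -> True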
Rings rule
From degree of unsaturation principles, molecules containing only carbon, hydrogen, halogens, nitrogen, and oxygen follow the formula
rings + π bonds = C − H/2 − X/2 + N/2 + 1
where C is the number of carbons, H is the number of hydrogens, X is the number of halogens, and N is the number of nitrogens.
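A small Python sketch of this rings-plus-pi-bonds count (the function name is illustrative; oxygen is accepted as an argument only to emphasize that it does not enter the formula):

def rings_plus_pi_bonds(c, h, x=0, n=0, o=0):
    # degree of unsaturation; the oxygen count o cancels out and is ignored
    return c - h / 2.0 - x / 2.0 + n / 2.0 + 1

print(rings_plus_pi_bonds(c=6, h=6))             # benzene -> 4.0 (one ring + three pi bonds)
print(rings_plus_pi_bonds(c=6, h=5, n=1, o=2))   # nitrobenzene -> 5.0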
Even electron rule
The even electron rule states that ions with an even number of electrons (cations but not radical ions) tend to form even-electron fragment ions and odd-electron ions (radical ions) form odd-electron ions or even-electron ions. Even-electron species tend to fragment to another even-electron cation and a neutral molecule rather than two odd-electron species.
Stevenson's rules
The more stable the product cation, the more abundant the corresponding decomposition process. Several theories can be utilized to predict the fragmentation process, such as the electron octet rule, the resonance stabilization and hyperconjugation and so on.
Rule of 13
The Rule of 13 is a simple procedure for tabulating possible chemical formulas for a given molecular mass. The first step in applying the rule is to assume that only carbon and hydrogen are present in the molecule and that the molecule comprises some number of CH "units", each of which has a nominal mass of 13. If the molecular weight of the molecule in question is M, the number of possible CH units is n and
M/13 = n + r/13
where r is the remainder. The base formula for the molecule is
CnHn+r
and the degree of unsaturation is
u = (n − r + 2)/2
A negative value of u indicates the presence of heteroatoms in the molecule and a half-integer value of u indicates the presence of an odd number of nitrogen atoms. On addition of heteroatoms, the molecular formula is adjusted by the equivalent mass of carbon and hydrogen. For example, adding N requires removing CH2 and adding O requires removing CH4.
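A hedged Python sketch of the procedure (the function name and return format are illustrative): it derives the CnHn+r base formula, optionally swaps in heteroatoms as described above, and reports the degree of unsaturation.

def rule_of_13(nominal_mass, n_nitrogen=0, n_oxygen=0):
    # base hydrocarbon formula: n CH units plus r extra hydrogens
    n, r = divmod(nominal_mass, 13)
    c, h = n, n + r
    # each N (14 Da) replaces CH2; each O (16 Da) replaces CH4
    c -= n_nitrogen + n_oxygen
    h -= 2 * n_nitrogen + 4 * n_oxygen
    unsaturation = (2 * c + 2 + n_nitrogen - h) / 2.0
    return c, h, n_nitrogen, n_oxygen, unsaturation

print(rule_of_13(92))                # (7, 8, 0, 0, 4.0)  -> C7H8, e.g. toluene
print(rule_of_13(93, n_nitrogen=1))  # (6, 7, 1, 0, 4.0)  -> C6H7N, e.g. aniline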
Isotope effects
Isotope peaks within a spectrum can help in structure elucidation. Compounds containing halogens (especially chlorine and bromine) can produce very distinct isotope peaks. The mass spectrum of methylbromide has two prominent peaks of equal intensity at m/z 94 (M) and 96 (M+2) and then two more at 79 and 81 belonging to the bromine fragment.
Even when compounds only contain elements with less intense isotope peaks (carbon or oxygen), the distribution of these peaks can be used to assign the spectrum to the correct compound. For example, two compounds with identical mass of 150 Da, C8H12N3+ and C9H10O2+, will have two different M+2 intensities which makes it possible to distinguish between them.
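A short Python sketch of such isotope-pattern reasoning (the function names are illustrative; the natural abundances used — about 1.1% 13C, 75.8/24.2 for 35Cl/37Cl and 50.7/49.3 for 79Br/81Br — are approximate):

def m_plus_1_percent(n_carbons):
    # approximate M+1 intensity as a percentage of M, from 13C alone
    return 1.1 * n_carbons

def halogen_m_plus_2_pattern(n_cl=0, n_br=0):
    # relative intensities of M, M+2, M+4, ... from chlorine and bromine isotopes
    peaks = [1.0]
    for heavy_over_light in [24.2 / 75.8] * n_cl + [49.3 / 50.7] * n_br:
        new = [0.0] * (len(peaks) + 1)
        for i, p in enumerate(peaks):
            new[i] += p                         # keeps the light isotope
            new[i + 1] += p * heavy_over_light  # swaps in one heavy isotope (+2 Da)
        peaks = new
    top = max(peaks)
    return [round(p / top, 2) for p in peaks]

print(halogen_m_plus_2_pattern(n_br=1))  # [1.0, 0.97]: the near-equal M/M+2 doublet of methyl bromide
print(halogen_m_plus_2_pattern(n_cl=2))  # [1.0, 0.64, 0.1]: pattern expected for two chlorines
print(m_plus_1_percent(9))               # ~9.9% M+1 for a nine-carbon ion such as C9H10O2+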
Fragmentation
Beside determining the molar weight of an unknown compound, the fragmentation pattern of a spectrum is also suited to giving structural information, especially in combination with the calculation of the degree of unsaturation from the molecular formula (when available). Neutral fragments frequently lost are carbon monoxide, ethylene, water, ammonia, and hydrogen sulfide. There are several fragmentation processes, as follows.
α - cleavage
Fragmentation arises from homolysis processes. This cleavage results from the tendency of the unpaired electron at the radical site to pair up with an electron from another bond to an atom adjacent to the charge site, as illustrated below. This reaction is defined as a homolytic cleavage since only a single electron is transferred. The driving force for such reactions is the electron-donating ability of the radical site: N > S, O, π > Cl, Br > H. An example is the cleavage of carbon–carbon bonds next to a heteroatom. In this depiction, single-electron movements are indicated by a single-headed arrow.
Sigma bond cleavage
The ionization of alkanes weakens the C-C bond, ultimately resulting in the decomposition. As the bond breaks, a charged, even electron species (R+) and a neutral radical species (R•) are generated. Highly substituted carbocations are more stable than the nonsubstituted ones. An example is depicted below.
Inductive cleavage
This reaction results from the inductive effect of the charge site, as depicted below. This reaction is defined as a heterolytic cleavage since a pair of electrons is transferred. The driving forces for such reactions are the electronegativities of the charge site: halogens > O, S >> N, C. This reaction is less favored than radical-site reactions.
McLafferty rearrangement
The McLafferty rearrangement can occur in a molecule containing a keto-group and involves β-cleavage, with the gain of the γ-hydrogen atom. Ion-neutral complex formation involves bond homolysis or bond heterolysis, in which the fragments do not have enough kinetic energy to separate and, instead, reaction with one another like an ion-molecule reaction.
Hydrogen rearrangement to a saturated heteroatom
The "1,5" hydrogen shift causes transfer of one γ-hydrogen to a radical site on a saturated heteroatom. The same requirements as for the McLafferty rearrangement apply to hydrogen rearrangement to a saturated heteroatom. Such a rearrangement initiates a charge-site reaction, resulting in the formation of an odd-electron ion and a small neutral molecule (water, an acid, and so on). For alcohols, this heterolytic cleavage releases a water molecule. Since charge-site reactions are dominant in less bulky alcohols, this reaction is favored for alcohols in the order primary > secondary > tertiary.
Double-hydrogen rearrangement
The "1,5" hydrogen shift causes transfer of two γ-hydrogens to two radical sites on two different unsaturated atoms. The same requirements as for the McLafferty rearrangement apply to the double-hydrogen rearrangement. This reaction is observed for three unsaturated functional groups, namely thioesters, esters and amides.
Ortho rearrangement
The "1,5" hydrogen shift occurs between suitable substituents in the ortho positions of aromatic rings. The same requirements as for the McLafferty rearrangement apply to the ortho rearrangement, except for the requirement of a strong α,β carbon–carbon double bond. Such a rearrangement initiates a charge-site reaction, resulting in the formation of an odd-electron ion and a small neutral molecule (water, HCl and so on). This reaction can be used to differentiate ortho from para and meta isomers.
Retro-Diels-Alder reaction
This reaction occurs mainly in cyclohexene and its derivatives. Upon ionization, the pi electrons are excited and generate a charge site and a radical site. Following this, two successive α cleavages yield a butadiene radical cation and neutral ethene, since ethene has a higher ionization energy than butadiene (Stevenson's rule).
Cycloelimination reaction
This reaction occurs mainly in four-membered cyclic molecules. Once ionized, it produces a distonic ion and then further fragments to yield an ethene radical ion and a neutral ethene molecule.
Fragmentation patterns of specific compound classes
Alkanes
For linear alkanes, molecular ion peaks are often observed. However, for long-chain compounds, the intensity of the molecular ion peak is often weak. Linear fragments often differ by 14 Da (CH2 = 14). Hexane is a typical example: the m/z = 57 butyl cation is the base peak, and the other most abundant peaks in the spectrum are alkyl carbocations at m/z = 15, 29 and 43.
Branched alkanes have somewhat weaker molecular ion peaks in their spectra. They tend to fragment at the branch points. For 2,3-dimethylbutane, the isopropyl cation peak (m/z = 43) is very strong.
Cycloalkanes have relatively intense molecular ion peaks (two bonds have to break). Alkene-type fragmentation peaks are often the most significant mode. Loss of "CH2CH2" (= 28) is common, if present. Substituted cycloalkanes, however, prefer to form cycloalkyl cations by cleavage at the branch points.
Alkenes
Alkenes often produce stronger molecular ion peaks than alkanes due to the lower ionization energy of a pi electron compared with a σ electron. After ionization, double bonds can migrate easily, making determination of isomers nearly impossible. Allylic cleavage is the most significant fragmentation mode due to resonance stabilization.
McLafferty-like rearrangements are possible (similar to carbonyl pi bonds). Again, bond migration is possible.
Cyclohexenes often undergo retro Diels-Alder reactions.
Alkynes
Similar to alkenes, alkynes often show a strong molecular ion peak. Propargylic cleavage is the most significant fragmentation mode.
Aromatic hydrocarbons
Aromatic hydrocarbons show a distinct molecular ion peak. Benzylic cleavage is quite common. When alkyl groups are attached to the ring, a favorable mode of cleavage is loss of an H radical to form the tropylium cation (m/z 91).
Alkyl substituted benzenes can fragment via the kinetic controlled process to form C6H5+, C6H6+ ions.
Another common mode of fragmentation is the McLafferty rearrangement, which requires the alkyl chain length to be at least longer than 3 carbons.
Alcohols
Alcohols generally have weak molecular ion peaks due to the strong electronegativity of oxygen. “Alpha” cleavage is common due to the resonance stabilization. The largest alkyl group will be lost.
Another common fragmentation mode is dehydration (M-18). For longer chain alcohols, a McLafferty type rearrangement can produce water and ethylene (M -46).
Cyclic alcohols tend to show stronger M+ peaks than linear chains. And they follow similar fragmentation pathways: Alpha cleavage and dehydration.
Phenol
Phenols exhibit a strong molecular ion peak. Loss of H· (M – 1), CO (M – 28), and the formyl radical (HCO·, M – 29) is commonly observed.
Ether
Ethers produce slightly more intense molecular ion peaks compared to the corresponding alcohols or alkanes. There are two common cleavage modes. α-cleavage and C-O bond cleavage.
Aromatic ethers can generate the C6H5O+ ion by loss of the alkyl group rather than H; this can expel CO as in the phenolic degradation.
Carbonyl compounds
There are several types of carbonyl compounds, including aldehydes, ketones, carboxylic acids and esters. The principal fragmentation modes are described as follows:
Alpha-cleavage can occur on either side of the carbonyl functional group since an oxygen lone pair can stabilize the positive charge.
β-cleavage is a characteristic mode of carbonyl compounds' fragmentation due to the resonance stabilization.
For longer chain carbonyl compounds (carbon number is bigger than 4), McLafferty rearrangements are dominant.
According to these fragmentation patterns, the characteristic peaks of carbonyl compounds are summarized in the following table.
For aromatic carbonyl compounds, Alpha-cleavages are favorable primarily to lose G· (M – 1,15, 29…) to form the C6H5CO+ ion (m/z=105), which can further lose CO (m/z= 77) and HCCH (m/z=51).
Amines
Amines follow the nitrogen rule: an odd molecular ion mass-to-charge ratio suggests an odd number of nitrogens. Nonetheless, molecular ion peaks are weak in aliphatic amines due to the ease of fragmentation next to the amine. Alpha-cleavage reactions are the most important fragmentation mode for amines; for primary n-aliphatic amines, there is an intense peak at m/z 30.
Aromatic amines have intense molecular ion peaks. For anilines, they prefer to lose a hydrogen atom before the expulsion of HCN.
Nitriles
The principal fragmentation mode is the loss of an H atom (M – 1) from the carbon next to the CN group due to resonance stabilization. A McLafferty rearrangement can be observed for longer chain lengths.
Nitro compounds
The aliphatic nitro compounds normally show weak molecular ion peaks, while the aromatic nitro compounds give a strong peak. Common degradation mode is loss of NO+ and NO2+.
Electrospray and atmospheric pressure chemical ionization
Electrospray and atmospheric pressure chemical ionization have different rules for spectrum interpretation due to the different ionization mechanisms.
See also
Component Detection Algorithm (CODA), an algorithm used in mass spectrometry data analysis
List of mass spectrometry software
References
Mass spectrometry | Mass spectral interpretation | [
"Physics",
"Chemistry"
] | 3,392 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
5,730,974 | https://en.wikipedia.org/wiki/Stability%20derivatives | Stability derivatives, and also control derivatives, are measures of how particular forces and moments on an aircraft change as other parameters related to stability change (parameters such as airspeed, altitude, angle of attack, etc.). For a defined "trim" flight condition, changes and oscillations occur in these parameters. Equations of motion are used to analyze these changes and oscillations. Stability and control derivatives are used to linearize (simplify) these equations of motion so the stability of the vehicle can be more readily analyzed.
Stability and control derivatives change as flight conditions change. The collection of stability and control derivatives as they change over a range of flight conditions is called an aero model. Aero models are used in engineering flight simulators to analyze stability, and in real-time flight simulators for training and entertainment.
Stability derivative vs. control derivative
Stability derivatives and control derivatives are related because they both are measures of forces and moments on a vehicle as other parameters change. Often the words are used together and abbreviated in the term "S&C derivatives." They differ in that stability derivatives measure the effects of changes in flight conditions while control derivatives measure effects of changes in the control surface positions:
Stability derivative measures how much change occurs in a force or moment acting on the vehicle when there is a small change in a flight condition parameter such as angle of attack, airspeed, altitude, etc. (Such parameters are called "states".)
Control derivative measures how much change occurs in a force or moment acting on the vehicle when there is a small change in the deflection of a control surface such as the ailerons, elevator, and rudder.
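As a generic illustration of the notation commonly used for these quantities (a sketch, not notation taken from this article), a stability derivative and a control derivative of the pitching-moment coefficient might be written as:

C_{m_\alpha} = \frac{\partial C_m}{\partial \alpha}
\quad\text{(stability derivative: pitching moment vs. angle of attack)},
\qquad
C_{m_{\delta_e}} = \frac{\partial C_m}{\partial \delta_e}
\quad\text{(control derivative: pitching moment vs. elevator deflection)}.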
Uses
Linearization (simplification) of stability analysis
Stability and control derivatives change as flight conditions change. That is, the forces and moments on the vehicle are seldom simple (linear) functions of its states. Because of this, the dynamics of atmospheric flight vehicles can be difficult to analyze. The following are two methods used to tackle this complexity.
Small oscillations about otherwise steady flight conditions
One way to simplify analysis is to consider only small oscillations about otherwise steady flight conditions. The set of flight conditions (such as altitude, airspeed, angle of attack) are called "trim" conditions when they are steady and not changing. When flight conditions are steady, stability and control derivatives are constant and can be more easily analyzed mathematically. The analysis at a single set of flight conditions is then applied to a range of different flight conditions.
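A sketch of the resulting first-order (linearized) model, in generic small-perturbation notation assumed here rather than quoted from the article: about the trim condition, a force or moment is expanded in the perturbations of the states and controls, with the stability and control derivatives as the constant coefficients.

\Delta M \;\approx\; M_u\,\Delta u \;+\; M_w\,\Delta w \;+\; M_q\,\Delta q \;+\; M_{\delta_e}\,\Delta\delta_e,
\qquad
M_u = \left.\frac{\partial M}{\partial u}\right|_{\mathrm{trim}},\quad
M_w = \left.\frac{\partial M}{\partial w}\right|_{\mathrm{trim}},\ \ldots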
Application in simulators for stability analysis
In a flight simulator, it is possible to "look up" new values for stability and control derivatives as conditions change. And so, the "linear approximations" aren't as great and stability can be assessed in maneuvers that span a greater range of flight conditions. Flight simulators used for analysis such as this are called "engineering simulators". The set of values for stability and control derivatives (as they change over various flight conditions) is called an aero model.
Use in flight simulators
In addition to engineering simulators, aero models are often used in real time flight simulators for home use and professional flight training.
Names for the axes of vehicles
Air vehicles use a coordinate system of axes to help name important parameters used in the analysis of stability. All the axes run through the center of gravity (called the "CG"):
"X" or "x" axis runs from back to front along the body, called the Roll Axis.
"Y" or "y" axis runs left to right along the wing, called the Pitch Axis.
"Z" or "z" runs from top to bottom, called the Yaw Axis.
Two slightly different alignments of these axes are used depending on the situation: "body-fixed axes", and "stability axes".
Body-fixed axes
Body-fixed axes, or "body axes", are defined and fixed relative to the body of the vehicle.:
X body axis is aligned along the vehicle body and is usually positive toward the normal direction of motion.
Y body axis is at a right angle to the x body axis and is oriented along the wings of the vehicle. If there are no wings (as with a missile), a "horizontal" direction is defined in a way that is useful. The Y body axis is usually taken to be positive to the right side of the vehicle.
Z body axis is perpendicular to wing-body (XY) plane and usually points downward.
Stability axes
Aircraft (usually not missiles) operate at a nominally constant "trim" angle of attack. The angle of the nose (the X Axis) does not align with the direction of the oncoming air. The difference in these directions is the angle of attack. So, for many purposes, parameters are defined in terms of a slightly modified axis system called "stability axes". The stability axis system is used to get the X axis aligned with the oncoming flow direction. Essentially, the body axis system is rotated about the Y body axis by the trim angle of attack and then "re-fixed" to the body of the aircraft:
X stability axis is aligned into the direction of the oncoming air in steady flight. (It is projected into the plane made by the X and Z body axes if there is sideslip).
Y stability axis is the same as the Y body-fixed axis.
Z stability axis is perpendicular to the plane made by the X stability axis and the Y body axis.
Names for forces, moments, and velocities
Forces and velocities along each of the axes
Forces on the vehicle along the body axes are called "Body-axis Forces":
X, or FX, is used to indicate forces on the vehicle along the X axis
Y, or FY, is used to indicate forces on the vehicle along the Y axis
Z, or FZ, is used to indicate forces on the vehicle along the Z axis
u (lower case) is used for speed of the oncoming flow along the X body axis
v (lower case) is used for speed of the oncoming flow along the Y body axis
w (lower case) is used for speed of the oncoming flow along the Z body axis
It is helpful to think of these speeds as projections of the relative wind vector on to the three body axes, rather than in terms of the translational motion of the vehicle relative to the fluid. As the body rotates relative to direction of the relative wind, these components change, even when there is no net change in speed.
Moments and angular rates around each of the axes
L is used to indicate the "rolling moment", which is around the X axis. Whether it is around the X body axis or the X stability axis depends on context (such as a subscript).
M is used to indicate the name of the "pitching moment", which is around the Y axis.
N is used to indicate the name of the "yawing moment", which is around the Z axis. Whether it is around the Z body axis or the Z stability axis depends on context (such as a subscript).
"P" or "p" is used for angular rate about the X axis ("Roll rate about the roll axis"). Whether it is around the X body axis or the X stability axis depends on context (such as a subscript).
"Q" or "q" is used for angular rate about the Y axis ("Pitch rate about the pitch axis").
"R" or "r" is used for angular rate about the Z axis ("Yaw rate about the yaw axis"). Whether it is around the Z body axis or the Z stability axis depends on context (such as a subscript).
Equations of motion
The use of stability derivatives is most conveniently demonstrated with missile or rocket configurations, because these exhibit greater symmetry than aeroplanes, and the equations of motion are correspondingly simpler. If it is assumed that the vehicle is roll-controlled, the pitch and yaw motions may be treated in isolation. It is common practice to consider the yaw plane, so that only 2D motion need be considered. Furthermore, it is assumed that thrust equals drag, and the longitudinal equation of motion may be ignored.
The body is oriented at an angle ψ (psi) with respect to inertial axes, and at an angle β (beta) with respect to the velocity vector, so that the components of velocity in body axes are:
where U is the speed.
The aerodynamic forces are generated with respect to body axes, which is not an inertial frame. In order to calculate the motion, the forces must be referred to inertial axes. This requires the body components of velocity to be resolved through the heading angle into inertial axes.
Resolving into fixed (inertial) axes:
The acceleration with respect to inertial axes is found by differentiating these components of velocity with respect to time:
From Newton's Second Law, this is equal to the force acting divided by the mass. Now forces arise from the pressure distribution over the body, and hence are generated in body axes, and not in inertial axes, so the body forces must be resolved to inertial axes, as Newton's Second Law does not apply in its simplest form to an accelerating frame of reference.
Resolving the body forces:
Newton's Second Law, assuming constant mass:
where m is the mass.
Equating the inertial values of acceleration and force, and resolving back into body axes, yields the equations of motion:
The sideslip, β, is a small quantity, so the small perturbation equations of motion become:
The first resembles the usual expression of Newton's Second Law, whilst the second is essentially the centrifugal acceleration.
The equation of motion governing the rotation of the body is derived from the time derivative of angular momentum:
where C is the moment of inertia about the yaw axis.
Assuming constant speed, there are only two state variables: the sideslip angle β and the rate of change of heading, which will be written more compactly as the yaw rate r.
There is one force and one moment, which for a given flight condition will each be functions of β, r and their time derivatives. For typical missile configurations the forces and moments depend, in the short term, on β and r. The forces may be expressed in the form:
where is the force corresponding to the equilibrium condition (usually called the trim) whose stability is being investigated.
It is common practice to employ a shorthand for these increments.
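Written out for the yaw-plane variables used here (the side force Y and yawing moment N), this shorthand denotes each increment by a subscripted derivative, for example:

$$ Y_\beta \equiv \frac{\partial Y}{\partial \beta}, \qquad N_\beta \equiv \frac{\partial N}{\partial \beta}, \qquad N_r \equiv \frac{\partial N}{\partial r}. $$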
These partial derivatives, and all similar terms characterising the increments in forces and moments due to increments in the state variables, are called stability derivatives.
Typically, the side force due to yaw rate (the derivative Y_r) is insignificant for missile configurations, so the equations of motion reduce to a simpler pair.
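A common textbook form of these reduced equations, written with the flight speed U, vehicle mass m and yaw inertia C introduced above (the exact signs depend on the axis conventions adopted), is:

$$ m U \left( \frac{d\beta}{dt} + r \right) = Y_\beta \, \beta, \qquad C \, \frac{dr}{dt} = N_\beta \, \beta + N_r \, r . $$

The first of these corresponds to Newton's second law with the centrifugal term noted earlier, and the second is the yaw moment equation.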
Stability derivative contributions
Each stability derivative is determined by the position, size, shape and orientation of the missile components. In aircraft, the directional stability determines such features as dihedral of the main planes, size of fin and area of tailplane, but the large number of important stability derivatives involved precludes a detailed discussion within this article. The missile is characterised by only three stability derivatives, and hence provides a useful introduction to the more complex aeroplane dynamics.
Consider first Y_β: a body at an angle of attack generates a lift force in the opposite direction to the motion of the body. For this reason Y_β is always negative.
At low angles of attack, the lift is generated primarily by the wings, fins and the nose region of the body. The total lift acts at a distance ahead of the centre of gravity (this distance has a negative value in the figure); in missile parlance, this point is the centre of pressure. If the lift acts ahead of the centre of gravity, the yawing moment will be negative, and will tend to increase the angle of attack, increasing both the lift and the moment further. It follows that the centre of pressure must lie aft of the centre of gravity for static stability. The distance of the centre of pressure ahead of the centre of gravity is the static margin and must be negative for longitudinal static stability. Alternatively, a positive angle of attack must generate a positive yawing moment on a statically stable missile, i.e. N_β must be positive. It is common practice to design manoeuvrable missiles with near zero static margin (i.e. neutral static stability).
The need for positive N_β explains why arrows and darts have flights and unguided rockets have fins.
The effect of angular velocity is mainly to decrease the nose lift and increase the tail lift, both of which act in a sense to oppose the rotation; N_r is therefore always negative. There is a contribution from the wing, but since missiles tend to have small static margins (typically less than a calibre), this is usually small. Also, the fin contribution is greater than that of the nose, so there is a net side force due to yaw rate, Y_r, but this is usually insignificant compared with Y_β and is usually ignored.
Response
Manipulation of the equations of motion yields a second order homogeneous linear differential equation in the angle of attack (sideslip angle) β.
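Eliminating the yaw rate r between the two reduced equations above gives one consistent form of this equation (same notation and assumptions as before):

$$ C \, \ddot{\beta} \;-\; \left( N_r + \frac{C\,Y_\beta}{m U} \right) \dot{\beta} \;+\; \left( N_\beta + \frac{N_r\,Y_\beta}{m U} \right) \beta \;=\; 0 . $$

The coefficient of the first-derivative term plays the role of the damping, and the coefficient of β the stiffness, discussed next.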
The qualitative behavior of this equation is considered in the article on directional stability. Since Y_β and N_r are both negative, the damping is positive. The stiffness does not depend only on the static stability term N_β; it also contains a term which effectively determines the angle of attack due to the body rotation. The distance of the centre of lift, including this term, ahead of the centre of gravity is called the manoeuvre margin. It must be negative for stability.
This damped oscillation in angle of attack and yaw rate, following a disturbance, is called the 'weathercock' mode, after the tendency of a weathercock to point into wind.
Comments
The state variables were chosen to be the angle of attack β and the yaw rate r, omitting the speed perturbation u together with the associated derivatives (e.g. the derivatives with respect to u). This may appear arbitrary. However, since the timescale of the speed variation is much greater than that of the variation in angle of attack, its effects are negligible as far as the directional stability of the vehicle is concerned. Similarly, the effect of roll on yawing motion was also ignored, because missiles generally have low aspect ratio configurations and the roll inertia is much less than the yaw inertia; consequently the roll loop is expected to be much faster than the yaw response, and is ignored. These simplifications of the problem, based on a priori knowledge, represent an engineer's approach. Mathematicians prefer to keep the problem as general as possible and only simplify it at the end of the analysis, if at all.
Aircraft dynamics is more complex than missile dynamics, mainly because the simplifications, such as separation of fast and slow modes, and the similarity between pitch and yaw motions, are not obvious from the equations of motion, and are consequently deferred until a late stage of the analysis. Subsonic transport aircraft have high aspect ratio configurations, so that yaw and roll cannot be treated as decoupled. However, this is merely a matter of degree; the basic ideas needed to understand aircraft dynamics are covered in this simpler analysis of missile motion.
Control derivatives
Deflection of control surfaces modifies the pressure distribution over the vehicle, and this is dealt with by including perturbations in forces and moments due to control deflection. The fin deflection is normally denoted ζ (zeta). Including these terms, the equations of motion become:
Including the control derivatives enables the response of the vehicle to be studied, and the equations of motion used to design the autopilot.
Examples
CLβ, called the dihedral effect, is a stability derivative that measures the change in rolling moment as the angle of sideslip changes. The "L" indicates rolling moment and the "β" indicates the sideslip angle.
See also
Longitudinal static stability
Neutral point
Aerodynamic center
Flight dynamics
Directional stability
References
Babister A W: Aircraft Dynamic Stability and Response. Elsevier, 1980.
Friedland B: Control System Design. McGraw-Hill Book Company 1987.
Roskam Jan: Airplane Flight Dynamics and Automatic Flight Controls. Roskam Aviation and Engineering Corporation 1979. Second Printing 1982. Library of Congress Catalog Card Number: 78-31382.
Aerodynamics | Stability derivatives | [
"Chemistry",
"Engineering"
] | 3,276 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
5,732,212 | https://en.wikipedia.org/wiki/MUSCL%20scheme | In the study of partial differential equations, the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients. MUSCL stands for Monotonic Upstream-centered Scheme for Conservation Laws, and the term was introduced in a seminal paper by Bram van Leer (van Leer, 1979). In this paper he constructed the first high-order, total variation diminishing (TVD) scheme, with which he obtained second order spatial accuracy.
The idea is to replace the piecewise constant approximation of Godunov's scheme by reconstructed states, derived from cell-averaged states obtained from the previous time-step. For each cell, slope limited, reconstructed left and right states are obtained and used to calculate fluxes at the cell boundaries (edges). These fluxes can, in turn, be used as input to a Riemann solver, following which the solutions are averaged and used to advance the solution in time. Alternatively, the fluxes can be used in Riemann-solver-free schemes, which are basically Rusanov-like schemes.
Linear reconstruction
We will consider the fundamentals of the MUSCL scheme by considering the following simple first-order, scalar, 1D system, which is assumed to have a wave propagating in the positive direction,
where u represents a state variable and F represents a flux variable.
The basic scheme of Godunov uses piecewise constant approximations for each cell, and results in a first-order upwind discretisation of the above problem with cell centres indexed as i. A semi-discrete scheme can be defined as follows,
This basic scheme is not able to handle shocks or sharp discontinuities as they tend to become smeared. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation with a step wave propagating to the right. The simulation was carried out with a mesh of 200 cells and used a 4th order Runge–Kutta time integrator (RK4).
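As an illustration only, the following minimal Python sketch implements this first-order upwind semi-discretisation for the linear advection equation u_t + c u_x = 0 with c > 0; the periodic boundary treatment and the forward-Euler stepping are illustrative assumptions (the simulation described above used RK4).

```python
import numpy as np

def upwind_rhs(u, c, dx):
    """Semi-discrete first-order upwind right-hand side for u_t + c*u_x = 0, c > 0.

    du_i/dt = -c * (u_i - u_{i-1}) / dx, on a periodic domain (illustrative choice).
    """
    return -c * (u - np.roll(u, 1)) / dx

def advance(u, c, dx, dt, nsteps):
    """Forward-Euler time stepping, shown only for brevity."""
    for _ in range(nsteps):
        u = u + dt * upwind_rhs(u, c, dx)
    return u
```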
To provide higher resolution of discontinuities, Godunov's scheme can be extended to use piecewise linear approximations of each cell, which results in a central difference scheme that is second-order accurate in space. The piecewise linear approximations are obtained from
Thus, evaluating fluxes at the cell edges we get the following semi-discrete scheme
where and are the piecewise approximate values of cell edge variables, i.e.,
Although the above second-order scheme provides greater accuracy for smooth solutions, it is not a total variation diminishing (TVD) scheme and introduces spurious oscillations into the solution where discontinuities or shocks are present. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation with a step wave propagating to the right. This loss of accuracy is to be expected due to Godunov's theorem. The simulation was carried out with a mesh of 200 cells and used RK4 for time integration.
MUSCL based numerical schemes extend the idea of using a linear piecewise approximation to each cell by using slope limited left and right extrapolated states. This results in the following high resolution, TVD discretisation scheme,
Which, alternatively, can be written in the more succinct form,
The numerical fluxes correspond to a nonlinear combination of first and second-order approximations to the continuous flux function.
The symbols F*i+1/2 and F*i-1/2 denote the numerical fluxes at the cell edges; they are scheme dependent functions (of the limited extrapolated cell edge variables), i.e.,
where, using downwind slopes:
and
The function φ(r) is a limiter function that limits the slope of the piecewise approximations to ensure the solution is TVD, thereby avoiding the spurious oscillations that would otherwise occur around discontinuities or shocks - see Flux limiter section. The limiter is equal to zero when r ≤ 0 and is equal to unity when r = 1. Thus, the accuracy of a TVD discretization degrades to first order at local extrema, but tends to second order over smooth parts of the domain.
The algorithm is straightforward to implement. Once a suitable scheme for the numerical flux has been chosen, such as the Kurganov and Tadmor scheme (see below), the solution can proceed using standard numerical integration techniques.
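A minimal sketch of the slope-limited reconstruction step is shown below, assuming a minmod limiter and a periodic grid; the function and variable names are illustrative, and other limiters from the flux limiter article could be substituted.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter, applied elementwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_edge_states(u):
    """Slope-limited linear (MUSCL) reconstruction on a periodic grid.

    Returns the left and right states at each interface i+1/2:
      uL[i] = u_i     + 0.5 * slope_i
      uR[i] = u_{i+1} - 0.5 * slope_{i+1}
    """
    du_minus = u - np.roll(u, 1)    # u_i - u_{i-1}
    du_plus = np.roll(u, -1) - u    # u_{i+1} - u_i
    slope = minmod(du_minus, du_plus)
    uL = u + 0.5 * slope
    uR = np.roll(u, -1) - 0.5 * np.roll(slope, -1)
    return uL, uR
```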
Kurganov and Tadmor central scheme
A precursor to the Kurganov and Tadmor (KT) central scheme (Kurganov and Tadmor, 2000) is the Nessyahu and Tadmor (NT) staggered central scheme (Nessyahu and Tadmor, 1990). It is a Riemann-solver-free, second-order, high-resolution scheme that uses MUSCL reconstruction. It is a fully discrete method that is straightforward to implement and can be used on scalar and vector problems, and can be viewed as a Rusanov flux (also called the local Lax-Friedrichs flux) supplemented with high order reconstructions. The algorithm is based upon central differences with comparable performance to Riemann-type solvers when used to obtain solutions for PDEs describing systems that exhibit high-gradient phenomena.
The KT scheme extends the NT scheme and has a smaller amount of numerical viscosity than the original NT scheme. It also has the added advantage that it can be implemented as either a fully discrete or semi-discrete scheme. Here we consider the semi-discrete scheme.
The calculation is shown below:
where the local propagation speed, a_i+1/2, is the maximum absolute value of the eigenvalues of the Jacobian of the flux function over the neighbouring cells, given by
and ρ(·) represents the spectral radius of the Jacobian of the flux function.
Beyond these CFL related speeds, no characteristic information is required.
The above flux calculation is most frequently called the Lax-Friedrichs flux (though it is worth mentioning that such a flux expression does not appear in Lax, 1954, but rather in Rusanov, 1961).
An example of the effectiveness of using a high resolution scheme is shown in the diagram opposite, which illustrates the 1D advective equation with a step wave propagating to the right. The simulation was carried out on a mesh of 200 cells, using the Kurganov and Tadmor central scheme with the Superbee limiter, and used RK-4 for time integration. This result compares very favourably with the first-order upwind and second-order central difference results shown above. This scheme also provides good results when applied to sets of equations - see the results below for this scheme applied to the Euler equations. However, care has to be taken in choosing an appropriate limiter because, for example, the Superbee limiter can cause unrealistic sharpening for some smooth waves.
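As a sketch of how the pieces fit together, the reconstructed edge states can be combined into the semi-discrete Kurganov and Tadmor flux for a scalar conservation law u_t + f(u)_x = 0; this sketch reuses the muscl_edge_states helper defined earlier, and the simple interface wave-speed estimate shown is an illustrative choice.

```python
import numpy as np

def kt_flux(uL, uR, f, dfdu):
    """Kurganov-Tadmor (local Lax-Friedrichs / Rusanov type) interface flux.

    F* = 0.5*(f(uL) + f(uR)) - 0.5*a*(uR - uL),
    with a the local maximum wave speed |df/du| at the interface.
    """
    a = np.maximum(np.abs(dfdu(uL)), np.abs(dfdu(uR)))
    return 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)

def semi_discrete_rhs(u, dx, f, dfdu):
    """du_i/dt = -(F*_{i+1/2} - F*_{i-1/2}) / dx on a periodic grid."""
    uL, uR = muscl_edge_states(u)   # reconstruction sketch given earlier
    flux = kt_flux(uL, uR, f, dfdu)
    return -(flux - np.roll(flux, 1)) / dx
```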
The scheme can readily include diffusion terms, if they are present. For example, if the above 1D scalar problem is extended to include a diffusion term, we get
for which Kurganov and Tadmor propose the following central difference approximation,
Where,
Full details of the algorithm (full and semi-discrete versions) and its derivation can be found in the original paper (Kurganov and Tadmor, 2000), along with a number of 1D and 2D examples. Additional information is also available in the earlier related paper by Nessyahu and Tadmor (1990).
Note: This scheme was originally presented by Kurganov and Tadmor as a 2nd order scheme based upon linear extrapolation. A later paper (Kurganov and Levy, 2000) demonstrates that it can also form the basis of a third order scheme. A 1D advective example and an Euler equation example of their scheme, using parabolic reconstruction (3rd order), are shown in the parabolic reconstruction and Euler equation sections below.
Piecewise parabolic reconstruction
It is possible to extend the idea of linear-extrapolation to higher order reconstruction, and an example is shown in the diagram opposite. However, for this case the left and right states are estimated by interpolation of a second-order, upwind biased, difference equation. This results in a parabolic reconstruction scheme that is third-order accurate in space.
We follow the approach of Kermani (Kermani, et al., 2003), and present a third-order upwind biased scheme, where the symbols and again represent scheme dependent functions (of the limited reconstructed cell edge variables). But for this case they are based upon parabolically reconstructed states, i.e.,
and
where κ = 1/3 and,
and the limiter function is the same as above.
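In unlimited form, and using the cell-average notation of the linear case above, the κ = 1/3 upwind-biased reconstruction is commonly written as follows; the limited version used in practice applies the limiter function to these differences, so this should be read as an illustrative sketch rather than the exact expressions of Kermani et al.:

$$ u^{L}_{i+1/2} = u_i + \frac{1}{4}\Big[(1-\kappa)\,(u_i - u_{i-1}) + (1+\kappa)\,(u_{i+1} - u_i)\Big], $$

$$ u^{R}_{i+1/2} = u_{i+1} - \frac{1}{4}\Big[(1+\kappa)\,(u_{i+1} - u_i) + (1-\kappa)\,(u_{i+2} - u_{i+1})\Big]. $$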
Parabolic reconstruction is straightforward to implement and can be used with the Kurganov and Tadmor scheme in lieu of the linear extrapolation shown above. This has the effect of raising the spatial accuracy of the KT scheme to 3rd order. It performs well when solving the Euler equations, see below. This increase in spatial order has certain advantages over 2nd order schemes for smooth solutions; however, for shocks it is more dissipative - compare the diagram opposite with the above solution obtained using the KT algorithm with linear extrapolation and the Superbee limiter. This simulation was carried out on a mesh of 200 cells using the same KT algorithm but with parabolic reconstruction. Time integration was by RK-4, and the alternative form of the van Albada limiter was used to avoid spurious oscillations.
Example: 1D Euler equations
For simplicity we consider the 1D case without heat transfer and without body force. Therefore, in conservation vector form, the general Euler equations reduce to
where
and where is a vector of states and is a vector of fluxes.
The equations above represent conservation of mass, momentum, and energy. There are thus three equations and four unknowns: ρ (density), u (fluid velocity), p (pressure) and E (total energy). The total energy is given by,
where e represents the specific internal energy.
In order to close the system an equation of state is required. One that suits our purpose is
where γ is equal to the ratio of specific heats for the fluid.
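For reference, a standard conservation-law statement of these relations, consistent with the symbols listed above (and assuming the usual ideal-gas closure), is:

$$ \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}}{\partial x} = 0, \qquad \mathbf{U} = \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u\,(E + p) \end{pmatrix}, $$

$$ E = \rho e + \tfrac{1}{2}\rho u^2, \qquad p = (\gamma - 1)\,\rho e . $$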
We can now proceed, as shown above in the simple 1D example, by obtaining the left and right extrapolated states for each state variable. Thus, for density we obtain
where
Similarly for the momentum ρu and the total energy E. The velocity u is calculated from the momentum, and the pressure p is calculated from the equation of state.
Having obtained the limited extrapolated states, we then proceed to construct the edge fluxes using these values. With the edge fluxes known, we can now construct the semi-discrete scheme, i.e.,
The solution can now proceed by integration using standard numerical techniques.
The above illustrates the basic idea of the MUSCL scheme. However, for a practical solution to the Euler equations, a suitable scheme (such as the above KT scheme) also has to be chosen in order to define the numerical flux function.
The diagram opposite shows a 2nd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor central scheme (KT) with linear extrapolation and the Ospre limiter. This clearly demonstrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm and Ospre limiter. Time integration was performed by a 4th order SHK (equivalent performance to RK-4) integrator. The following initial conditions (SI units) were used; a short code sketch of this setup is given after the list:
pressure left = 100000 [Pa];
pressure right= 10000 [Pa];
density left = 1.0 [kg/m3];
density right = 0.125 [kg/m3];
length = 20 [m];
velocity left = 0 [m/s];
velocity right = 0 [m/s];
duration = 0.01 [s];
lambda = 0.001069 (Δt/Δx).
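A purely illustrative code sketch of this initial-condition setup follows; the ratio of specific heats (γ = 1.4, typical for air) is an assumption, since it is not stated in the list above.

```python
import numpy as np

# Left/right Sod-type states on a 200-cell, 20 m domain, split at the midpoint.
n_cells = 200
length = 20.0                                   # [m]
x = (np.arange(n_cells) + 0.5) * length / n_cells

rho = np.where(x < 0.5 * length, 1.0, 0.125)    # density [kg/m^3]
p = np.where(x < 0.5 * length, 1.0e5, 1.0e4)    # pressure [Pa]
u = np.zeros(n_cells)                           # velocity [m/s]

gamma = 1.4                                     # assumed ratio of specific heats
E = p / (gamma - 1.0) + 0.5 * rho * u**2        # total energy per unit volume
```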
The diagram opposite shows a 3rd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor central scheme (KT), but with parabolic reconstruction and the van Albada limiter. This again illustrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm with parabolic extrapolation and the van Albada limiter. The alternative form of the van Albada limiter was used to avoid spurious oscillations. Time integration was performed by a 4th order SHK integrator. The same initial conditions were used.
Various other high resolution schemes have been developed that solve the Euler equations with good accuracy. Examples of such schemes are,
the Osher scheme, and
the Liou-Steffen AUSM (advection upstream splitting method) scheme.
More information on these and other methods can be found in the references below. An open source implementation of the Kurganov and Tadmor central scheme can be found in the external links below.
See also
Finite volume method
Flux limiter
Godunov's theorem
High resolution scheme
Method of lines
Sergei K. Godunov
Total variation diminishing
Sod shock tube
References
Kermani, M. J., Gerber, A. G., and Stockie, J. M. (2003), Thermodynamically Based Moisture Prediction Using Roe’s Scheme, The 4th Conference of Iranian AeroSpace Society, Amir Kabir University of Technology, Tehran, Iran, January 27–29.
Kurganov, Alexander and Eitan Tadmor (2000), New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection-Diffusion Equations, J. Comput. Phys., 160, 241–282.
Kurganov, Alexander and Doron Levy (2000), A Third-Order Semidiscrete Central Scheme for Conservation Laws and Convection-Diffusion Equations, SIAM J. Sci. Comput., 22, 1461–1488.
Lax, P. D. (1954). Weak Solutions of Non-linear Hyperbolic Equations and Their Numerical Computation, Comm. Pure Appl. Math., VII, pp. 159–193.
Leveque, R. J. (2002). Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
van Leer, B. (1979), Towards the Ultimate Conservative Difference Scheme, V. A Second Order Sequel to Godunov's Method, J. Comput. Phys., 32, 101–136.
Nessyahu, H. and E. Tadmor (1990), Non-oscillatory central differencing for hyperbolic conservation laws, J. Comput. Phys., 87, 408–463.
Rusanov, V. V. (1961). Calculation of Intersection of Non-Steady Shock Waves with Obstacles, J. Comput. Math. Phys. USSR, 1, pp. 267–279.
Sod, G. A. (1978), A Numerical Study of a Converging Cylindrical Shock. J. Fluid Mechanics, 83, 785–794.
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Wesseling, Pieter (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
Further reading
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, vol 2, Wiley.
Laney, Culbert B. (1998), Computational Gas Dynamics, Cambridge University Press.
Tannehill, John C., et al. (1997), Computational Fluid mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
External links
GEES – Open source code solving the Euler Equations using the Kurganov and Tadmor central scheme, written in Fortran (author: Arno Mayrhofer)
Fluid dynamics
Numerical differential equations
Computational fluid dynamics | MUSCL scheme | [
"Physics",
"Chemistry",
"Engineering"
] | 3,336 | [
"Computational fluid dynamics",
"Chemical engineering",
"Computational physics",
"Piping",
"Fluid dynamics"
] |
5,733,602 | https://en.wikipedia.org/wiki/Fayette%20County%20Reservoir | Fayette County Reservoir is a power station cooling reservoir on Cedar Creek in the Colorado River basin, 3 miles west of Fayetteville, Texas and 10 miles east of La Grange, Texas. The reservoir was created in 1978 when a dam was built on the creek to provide a cooling pond for the Fayette Power Project which provides electrical generation to Fayette County and surrounding areas. The dam, lake, and power plant are managed by the Lower Colorado River Authority. There is very little vegetation compared to what can usually be found in fisheries, and some invasive plant species are present. The lake is open to the public for recreational activities, including boating, fishing, camping, and hiking.
Fayette County Reservoir is also known as Lake Fayette.
Description
Fayette County Reservoir is located within the Post Oak Savannah ecoregion in Texas. Habitat in the littoral zone is mainly natural and rocky shoreline. The water level is supplied and maintained by the Colorado River. Its function as a power plant cooling reservoir increases water temperatures throughout the lake. Water clarity is considered normal, and nutrient levels are high. The dam itself is composed of compacted soil approximately 96 ft. high and 15,259 ft. long.
History
Construction of the dam for Fayette County Reservoir impounded the upstream section of Cedar Creek, turning it into a recharge area where it was previously a groundwater discharge area. Most of the vegetation in the area was removed during construction.
Lake Fayette used to be the site of a small town called Biegel, which was famous for its pickles. Outlines of houses may be visible on fish finders. The Biegel-December House, the Legler Log House, and the Gentner-Kroll-Polasek Farmstead were historic structures present in the construction area, and were relocated prior to construction. Other historic and archaeological sites were present on the construction site, and 25 sites were investigated by the Texas Archaeological Survey. No other structures or artifacts were removed.
Fish and plant populations
Fayette County Reservoir has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Fayette County Reservoir include catfish, largemouth bass, and sunfish. Largemouth Bass is considered the most popular sport fishing species in this reservoir, and fish are typically abundant and active between February and June. The Channel Catfish is another important sport fishing species, and their populations are being closely monitored as a result of decreasing abundance. The most abundant prey species in the reservoir are Bluegill, Gizzard Shad, and Threadfin Shad.
The level of overall coverage by aquatic vegetation in Fayette County Reservoir is lower than what is typical for fisheries. A 2012 survey of aquatic vegetation found an invasive plant species, Eurasian watermilfoil, but more recent surveys indicate that it is no longer present, and coverage by other invasive species is low.
Recreational uses
Boating, fishing, and camping are popular recreational uses of the lake. There are boat ramps and piers available to visitors, as well as access to shoreline for fishing. Fishing tournaments are held annually, notably for largemouth bass. There is also a playground, a 3-mile trail connecting Oak Thicket Park and Park Prairie Park, and a nature trail in Oak Thicket Park.
References
External links
Fayette County Reservoir - Texas Parks & Wildlife
Lower Colorado River Authority
Biegel, Texas Underwater Ghost Town
Fishing Regulations for Fayette County
Fayette County
Protected areas of Fayette County, Texas
Lower Colorado River Authority
Bodies of water of Fayette County, Texas
Cooling ponds | Fayette County Reservoir | [
"Chemistry",
"Environmental_science"
] | 716 | [
"Cooling ponds",
"Water pollution"
] |
5,733,841 | https://en.wikipedia.org/wiki/Beartown%20State%20Forest | Beartown State Forest is a publicly owned forest with recreational features located in the towns of Great Barrington, Monterey, Lee, and Tyringham, Massachusetts. The state forest's more than include of recreational parkland. It is managed by the Massachusetts Department of Conservation and Recreation.
History
The forest was established with the state's purchase of 5000 acres in 1921. Forest roads were created by workers with the Civilian Conservation Corps beginning in 1933. Major CCC projects included the building of an earthen dam to create Benedict Pond. The CCC camps were active here until 1940.
Flora and fauna
Wildlife include deer, bobcats, fishers, black bear, and beaver. Flora includes deciduous forests, various flowering shrubs and wildflowers. Two areas of old growth forest exist in the park. At Burgoyne Pass (), there are of old-growth eastern hemlock, northern red oak, eastern white pine, sweet birch, and yellow birch. At East Brook, there are of old-growth eastern hemlock and yellow birch.
Activities and amenities
The forest has trails for horseback riding, mountain biking, snowmobiling, snowshoeing, and all-terrain vehicle use. A interpretive trail loops around Benedict Pond and a stretch of the Appalachian Trail passes near the pond and across the forest. Swimming, fishing, and a ramp for non-motorized boating are offered on Benedict Pond. There are also facilities for camping, picnicking and restricted hunting as well as handicapped-accessible beaches and restrooms.
See also
List of old growth forests in Massachusetts
References
External links
Beartown State Forest Department of Conservation and Recreation
Beartown State Forest Map Department of Conservation and Recreation
Massachusetts state forests
State parks of Massachusetts
Parks in Berkshire County, Massachusetts
Campgrounds in Massachusetts
Civilian Conservation Corps in Massachusetts
Great Barrington, Massachusetts
Monterey, Massachusetts
Tyringham, Massachusetts
Lee, Massachusetts
1921 establishments in Massachusetts
Old-growth forests
Protected areas established in 1921 | Beartown State Forest | [
"Biology"
] | 387 | [
"Old-growth forests",
"Ecosystems"
] |
13,718,304 | https://en.wikipedia.org/wiki/Terminal%20countdown%20demonstration%20test | A terminal countdown demonstration test (TCDT) is a simulation of the final hours of a launch countdown and serves as a practice exercise in which both the launch team and flight crew rehearse launch day timelines and procedures. In the specific case of a TCDT for the Space Shuttle, the test culminated in a simulated ignition and RSLS Abort (automated shutdown of the orbiter's main engines). Following the simulated abort, the flight crew was briefed on emergency egress procedures and use of the fixed service structure slidewire system. On some earlier shuttle missions, and Apollo missions, the test would conclude with the flight crew evacuating the launch pad by use of these emergency systems, but this is no longer part of the test.
Unmanned carrier rocket launches also undergo TCDTs, in which countdown procedures are followed. These vary for specific rockets; for example, solid-fuelled rockets would not simulate an engine shutdown, as it is impossible to shut down a solid rocket after it has been lit.
TCDTs typically are carried out a few days before launch.
See also
Space Shuttle program
Ares (rocket)
References
Ares (rocket family)
Space Shuttle program
Spaceflight
Time | Terminal countdown demonstration test | [
"Physics",
"Astronomy",
"Mathematics"
] | 243 | [
"Physical quantities",
"Time",
"Time stubs",
"Outer space",
"Outer space stubs",
"Quantity",
"Astronomy stubs",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Spaceflight"
] |
13,718,716 | https://en.wikipedia.org/wiki/N-Acetyltalosaminuronic%20acid | N-Acetyltalosaminuronic acid is a uronic acid. It is a component of pseudopeptidoglycan, a structural polymer found in the cell walls in some types of Archaea.
Amino sugars
Sugar acids | N-Acetyltalosaminuronic acid | [
"Chemistry"
] | 51 | [
"Amino sugars",
"Carbohydrates",
"Sugar acids",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
13,721,033 | https://en.wikipedia.org/wiki/Etifelmine | Etifelmine (INN; also known as gilutensin) is a stimulant drug. It was used for the treatment of hypotension (low blood pressure).
Synthesis
The base-catalyzed reaction between benzophenone (1) and butyronitrile (2) gives 2-[hydroxy(diphenyl)methyl]butanenitrile (3). Catalytic hydrogenation reduces the nitrile group to a primary amine giving 1,1-diphenyl-2-ethyl-3-aminopropanol (4). The tertiary hydroxyl group is dehydrated by treatment with anhydrous hydrogen chloride gas, completing the synthesis of etifelmine (5).
See also
2-MDP
Pridefine
References
Stimulants
Amines
Benzhydryl compounds | Etifelmine | [
"Chemistry"
] | 182 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
13,721,526 | https://en.wikipedia.org/wiki/Standalone%20program | A standalone program, also known as a freestanding program, is a computer program that does not load any external module, library function or program and that is designed to boot with the bootstrap procedure of the target processor – it runs on bare metal. In early computers like the ENIAC, which had no concept of an operating system, standalone programs were the only way to run a computer. Standalone programs are usually written in assembly language for specific hardware.
Later, standalone programs were typically provided for utility functions such as disk formatting. Computers with very limited memory may also run only standalone programs; this was the case for most computers until the mid-1950s, and it remains common for embedded processors.
Standalone programs are now mainly limited to SoCs or microcontrollers (where battery life, price, and data space are at a premium) and to critical systems: where every possible set of inputs and errors must be tested and thus every potential output known; where fully independent (separate physical suppliers and programming teams) yet fully parallel system-state monitoring is required; or where the attack surface must be minimized, so that an operating system would add unacceptable complexity and uncertainty (examples include industrial operator safety interrupts, commercial airliners, medical devices, ballistic missile launch controls, and lithium-battery charge controllers in consumer devices, where fire hazard is a concern and the chip cost is approximately 10 cents). Resource-limited microcontrollers can also be made more tolerant of varied environmental conditions than the more powerful hardware needed for an operating system; this is possible because the much lower clock frequency, wider pin spacing, lack of large data buses (e.g. DDR4 RAM modules), and limited transistor count allow for wider design margins and thus the potential for more robust electrical and physical properties, both in circuit layout and in material choices.
See also
Bare machine
References
Legacy systems
Computer programming | Standalone program | [
"Technology",
"Engineering"
] | 369 | [
"Computer programming",
"Computer systems",
"Software engineering",
"Legacy systems",
"Computers",
"History of computing"
] |
13,722,767 | https://en.wikipedia.org/wiki/Willmore%20conjecture | In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014.
Willmore energy
Let v : M → R3 be a smooth immersion of a compact, orientable surface. Giving M the Riemannian metric induced by v, let H : M → R be the mean curvature (the arithmetic mean of the principal curvatures κ1 and κ2 at each point). In this notation, the Willmore energy W(M) of M is given by
$$ W(M) = \int_M H^2 \, dA , $$
where dA is the area element of the induced metric.
It is not hard to prove that the Willmore energy satisfies W(M) ≥ 4π, with equality if and only if M is an embedded round sphere.
Statement
Calculation of W(M) for a few examples suggests that there should be a better bound than W(M) ≥ 4π for surfaces with genus g(M) > 0. In particular, calculation of W(M) for tori with various symmetries led Willmore to propose in 1965 the following conjecture, which now bears his name
For every smooth immersed torus M in R3, W(M) ≥ 2π².
In 1982, Peter Wai-Kwong Li and Shing-Tung Yau proved the conjecture in the non-embedded case, showing that if v : M → R3 is an immersion of a compact surface which is not an embedding, then W(M) is at least 8π.
In 2012, Fernando Codá Marques and André Neves proved the conjecture in the embedded case, using the Almgren–Pitts min-max theory of minimal surfaces. Martin Schmidt claimed a proof in 2002, but it was not accepted for publication in any peer-reviewed mathematical journal (although it did not contain a proof of the Willmore conjecture, he proved some other important conjectures in it). Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer).
References
Conjectures that have been proved
Surfaces
Theorems in differential geometry
de:Willmore-Energie | Willmore conjecture | [
"Mathematics"
] | 475 | [
"Theorems in differential geometry",
"Theorems in geometry",
"Conjectures that have been proved",
"Mathematical problems",
"Mathematical theorems"
] |
13,725,281 | https://en.wikipedia.org/wiki/Differential%20inclusion | In mathematics, differential inclusions are a generalization of the concept of ordinary differential equation of the form
$$ \frac{dx}{dt}(t) \in F\bigl(t, x(t)\bigr), $$
where F is a multivalued map, i.e. F(t, x) is a set rather than a single point. Differential inclusions arise in many situations including differential variational inequalities, projected dynamical systems, Moreau's sweeping process, linear and nonlinear complementarity dynamical systems, discontinuous ordinary differential equations, switching dynamical systems, and fuzzy set arithmetic.
For example, the basic rule for Coulomb friction is that the friction force has magnitude μN in the direction opposite to the direction of slip, where N is the normal force and μ is a constant (the friction coefficient). However, if the slip is zero, the friction force can be any force in the correct plane with magnitude smaller than or equal to μN. Thus, writing the friction force as a function of position and velocity leads to a set-valued function.
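Restricted to one-dimensional slip with velocity v, this friction law can be written as the set-valued map below (a standard formulation, with the sign convention chosen purely for illustration):

$$ F_{\text{fric}}(v) \;=\; \begin{cases} \{\, -\mu N \operatorname{sgn}(v) \,\}, & v \neq 0, \\[4pt] [\,-\mu N,\ \mu N\,], & v = 0 . \end{cases} $$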
In a differential inclusion we not only take a set-valued map on the right hand side; we can also prescribe a subset of a Euclidean space in which the derivative is required to lie. The aim is then to find a function satisfying the differential inclusion almost everywhere in a given open bounded set.
Theory
Existence theory usually assumes that F(t, x) is an upper hemicontinuous function of x, measurable in t, and that F(t, x) is a closed, convex set for all t and x.
Existence of solutions for the initial value problem
for a sufficiently small time interval [t0, t0 + ε), ε > 0 then follows.
Global existence can be shown provided F does not allow "blow-up", i.e. the solution escaping to infinity in finite time.
Existence theory for differential inclusions with non-convex F(t, x) is an active area of research.
Uniqueness of solutions usually requires other conditions.
For example, suppose the map F satisfies a one-sided Lipschitz condition.
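A standard way of writing such a condition for a set-valued map F (a representative form, assuming the Euclidean inner product) is:

$$ \langle x_1 - x_2,\ \xi_1 - \xi_2 \rangle \;\le\; C \, \lVert x_1 - x_2 \rVert^2 \qquad \text{for all } \xi_1 \in F(x_1),\ \xi_2 \in F(x_2), $$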
for some C for all x1 and x2. Then the initial value problem
has a unique solution.
This is closely related to the theory of maximal monotone operators, as developed by Minty and Haïm Brezis.
Filippov's theory only allows for discontinuities in the derivative, but allows no discontinuities in the state, i.e. the state itself must be continuous. Schatzman and later Moreau (who gave it the currently accepted name) extended the notion to the measure differential inclusion (MDI), in which the inclusion is evaluated by taking the limit from above.
Applications
Differential inclusions can be used to understand and suitably interpret discontinuous ordinary differential equations, such as arise for Coulomb friction in mechanical systems and ideal switches in power electronics. An important contribution has been made by A. F. Filippov, who studied regularizations of discontinuous equations. Further, the technique of regularization was used by N.N. Krasovskii in the theory of differential games.
Differential inclusions are also found at the foundation of non-smooth dynamical systems (NSDS) analysis, which is used in the analog study of switching electrical circuits using idealized component equations (for example using idealized, straight vertical lines for the sharply exponential forward and breakdown conduction regions of a diode characteristic) and in the study of certain non-smooth mechanical system such as stick-slip oscillations in systems with dry friction or the dynamics of impact phenomena. Software that solves NSDS systems exists, such as INRIA's Siconos.
For continuous functions, when the fuzzy concept is used in a differential inclusion, a new notion arises: the fuzzy differential inclusion, which has applications in atmospheric dispersion modeling and in cybernetics for medical imaging.
See also
Stiffness, which affects ODEs/DAEs for functions with "sharp turns" and which affects numerical convergence
References
Dynamical systems
Variational analysis | Differential inclusion | [
"Physics",
"Mathematics"
] | 808 | [
"Mechanics",
"Dynamical systems"
] |
13,725,692 | https://en.wikipedia.org/wiki/List%20of%20public%20transport%20smart%20cards | The following tables list smart cards used for public transport and other electronic purse applications.
Africa
Americas
Asia and Oceania
Europe
Gallery
See also
Calypso, an international electronic ticketing standard, originally designed by a group of transit operators
CIPURSE, is an open security standard for transit fare collection systems
Smartcards on buses and trams in Great Britain
Smartcards on National Rail (Great Britain)
References
Public transport fare collection
Lists of brands
Technology-related lists
Transport lists
Macau Pass S.A. | List of public transport smart cards | [
"Physics"
] | 98 | [
"Physical systems",
"Transport",
"Transport lists"
] |
3,160,029 | https://en.wikipedia.org/wiki/Numberlink | Numberlink is a type of logic puzzle involving finding paths to connect numbers in a grid.
Rules
The player has to pair up all the matching numbers on the grid with single continuous lines (or paths). The lines cannot branch off or cross over each other, and the numbers have to fall at the end of each line (i.e., not in the middle).
It is considered that a problem is well-designed only if it has a unique solution and all the cells in the grid are filled, although some Numberlink designers do not stipulate this.
History
In 1897, a slightly different form of the puzzle was printed in the Brooklyn Daily Eagle, in a column by Sam Loyd. Another early, printed version of Number Link can be found in Henry Ernest Dudeney's book Amusements in mathematics (1917) as a puzzle for motorists (puzzle no. 252). This puzzle type was popularized in Japan by Nikoli as Arukone (アルコネ, Alphabet Connection) and Nanbarinku (ナンバーリンク, Number Link). The only difference between Arukone and Nanbarinku is that in Arukone the clues are letter pairs (as in Dudeney's puzzle), while in Nanbarinku the clues are number pairs.
Three books consisting entirely of Numberlink puzzles have been published by Nikoli.
Versions of this known as Wire Storm, Flow Free and Alphabet Connection have been released as apps for iOS, Android and Windows Phone.
Computational complexity
As a computational problem, finding a solution to a given Numberlink puzzle is NP-complete.
NP-completeness is maintained even if "zig-zag" paths are allowed. Informally, this means paths may have "unnecessary bends" in them (see the reference for a more technical explanation).
See also
List of Nikoli puzzle types
References
External links
Online version of Numberlink in HTML5
Logic puzzles
NP-complete problems | Numberlink | [
"Mathematics"
] | 397 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
3,160,379 | https://en.wikipedia.org/wiki/Organ%20printing | Organ printing utilizes techniques similar to conventional 3D printing where a computer model is fed into a printer that lays down successive layers of plastics or wax until a 3D object is produced. In the case of organ printing, the material being used by the printer is a biocompatible plastic. The biocompatible plastic forms a scaffold that acts as the skeleton for the organ that is being printed. As the plastic is being laid down, it is also seeded with human cells from the patient's organ that is being printed for. After printing, the organ is transferred to an incubation chamber to give the cells time to grow. After a sufficient amount of time, the organ is implanted into the patient.
To many researchers the ultimate goal of organ printing is to create organs that can be fully integrated into the human body. Successful organ printing has the potential to impact several industries, notably artificial organs, organ transplants, pharmaceutical research, and the training of physicians and surgeons.
History
The field of organ printing stemmed from research in the area of stereolithography, the basis for the practice of 3D printing, which was invented in 1984. In this early era of 3D printing, it was not possible to create lasting objects because the material used for the printing process was not durable. 3D printing was instead used as a way to model potential end products that would eventually be made from different materials under more traditional techniques. In the beginning of the 1990s, nanocomposites were developed that allowed 3D printed objects to be more durable, permitting 3D printed objects to be used for more than just models. It was around this time that those in the medical field began considering 3D printing as an avenue for generating artificial organs. By the late 1990s, medical researchers were searching for biocompatible materials that could be used in 3D printing.
The concept of bioprinting was first demonstrated in 1988. At this time, a researcher used a modified HP inkjet printer to deposit cells using cytoscribing technology. Progress continued in 1999 when the first artificial organ made using bioprinting was produced by a team of scientists led by Dr. Anthony Atala at the Wake Forest Institute for Regenerative Medicine. The scientists at Wake Forest printed an artificial scaffold for a human bladder and then seeded the scaffold with cells from their patient. Using this method, they were able to grow a functioning organ, and ten years after implantation the patient had no serious complications.
After the bladder at Wake Forest, strides were taken towards printing other organs. In 2002, a miniature, fully functional kidney was printed. In 2003, Dr. Thomas Boland from Clemson University patented the use of inkjet printing for cells. This process utilized a modified spotting system for the deposition of cells into organized 3D matrices placed on a substrate. This printer allowed for extensive research into bioprinting and suitable biomaterials. For instance, since these initial findings, the 3D printing of biological structures has been further developed to encompass the production of tissue and organ structures, as opposed to cell matrices. Additionally, more techniques for printing, such as extrusion bioprinting, have been researched and subsequently introduced as a means of production.
In 2004, the field of bioprinting was drastically changed by yet another new bioprinter. This new printer was able to use live human cells without having to build an artificial scaffold first. In 2009, Organovo used this novel technology to create the first commercially available bioprinter. Soon after, Organovo's bioprinter was used to develop a biodegradable blood vessel, the first of its kind, without a cell scaffold.
In the 2010s and beyond, further research has been put forth into producing other organs, such as the liver and heart valves, and tissues, such as a blood-borne network, via 3D printing. In 2019, scientists in Israel made a major breakthrough when they were able to print a rabbit-sized heart with a network of blood vessels that were capable of contracting like natural blood vessels. The printed heart had the correct anatomical structure and function compared to real hearts. This breakthrough represented a real possibility of printing fully functioning human organs. In fact, scientists at the Warsaw Foundation for Research and Development of Science in Poland have been working on creating a fully artificial pancreas using bioprinting technology. As of today, these scientists have been able to develop a functioning prototype. This is a growing field and much research is still being conducted.
3D printing techniques
3D printing for the manufacturing of artificial organs has been a major topic of study in biological engineering. As the rapid manufacturing techniques entailed by 3D printing become increasingly efficient, their applicability in artificial organ synthesis has grown more evident. Some of the primary benefits of 3D printing lie in its capability of mass-producing scaffold structures, as well as the high degree of anatomical precision in scaffold products. This allows for the creation of constructs that more effectively resemble the microstructure of a natural organ or tissue structure. Organ printing using 3D printing can be conducted using a variety of techniques, each of which confers specific advantages that can be suited to particular types of organ production.
Sacrificial writing into functional tissue (SWIFT)
Sacrificial writing into functional tissue (SWIFT) is a method of organ printing where living cells are packed tightly to mimic the density that occurs in the human body. While packing, tunnels are carved to mimic blood vessels, and oxygen and essential nutrients are delivered via these tunnels. This technique pieces together other methods that only packed cells or only created vasculature. SWIFT combines both and is an improvement that brings researchers closer to creating functional artificial organs.
Stereolithographic (SLA) 3D bioprinting
This method of organ printing uses spatially controlled light or laser to create a 2D pattern that is layered through a selective photopolymerization in the bio-ink reservoir. A 3D structure can then be built in layers using the 2D pattern. Afterwards the bio-ink is removed from the final product. SLA bioprinting allows for the creation of complex shapes and internal structures. The feature resolution for this method is extremely high and the only disadvantage is the scarcity of resins that are biocompatible.
Drop-based bioprinting (Inkjet)
Drop-based bioprinting creates cellular constructs using droplets of a designated material, which has oftentimes been combined with a cell line. Cells themselves can also be deposited in this manner, with or without polymer. When printing polymer scaffolds using these methods, each drop begins to polymerize upon contact with the substrate surface, and the drops merge into a larger structure as they coalesce. Polymerization can happen through a variety of mechanisms depending on the polymer used. For instance, alginate polymerization is initiated by calcium ions in the substrate, which diffuse into the liquified bioink and allow the formation of a solid gel. Drop-based bioprinting is commonly used because of its speed, though this may make it less suitable for more complicated organ structures.
Extrusion bioprinting
Extrusion bioprinting involves the continuous deposition of a particular printing material and cell line from an extruder, a type of mobile print head. This tends to be a more controlled and gentler process for material or cell deposition, and allows greater cell densities to be used in the construction of 3D tissue or organ structures. However, such benefits are offset by the slower printing speeds entailed by this technique. Extrusion bioprinting is frequently coupled with UV light, which photopolymerizes the printed material to create a more stable, integrated construct.
Fused deposition modeling
Fused deposition modeling (FDM) is more common and inexpensive compared to selective laser sintering. This printer uses a printhead that is similar in structure to an inkjet printer; however, ink is not used. Plastic beads are heated at high temperature and released from the printhead as it moves, building the object in thin layers. A variety of plastics can be used with FDM printers. Additionally, most of the parts printed by FDM are typically composed of the same thermoplastics that are utilized in traditional injection molding or machining techniques. Due to this, these parts have analogous durability, mechanical properties, and stability characteristics. Precision control allows for a consistent release amount and specific location deposition for each layer contributing to the shape. As the heated plastic is deposited from the printhead, it fuses or bonds to the layers below. As each layer cools, it hardens and gradually takes hold of the solid shape intended to be created as more layers are contributed to the structure.
Selective laser sintering
Selective laser sintering (SLS) uses powdered material as the substrate for printing new objects. SLS can be used to create metal, plastic, and ceramic objects. This technique uses a laser controlled by a computer as the power source to sinter powdered material. The laser traces a cross-section of the shape of the desired object in the powder, which fuses it together into a solid form. A new layer of powder is then laid down and the process repeats itself, building each layer with every new application of powder, one by one, to form the entirety of the object. One of the advantages of SLS printing is that it requires very little additional tooling, i.e. sanding, once the object is printed. Recent advances in organ printing using SLS include 3D constructs of craniofacial implants as well as scaffolds for cardiac tissue engineering.
Printing materials
Printing materials must fit a broad spectrum of criteria, one of the foremost being biocompatibility. The resulting scaffolds formed by 3D printed materials should be physically and chemically appropriate for cell proliferation. Biodegradability is another important factor, and ensures that the artificially formed structure can be broken down upon successful transplantation, to be replaced by a completely natural cellular structure. Due to the nature of 3D printing, materials used must be customizable and adaptable, being suited to a wide array of cell types and structural conformations.
Natural polymers
Materials for 3D printing usually consist of alginate or fibrin polymers that have been integrated with cellular adhesion molecules, which support the physical attachment of cells. Such polymers are specifically designed to maintain structural stability and be receptive to cellular integration. The term bio-ink has been used as a broad classification of materials that are compatible with 3D bioprinting. Hydrogel alginates have emerged as one of the most commonly used materials in organ printing research, as they are highly customizable and can be fine-tuned to simulate certain mechanical and biological properties characteristic of natural tissue. The ability of hydrogels to be tailored to specific needs allows them to be used as an adaptable scaffold material suited to a variety of tissue or organ structures and physiological conditions. A major challenge in the use of alginate is its stability and slow degradation, which makes it difficult for the artificial gel scaffolding to be broken down and replaced with the implanted cells' own extracellular matrix. Alginate hydrogel that is suitable for extrusion printing is also often less structurally and mechanically sound; however, this issue can be mitigated by the incorporation of other biopolymers, such as nanocellulose, to provide greater stability. The properties of the alginate or mixed-polymer bioink are tunable and can be altered for different applications and types of organs.
Other natural polymers that have been used for tissue and 3D organ printing include chitosan, hydroxyapatite (HA), collagen, and gelatin. Gelatin is a thermosensitive polymer with properties exhibiting excellent water solubility, biodegradability, biocompatibility, as well as low immunologic rejection. These qualities are advantageous and result in high acceptance of the 3D bioprinted organ when implanted in vivo.
Synthetic polymers
Synthetic polymers are made artificially through chemical reactions of monomers. Their mechanical properties are favorable in that their molecular weights can be regulated from low to high based on differing requirements. However, their lack of functional groups and structural complexity has limited their usage in organ printing. Current synthetic polymers with excellent 3D printability and in vivo tissue compatibility include polyethylene glycol (PEG), poly(lactic-glycolic acid) (PLGA), and polyurethane (PU). PEG is a biocompatible, nonimmunogenic synthetic polyether that has tunable mechanical properties for use in 3D bioprinting. Though PEG has been utilized in various 3D printing applications, the lack of cell-adhesive domains has limited further use in organ printing. PLGA, a synthetic copolymer, is widely tolerated across living organisms, such as animals, humans, plants, and microorganisms. PLGA is used in conjunction with other polymers to create different material systems, including PLGA-gelatin and PLGA-collagen, all of which enhance the mechanical properties of the material, are biocompatible when placed in vivo, and have tunable biodegradability. PLGA has most often been used in printed constructs for bone, liver, and other large organ regeneration efforts. Lastly, PU is unique in that it can be classified into two groups: biodegradable or non-biodegradable. It has been used in the field of bioprinting due to its excellent mechanical and bioinert properties. An application of PU would be inanimate artificial hearts; however, using existing 3D bioprinters, this polymer cannot be printed. A new elastomeric PU was created composed of PEG and polycaprolactone (PCL) monomers. This new material exhibits excellent biocompatibility, biodegradability, bioprintability, and biostability for use in complex bioartificial organ printing and manufacturing. Because it supports the construction of dense vascular and neural networks, this material can be applied to the printing of complex organs such as the brain, heart, lung, and kidney.
Natural-synthetic hybrid polymers
Natural-synthetic hybrid polymers are based on the synergic effect between synthetic and biopolymeric constituents. Gelatin-methacryloyl (GelMA) has become a popular biomaterial in the field of bioprinting. GelMA has shown viable potential as a bioink material due to its suitable biocompatibility and readily tunable physicochemical properties. Hyaluronic acid (HA)-PEG is another natural-synthetic hybrid polymer that has proven to be very successful in bioprinting applications. HA combined with synthetic polymers aids in obtaining more stable structures with high cell viability and limited loss in mechanical properties after printing. A recent application of HA-PEG in bioprinting is the creation of an artificial liver. Lastly, a series of biodegradable polyurethane (PU)-gelatin hybrid polymers with tunable mechanical properties and efficient degradation rates have been implemented in organ printing. This hybrid has the ability to print complicated structures such as a nose-shaped construct.
All of the polymers described above have the potential to be manufactured into implantable, bioartificial organs for purposes including, but not limited to, customized organ restoration, drug screening, as well as metabolic model analysis.
Cell sources
The creation of a complete organ often requires incorporation of a variety of different cell types, arranged in distinct and patterned ways. One advantage of 3D-printed organs, compared to traditional transplants, is the potential to use cells derived from the patient to make the new organ. This significantly decreases the likelihood of transplant rejection, and may remove the need for immunosuppressive drugs after transplant, which would reduce the health risks of transplants. However, since it may not always be possible to collect all the needed cell types, it may be necessary to collect adult stem cells or induce pluripotency in collected tissue. This involves resource-intensive cell growth and differentiation and comes with its own set of potential health risks, since cell proliferation in a printed organ occurs outside the body and requires external application of growth factors. However, the ability of some tissues to self-organize into differentiated structures may provide a way to simultaneously construct the tissues and form distinct cell populations, improving the efficacy and functionality of organ printing.
Types of printers and processes
The types of printers used for organ printing include:
Inkjet printer
Multi-nozzle
Hybrid printer
Electrospinning
Drop-on-demand
These printers are used in the methods described previously. Each printer requires different materials and has its own advantages and limitations.
Applications
Organ donation
Currently, the sole method of treatment for those in organ failure is to await a transplant from a living or recently deceased donor. In the United States alone, there are over 100,000 patients on the organ transplant list waiting for donor organs to become available. Patients on the donor list can wait days, weeks, months, or even years for a suitable organ to become available. The average wait times for some common organ transplants are as follows: four months for a heart or lung, eleven months for a liver, two years for a pancreas, and five years for a kidney. This is a significant increase from the 1990s, when a patient could wait as little as five weeks for a heart. These extensive wait times are due to a shortage of organs as well as the requirement for finding an organ that is suitable for the recipient. An organ is deemed suitable for a patient based on blood type, comparable body size between donor and recipient, the severity of the patient's medical condition, the length of time the patient has been waiting for an organ, patient availability (i.e. ability to contact the patient, whether the patient has an infection), the proximity of the patient to the donor, and the viability time of the donor organ. In the United States, 20 people die every day waiting for organs. 3D organ printing has the potential to remove both of these issues; if organs could be printed as soon as there is need, there would be no shortage. Additionally, seeding printed organs with a patient's own cells would eliminate the need to screen donor organs for compatibility.
Physician and surgical training
Surgical usage of 3D printing has evolved from printing surgical instrumentation to the development of patient-specific technologies for total joint replacements, dental implants, and hearing aids. In the field of organ printing, there are applications for both patients and surgeons. For instance, printed organs have been used to model structure and injury to better understand the anatomy and discuss a treatment regime with patients. In these cases, functionality of the organ is not required, and the printed model serves as a proof of concept. These model organs provide a basis for improving surgical techniques, training inexperienced surgeons, and moving towards patient-specific treatments.
Pharmaceutical research
3D organ printing technology permits the fabrication of high degrees of complexity with great reproducibility, in a fast and cost-effective manner. 3D printing has been used in pharmaceutical research and fabrication, providing a transformative system allowing precise control of droplet size and dose, personalized medicine, and the production of complex drug-release profiles. This technology allows for implantable drug delivery devices, in which the drug is injected into the 3D printed organ and is released once in vivo. Also, organ printing has been used as a transformative tool for in vitro testing. The printed organ can be utilized in drug discovery and in dosage research on drug-release factors.
Organ-on-a-chip
Organ printing technology can also be combined with microfluidic technology to develop organs-on-chips. These organs-on-chips have the potential to be used for disease models, aiding in drug discovery, and performing high-throughput assays. Organ-on-chips work by providing a 3D model that imitates the natural extracellular matrix, allowing them to display realistic responses to drugs. Thus far, research has been focused on developing liver-on-a-chip and heart-on-a-chip, but there exists the potential to develop an entire body-on-a-chip model.
By combining 3D printed organs, researchers are able to create a body-on-a-chip. The heart-on-a-chip model has already been used to investigate how several drugs with heart rate-based negative side effects, such as the chemotherapeutic drug doxorubicin, could affect people on an individual basis. The new body-on-a-chip platform includes liver, heart, lungs, and kidney-on-a-chip. The organs-on-a-chip are separately printed or constructed and then integrated together. Using this platform, drug toxicity studies are performed in high throughput, lowering the cost and increasing the efficiency in the drug-discovery pipeline.
Legal and safety
3D-printing techniques have been used in a variety of industries for the overall goal of fabricating a product. Organ printing, on the other hand, is a novel industry that utilizes biological components to develop therapeutic applications for organ transplants. Due to the increased interest in this field, regulation and ethical considerations desperately need to be established. Specifically, there can be legal complications from pre-clinical to clinical translation for this treatment method.
Regulation
The current American regulation for organ matching is centered on the national registry of organ donors after the National Organ Transplant Act was passed in 1984. This act was set in place to ensure equal and honest distribution, although it has been proven insufficient due to the large demand for organ transplants. Organ printing can assist in diminishing the imbalance between supply and demand by printing patient-specific organ replacements, all of which is unfeasible without regulation. The Food and Drug Administration (FDA) is responsible for regulation of biologics, devices, and drugs in the United States. Due to the complexity of this therapeutic approach, the location of organ printing on the spectrum has not been discerned. Studies have characterized printed organs as multi-functional combination products, meaning they fall between the biologics and devices sectors of the FDA; this leads to more extensive processes for review and approval. In 2016, the FDA issued draft guidance on the Technical Considerations for Additive Manufactured Devices and is currently evaluating new submissions for 3D printed devices. However, the technology itself is not advanced enough for the FDA to mainstream it directly. Currently, the 3D printers, rather than the finished products, are the main focus in safety and efficacy evaluations in order to standardize the technology for personalized treatment approaches. From a global perspective, only South Korea and Japan's medical device regulation administrations have provided guidelines that are applicable to 3D bio-printing.
There are also concerns with intellectual property and ownership. These can have a large impact on more consequential matters such as piracy, quality control for manufacturing, and unauthorized use on the black market. These considerations are focused more on the materials and fabrication processes; they are more extensively explained in the legal aspects subsection of 3D printing.
Ethical considerations
From an ethical standpoint, there are concerns with respect to the availability of organ printing technologies, the cell sources, and public expectations. Although this approach may be less expensive than traditional surgical transplantation, there is skepticism in regards to social availability of these 3D printed organs. Contemporary research has found that there is potential social stratification for the wealthier population to have access to this therapy while the general population remains on the organ registry. The cell sources mentioned previously also need to be considered. Organ printing can decrease or eliminate animal studies and trials, but also raises questions on the ethical implications of autologous and allogenic sources. More specifically, studies have begun to examine future risks for humans undergoing experimental testing. Generally, this application can give rise to social, cultural, and religious differences, making it more difficult for worldwide integration and regulation. Overall, the ethical considerations of organ printing are similar to those of general ethics of bioprinting, but are extrapolated from tissue to organ. Altogether, organ printing possesses short- and long-term legal and ethical consequences that need to be considered before mainstream production can be feasible.
Impact
Organ printing for medical applications is still in the developmental stages. Thus, the long-term impacts of organ printing have yet to be determined. Researchers hope that organ printing could decrease the organ transplant shortage. There is currently a shortage of available organs, including livers, kidneys, and lungs. The lengthy wait time to receive life-saving organs is one of the leading causes of death in the United States, and nearly one third of deaths each year in the United States could be delayed or prevented with organ transplants. Currently, the only organ that has been 3D bioprinted and successfully transplanted into a human is a bladder. The bladder was formed from the host's bladder tissue. Researchers have proposed that a potential positive impact of 3D printed organs is the ability to customize organs for the recipient. Developments enabling an organ recipient's host cells to be used to synthesize organs decrease the risk of organ rejection.
The ability to print organs has decreased the demand for animal testing. Animal testing is used to determine the safety of products ranging from makeup to medical devices. Cosmetic companies are already using smaller tissue models to test new products on skin. The ability to 3D print skin reduces the need for animal trials for makeup testing. In addition, the ability to print models of human organs to test the safety and efficacy of new drugs further reduces the necessity for animal trials. Researchers at Harvard University determined that drug safety can be accurately tested on smaller tissue models of lungs. The company Organovo, which designed one of the initial commercial bioprinters in 2009, has displayed that biodegradable 3D tissue models can be used to research and develop new drugs, including those to treat cancer. An additional impact of organ printing includes the ability to rapidly create tissue models, therefore increasing productivity.
Challenges
One of the challenges of 3D printing organs is to recreate the vasculature required to keep the organs alive. Designing a correct vasculature is necessary for the transport of nutrients, oxygen, and waste. Blood vessels, especially capillaries, are difficult to print because of their small diameter. Progress has been made in this area at Rice University, where researchers designed a 3D printer to make vessels in biocompatible hydrogels and designed a model of lungs that can oxygenate blood. However, accompanying this technique is the challenge of replicating the other minute details of organs. It is difficult to replicate the entangled networks of airways, blood vessels, and bile ducts and the complex geometry of organs.
The challenges faced in the organ printing field extend beyond the research and development of techniques to solve the issues of multivascularization and difficult geometries. Before organ printing can become widely available, a sustainable source of cells must be found and large-scale manufacturing processes need to be developed. Additional challenges include designing clinical trials to test the long-term viability and biocompatibility of synthetic organs. While many developments have been made in the field of organ printing, more research must be conducted.
References
Tissue engineering
3D printing | Organ printing | [
"Chemistry",
"Engineering",
"Biology"
] | 5,528 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
3,165,608 | https://en.wikipedia.org/wiki/Stress%20corrosion%20cracking | Stress corrosion cracking (SCC) is the growth of crack formation in a corrosive environment. It can lead to unexpected and sudden failure of normally ductile metal alloys subjected to a tensile stress, especially at elevated temperature. SCC is highly chemically specific in that certain alloys are likely to undergo SCC only when exposed to a small number of chemical environments. The chemical environment that causes SCC for a given alloy is often one which is only mildly corrosive to the metal. Hence, metal parts with severe SCC can appear bright and shiny, while being filled with microscopic cracks. This factor makes it common for SCC to go undetected prior to failure. SCC often progresses rapidly, and is more common among alloys than pure metals. The specific environment is of crucial importance, and only very small concentrations of certain highly active chemicals are needed to produce catastrophic cracking, often leading to devastating and unexpected failure.
The stresses can be the result of the crevice loads due to stress concentration, or can be caused by the type of assembly or residual stresses from fabrication (e.g. cold working); the residual stresses can be relieved by annealing or other surface treatments. Unexpected and premature failure of chemical process equipment, for example, due to stress corrosion cracking constitutes a serious hazard in terms of safety of personnel, operating facilities and the environment. By weakening the reliability of these types of equipment, such failures also adversely affect productivity and profitability.
Mechanisms
Stress corrosion cracking mainly affects metals and metallic alloys. A comparable effect also known as environmental stress cracking also affects other materials such as polymers, ceramics and glass.
Metals
Lower pH and lower applied redox potential facilitate the evolution and the enrichment of hydrogen during the process of SCC, thus increasing the SCC intensity.
Certain austenitic stainless steels and aluminium alloys crack in the presence of chlorides. This limits the usefulness of austenitic stainless steel for containing water with higher than a few parts per million content of chlorides at temperatures above ;
mild steel cracks in the presence of alkali (e.g. boiler cracking and caustic stress corrosion cracking) and nitrates;
copper alloys crack in ammoniacal solutions (season cracking);
high-tensile steels have been known to crack in an unexpectedly brittle manner in a whole variety of aqueous environments, especially when chlorides are present.
With the possible exception of the latter, which is a special example of hydrogen cracking, all the others display the phenomenon of subcritical crack growth, i.e. small surface flaws propagate (usually smoothly) under conditions where fracture mechanics predicts that failure should not occur. That is, in the presence of a corrodent, cracks develop and propagate well below the critical stress intensity factor (\(K_{\mathrm{Ic}}\)). The subcritical value of the stress intensity, designated as \(K_{\mathrm{Iscc}}\), may be less than 1% of \(K_{\mathrm{Ic}}\).
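For orientation, this can be written with the standard linear-elastic fracture mechanics expression for the stress intensity of a surface flaw of depth a under a remote tensile stress σ (Y being a dimensionless geometry factor); this relation is standard background rather than something stated above:
\[ K_I = Y\,\sigma\sqrt{\pi a}, \qquad K_{\mathrm{Iscc}} \;\le\; K_I \;<\; K_{\mathrm{Ic}} \quad\text{(subcritical crack growth possible)}. \]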
Polymers
A similar process (environmental stress cracking) occurs in polymers, when products are exposed to specific solvents or aggressive chemicals such as acids and alkalis. As with metals, attack is confined to specific polymers and particular chemicals. Thus polycarbonate is sensitive to attack by alkalis, but not by acids. On the other hand, polyesters are readily degraded by acids, and SCC is a likely failure mechanism. Polymers are susceptible to environmental stress cracking where attacking agents do not necessarily degrade the materials chemically.
Nylon is sensitive to degradation by acids, a process known as hydrolysis, and nylon mouldings will crack when attacked by strong acids.
For example, the fracture surface of a fuel connector showed the progressive growth of the crack from acid attack (Ch) to the final cusp (C) of polymer. In this case the failure was caused by hydrolysis of the polymer by contact with sulfuric acid leaking from a car battery. The degradation reaction is the reverse of the synthesis reaction of the polymer:
Cracks can be formed in many different elastomers by ozone attack, another form of SCC in polymers. Tiny traces of the gas in the air will attack double bonds in rubber chains, with natural rubber, styrene-butadiene rubber, and nitrile butadiene rubber being most sensitive to degradation. Ozone cracks form in products under tension, but the critical strain is very small. The cracks are always oriented at right angles to the strain axis, so will form around the circumference in a rubber tube bent over. Such cracks are dangerous when they occur in fuel pipes because the cracks will grow from the outside exposed surfaces into the bore of the pipe, so fuel leakage and fire may follow. Ozone cracking can be prevented by adding anti-ozonants to the rubber before vulcanization. Ozone cracks were commonly seen in automobile tire sidewalls, but are now seen rarely thanks to the use of these additives. On the other hand, the problem does recur in unprotected products such as rubber tubing and seals.
Ceramics
This effect is significantly less common in ceramics which are typically more resilient to chemical attack. Although phase changes are common in ceramics under stress these usually result in toughening rather than failure (see Zirconium dioxide). Recent studies have shown that the same driving force for this toughening mechanism can also enhance oxidation of reduced cerium oxide, resulting in slow crack growth and spontaneous failure of dense ceramic bodies.
Glass
Subcritical crack propagation in glasses falls into three regions. In region I, the velocity of crack propagation increases with ambient humidity due to stress-enhanced chemical reaction between the glass and water. In region II, crack propagation velocity is diffusion controlled and dependent on the rate at which chemical reactants can be transported to the tip of the crack. In region III, crack propagation is independent of its environment, having reached a critical stress intensity. Chemicals other than water, like ammonia, can induce subcritical crack propagation in silica glass, but they must have an electron donor site and a proton donor site.
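Region-I behaviour is often summarised by an empirical power law (sometimes called Charles' law), v = v0 (K/K_Ic)^n, with the humidity dependence folded into the prefactor. A minimal sketch follows; the numerical constants are illustrative placeholders, not measured values for any particular glass.

```python
# Illustrative region-I subcritical crack growth in glass, assuming the
# empirical power law v = v0 * (K / K_Ic)**n (constants are placeholders).
K_Ic = 0.75e6        # Pa*sqrt(m), typical order of magnitude for soda-lime glass
v0, n = 1e-3, 20.0   # m/s and exponent, assumed for illustration only

def crack_velocity(K):
    """Region-I crack velocity for a stress intensity K (Pa*sqrt(m))."""
    return v0 * (K / K_Ic) ** n

for frac in (0.5, 0.7, 0.9):
    print(f"K = {frac:.1f} K_Ic  ->  v = {crack_velocity(frac * K_Ic):.3e} m/s")
```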
Prevention
The compressive residual stresses imparted by laser peening are precisely controlled both in location and intensity and can be applied to mitigate sharp transitions into tensile regions. Laser peening imparts deep compressive residual stresses on the order of 10 to 20 times deeper than conventional shot peening, making them significantly more beneficial at preventing SCC. Laser peening is widely used in the aerospace and power generation industries in gas fired turbine engines.
Material Selection: Choosing the right material for a specific environment can help prevent SCC. Materials with higher resistance to corrosion and stress corrosion cracking should be used in corrosive environments. For example, using stainless steel instead of carbon steel in a marine environment can reduce the likelihood of SCC.
Protective Coatings: Applying a protective coating or barrier can help prevent corrosive substances from coming into contact with the metal surface, thus reducing the likelihood of SCC. For example, using an epoxy coating on the interior surface of a pipeline can reduce the likelihood of SCC.
Cathodic Protection: Cathodic protection is a technique used to protect metals from corrosion by applying a small electrical current to the metal surface. This technique can also help prevent SCC by reducing the corrosion potential of the metal.
Environmental Controls: Controlling the environment around the metal can help prevent SCC. For example, reducing the temperature or acidity of the environment can help prevent SCC.
Inspection and Maintenance: Regular inspections and maintenance can help detect SCC before it causes a failure. This includes visual inspections, non-destructive testing, and monitoring of environmental factors.
Notable failures
A 32-inch diameter gas transmission pipeline, north of Natchitoches, Louisiana, belonging to the Tennessee Gas Pipeline exploded and burned from SCC on March 4, 1965, killing 17 people. At least 9 others were injured, and 7 homes 450 feet from the rupture were destroyed.
SCC caused the catastrophic collapse of the Silver Bridge in December 1967, when an eyebar suspension bridge across the Ohio River at Point Pleasant, West Virginia, suddenly failed. The main chain joint failed and the entire structure fell into the river, killing 46 people who were traveling in vehicles across the bridge. Rust in the eyebar joint had caused a stress corrosion crack, which went critical as a result of high bridge loading and low temperature. The failure was exacerbated by a high level of residual stress in the eyebar. The disaster led to a nationwide reappraisal of bridges.
Aloha Airlines Flight 243: In 1988, Aloha Airlines Flight 243 experienced a partial fuselage failure due to SCC. The Boeing 737-200 was flying from Hilo to Honolulu, Hawaii when a section of the fuselage ruptured, causing a decompression event. The investigation into the failure found that SCC had occurred in the aluminum skin of the fuselage due to the repeated pressurization and depressurization cycles of the aircraft. The incident led to changes in maintenance procedures and inspections for aircraft to prevent similar failures in the future.
Trans-Alaska Pipeline: In 2001, a section of the Trans-Alaska Pipeline failed due to SCC. The pipeline is used to transport crude oil from the North Slope of Alaska to the Valdez Marine Terminal. The failure occurred when a 34-foot section of the pipeline ruptured, causing a spill of over 285,000 gallons of crude oil. The investigation into the failure found that SCC had occurred in the pipeline due to the presence of water and bacteria, which had created a corrosive environment.
USS Hartford submarine periscope: In 2009, the periscope of the submarine USS Hartford failed due to SCC. The periscope is used to provide a view of the surface while the submarine is submerged. The failure occurred when the periscope was extended through the hull of the submarine, causing seawater to enter the periscope's seal. The seawater caused SCC to occur in the periscope's steel support structure, which led to the periscope falling back into the submarine. Fortunately, there were no injuries, but the submarine had to be taken out of service for repairs.
NDK Crystal: In 2009, a 50-foot-tall quartz-growing autoclave in a plant in Belvidere, Illinois violently ruptured due to SCC, causing one fatality and one injury in nearby businesses. The autoclave was filled with hot, highly pressurized sodium hydroxide solution, and the thick steel pressure vessel was not rated to withstand corrosive environments. However, the plant's operators incorrectly believed that the vessel walls would be protected from corrosion by the formation of a self-passivating layer of iron sodium silicate. Despite a previous SCC incident in 2007 and specific warnings from NDK Crystal's insurance provider, none of the plant's eight autoclaves were ever given safety inspections.
See also
References
Citations
General and cited references
ASM International, Metals Handbook (Desk Edition) Chapter 32 ("Failure Analysis"), American Society for Metals, (1997) pp 32–24 to 32–26
ASM Handbook Volume 11 Failure Analysis and Prevention (2002) "Stress-Corrosion Cracking" Revised by W.R. Warke, American Society of Metals. Pages 1738-1820.
External links
Corrosion
Engineering failures
Fracture mechanics
Materials degradation | Stress corrosion cracking | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 2,298 | [
"Structural engineering",
"Systems engineering",
"Fracture mechanics",
"Reliability engineering",
"Metallurgy",
"Technological failures",
"Materials science",
"Corrosion",
"Engineering failures",
"Electrochemistry",
"Civil engineering",
"Materials degradation"
] |
11,164,335 | https://en.wikipedia.org/wiki/Carboxypeptidase%20B | Carboxypeptidase B (, protaminase, pancreatic carboxypeptidase B, tissue carboxypeptidase B, peptidyl-L-lysine [L-arginine]hydrolase) is a carboxypeptidase that preferentially cleaves off basic amino acids arginine and lysine from the C-terminus of a peptide. This enzyme is secreted by the pancreas, and it travels to the small intestine, where it aids in protein digestion. Plasma carboxypeptidase B (carboxypeptidase B2) is responsible for converting the C5a protein into C5a des-Arg, with one less amino acid.
References
External links
The MEROPS online database for peptidases and their inhibitors: M14.003
EC 3.4.17
Metabolism | Carboxypeptidase B | [
"Chemistry",
"Biology"
] | 191 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
11,167,164 | https://en.wikipedia.org/wiki/Integrated%20Biosphere%20Simulator | IBIS-2 is version 2 of the land-surface model Integrated Biosphere Simulator (IBIS), which includes several major improvements and additions to the prototype model developed by Foley et al. [1996]. IBIS was designed to explicitly link land surface and hydrological processes, terrestrial biogeochemical cycles, and vegetation dynamics within a single physically consistent framework.
IBIS Functionality
The model considers transient changes in vegetation composition and structure in response to environmental change and is, therefore, classified as a Dynamic Global Vegetation Model (DGVM). This new version of IBIS has improved representations of land surface physics, plant physiology, canopy phenology, plant functional type (PFT) differences, and carbon allocation. Furthermore, IBIS-2 includes a new belowground biogeochemistry submodel, which is coupled to detritus production (litterfall and fine root turnover). All processes are organized in a hierarchical framework and operate at different time steps, ranging from 60 min to 1 year. Such an approach allows for explicit coupling among ecological, biophysical, and physiological processes occurring on different timescales.
IBIS Structure
The land surface module is based on the land surface transfer model (LSX) package of Thompson and Pollard, and simulates the energy, water, carbon, and momentum balance of the soil-vegetation-atmosphere system. The model represents two vegetation canopies (e.g., trees versus shrubs and grasses), eight soil layers, and three layers of snow (when required). The solar radiative transfer scheme of IBIS-2 has been simplified in comparison with LSX and IBIS-1; sunlit and shaded fractions of the canopies are no longer treated separately. The model now follows the approach of Sellers et al. [1986] and Bonan [1995]. Infrared radiation is simulated as if each vegetation layer is a semitransparent plane; canopy emissivity depends on foliage density. Another difference between IBIS-2 and IBIS-1 and LSX, is that IBIS-2 uses an empirical linear function of wind speed to estimate turbulent transfer between the soil surface and the lower vegetation canopy, and IBIS-1 and LSX use a logarithmic wind profile. The total evapotranspiration from the land surface is treated as the sum of three water vapor fluxes: evaporation from the soil surface, evaporation of water intercepted by vegetation canopies, and canopy transpiration.
IBIS simulates the variations of heat and moisture in the soil. The eight layers are described in terms of soil temperature, volumetric water content and ice content. All the processes occurring in the soil are influenced by the soil texture and amount of organic matter within the soil. One difference from the physiological processes in the previous version of the model is that IBIS-1 calculates the maximum Rubisco carboxylation capacity (Vm) by optimizing the net assimilation of carbon by the leaf. IBIS-2 prescribes constant values of Vm for each plant functional type (PFT). To scale photosynthesis and transpiration from the leaf level to canopy level, IBIS-2 assumes that the net photosynthesis within the canopy is proportional to the absorbed photosynthetically active radiation (APAR) within it.
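As a rough illustration of the APAR-proportional canopy scaling just described, the sketch below uses a Beer's-law light-extinction profile, a common assumption in canopy models; the extinction coefficient and the leaf-level rate are illustrative placeholders, not actual IBIS parameters.

```python
import math

def canopy_photosynthesis(a_leaf, par_top, lai, k=0.5):
    """Toy scaling of leaf photosynthesis to the canopy level.

    a_leaf  : representative leaf-level net assimilation rate
    par_top : photosynthetically active radiation above the canopy
    lai     : leaf area index of the canopy layer
    k       : assumed Beer's-law extinction coefficient
    """
    fapar = 1.0 - math.exp(-k * lai)   # fraction of incoming PAR absorbed
    apar = fapar * par_top             # absorbed PAR (APAR)
    # canopy-level rate taken proportional to APAR, as described above
    return a_leaf * fapar, apar

rate, apar = canopy_photosynthesis(a_leaf=12.0, par_top=1500.0, lai=4.0)
print(f"APAR = {apar:.0f}, canopy rate = {rate:.1f} (same units as a_leaf)")
```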
Soil Biogeochemistry
In the original version of IBIS there was no explicit below ground biogeochemistry model to complete flow of carbon between the vegetation, detritus, and soil organic matter pools. IBIS-2 includes a new soil biogeochemistry module.
Further reading
Kucharik, C. J., J. A. Foley, C. Delire, V. A. Fisher, M. T. Coe, J. D. Lenters, C. Young-Molling, N. Ramankutty, J. M. Norman, S. T. Gower, Testing the performance of a Dynamic Global Ecosystem Model: Water balance, carbon balance, and vegetation structure, Global Biogeochem. Cycles, 14(3), 795-826, 10.1029/1999GB001138, 2000. http://www.agu.org/pubs/crossref/2000/1999GB001138.shtml
Foley, Jonathan A.; Prentice, I. Colin; Ramankutty, Navin; Levis, Samuel; Pollard, David; Sitch, Steven; Haxeltine, Alex, An integrated biosphere model of land surface processes, terrestrial carbon balance, and vegetation dynamics
Global Biogeochemical Cycles, Volume 10, Issue 4, p. 603-628. http://adsabs.harvard.edu/abs/1996GBioC..10..603F
Integrated Biosphere Simulator Model (IBIS), Version 2.5. Foley, J. A., C. J. Kucharik, and D. Polzin. 2005. Integrated Biosphere Simulator Model (IBIS), Version 2.5. Model product. Available on-line from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A. doi:10.3334/ORNLDAAC/808 http://daac.ornl.gov/MODELS/guides/IBIS_Guide.html
IBIS 2.6 (Integrated BIosphere Simulator). http://nelson.wisc.edu/sage/data-and-models/model-code.php
References
Biological models
Systems ecology | Integrated Biosphere Simulator | [
"Biology",
"Environmental_science"
] | 1,155 | [
"Environmental social science",
"Biological models",
"Systems ecology"
] |
11,167,824 | https://en.wikipedia.org/wiki/Saint-Venant%27s%20compatibility%20condition | In the mathematical theory of elasticity, Saint-Venant's compatibility condition defines the relationship between the strain \(\varepsilon\) and a displacement field \(u\) by
\[ \varepsilon_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right), \]
where \(\varepsilon\) is the linearized strain and \(u\) the displacement field. Barré de Saint-Venant derived the compatibility condition for an arbitrary symmetric second rank tensor field to be of this form; this has now been generalized to higher rank symmetric tensor fields on spaces of dimension \(n\).
Rank 2 tensor fields
For a symmetric rank 2 tensor field \(F\) in n-dimensional Euclidean space (\(n \ge 2\)) the integrability condition takes the form of the vanishing of the Saint-Venant's tensor \(W(F)\) defined by
\[ W_{ijkl} = \frac{\partial^2 F_{ij}}{\partial x_k\,\partial x_l} + \frac{\partial^2 F_{kl}}{\partial x_i\,\partial x_j} - \frac{\partial^2 F_{ik}}{\partial x_j\,\partial x_l} - \frac{\partial^2 F_{jl}}{\partial x_i\,\partial x_k}. \]
The result that, on a simply connected domain, W = 0 implies that the strain is the symmetric derivative of some vector field, was first described by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. For non-simply connected domains there are finite dimensional spaces of symmetric tensors with vanishing Saint-Venant's tensor that are not the symmetric derivative of a vector field. The situation is analogous to de Rham cohomology.
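The simply connected case can be checked symbolically in two dimensions; the sketch below (using SymPy, with an arbitrarily chosen displacement field) verifies that a strain built as the symmetric derivative of a displacement field satisfies the single 2D compatibility equation.

```python
import sympy as sp

x, y = sp.symbols('x y')
# an arbitrary smooth displacement field (any choice works)
u1 = x**2 * y**2
u2 = x**3 * y

# strain = symmetric derivative of the displacement field
e11 = sp.diff(u1, x)
e22 = sp.diff(u2, y)
e12 = sp.Rational(1, 2) * (sp.diff(u1, y) + sp.diff(u2, x))

# the single Saint-Venant compatibility component in two dimensions:
# W = e11,yy + e22,xx - 2*e12,xy must vanish
W = sp.diff(e11, y, 2) + sp.diff(e22, x, 2) - 2 * sp.diff(e12, x, y)
print(sp.simplify(W))   # -> 0
```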
The Saint-Venant tensor \(W\) is closely related to the Riemann curvature tensor \(R_{ijkl}\). Indeed the first variation of \(R_{ijkl}\) about the Euclidean metric with a perturbation in the metric \(\varepsilon\) is precisely \(W(\varepsilon)\). Consequently the number of independent components of \(W\) is the same as that of the curvature tensor, \(n^2(n^2-1)/12\), for dimension \(n\). Specifically for \(n = 2\), \(W\) has only one independent component, whereas for \(n = 3\) there are six.
In its simplest form, of course, the components of \(\varepsilon\) must be assumed twice continuously differentiable, but more recent work proves the result in a much more general case.
The relation between Saint-Venant's compatibility condition and Poincaré's lemma can be understood more clearly using a reduced form of the Kröner tensor
\[ (\operatorname{inc} F)_{ij} = e_{ikl}\, e_{jmn}\, \frac{\partial^{2} F_{ln}}{\partial x_k\, \partial x_m}, \]
where \(e_{ikl}\) is the permutation symbol. For \(n = 3\), \(\operatorname{inc} F\) is a symmetric rank 2 tensor field. The vanishing of \(\operatorname{inc} F\) is equivalent to the vanishing of \(W(F)\), and this also shows that there are six independent components for the important case of three dimensions. While this still involves two derivatives rather than the one in the Poincaré lemma, it is possible to reduce to a problem involving first derivatives by introducing more variables and it has been shown that the resulting 'elasticity complex' is equivalent to the de Rham complex.
In differential geometry the symmetrized derivative of a vector field appears also as the Lie derivative of the metric tensor g with respect to the vector field,
\[ (\mathcal{L}_X g)_{ij} = X_{i;j} + X_{j;i}, \]
where indices following a semicolon indicate covariant differentiation. The vanishing of \(W\) is thus the integrability condition for local existence of \(X\) in the Euclidean case. As noted above this coincides with the vanishing of the linearization of the Riemann curvature tensor about the Euclidean metric.
Generalization to higher rank tensors
Saint-Venant's compatibility condition can be thought of as an analogue, for symmetric tensor fields, of Poincaré's lemma for skew-symmetric tensor fields (differential forms). The result can be generalized to higher rank symmetric tensor fields. Let F be a symmetric rank-k tensor field on an open set in n-dimensional Euclidean space, then the symmetric derivative is the rank k+1 tensor field defined by
\[ (dF)_{i_1 \ldots i_k\, i_{k+1}} = F_{(i_1 \ldots i_k, i_{k+1})} \]
where we use the classical notation that indices following a comma indicate differentiation and groups of indices enclosed in brackets indicate symmetrization over those indices. The Saint-Venant tensor of a symmetric rank-k tensor field is defined by
with
On a simply connected domain in Euclidean space, \(W(F) = 0\) implies that \(F\) is the symmetric derivative of some rank k-1 symmetric tensor field.
References
See also
Compatibility (mechanics)
Elasticity (physics)
Tensors
Partial differential equations | Saint-Venant's compatibility condition | [
"Physics",
"Materials_science",
"Engineering"
] | 705 | [
"Physical phenomena",
"Tensors",
"Elasticity (physics)",
"Deformation (mechanics)",
"Physical properties"
] |
11,168,921 | https://en.wikipedia.org/wiki/Topsides | The topsides on a boat, ship, watercraft, or floating production storage and offloading (FPSO) vessel, is that part of the hull between the waterline and the deck. It includes the visible parts of the bow, stern, sheer, and, if present, tumblehome.
On an offshore oil platform, topsides refers to the upper half of the structure, above the sea level, outside the splash zone, on which equipment is installed. This includes the oil production plant, the accommodation block and the drilling rig. They are often modular in design and so can be changed out if necessary allowing expensive platforms to be more readily updated with newer technology. It contrasts with the jacket structure, which constitutes the lower half of the platform structure (the supporting legs and lattice framework), partly submerged in sea.
References
Nautical terminology
Oil platforms | Topsides | [
"Chemistry",
"Engineering"
] | 173 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
634,785 | https://en.wikipedia.org/wiki/Dissipation%20factor | In physics, the dissipation factor (DF) is a measure of loss-rate of energy of a mode of oscillation (mechanical, electrical, or electromechanical) in a dissipative system. It is the reciprocal of quality factor, which represents the "quality" or durability of oscillation.
Explanation
Electrical potential energy is dissipated in all dielectric materials, usually in the form of heat. In a capacitor made of a dielectric placed between conductors, the typical lumped element model includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR) as shown below. The ESR represents losses in the capacitor. In a good capacitor the ESR is very small, and in a poor capacitor the ESR is large. However, in some applications a minimum ESR value is actually required. Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity with physical origins in both the dielectric's conduction electrons and dipole relaxation phenomena. In a dielectric, only one of either the conduction electrons or the dipole relaxation typically dominates loss. For the case of the conduction electrons being the dominant loss,
\[ \mathrm{ESR} = \frac{\sigma}{\varepsilon\,\omega^{2} C} \]
where
σ is the dielectric's bulk conductivity,
ε is the lossless permittivity of the dielectric,
ω is the angular frequency of the AC current i, and
C is the lossless capacitance.
If the capacitor is used in an AC circuit, the dissipation factor due to the non-ideal capacitor is expressed as the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor, or
\[ \mathrm{DF} = \frac{i^{2}\,\mathrm{ESR}}{i^{2}\,|X_C|} = \omega C\,\mathrm{ESR} \]
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's dissipation factor is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. This gives rise to the parameter known as the loss tangent tan δ where
\[ \tan\delta = \frac{\mathrm{ESR}}{|X_C|} = \omega C\,\mathrm{ESR} = \mathrm{DF} \]
Alternatively, the ESR can be derived from the frequency at which the loss tangent was determined and the capacitance:
\[ \mathrm{ESR} = \frac{\tan\delta}{\omega C} \]
Since the DF in a good capacitor is usually small, \(\delta \approx \mathrm{DF}\), and DF is often expressed as a percentage.
DF approximates the power factor when the ESR is far less than \(|X_C|\), which is usually the case.
DF will vary depending on the dielectric material and the frequency of the electrical signals. In low dielectric constant (low-κ), temperature-compensating ceramics, a DF of 0.1–0.2% is typical. In high dielectric constant ceramics, DF can be 1–2%. However, a lower DF is usually an indication of quality capacitors when comparing similar dielectric material.
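A small numeric sketch of these relations (the component values are illustrative, not taken from the text): given a capacitance and a measured loss tangent at one frequency, the ESR and dissipation factor follow directly.

```python
import math

C = 100e-9          # farads (assumed example value)
f = 1e3             # hertz (assumed example value)
tan_delta = 0.02    # measured loss tangent, i.e. 2 %

omega = 2 * math.pi * f
Xc = 1 / (omega * C)            # magnitude of the capacitive reactance
esr = tan_delta / (omega * C)   # ESR = tan(delta) / (omega * C)
df = esr / Xc                   # dissipation factor, equal to tan(delta)

print(f"|Xc| = {Xc:.1f} ohm, ESR = {esr:.1f} ohm, DF = {df:.3f}")
```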
See also
Dielectric withstand test
Impulse generator
References
Electromagnetism
Electrical engineering
Dynamical systems | Dissipation factor | [
"Physics",
"Mathematics",
"Engineering"
] | 614 | [
"Electromagnetism",
"Physical phenomena",
"Mechanics",
"Fundamental interactions",
"Electrical engineering",
"Dynamical systems"
] |
634,858 | https://en.wikipedia.org/wiki/Vacuum%20chamber | A vacuum chamber is a rigid enclosure from which air and other gases are removed by a vacuum pump. This results in a low-pressure environment within the chamber, commonly referred to as a vacuum. A vacuum environment allows researchers to conduct physical experiments or to test mechanical devices which must operate in outer space (for example) or for processes such as vacuum drying or vacuum coating. Chambers are typically made of metals which may or may not shield applied external magnetic fields depending on wall thickness, frequency, resistivity, and permeability of the material used. Only some materials are suitable for vacuum use.
Chambers often have multiple ports, covered with vacuum flanges, to allow instruments or windows to be installed in the walls of the chamber. In low to medium-vacuum applications, these are sealed with elastomer o-rings. In higher vacuum applications, the flanges have knife edges machined onto them, which cut into a copper gasket when the flange is bolted on.
A type of vacuum chamber frequently used in the field of spacecraft engineering is a thermal vacuum chamber, which provides a thermal environment representing what a spacecraft would experience in space.
Vacuum chamber materials
Vacuum chambers can be constructed of many materials. "Metals are arguably the most prevalent vacuum chamber materials." The strength, pressure, and permeability are considerations for selecting chamber material.
Common materials are:
Stainless Steel
Aluminum
Mild Steel
Brass
High density ceramic
Glass
Acrylic
Hardened steel
Vacuum degassing
"Vacuum degassing is the process of removing gases from compounds using vacuum which become entrapped in the
mixture when mixing the components." To assure a bubble-free mold when mixing resin and silicone rubbers and slower-setting harder resins, a vacuum chamber is required. A small vacuum chamber is needed for de-airing (eliminating air bubbles) for materials prior to their setting. The process is fairly straightforward. The casting or molding material is mixed according to the manufacturers directions.
Process
Since the material may expand 4–5 times under a vacuum, the mixing container must be large enough to hold a volume of four to five times the amount of the original material that is being vacuumed to allow for the expansion; if not, it will spill over the top of the container requiring clean-up that can be avoided. The material container is then placed into the vacuum chamber; a vacuum pump is connected and turned on. Once the vacuum reaches (at sea level) of mercury, the material will begin to rise (resembling foam). When the material falls, it will plateau and stop rising. The vacuuming is continued for another 2 to 3 minutes to make certain all of the air has been removed from the material. Once this interval is reached, the vacuum pump is shut off and the vacuum chamber release valve is opened to equalize air pressure. The vacuum chamber is opened, the material is removed and is ready to pour into the mold.
Though the maximum vacuum one can theoretically achieve at sea level is 29.921 inches of mercury (Hg), this will vary significantly as altitude increases. For example, in Denver, Colorado, at one mile (1.6 km) above sea level, it is only possible to achieve a vacuum on the mercury scale of 24.896 Hg.
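A rough way to estimate that altitude limit is the isothermal barometric formula; this is only an approximation (real atmospheres are not isothermal), so it comes out close to, but not exactly at, the 24.896 inHg figure quoted for Denver.

```python
import math

P0_INHG = 29.921          # standard sea-level pressure, inches of mercury
SCALE_HEIGHT_M = 8434.0   # ~RT/(Mg) for a 15 degC isothermal atmosphere

def max_vacuum_inhg(altitude_m):
    """Ambient pressure, i.e. the best possible gauge vacuum, at a given altitude."""
    return P0_INHG * math.exp(-altitude_m / SCALE_HEIGHT_M)

print(f"sea level : {max_vacuum_inhg(0):.3f} inHg")
print(f"Denver    : {max_vacuum_inhg(1609):.2f} inHg")   # roughly 24.7 inHg
```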
To keep the material air-free, it must be slowly poured in a high and narrow stream starting from the corner of the mold box, or mold, letting the material flow freely into the box or mold cavity. Usually, this method will not introduce any new bubbles into the vacuumed material. To ensure that the material is totally devoid of air bubbles, the entire mold/mold box may be placed in the chamber for an additional few minutes; this will assist the material in flowing into difficult areas of the mold/mold box.
Vacuum drying
Water and other liquids may accumulate on a product during the production process. "Vacuum is often employed as a process for removing bulk and absorbed water (or other solvents) from a product. Combined with heat, vacuum can be an effective method for drying."
World's largest vacuum chamber
NASA's Space Power Facility houses the world's largest vacuum chamber. It was built in 1969 and stands high and in diameter, enclosing a bullet-shaped space. It was originally commissioned for nuclear-electric power studies under vacuum conditions, but was later decommissioned. Recently, it was recommissioned for use in testing spacecraft propulsion systems. Recent uses include testing the airbag landing systems for the Mars Pathfinder and the Mars Exploration Rovers, Spirit and Opportunity, under simulated Mars atmospheric conditions.
Each arm of the LIGO detectors in Livingston, Louisiana, and Hanford, Washington, is a vacuum chamber long, making them the longest vacuum chambers in the world.
See also
Bell jar
Optical window
Thermal vacuum chamber
Vacuum engineering
NASA Space Power Facility
References
Chamber
Laboratory equipment | Vacuum chamber | [
"Physics",
"Engineering"
] | 983 | [
"Vacuum systems",
"Vacuum",
"Matter"
] |
635,483 | https://en.wikipedia.org/wiki/Gromov%27s%20theorem%20on%20groups%20of%20polynomial%20growth | In geometric group theory, Gromov's theorem on groups of polynomial growth, first proved by Mikhail Gromov, characterizes finitely generated groups of polynomial growth, as those groups which have nilpotent subgroups of finite index.
Statement
The growth rate of a group is a well-defined notion from asymptotic analysis. To say that a finitely generated group has polynomial growth means the number of elements of length at most n (relative to a symmetric generating set) is bounded above by a polynomial function p(n). The order of growth is then the least degree of any such polynomial function p.
A nilpotent group G is a group with a lower central series terminating in the identity subgroup.
Gromov's theorem states that a finitely generated group has polynomial growth if and only if it has a nilpotent subgroup that is of finite index.
Growth rates of nilpotent groups
There is a vast literature on growth rates, leading up to Gromov's theorem. An earlier result of Joseph A. Wolf showed that if G is a finitely generated nilpotent group, then the group has polynomial growth. Yves Guivarc'h and independently Hyman Bass (with different proofs) computed the exact order of polynomial growth. Let G be a finitely generated nilpotent group with lower central series
\[ G = G_1 \supseteq G_2 \supseteq \cdots, \qquad G_{k+1} = [G_k, G]. \]
In particular, the quotient group \(G_k/G_{k+1}\) is a finitely generated abelian group.
The Bass–Guivarc'h formula states that the order of polynomial growth of G is
\[ d(G) = \sum_{k \geq 1} k \,\operatorname{rank}\!\left(G_k/G_{k+1}\right) \]
where:
rank denotes the rank of an abelian group, i.e. the largest number of independent and torsion-free elements of the abelian group.
In particular, Gromov's theorem and the Bass–Guivarc'h formula imply that the order of polynomial growth of a finitely generated group is always either an integer or infinity (excluding for example, fractional powers).
Another nice application of Gromov's theorem and the Bass–Guivarch formula is to the quasi-isometric rigidity of finitely generated abelian groups: any group which is quasi-isometric to a finitely generated abelian group contains a free abelian group of finite index.
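As a concrete illustration of polynomial growth and the Bass–Guivarc'h formula, the sketch below counts ball sizes in the Cayley graph of the discrete Heisenberg group with its two standard generators; since its abelianization has rank 2 and its commutator subgroup rank 1, the formula predicts growth of order 1·2 + 2·1 = 4.

```python
# Word-metric balls in the discrete Heisenberg group.  An element (a, b, c)
# represents the upper-triangular matrix [[1, a, c], [0, 1, b], [0, 0, 1]].
def mul(g, h):
    a, b, c = g
    ap, bp, cp = h
    return (a + ap, b + bp, c + cp + a * bp)

x, y = (1, 0, 0), (0, 1, 0)
xi, yi = (-1, 0, 0), (0, -1, 0)   # inverses of the generators
gens = [x, xi, y, yi]

ball = {(0, 0, 0)}
frontier = {(0, 0, 0)}
for n in range(1, 13):
    frontier = {mul(g, s) for g in frontier for s in gens} - ball
    ball |= frontier
    print(n, len(ball))   # |B(n)| eventually grows like C * n**4
```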
Proofs of Gromov's theorem
In order to prove this theorem Gromov introduced a convergence for metric spaces. This convergence, now called the Gromov–Hausdorff convergence, is currently widely used in geometry.
A relatively simple proof of the theorem was found by Bruce Kleiner. Later, Terence Tao and Yehuda Shalom modified Kleiner's proof to make an essentially elementary proof as well as a version of the theorem with explicit bounds. Gromov's theorem also follows from the classification of approximate groups obtained by Breuillard, Green and Tao. A simple and concise proof based on functional analytic methods is given by Ozawa.
The gap conjecture
Beyond Gromov's theorem one can ask whether there exists a gap in the growth spectrum for finitely generated groups just above polynomial growth, separating virtually nilpotent groups from others. Formally, this means that there would exist a function \(f(n)\) such that a finitely generated group is virtually nilpotent if and only if its growth function is an \(O(f(n))\). Such a theorem was obtained by Shalom and Tao, with an explicit function \(n^{c(\log\log n)^{c}}\) for some \(c>0\). All known groups with intermediate growth (i.e. both superpolynomial and subexponential) are essentially generalizations of Grigorchuk's group, and have faster growth functions; so all known groups have growth faster than \(e^{n^{\alpha}}\), with \(\alpha \approx 0.767\), where \(\alpha = \log 2/\log(2/\eta)\) and \(\eta\) is the real root of the polynomial \(x^3+x^2+x-2\).
It is conjectured that the true lower bound on growth rates of groups with intermediate growth is \(e^{\sqrt{n}}\). This is known as the Gap conjecture.
See also
Breuillard–Green–Tao theorem
References
Theorems in group theory
Nilpotent groups
Infinite group theory
Metric geometry
Geometric group theory | Gromov's theorem on groups of polynomial growth | [
"Physics"
] | 817 | [
"Geometric group theory",
"Group actions",
"Symmetry"
] |
635,546 | https://en.wikipedia.org/wiki/Unified%20neutral%20theory%20of%20biodiversity | The unified neutral theory of biodiversity and biogeography (here "Unified Theory" or "UNTB") is a theory and the title of a monograph by ecologist Stephen P. Hubbell. It aims to explain the diversity and relative abundance of species in ecological communities. Like other neutral theories of ecology, Hubbell assumes that the differences between members of an ecological community of trophically similar species are "neutral", or irrelevant to their success. This implies that niche differences do not influence abundance and the abundance of each species follows a random walk. The theory has sparked controversy, and some authors consider it a more complex version of other null models that fit the data better.
"Neutrality" means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis. This can be considered a null hypothesis to niche theory. Hubbell built on earlier neutral models, including Robert MacArthur and E.O. Wilson's theory of island biogeography and Stephen Jay Gould's concepts of symmetry and null models.
An "ecological community" is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources. Under the Unified Theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), provided that all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interaction such as competing for limited food or light are allowed (so long as all individuals behave alike).
The theory predicts the existence of a fundamental biodiversity constant, conventionally written θ, that appears to govern species richness on a wide variety of spatial and temporal scales.
Saturation
Although not strictly necessary for a neutral theory, many stochastic models of biodiversity assume a fixed, finite community size (total number of individual organisms). There are unavoidable physical constraints on the total number of individuals that can be packed into a given space (although space per se isn't necessarily a resource, it is often a useful surrogate variable for a limiting resource that is distributed over the landscape; examples would include sunlight or hosts, in the case of parasites).
If a wide range of species are considered (say, giant sequoia trees and duckweed, two species that have very different saturation densities), then the assumption of constant community size might not be very good, because density would be higher if the smaller species were monodominant. Because the Unified Theory refers only to communities of trophically similar, competing species, it is unlikely that population density will vary too widely from one place to another.
Hubbell considers the fact that community sizes are constant and interprets it as a general principle: large landscapes are always biotically saturated with individuals. Hubbell thus treats communities as being of a fixed number of individuals, usually denoted by J.
Exceptions to the saturation principle include disturbed ecosystems such as the Serengeti, where saplings are trampled by elephants and Blue wildebeests; or gardens, where certain species are systematically removed.
Species abundances
When abundance data on natural populations are collected, two observations are almost universal:
The most common species accounts for a substantial fraction of the individuals sampled;
A substantial fraction of the species sampled are very rare. Indeed, a substantial fraction of the species sampled are singletons, that is, species which are sufficiently rare for only a single individual to have been sampled.
Such observations typically generate a large number of questions. Why are the rare species rare? Why is the most abundant species so much more abundant than the median species abundance?
A non-neutral explanation for the rarity of rare species might suggest that rarity is a result of poor adaptation to local conditions. The UNTB suggests that it is not necessary to invoke adaptation or niche differences because neutral dynamics alone can generate such patterns.
Species composition in any community will change randomly with time. Any particular abundance structure will have an associated probability. The UNTB predicts that the probability of a community of J individuals composed of S distinct species with abundances for species 1, for species 2, and so on up to for species S is given by
where θ is the fundamental biodiversity number ( is the speciation rate), and is the number of species that have i individuals in the sample.
This equation shows that the UNTB implies a nontrivial dominance-diversity equilibrium between speciation and extinction.
As an example, consider a community with 10 individuals and three species "a", "b", and "c" with abundances 3, 6 and 1 respectively. Then the formula above would allow us to assess the likelihood of different values of θ. There are thus S = 3 species, with one species each of abundance 1, 3 and 6, and all other abundance counts being zero. The formula would give
which could be maximized to yield an estimate for θ (in practice, numerical methods are used). The maximum likelihood estimate for θ is about 1.1478.
We could have labelled the species another way and counted the abundances as 1, 3, 6 instead (or 3, 1, 6, and so on). Logic tells us that the probability of observing a pattern of abundances is the same for any permutation of those abundances. Here we would have
and so on.
To account for this, it is helpful to consider only ranked abundances (that is, to sort the abundances before inserting into the formula). A ranked dominance-diversity configuration is usually written as where is the abundance of the ith most abundant species: is the abundance of the most abundant, the abundance of the second most abundant species, and so on. For convenience, the expression is usually "padded" with enough zeros to ensure that there are J species (the zeros indicating that the extra species have zero abundance).
It is now possible to determine the expected abundance of the ith most abundant species:
where C is the total number of configurations, is the abundance of the ith ranked species in the kth configuration, and is the dominance-diversity probability. This formula is difficult to manipulate mathematically, but relatively simple to simulate computationally.
The model discussed so far is a model of a regional community, which Hubbell calls the metacommunity. Hubbell also acknowledged that on a local scale, dispersal plays an important role. For example, seeds are more likely to come from nearby parents than from distant parents. Hubbell introduced the parameter m, which denotes the probability of immigration in the local community from the metacommunity. If m = 1, dispersal is unlimited; the local community is just a random sample from the metacommunity and the formulas above apply. If m < 1, dispersal is limited and the local community is a dispersal-limited sample from the metacommunity for which different formulas apply.
It has been shown that , the expected number of species with abundance n, may be calculated by
where θ is the fundamental biodiversity number, J the community size, is the gamma function, and . This formula is an approximation. The correct formula is derived in a series of papers, reviewed and synthesized by Etienne and Alonso in 2005:
where is a parameter that measures dispersal limitation.
is zero for n > J, as there cannot be more species than individuals.
This formula is important because it allows a quick evaluation of the Unified Theory. It is not suitable for testing the theory. For this purpose, the appropriate likelihood function should be used. For the metacommunity this was given above. For the local community with dispersal limitation it is given by:
Here, the for are coefficients fully determined by the data, being defined as
This seemingly complicated formula involves Stirling numbers and Pochhammer symbols, but can be very easily calculated.
An example of a species abundance curve can be found in Scientific American.
Stochastic modelling of species abundances
UNTB distinguishes between a dispersal-limited local community of size and a so-called metacommunity from which species can (re)immigrate and which acts as a heat bath to the local community. The distribution of species in the metacommunity is given by a dynamic equilibrium of speciation and extinction. Both community dynamics are modelled by appropriate urn processes, where each individual is represented by a ball with a color corresponding to its species. With a certain rate randomly chosen individuals reproduce, i.e. add another ball of their own color to the urn. Since one basic assumption is saturation, this reproduction has to happen at the cost of another random individual from the urn which is removed. At a different rate single individuals in the metacommunity are replaced by mutants of an entirely new species. Hubbell calls this simplified model for speciation a point mutation, using the terminology of the Neutral theory of molecular evolution. The urn scheme for the metacommunity of individuals is the following.
At each time step take one of the two possible actions:
With probability draw an individual at random and replace another random individual from the urn with a copy of the first one.
With probability draw an individual and replace it with an individual of a new species.
The size of the metacommunity does not change. This is a point process in time. The length of the time steps is distributed exponentially. For simplicity one can assume that each time step is as long as the mean time between two changes which can be derived from the reproduction and mutation rates and . The probability is given as .
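The point-mutation urn scheme above is straightforward to simulate. The following Python sketch is only an illustration of the process as described (it is not taken from Hubbell's work); the parameter names J_M for the metacommunity size and nu for the per-replacement speciation probability are choices made here for readability.

```python
import random
from collections import Counter

def simulate_metacommunity(J_M=1000, nu=0.001, steps=200_000, seed=1):
    """Minimal sketch of the point-mutation urn process.

    J_M   -- fixed number of individuals (the saturation assumption)
    nu    -- probability that a replacement event is a speciation event
    steps -- number of single-replacement time steps to simulate
    """
    random.seed(seed)
    community = [0] * J_M        # every individual initially belongs to species 0
    next_species = 1             # label to give the next newly arisen species
    for _ in range(steps):
        dead = random.randrange(J_M)         # individual removed, keeping J_M constant
        if random.random() < nu:             # speciation: a brand-new species appears
            community[dead] = next_species
            next_species += 1
        else:                                # reproduction: copy a randomly drawn individual
            community[dead] = community[random.randrange(J_M)]
    return Counter(community)                # abundance of each surviving species

abundances = simulate_metacommunity()
print("species richness:", len(abundances))
print("three most abundant species:", abundances.most_common(3))
```

Running the same scheme with an immigration step in place of the speciation step gives the local-community version described below.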
The species abundance distribution for this urn process is given by Ewens's sampling formula which was originally derived in 1972 for the distribution of alleles under neutral mutations. The expected number of species in the metacommunity having exactly individuals is:
where θ is called the fundamental biodiversity number. For large
metacommunities and one recovers the Fisher Log-Series as species distribution.
The urn scheme for the local community of fixed size is very similar to the one for the metacommunity.
At each time step take one of the two actions:
With probability draw an individual at random and replace another random individual from the urn with a copy of the first one.
With probability replace a random individual with an immigrant drawn from the metacommunity.
The metacommunity is changing on a much larger timescale and is assumed to be fixed during the evolution of the local community. The resulting distribution of species in the local community and expected values depend on four parameters, , , and (or ) and are derived by Etienne and Alonso (2005), including several simplifying limit cases like the one presented in the previous section (there called ). The parameter is a dispersal parameter. If then the local community is just a sample from the metacommunity. For the local community is completely isolated from the metacommunity and all species will go extinct except one. This case has been analyzed by Hubbell himself. The case is characterized by a unimodal species distribution in a Preston diagram and often fitted by a log-normal distribution. This is understood as an intermediate state between domination of the most common species and a sampling from the metacommunity, where singleton species are most abundant. UNTB thus predicts that in dispersal-limited communities rare species become even rarer. The log-normal distribution describes the maximum and the abundance of common species very well but underestimates the number of very rare species considerably, which only becomes apparent for very large sample sizes.
Species-area relationships
The Unified Theory unifies biodiversity, as measured by species-abundance curves, with biogeography, as measured by species-area curves. Species-area relationships show the rate at which species diversity increases with area. The topic is of great interest to conservation biologists in the design of reserves, as it is often desired to harbour as many species as possible.
The most commonly encountered relationship is the power law given by S = cA^z,
where S is the number of species found, A is the area sampled, and c and z are constants. This relationship, with different constants, has been found to fit a wide range of empirical data.
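In practice the constants c and z are usually estimated by a straight-line fit on log-log axes. The sketch below does this with invented survey numbers, purely to illustrate the procedure.

```python
import numpy as np

# Hypothetical survey data, for illustration only: areas (km^2) and species counts
area = np.array([1, 4, 16, 64, 256, 1024], dtype=float)
species = np.array([12, 20, 33, 52, 80, 123], dtype=float)

# Fit log S = log c + z log A by least squares; the slope of the line is the exponent z
z, log_c = np.polyfit(np.log(area), np.log(species), 1)
print(f"S is approximately {np.exp(log_c):.2f} * A^{z:.3f}")
```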
From the perspective of the Unified Theory, it is convenient to consider S as a function of total community size J. Then for some constant k, and if this relationship were exactly true, the species-area line would be straight on log scales. It is typically found that the curve is not straight: its slope changes from steep at small areas to shallower at intermediate areas and steep again at the largest areas.
The formula for species composition may be used to calculate the expected number of species present in a community under the assumptions of the Unified Theory. In symbols
where θ is the fundamental biodiversity number. This formula specifies the expected number of species sampled in a community of size J. The last term, , is the expected number of new species encountered when adding one new individual to the community. This is an increasing function of θ and a decreasing function of J, as expected.
Making the substitution (see the section on saturation above), the expected number of species becomes .
The formula above may be approximated to an integral giving
This formulation is predicated on a random placement of individuals.
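The expected number of species referred to above is commonly written as the sum of θ/(θ + k) over k = 0, ..., J - 1, each term being the chance that the (k + 1)-th individual sampled belongs to a species not seen before. The sketch below assumes that standard form, so it should be read as an illustration rather than a transcription of the original equation.

```python
def expected_species(theta, J):
    """Expected species count in a sample of J individuals, assuming
    E[S] = sum_{k=0}^{J-1} theta / (theta + k) (metacommunity, no dispersal limitation)."""
    return sum(theta / (theta + k) for k in range(J))

# Each step up in sample size adds progressively fewer new species:
for J in (10, 100, 1_000, 10_000):
    print(J, round(expected_species(theta=5.0, J=J), 2))
```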
Example
Consider the following (synthetic) dataset of 27 individuals:
a,a,a,a,a,a,a,a,a,a,b,b,b,b,c,c,c,c,d,d,d,d,e,f,g,h,i
There are thus 27 individuals of 9 species ("a" to "i") in the sample. Tabulating this would give:
a b c d e f g h i
10 4 4 4 1 1 1 1 1
indicating that species "a" is the most abundant with 10 individuals and species "e" to "i" are singletons. Tabulating the table gives:
species abundance 1 2 3 4 5 6 7 8 9 10
number of species 5 0 0 3 0 0 0 0 0 1
On the second row, the 5 in the first column means that five species, species "e" through "i", have abundance one. The following two zeros in columns 2 and 3 mean that zero species have abundance 2 or 3. The 3 in column 4 means that three species, species "b", "c", and "d", have abundance four. The final 1 in column 10 means that one species, species "a", has abundance 10.
This type of dataset is typical in biodiversity studies. Observe how more than half the biodiversity (as measured by species count) is due to singletons.
For real datasets, the species abundances are binned into logarithmic categories, usually using base 2, which gives bins of abundance 0–1, abundance 1–2, abundance 2–4, abundance 4–8, etc. Such abundance classes are called octaves; an early developer of this concept was F. W. Preston, and histograms showing the number of species as a function of abundance octave are known as Preston diagrams.
These bins are not mutually exclusive: a species with abundance 4, for example, could be considered as lying in the 2–4 abundance class or the 4–8 abundance class. Species with an abundance of an exact power of 2 (i.e. 2, 4, 8, 16, etc.) are conventionally considered as having 50% membership in the lower abundance class and 50% membership in the upper class. Such species are thus considered to be evenly split between the two adjacent classes (apart from singletons, which are classified into the rarest category). Thus in the example above, the Preston abundances would be
abundance class 1 1-2 2-4 4-8 8-16
species 5 0 1.5 1.5 1
The three species of abundance four thus contribute 1.5 species to the 2–4 abundance class and 1.5 species to the 4–8 class.
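The binning convention just described can be written down directly. The sketch below reproduces the worked example and is only an illustration of the convention, not code from any particular analysis package.

```python
import math
from collections import defaultdict

def preston_octaves(abundances):
    """Bin species abundances into Preston octave classes.

    Class 0 holds singletons; class i (for i >= 1) covers the abundance
    interval (2**(i-1), 2**i].  A species whose abundance is an exact power
    of two (other than 1) is split 50/50 between the two adjacent classes.
    """
    octaves = defaultdict(float)
    for n in abundances:
        if n == 1:
            octaves[0] += 1.0                    # singletons stay in the rarest class
        elif (n & (n - 1)) == 0:                 # exact power of two
            k = int(round(math.log2(n)))
            octaves[k] += 0.5                    # upper edge of class (2**(k-1), 2**k]
            octaves[k + 1] += 0.5                # lower edge of class (2**k, 2**(k+1)]
        else:
            octaves[math.ceil(math.log2(n))] += 1.0
    return dict(sorted(octaves.items()))

# Worked example from the text: 27 individuals of 9 species
print(preston_octaves([10, 4, 4, 4, 1, 1, 1, 1, 1]))
# {0: 5.0, 2: 1.5, 3: 1.5, 4: 1.0}  i.e. classes 1, 2-4, 4-8 and 8-16
```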
The above method of analysis cannot account for species that are unsampled: that is, species sufficiently rare to have been recorded zero times. Preston diagrams are thus truncated at zero abundance. Preston called this the veil line and noted that the cutoff point would move as more individuals are sampled.
Dynamics
All biodiversity patterns previously described are related to time-independent quantities. For biodiversity evolution and species preservation, it is crucial to compare the dynamics of ecosystems with models (Leigh, 2007). An easily accessible index of the underlying evolution is the so-called species turnover distribution (STD), defined as the probability P(r,t) that the population of any species has varied by a fraction r after a given time t.
A neutral model that can analytically predict both the relative species abundance (RSA) at steady-state and the STD at time t has been presented in Azaele et al. (2006). Within this framework the population of any species is represented by a continuous (random) variable x, whose evolution is governed by the following Langevin equation:
where b is the immigration rate from a large regional community, represents competition for finite resources and D is related to demographic stochasticity; is a Gaussian white noise. The model can also be derived as a continuous approximation of a master equation, where birth and death rates are independent of species, and predicts that at steady-state the RSA is simply a gamma distribution.
From the exact time-dependent solution of the previous equation, one can exactly calculate the STD at time t under stationary conditions:
This formula provides good fits to data collected in the Barro Colorado tropical forest from 1990 to 2000. From the best fit one can estimate ~ 3500 years with a broad uncertainty due to the relatively short time interval of the sample. This parameter can be interpreted as the relaxation time of the system, i.e. the time the system needs to recover from a perturbation of the species distribution. In the same framework, the estimated mean species lifetime is very close to the fitted temporal scale . This suggests that the neutral assumption could correspond to a scenario in which species originate and become extinct on the same timescales as fluctuations of the whole ecosystem.
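The continuous model can be explored numerically with a simple Euler-Maruyama integration. The functional form used below, dx/dt = b - x/τ + sqrt(D·x)·ξ(t), is an assumption made here for illustration (a standard way of combining constant immigration, density regulation and demographic noise that has a gamma-distributed steady state); it is a sketch, not a transcription of the equation in Azaele et al.

```python
import numpy as np

def simulate_population(b=1.0, tau=50.0, D=0.5, x0=20.0,
                        dt=0.01, n_steps=100_000, seed=0):
    """Euler-Maruyama integration of the assumed Langevin equation
    dx/dt = b - x/tau + sqrt(D*x)*xi(t), with xi Gaussian white noise."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        drift = b - x[i - 1] / tau
        noise = np.sqrt(D * x[i - 1] * dt) * rng.standard_normal()
        x[i] = max(x[i - 1] + drift * dt + noise, 1e-6)   # small floor keeps the population positive
    return x

x = simulate_population()
lag = 1_000                       # lag in integration steps
r = x[lag:] / x[:-lag]            # fractional change after the lag, as in the STD
print("median turnover ratio:", round(float(np.median(r)), 3))
```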
Testing
The theory has provoked much controversy as it "abandons" the role of ecology when modelling ecosystems. The theory has been criticized as it requires an equilibrium, yet climatic and geographical conditions are thought to change too frequently for this to be attained.
Tests on bird and tree abundance data demonstrate that the theory is usually a poorer match to the data than alternative null hypotheses that use fewer parameters (a log-normal model with two tunable parameters, compared to the neutral theory's three), and are thus more parsimonious. The theory also fails to describe coral reef communities, studied by Dornelas et al., and is a poor fit to data in intertidal communities. It also fails to explain why families of tropical trees have statistically highly correlated numbers of species in phylogenetically unrelated and geographically distant forest plots in Central and South America, Africa, and South East Asia.
While the theory has been heralded as a valuable tool for palaeontologists, little work has so far been done to test the theory against the fossil record.
See also
Biodiversity Action Plan
Functional equivalence (ecology)
Ewens's sampling formula
Metabolic Scaling Theory (Metabolic theory of ecology)
Neutral theory of molecular evolution
Warren Ewens
References
Further reading
External links
Scientific American Interview with Steve Hubbell
R package for implementing UNTB
"Ecological neutral theory: useful model or statement of ignorance?" in Cell Press Discussions
Biodiversity
Ecological theories
Theoretical ecology
Neutral theory | Unified neutral theory of biodiversity | [
"Biology"
] | 4,028 | [
"Non-Darwinian evolution",
"Neutral theory",
"Biology theories",
"Biodiversity"
] |
635,546 | https://en.wikipedia.org/wiki/Lunitidal%20interval | The lunitidal interval measures the time lag from lunar culmination to the next high tide at a given location. It is also called the high water interval (HWI). Sometimes, rather than a name for the time lag itself, the terms age or establishment of the tide are used for the corresponding entry in tide tables.
Tides are known to be mainly caused by the Moon's gravity. Theoretically, peak tidal forces at a given location would occur when the Moon crosses the meridian, but high tide usually lags behind this moment, the delay depending largely on the shape of the coastline and the sea floor. Therefore, the lunitidal interval varies from place to place – from three hours over deep oceans to eight hours at New York Harbor. The lunitidal interval further varies within about 3h ± 30 minutes according to the lunar phase. (This is caused by the time interval associated with the solar tides.)
Hundreds of factors are involved in the lunitidal interval, especially near the shoreline. However, far enough away from the coast, the dominating consideration is the speed of gravity waves, which increases with the water's depth: it is proportional to the square root of the depth for the extremely long gravity waves that transport the water following the Moon around the Earth. The oceans are about deep and would have to be at least deep for these waves to keep up with the Moon. As mentioned above, a similar time lag accompanies the solar tides, a complicating factor that varies with the lunar phases. By observing the age of spring tides, it becomes clear that the delay can actually exceed 24 hours in some locations.
The approximate lunitidal interval can be calculated if the moonrise, moonset, and high tide times are known for a location. In the Northern Hemisphere, the Moon reaches its highest point (culmination) when it is due south in the sky. Lunar data are available from printed or online tables, and tide tables forecast the time of the next high water. The difference between these two times is the lunitidal interval. This value can be used to calibrate tide clocks and wristwatches to allow for simple but crude tidal predictions. Unfortunately, the lunitidal interval varies from day to day even at a given location.
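As a rough sketch of the calculation just described, the Python snippet below approximates the time of lunar culmination as the midpoint between moonrise and moonset and subtracts it from the time of the next high water. The date, times and the half-lunar-day wrap-around value are illustrative assumptions, not data for any real port.

```python
from datetime import datetime, timedelta

def lunitidal_interval(moonrise, moonset, high_tide):
    """Crude lunitidal interval: next high water minus the approximate
    lunar culmination (taken as the midpoint of moonrise and moonset)."""
    culmination = moonrise + (moonset - moonrise) / 2
    interval = high_tide - culmination
    if interval < timedelta(0):                      # high water belongs to the next cycle
        interval += timedelta(hours=12, minutes=25)  # roughly half a lunar day
    return interval

# Invented example times for a single day at one location
moonrise = datetime(2024, 3, 1, 10, 12)
moonset = datetime(2024, 3, 1, 23, 48)
high_tide = datetime(2024, 3, 1, 20, 30)
print(lunitidal_interval(moonrise, moonset, high_tide))   # 3:30:00 for these numbers
```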
See also
Phase (waves)
References
Physical oceanography
Tides | Lunitidal interval | [
"Physics"
] | 485 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
636,035 | https://en.wikipedia.org/wiki/Convection%20oven | A convection oven (also known as a fan-assisted oven, turbo broiler or simply a fan oven or turbo) is an oven that has fans to circulate air around food to create an evenly heated environment. In an oven without a fan, natural convection circulates hot air unevenly, so that it will be cooler at the bottom and hotter at the top than in the middle. Fan ovens cook food faster, and are also used in non-food, industrial applications. Small countertop convection ovens for household use are often marketed as air fryers.
When cooking using a fan-assisted oven, the temperature is usually set lower than for a non-fan oven, often by , to avoid overcooking the outside of the food.
Principle of operation
Convection ovens distribute heat evenly around the food, removing the blanket of cooler air that surrounds food when it is first placed in an oven and allowing food to cook more evenly in less time and at a lower temperature than in a conventional oven.
History
The first oven with a fan to circulate air was invented in 1914 but it was never launched commercially.
The first convection oven in wide use was the Maxson Whirlwind Oven, introduced in 1945.
Convection ovens have been in wide use since 1945.
In 2006, Groupe SEB introduced the world's first air fryer, under the Actifry brand of convection ovens in the French market.
In 2010, Philips introduced the Airfryer brand of convection oven at the IFA Berlin consumer electronics fair. By 2018, the term "air fryer" was starting to be used generically.
In the United States convection ovens experienced a surge in popularity in the late 2010s and early 2020s with a reported 36% of U.S. households having one in 2020 and an estimated 60% of U.S. households having one in 2023. Food manufacturers have responded by adding air frying instructions on a number of products and pre air-fried products also coming to market.
In the UK, air fryers have surged in popularity since the early 2020s, with a 2024 study claiming that 1 in 5 Britons surveyed said that air fryers are their most commonly used cooking device.
Design
A convection oven has a small fan, typically with a heating element around it, which circulates the air in the cooking chamber.
One effect of the fan is to reduce the thickness of the stationary thermal boundary layer of cooler air that naturally forms around the food. The boundary layer acts as an insulator and slows the rate at which heat is transferred to the food. By moving the cool air (convecting it) away from the food the layer is thinned, and cooking is faster. To prevent overcooking before the middle is cooked, the temperature is usually reduced by about below the setting used for a non-fan oven. In a non-fan oven the temperature varies significantly in different places; a fan distributes hot air evenly for a uniform temperature.
Convection ovens may include additional radiant heat sources at the top and bottom of the oven, which provide immediate heat without the warmup time of a (natural or fan-assisted) convection oven.
Effectiveness
A convection oven allows a reduction in cooking temperature compared to a conventional oven. The size of the reduction varies with factors such as how much food is being cooked at once and whether airflow is restricted, for example by an oversized baking tray. The lower setting is possible because circulating air transfers heat more quickly than still air of the same temperature; to transfer the same amount of heat in the same time, the set temperature must be lowered to compensate.
Variants
Another form of convection oven has hot air directed at a high flow rate from above and below food that passes through the oven on a conveyor belt; it is called an impingement oven. This cooks, for example, breaded products such as chicken nuggets or breaded chicken portions faster than a fan oven, and yields a crisp surface texture. Impinged air also prevents "shadowing" which occurs with infrared radiant heat sources. Impingement ovens can achieve a much higher heat transfer than a conventional oven.
Fully enclosed models can also use dual magnetrons, as used by microwave ovens. The most notable manufacturer of this type of oven is TurboChef. The differences between an impingement oven with magnetrons and a convection microwave oven are claimed to be cost, power consumption, and speed. Impingement ovens are designed to be used in restaurants, where speed is essential and power consumption and cost are less of a concern.
There are also convection microwave ovens which combine a convection oven with a microwave oven to cook food with the speed of a microwave oven and the browning ability of a convection oven.
A combi steamer is an oven that combines convection functionality with superheated steam to cook foods even faster and retain more nutrients and moisture.
Air fryer
An air fryer is a small countertop convection oven that is said to simulate deep frying without submerging the food in oil. A fan circulates hot air at a high speed, producing a crisp layer via browning reactions such as the Maillard reaction. Some product reviewers find that regular convection ovens or convection toaster ovens produce better results; others say that air frying is essentially the same as convection baking, while still others praise the devices for cooking faster, being easier to clean, and making it easier to produce crispy results than full size convection ovens.
The original Philips Air fryer used radiant heat from a heating element just above the food and convection heat from a strong air stream flowing upwards through the open bottom of the food chamber, delivering heat from all sides, with a small volume of hot air forced to pass from the heater surface and over the food, with no idle air circulating as in a convection oven. A shaped guide directed the airflow over the bottom of the food. The technique was patented as Rapid Air technology.
Traditional frying methods induce the Maillard reaction at temperatures of by completely submerging foods in hot oil, well above the boiling point of water. The air fryer works by circulating air at up to to apply sufficient heat to food coated with a thin layer of oil, causing the reaction.
Most air fryers have temperature and timer adjustments that allow precise cooking. Food is typically cooked in a basket that sits on a drip tray. For best results the basket must be periodically agitated, either manually or by the fryer mechanism. Convection ovens and air fryers are similar in the way they cook food, but air fryers are smaller and give off less heat to the room.
There are several types of household air fryer:
Paddle
In this type, a motorized paddle moves through the heating chamber, stirring the contents and distributing the hot air more evenly. This is more convenient for the user because other types of air fryer require manual stirring during cooking to ensure that all sides are fully cooked.
Cylindrical basket
A cylindrical basket is a small, single function air fryer that includes a drawer with a removable basket. A fan circulates from the top, and the food is cooked through holes in the basket. It can accommodate of food or less on average. Because of its compact size, it preheats faster than other types of air fryers.
Countertop convection oven
Countertop convection ovens come with an air frying feature that work the same way as basket type air fryers. They usually have multiple trays or racks, so multiple things can be cooked at the same time. It holds of food on average. They are more versatile than single function type because they have multiple features like baking, rotisserie, grilling, frying, broiling, and toasting.
Halogen
This type of air fryer cooks food with a halogen radiant heat source from above. The heat is spread evenly with a fan like other types of air fryers. This type is usually a large glass bowl with a hinged lid.
Oil-less turkey fryer
This is a large, barrel-shaped air fryer used to cook whole turkeys and other large pieces of meat. It circulates air around the drum to cook the meat evenly.
Industrial convection ovens
Industrial convection ovens can be very large.
Hot air ovens are convection ovens used to sterilize medical equipment.
References
External links
20th-century inventions
Convection
Ovens | Convection oven | [
"Physics",
"Chemistry"
] | 1,723 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Thermodynamics"
] |
636,094 | https://en.wikipedia.org/wiki/Irreversible%20process | In science, a process that is not reversible is called irreversible. This concept arises frequently in thermodynamics. All complex natural processes are irreversible, although a phase transition at the coexistence temperature (e.g. melting of ice cubes in water) is well approximated as reversible.
In thermodynamics, a change in the thermodynamic state of a system and all of its surroundings cannot be precisely restored to its initial state by infinitesimal changes in some property of the system without expenditure of energy. A system that undergoes an irreversible process may still be capable of returning to its initial state. Because entropy is a state function, the change in entropy of the system is the same whether the process is reversible or irreversible. However, the impossibility occurs in restoring the environment to its own initial conditions. An irreversible process increases the total entropy of the system and its surroundings. The second law of thermodynamics can be used to determine whether a hypothetical process is reversible or not.
Intuitively, a process is reversible if there is no dissipation. For example, Joule expansion is irreversible because initially the system is not uniform. Initially, there is part of the system with gas in it, and part of the system with no gas. For dissipation to occur, there needs to be such a non uniformity. This is just the same as if in a system one section of the gas was hot, and the other cold. Then dissipation would occur; the temperature distribution would become uniform with no work being done, and this would be irreversible because you couldn't add or remove heat or change the volume to return the system to its initial state. Thus, if the system is always uniform, then the process is reversible, meaning that you can return the system to its original state by either adding or removing heat, doing work on the system, or letting the system do work. As another example, to approximate the expansion in an internal combustion engine as reversible, we would be assuming that the temperature and pressure uniformly change throughout the volume after the spark. Obviously, this is not true and there is a flame front and sometimes even engine knocking. One of the reasons that Diesel engines are able to attain higher efficiency is that the combustion is much more uniform, so less energy is lost to dissipation and the process is closer to reversible.
The phenomenon of irreversibility results from the fact that if a thermodynamic system, which is any system of sufficient complexity, of interacting molecules is brought from one thermodynamic state to another, the configuration or arrangement of the atoms and molecules in the system will change in a way that is not easily predictable. Some "transformation energy" will be used as the molecules of the "working body" do work on each other when they change from one state to another. During this transformation, there will be some heat energy loss or dissipation due to intermolecular friction and collisions. This energy will not be recoverable if the process is reversed.
Many biological processes that were once thought to be reversible have been found to actually be a pairing of two irreversible processes. Whereas a single enzyme was once believed to catalyze both the forward and reverse chemical changes, research has found that two separate enzymes of similar structure are typically needed to perform what results in a pair of thermodynamically irreversible processes.
Absolute versus statistical reversibility
Thermodynamics defines the statistical behaviour of large numbers of entities, whose exact behavior is given by more specific laws. While the fundamental theoretical laws of physics are all time-reversible, experimentally the probability of real reversibility is low and the former state of the system and surroundings is recovered only to a certain extent (see: uncertainty principle). The reversibility of thermodynamics must be statistical in nature; that is, it must be merely highly unlikely, but not impossible, that a system will decrease in entropy. In other words, time reversibility is fulfilled if the process happens the same way when time flows in reverse, or when the order of states in the process is reversed (the last state becomes the first and vice versa).
History
The German physicist Rudolf Clausius, in the 1850s, was the first to mathematically quantify the discovery of irreversibility in nature through his introduction of the concept of entropy. In his 1854 memoir "On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat," Clausius states:
Simply, Clausius states that it is impossible for a system to transfer heat from a cooler body to a hotter body. For example, a cup of hot coffee placed in an area of room temperature will transfer heat to its surroundings and thereby cool down with the temperature of the room slightly increasing (to ). However, that same initial cup of coffee will never absorb heat from its surroundings, causing it to grow even hotter, with the temperature of the room decreasing (to ). Therefore, the process of the coffee cooling down is irreversible unless extra energy is added to the system.
However, a paradox arose when attempting to reconcile microanalysis of a system with observations of its macrostate. Many processes are mathematically reversible in their microstate when analyzed using classical Newtonian mechanics. This paradox clearly taints microscopic explanations of macroscopic tendency towards equilibrium, such as James Clerk Maxwell's 1860 argument that molecular collisions entail an equalization of temperatures of mixed gases. From 1872 to 1875, Ludwig Boltzmann reinforced the statistical explanation of this paradox in the form of Boltzmann's entropy formula, stating that an increase of the number of possible microstates a system might be in, will increase the entropy of the system, making it less likely that the system will return to an earlier state. His formulas quantified the analysis done by William Thomson, 1st Baron Kelvin, who had argued that:
Another explanation of irreversible systems was presented by French mathematician Henri Poincaré. In 1890, he published his first explanation of nonlinear dynamics, also called chaos theory. Applying chaos theory to the second law of thermodynamics, the paradox of irreversibility can be explained in the errors associated with scaling from microstates to macrostates and the degrees of freedom used when making experimental observations. Sensitivity to initial conditions relating to the system and its environment at the microstate compounds into an exhibition of irreversible characteristics within the observable, physical realm.
Examples of irreversible processes
In the physical realm, many irreversible processes are present to which the inability to achieve 100% efficiency in energy transfer can be attributed. The following is a list of spontaneous events which contribute to the irreversibility of processes.
Ageing (this claim is disputed, as aging has been demonstrated to be reversed in mice. NAD+ and telomerase have also been demonstrated to reverse ageing.)
Death
Time
Heat transfer through a finite temperature difference
Friction
Plastic deformation
Flow of electric current through a resistance
Magnetization or polarization with a hysteresis
Unrestrained expansion of fluids
Spontaneous chemical reactions
Spontaneous mixing of matter of varying composition/states
A Joule expansion is an example of classical thermodynamics, as it is easy to work out the resulting increase in entropy. It occurs where a volume of gas is kept in one side of a thermally isolated container (via a small partition), with the other side of the container being evacuated; the partition between the two parts of the container is then opened, and the gas fills the whole container. The internal energy of the gas remains the same, while the volume increases. The original state cannot be recovered by simply compressing the gas to its original volume, since the internal energy will be increased by this compression. The original state can only be recovered by then cooling the re-compressed system, and thereby irreversibly heating the environment. The diagram to the right applies only if the first expansion is "free" (Joule expansion), i.e. there can be no atmospheric pressure outside the cylinder and no weight lifted.
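For an ideal gas, the entropy increase of the Joule expansion described above is ΔS = nR ln(V2/V1), because the internal energy and hence the temperature stay constant. The short sketch below evaluates this for a doubling of volume; the numbers are purely illustrative.

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def joule_expansion_entropy(n_moles, v_initial, v_final):
    """Entropy change of an ideal gas in a free (Joule) expansion.

    The temperature is unchanged, so dS = n R ln(V2/V1), which is positive
    whenever V2 > V1; that increase is what cannot be undone without
    irreversibly heating the environment during recompression.
    """
    return n_moles * R * math.log(v_final / v_initial)

print(joule_expansion_entropy(1.0, 1.0, 2.0))  # about 5.76 J/K for one mole doubling its volume
```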
Complex systems
The difference between reversible and irreversible events has particular explanatory value in complex systems (such as living organisms, or ecosystems). According to the biologists Humberto Maturana and Francisco Varela, living organisms are characterized by autopoiesis, which enables their continued existence. More primitive forms of self-organizing systems have been described by the physicist and chemist Ilya Prigogine. In the context of complex systems, events which lead to the end of certain self-organising processes, like death, extinction of a species or the collapse of a meteorological system can be considered as irreversible. Even if a clone with the same organizational principle (e.g. identical DNA-structure) could be developed, this would not mean that the former distinct system comes back into being. Events to which the self-organizing capacities of organisms, species or other complex systems can adapt, like minor injuries or changes in the physical environment are reversible. However, adaptation depends on import of negentropy into the organism, thereby increasing irreversible processes in its environment. Ecological principles, like those of sustainability and the precautionary principle can be defined with reference to the concept of reversibility.
See also
Entropy production
Entropy (arrow of time)
Exergy
Reversible process (thermodynamics)
One way function
Non-equilibrium thermodynamics
Symmetry breaking
References
Thermodynamics | Irreversible process | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,002 | [
"Thermodynamics",
"Dynamical systems"
] |
636,219 | https://en.wikipedia.org/wiki/Pressure%20vessel | A pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure.
Construction methods and materials may be chosen to suit the pressure application, and will depend on the size of the vessel, the contents, working pressure, mass constraints, and the number of items required.
Pressure vessels can be dangerous, and fatal accidents have occurred in the history of their development and operation. Consequently, pressure vessel design, manufacture, and operation are regulated by engineering authorities backed by legislation. For these reasons, the definition of a pressure vessel varies from country to country.
The design involves parameters such as maximum safe operating pressure and temperature, safety factor, corrosion allowance and minimum design temperature (for brittle fracture). Construction is tested using nondestructive testing, such as ultrasonic testing, radiography, and pressure tests. Hydrostatic pressure tests usually use water, but pneumatic tests use air or another gas. Hydrostatic testing is preferred, because it is a safer method, as much less energy is released if a fracture occurs during the test (water does not greatly increase its volume when rapid depressurisation occurs, unlike gases, which expand explosively). Mass or batch production products will often have a representative sample tested to destruction in controlled conditions for quality assurance. Pressure relief devices may be fitted if the overall safety of the system is sufficiently enhanced.
In most countries, vessels over a certain size and pressure must be built to a formal code. In the United States that code is the ASME Boiler and Pressure Vessel Code (BPVC). In Europe the code is the Pressure Equipment Directive. These vessels also require an authorised inspector to sign off on every new vessel constructed and each vessel has a nameplate with pertinent information about the vessel, such as maximum allowable working pressure, maximum temperature, minimum design metal temperature, what company manufactured it, the date, its registration number (through the National Board), and American Society of Mechanical Engineers's official stamp for pressure vessels (U-stamp). The nameplate makes the vessel traceable and officially an ASME Code vessel.
A special application is pressure vessels for human occupancy, for which more stringent safety rules apply.
Definition and scope
The ASME definition of a pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure.
The Australian and New Zealand standard "AS/NZS 1200:2000 Pressure equipment" defines a pressure vessel as a vessel subject to internal or external pressure, including connected components and accessories up to the connection to external piping.
This article may include information on pressure vessels in the broad sense, and is not restricted to any single definition.
Components
A pressure vessel comprises a shell, and usually one or more other components needed to pressurise, retain the pressure, depressurise, and provide access for maintenance and inspection. There may be other components and equipment provided to facilitate the intended use, and some of these may be considered parts of the pressure vessel, such as shell penetrations and their closures, and viewports and airlocks on a pressure vessel for human occupancy, as they affect the integrity and strength of the shell, are also part of the structure retaining the pressure. Pressure gauges and safety devices like pressure relief valves may also be deemed part of the pressure vessel. There may also be structural components permanently attached to the vessel for lifting, moving, or mounting it, like a foot ring, skids, handles, lugs, or mounting brackets.
Types
– system used in tall buildings and marine environments to maintain water pressure.
Dissolved gas storage
Fired pressure vessels
Liquefied gas (vapour over liquid) storage
Permanent gas storage
Supercritical fluid storage
Internal pressure vs external
Types by construction method
Types by construction material
Uses
Pressure vessels are used in a variety of applications in both industry and the private sector. They appear in these sectors as industrial compressed air receivers, boilers and domestic hot water storage tanks. Other examples of pressure vessels are diving cylinders, recompression chambers, distillation towers, pressure reactors, autoclaves, and many other vessels in mining operations, oil refineries and petrochemical plants, nuclear reactor vessels, submarine and space ship habitats, atmospheric diving suits, pneumatic reservoirs, hydraulic reservoirs under pressure, rail vehicle air brake reservoirs, road vehicle air brake reservoirs, and storage vessels for high pressure permanent gases and liquified gases such as ammonia, chlorine, and LPG (propane, butane).
A pressure vessel may also support structural loads. The outer skin of an airliner's passenger cabin carries both the structural and maneuvering loads of the aircraft and the cabin pressurization loads. The pressure hull of a submarine also carries the hull structural and maneuvering loads.
Design
Working pressure
The working pressure, i.e. the pressure difference between the interior of the pressure vessel and the surroundings, is the primary characteristic considered for design and construction. The concepts of high pressure and low pressure are somewhat flexible, and may be defined differently depending on context. There is also the matter of whether the internal pressure is greater or less than the external pressure, and its magnitude relative to normal atmospheric pressure. A vessel with internal pressure lower than atmospheric may also be called a hypobaric vessel or a vacuum vessel. A pressure vessel with high internal pressure can easily be made to be structurally stable, and will usually fail in tension, but failure due to excessive external pressure is usually by buckling instability and collapse.
Shape
Pressure vessels can theoretically be almost any shape, but shapes made of sections of spheres, cylinders, ellipsoids of revolution, and cones with circular sections are usually employed, though some other surfaces of revolution are also inherently stable. A common design is a cylinder with end caps called heads. Head shapes are frequently either hemispherical or dished (torispherical). More complicated shapes have historically been much harder to analyze for safe operation and are usually far more difficult to construct.
Theoretically, a spherical pressure vessel has approximately twice the strength of a cylindrical pressure vessel with the same wall thickness, and is the ideal shape to hold internal pressure. However, a spherical shape is difficult to manufacture, and therefore more expensive, so most pressure vessels are cylindrical with 2:1 semi-elliptical heads or end caps on each end. Smaller pressure vessels are assembled from a pipe and two covers. For cylindrical vessels with a diameter up to 600 mm (NPS of 24 in), it is possible to use seamless pipe for the shell, thus avoiding many inspection and testing issues, mainly the nondestructive examination of radiography for the long seam if required. A disadvantage of these vessels is that greater diameters are more expensive, so that for example the most economic shape of a , pressure vessel might be a diameter of and a length of including the 2:1 semi-elliptical domed end caps.
Scaling
No matter what shape it takes, the minimum mass of a pressure vessel scales with the pressure and volume it contains and is inversely proportional to the strength to weight ratio of the construction material (minimum mass decreases as strength increases).
Scaling of stress in walls of vessel
Pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. The normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. Therefore, pressure vessels are designed to have a thickness proportional to the radius of tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container.
Because (for a given pressure) the thickness of the walls scales with the radius of the tank, the mass of a tank (which scales as the length times radius times thickness of the wall for a cylindrical tank) scales with the volume of the gas held (which scales as length times radius squared). The exact formula varies with the tank shape but depends on the density, ρ, and maximum allowable stress σ of the material in addition to the pressure P and volume V of the vessel. (See below for the exact equations for the stress in the walls.)
Spherical vessel
For a sphere, the minimum mass of a pressure vessel is
M = (3/2) P V ρ / σ,
where:
M is mass, (kg)
P is the pressure difference from ambient (the gauge pressure), (Pa)
V is volume,
ρ is the density of the pressure vessel material, (kg/m3)
σ is the maximum working stress that material can tolerate. (Pa)
Other shapes besides a sphere have constants larger than 3/2 (infinite cylinders take 2), although some tanks, such as non-spherical wound composite tanks can approach this.
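The scaling can be illustrated with a few lines of code. The material numbers below are invented round figures for a generic steel, chosen only to show how the formula is used.

```python
def spherical_vessel_mass(pressure_pa, volume_m3, density_kg_m3, max_stress_pa):
    """Minimum shell mass of a thin-walled spherical pressure vessel,
    M = (3/2) * P * V * rho / sigma."""
    return 1.5 * pressure_pa * volume_m3 * density_kg_m3 / max_stress_pa

# Illustrative only: 0.05 m^3 at 20 MPa gauge, density 7850 kg/m^3,
# maximum working stress taken as 200 MPa
mass = spherical_vessel_mass(20e6, 0.05, 7850, 200e6)
print(f"minimum shell mass: {mass:.1f} kg")   # about 58.9 kg
```

Doubling either the pressure or the volume doubles the minimum mass, which is the scaling described in the text.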
Cylindrical vessel with hemispherical ends
This is sometimes called a "bullet" for its shape, although in geometric terms it is a capsule.
For a cylinder with hemispherical ends,
M = 2 π R² (R + W) P ρ / σ,
where
R is the Radius (m)
W is the middle cylinder width only, and the overall width is W + 2R (m)
Cylindrical vessel with semi-elliptical ends
In a vessel with an aspect ratio of middle cylinder width to radius of 2:1,
.
Gas storage capacity
In looking at the first equation, the factor PV, in SI units, is in units of (pressurization) energy. For a stored gas, PV is proportional to the mass of gas at a given temperature, thus
M = (3/2) n R T ρ / σ (see gas law), where n is the amount of gas in moles, R the gas constant and T the absolute temperature.
The other factors are constant for a given vessel shape and material. So we can see that there is no theoretical "efficiency of scale", in terms of the ratio of pressure vessel mass to pressurization energy, or of pressure vessel mass to stored gas mass. For storing gases, "tankage efficiency" is independent of pressure, at least for the same temperature.
So, for example, a typical design for a minimum mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for best possible , and very cold helium for best possible .
Stress in thin-walled pressure vessels
Stress in a thin-walled pressure vessel in the shape of a sphere is
σ_θ = σ_long = pr / (2t),
where σ_θ is hoop stress, or stress in the circumferential direction, σ_long is stress in the longitudinal direction, p is internal gauge pressure, r is the inner radius of the sphere, and t is thickness of the sphere wall. A vessel can be considered "thin-walled" if the diameter is at least 10 times (sometimes cited as 20 times) greater than the wall thickness.
Stress in a thin-walled pressure vessel in the shape of a cylinder is
σ_θ = pr / t,
σ_long = pr / (2t),
where:
σ_θ is hoop stress, or stress in the circumferential direction
σ_long is stress in the longitudinal direction
p is internal gauge pressure
r is the inner radius of the cylinder
t is thickness of the cylinder wall.
Almost all pressure vessel design standards contain variations of these two formulas with additional empirical terms to account for variation of stresses across thickness, quality control of welds and in-service corrosion allowances.
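A minimal sketch applying the two membrane-stress formulas is shown below; the pressure and geometry are invented values used only to demonstrate the calculation.

```python
def thin_wall_stresses(p_gauge, inner_radius, thickness, spherical=False):
    """Membrane stresses under the uniform thin-wall assumption.

    Cylinder: hoop stress = p r / t, longitudinal stress = p r / (2 t).
    Sphere:   both stresses equal p r / (2 t).
    """
    if 2 * inner_radius / thickness < 10:
        raise ValueError("thin-wall formulas assume a diameter of at least ~10-20 wall thicknesses")
    longitudinal = p_gauge * inner_radius / (2 * thickness)
    hoop = longitudinal if spherical else 2 * longitudinal
    return hoop, longitudinal

# Illustrative: 2 MPa gauge pressure, 0.5 m inner radius, 10 mm wall (cylinder)
hoop, longitudinal = thin_wall_stresses(2e6, 0.5, 0.010)
print(f"hoop: {hoop / 1e6:.0f} MPa, longitudinal: {longitudinal / 1e6:.0f} MPa")  # 100 and 50 MPa
```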
All formulae mentioned above assume uniform distribution of membrane stresses across thickness of shell but in reality, that is not the case. Deeper analysis is given by Lamé's theorem, which gives the distribution of stress in the walls of a thick-walled cylinder of a homogeneous and isotropic material. The formulae of pressure vessel design standards are extension of Lamé's theorem by putting some limit on ratio of inner radius and thickness.
For example, the ASME Boiler and Pressure Vessel Code (BPVC) (UG-27) formulas are:
Spherical shells: t = PR / (2SE - 0.2P), valid where the thickness is less than 0.356 times the inner radius
Cylindrical shells: t = PR / (SE - 0.6P), valid where the thickness is less than 0.5 times the inner radius
where E is the joint efficiency, S the maximum allowable stress, and all other variables are as stated above.
The factor of safety is often included in these formulas as well, in the case of the ASME BPVC this term is included in the material stress value when solving for pressure or thickness.
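The sketch below shows how wall-thickness formulas of this kind are typically applied. The expressions t = PR/(SE - 0.6P) for cylindrical shells and t = PR/(2SE - 0.2P) for spherical shells are the commonly published forms of the UG-27 rules quoted above; the numeric inputs are invented, and the current edition of the Code, including corrosion allowance and applicability limits, governs any real design.

```python
def ug27_min_thickness(p, inner_radius, allowable_stress, joint_efficiency, spherical=False):
    """Minimum required wall thickness from the commonly published UG-27 forms
    (internal pressure, thin shells), for illustration only.

    Cylindrical shell (circumferential stress): t = P R / (S E - 0.6 P)
    Spherical shell:                            t = P R / (2 S E - 0.2 P)
    """
    S, E, R = allowable_stress, joint_efficiency, inner_radius
    if spherical:
        t = p * R / (2 * S * E - 0.2 * p)
        limit = 0.356 * R
    else:
        t = p * R / (S * E - 0.6 * p)
        limit = 0.5 * R
    if t >= limit:
        raise ValueError("outside the thin-shell range; thick-wall rules apply")
    return t

# Illustrative only: 2 MPa design pressure, 0.5 m inner radius,
# allowable stress 138 MPa, fully radiographed welds (E = 1.0)
t = ug27_min_thickness(2e6, 0.5, 138e6, 1.0)
print(f"required thickness: {t * 1000:.1f} mm")   # about 7.3 mm
```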
Shell penetrations
Also sometimes called hull penetrations, depending on context, shell penetrations are intentional breaks in the structural integrity of the shell, and are usually significant local stress-raisers, so they must be accounted for in the design so they do not become failure points. It is usually necessary to reinforce the shell in the immediate vicinity of such penetrations. Shell penetrations are necessary to provide a variety of functions, including passage of the contents from the outside to the inside and back out, and in special applications for transmission of electricity, light, and other services through the shell. The simplest case is gas cylinders, which need only a neck penetration threaded to fit a valve, while a submarine or spacecraft may have a large number of penetrations for a large number of functions.
Penetration thread
The screw thread used for high pressure vessel shell penetrations is subject to high loads and must not leak.
High pressure cylinders are produced with conical (tapered) threads and parallel threads. Two sizes of tapered threads have dominated the full metal cylinders in industrial use from in volume.
For smaller fittings, taper thread standard 17E is used, with a 12% taper right hand thread, standard Whitworth 55° form with a pitch of 14 threads per inch (5.5 threads per cm) and pitch diameter at the top thread of the cylinder of . These connections are sealed using thread tape and torqued to between on steel cylinders, and between on aluminium cylinders.
For larger fittings, taper thread standard 25E is used. To screw in the valve, a higher torque of typically about is necessary. Until around 1950, hemp was used as a sealant. Later, a thin sheet of lead, pressed into a hat shape that closely fitted the external threads and with a hole on top, was used. The fitter would squeeze the soft lead shim to conform better with the grooves and ridges of the fitting before screwing it into the hole. The lead would deform to form a thin layer between the internal and external thread, and thereby fill the gaps to create the seal. Since 2005, PTFE tape has been used to avoid using lead.
A tapered thread provides simple assembly, but requires high torque for connecting and leads to high radial forces in the vessel neck, and has a limited number of times it can be used before it is excessively deformed. This could be extended a bit by always returning the same fitting to the same hole, and avoiding over-tightening.
All cylinders built for working pressure, all diving cylinders, and all composite cylinders use parallel threads.
Parallel threads for cylinder necks and similar penetrations of pressure vessels are made to several standards:
M25x2 ISO parallel thread, which is sealed by an O-ring and torqued to on steel, and on aluminium cylinders;
M18x1.5 parallel thread, which is sealed by an O-ring, and torqued to on steel cylinders, and on aluminium cylinders;
3/4"x14 BSP parallel thread, which has a 55° Whitworth thread form, a pitch diameter of and a pitch of 14 threads per inch (1.814 mm);
3/4"x14 NGS (NPSM) parallel thread, sealed by an O-ring, torqued to on aluminium cylinders, which has a 60° thread form, a pitch diameter of , and a pitch of 14 threads per inch (5.5 threads per cm);
3/4"x16 UNF, sealed by an O-ring, torqued to on aluminium cylinders.
7/8"x14 UNF, sealed by an O-ring.
The 3/4"NGS and 3/4"BSP are very similar, having the same pitch and a pitch diameter that only differs by about , but they are not compatible, as the thread forms are different.
All parallel thread valves are sealed using an elastomer O-ring at top of the neck thread which seals in a chamfer or step in the cylinder neck and against the flange of the valve.
Pressure vessel closures
Pressure vessel closures are pressure retaining structures designed to provide quick access to pipelines, pressure vessels, pig traps, filters and filtration systems. Typically pressure vessel closures allow access by maintenance personnel.
A commonly used maintenance access hole shape is elliptical, which allows the closure to be passed through the opening and rotated into the working position; it is then held in place by a bar on the outside, secured by a central bolt. The internal pressure prevents it from being inadvertently opened under load.
Placing the closure on the high pressure side of the opening uses the pressure difference to lock the closure when at service pressure. Where this is impracticable a safety interlock may be mandated.
An airlock is a room or compartment which permits passage between environments of differing atmospheric pressure or composition, while minimizing the changing of pressure or composition between the differing environments. It consists of a chamber with two airtight doors or hatches arranged in series, which are not opened simultaneously. Airlocks can be small or large enough for one or more people to pass through, which may take the form of an antechamber.
An airlock may also be used underwater to allow passage between the air environment in a pressure vessel, such as a submarine or diving bell, and the water environment outside. In such cases the airlock can contain air or water. This is called a floodable airlock or underwater airlock, and is used to prevent water from entering a submersible vessel or underwater habitat. A similar arrangement is used on spacecraft to facilitate extravehicular activity.
Construction materials
Many pressure vessels are made of steel. To manufacture a cylindrical or spherical pressure vessel, rolled and possibly forged parts would have to be welded together. Some mechanical properties of steel, achieved by rolling or forging, could be adversely affected by welding, unless special precautions are taken. In addition to adequate mechanical strength, current standards dictate the use of steel with a high impact resistance, especially for vessels used in low temperatures. In applications where carbon steel would suffer corrosion, special corrosion resistant material should also be used.
Some pressure vessels are made of composite materials, such as filament wound composite using carbon fibre held in place with a polymer. Due to the very high tensile strength of carbon fibre these vessels can be very light, but are much more difficult to manufacture. The composite material may be wound around a metal liner, forming a composite overwrapped pressure vessel.
Other very common materials include polymers such as PET in carbonated beverage containers and copper in plumbing.
Pressure vessels may be lined with various metals, ceramics, or polymers to prevent leaking and protect the structure of the vessel from the contained medium. This liner may also carry a significant portion of the pressure load.
Pressure vessels may also be constructed from concrete (PCV) or other materials which are weak in tension. Cabling, wrapped around the vessel or within the wall of the vessel itself, provides the necessary tension to resist the internal pressure. A "leakproof steel thin membrane" lines the internal wall of the vessel. Such vessels can be assembled from modular pieces and so have "no inherent size limitations". There is also a high order of redundancy thanks to the large number of individual cables resisting the internal pressure.
The very small vessels used to make liquid butane fueled cigarette lighters are subjected to about 2 bar pressure, depending on ambient temperature. These vessels are often oval (1 x 2 cm ... 1.3 x 2.5 cm) in cross section but sometimes circular. The oval versions generally include one or two internal tension struts which appear to be baffles but which also provide additional cylinder strength.
Manufacturing processes
Riveted
Before gas and electrical welding of reliable quality became widespread, the standard method of construction for boilers, compressed air receivers and other pressure vessels of iron or steel was riveting: sheets were rolled and forged into shape, then riveted together, often using butt straps along the joints, and caulked along the riveted seams by deforming the edges of the overlap with a blunt chisel to create a continuous line of high contact pressure along the joint. Hot riveting caused the rivets to contract on cooling, forming a tighter joint.
Welded
Large and low pressure vessels are commonly manufactured from formed plates welded together. Weld quality is critical to safety in pressure vessels for human occupancy.
Seamless
The typical circular-cylindrical high pressure gas cylinders for permanent gases (those that do not liquefy at storage pressure, such as air, oxygen, nitrogen, hydrogen, argon and helium) have been manufactured by hot forging, pressing and rolling to produce a seamless vessel with consistent material characteristics and minimised stress concentrations.
Cylinders for use in industry, skilled crafts, diving and medicine had a standardized working pressure (WP) of about in Europe until about 1950. From about 1975, the standard pressure rose to about . Firefighters need slim, lightweight cylinders to move in confined spaces; since about 1995, cylinders for WP have been used (at first in pure steel).
A demand for reduced weight led to different generations of composite (fiber and matrix, over a liner) cylinders that are more vulnerable to impact damage. Composite cylinders for breathing gas are usually built for working pressure of .
Manufacturing methods for seamless metal pressure vessels are commonly used for relatively small diameter cylinders where large numbers will be produced, as the machinery and tooling require large capital outlay. The methods are well suited to high pressure gas transport and storage applications, and provide consistently high quality products.
Backward extrusion
Backward extrusion is a process by which the material is forced to flow back along the mandrel between the mandrel and die.
Cold extrusion (aluminium):
Seamless aluminium cylinders may be manufactured by cold backward extrusion of aluminium billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press forming the shoulder and neck.
Hot extrusion (steel):
In the hot extrusion process a billet of steel is cut to size, induction heated to the correct temperature for the alloy, descaled and placed in the die. The metal is backward extruded by forcing the mandrel into it, causing it to flow through the annular gap until a deep cup is formed. The cup is then drawn further to reduce its diameter and wall thickness, and the bottom is formed. After inspection and trimming of the open end, the cylinder is hot spun to close the end and form the neck.
Drawn
Seamless cylinders may also be cold drawn from steel plate discs to a cylindrical cup form, in two to four stages, depending on the final ratio of diameter to cylinder length. After forming the base and side walls, the top of the cylinder is trimmed to length, heated and hot spun to form the shoulder and close the neck. The spinning process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness.
Spun from seamless tube
A seamless steel cylinder can also be formed from seamless tube by hot spinning both ends closed. The base is first closed completely and trimmed to form a smooth internal surface before the shoulder and neck are formed.
Regardless of the method used to form the cylinder, it will be machined to finish the neck and cut the neck threads, heat treated, cleaned, and surface finished, stamp marked, tested, and inspected for quality assurance.
Composite
Composite pressure vessels are generally laid up from filament wound rovings in a thermosetting polymer matrix. The mandrel may be removable after cure, or may remain a part of the finished product, often providing a more reliable gas or liquid-tight liner, or better chemical resistance to the intended contents than the resin matrix. Metallic inserts may be provided for attaching threaded accessories, such as valves and pipes.
Development of composite vessels
To classify the different structural principles of gas storage cylinders, 4 types are defined.
Type 1 – Full metal: Cylinder is made entirely from metal.
Type 2 – Hoop wrap: Metal cylinder, reinforced by a belt-like hoop wrap with fibre-reinforced resin.
Type 3 – Fully wrapped, over metal liner: Diagonally wrapped fibres form the load bearing shell on the cylindrical section and at the bottom and shoulder around the metal neck. The metal liner is thin and provides the gas tight barrier.
Type 4 – Fully wrapped, over non-metal liner: A lightweight thermoplastic liner provides the gas tight barrier, and the mandrel to wrap fibres and resin matrix around. Only the neck which carries the neck thread and its anchor to the liner is made of metal.
Type 2 and 3 cylinders have been in production since around 1995. Type 4 cylinders have been commercially available since at least 2016.
Winding angle of composite vessels
An ideally wound (infinitely long) cylindrical shape has an optimal winding angle of about 54.7 degrees to the cylindrical axis, as this provides twice the strength in the circumferential (hoop) direction as in the longitudinal direction, matching the 2:1 ratio of hoop to longitudinal stress in a thin-walled cylinder under internal pressure.
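The 2:1 stress ratio and the resulting angle can be seen from a minimal netting-analysis sketch, assuming a thin-walled cylinder under internal pressure p, mean radius r and wall thickness t, with fibres carrying load only along their own axis:

```latex
% Thin-walled cylinder stresses (hoop and longitudinal) and the netting-analysis optimum
\[
\sigma_{\text{hoop}} = \frac{p\,r}{t}, \qquad
\sigma_{\text{long}} = \frac{p\,r}{2t}
\quad\Longrightarrow\quad
\frac{\sigma_{\text{hoop}}}{\sigma_{\text{long}}} = 2.
\]
% Fibres wound at angle \alpha to the axis carry hoop and longitudinal load in the ratio \tan^2\alpha
\[
\tan^{2}\alpha = \frac{\sigma_{\text{hoop}}}{\sigma_{\text{long}}} = 2
\quad\Longrightarrow\quad
\alpha = \arctan\sqrt{2} \approx 54.7^{\circ}.
\]
```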
Hoop wound fibre reinforcement is wound at an angle of nearly 90° to the cylinder axis.
Safety
Overpressure relief
As the pressure vessel is designed to a specified pressure, there is typically a safety valve or relief valve to ensure that this pressure is not exceeded in operation.
A rupture disc may be fitted to the vessel or the cylinder valve, or a fusible plug may be fitted to protect against overheating.
Leak before burst
Leak before burst describes a pressure vessel designed such that a crack in the vessel will grow through the wall, allowing the contained fluid to escape and reducing the pressure, prior to growing so large as to cause catastrophic fracture at the operating pressure.
Many pressure vessel standards, including the ASME Boiler and Pressure Vessel Code and the AIAA metallic pressure vessel standard, either require pressure vessel designs to be leak before burst, or require pressure vessels to meet more stringent requirements for fatigue and fracture if they are not shown to be leak before burst.
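A minimal, hypothetical screening sketch of the leak-before-burst idea, not the procedure of any particular code: estimate the critical through-wall crack length at the operating stress from linear-elastic fracture mechanics and compare it with the wall thickness. The function names and all property values below are illustrative assumptions.

```python
import math

def critical_crack_length(k_ic_mpa_sqrt_m: float, stress_mpa: float) -> float:
    """Critical half-length (m) of a through crack: a_c = (1/pi) * (K_IC / sigma)^2."""
    return (k_ic_mpa_sqrt_m / stress_mpa) ** 2 / math.pi

def leak_before_burst(k_ic: float, hoop_stress: float, wall_thickness_m: float) -> bool:
    """Crude screen: the vessel leaks before bursting if a crack can grow right
    through the wall while still shorter than the critical (unstable) length."""
    return 2 * critical_crack_length(k_ic, hoop_stress) > wall_thickness_m

# Illustrative numbers only (assumed, not taken from any standard):
print(leak_before_burst(k_ic=60.0, hoop_stress=200.0, wall_thickness_m=0.012))  # True
```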
Testing and inspection
Hydrostatic test (filled with water) pressure is usually 1.5 times working pressure, but the US DOT test pressure for scuba cylinders is 5/3 (about 1.67) times working pressure.
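A trivial sketch of the two test-pressure conventions just mentioned; the working pressures used below are assumed example values, not requirements of any standard.

```python
def hydro_test_pressure(working_pressure_bar: float, factor: float = 1.5) -> float:
    """Hydrostatic test pressure as a multiple of working pressure."""
    return factor * working_pressure_bar

print(hydro_test_pressure(232.0))          # general convention: 1.5 x WP -> 348 bar
print(hydro_test_pressure(207.0, 5 / 3))   # DOT scuba convention: 5/3 x WP -> 345 bar
```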
Operation standards
Pressure vessels are designed to operate safely at a specific pressure and temperature, technically referred to as the "Design Pressure" and "Design Temperature". A vessel that is inadequately designed to handle a high pressure constitutes a very significant safety hazard. Because of that, the design and certification of pressure vessels is governed by design codes such as the ASME Boiler and Pressure Vessel Code in North America, the Pressure Equipment Directive of the EU (PED), Japanese Industrial Standard (JIS), CSA B51 in Canada, Australian Standards in Australia and other international standards like Lloyd's, Germanischer Lloyd, Det Norske Veritas, Société Générale de Surveillance (SGS S.A.), Lloyd's Register Energy Nederland (formerly known as Stoomwezen) etc.
Note that where the pressure-volume product is part of a safety standard, any incompressible liquid in the vessel can be excluded as it does not contribute to the potential energy stored in the vessel, so only the volume of the compressible part such as gas is used.
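A small illustrative sketch of that point: only the gas-filled part of the volume counts toward the pressure-volume product. The function name and the numbers are hypothetical.

```python
def pressure_volume_product(pressure_bar: float,
                            total_volume_l: float,
                            liquid_volume_l: float) -> float:
    """Pressure-volume product (bar·litre) counting only the compressible (gas) volume."""
    gas_volume_l = total_volume_l - liquid_volume_l
    return pressure_bar * gas_volume_l

# Hypothetical vessel: 10 bar, 500 L total, of which 450 L is incompressible liquid
print(pressure_volume_product(10.0, 500.0, 450.0))  # 500.0 bar·L, from the 50 L gas space
```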
List of standards
EN 13445: The current European Standard, harmonized with the Pressure Equipment Directive (Originally "97/23/EC", since 2014 "2014/68/EU"). Extensively used in Europe.
ASME Boiler and Pressure Vessel Code Section VIII: Rules for Construction of Pressure Vessels.
BS 5500: Former British Standard, replaced in the UK by BS EN 13445 but retained under the name PD 5500 for the design and construction of export equipment.
AD Merkblätter: German standard, harmonized with the Pressure Equipment Directive.
EN 286 (Parts 1 to 4): European standard for simple pressure vessels (air tanks), harmonized with Council Directive 87/404/EEC.
BS 4994: Specification for design and construction of vessels and tanks in reinforced plastics.
ASME PVHO: US standard for Pressure Vessels for Human Occupancy.
CODAP: French Code for Construction of Unfired Pressure Vessel.
AS/NZS 1200: Australian and New Zealand Standard for the requirements of Pressure equipment including Pressure Vessels, boilers and pressure piping.
AS 1210: Australian Standard for the design and construction of Pressure Vessels
AS/NZS 3788: Australian and New Zealand Standard for the inspection of pressure vessels
API 510.
ISO 11439: Compressed natural gas (CNG) cylinders
IS 2825-1969 (RE1977): Code for unfired pressure vessels.
FRP tanks and vessels.
AIAA S-080-1998: AIAA Standard for Space Systems – Metallic Pressure Vessels, Pressurized Structures, and Pressure Components.
AIAA S-081A-2006: AIAA Standard for Space Systems – Composite Overwrapped Pressure Vessels (COPVs).
ECSS-E-ST-32-02C Rev.1: Space engineering – Structural design and verification of pressurized hardware
B51-09 Canadian Boiler, pressure vessel, and pressure piping code.
HSE guidelines for pressure systems.
Stoomwezen: Former pressure vessels code in the Netherlands, also known as RToD: Regels voor Toestellen onder Druk (Dutch Rules for Pressure Vessels).
SANS 10019:2021 South African National Standard: Transportable pressure receptacles for compressed, dissolved and liquefied gases - Basic design, manufacture, use and maintenance.
SANS 1825:2010 Edition 3: South African National Standard: Gas cylinder test stations ― General requirements for periodic inspection and testing of transportable refillable gas pressure receptacles. ISBN 978-0-626-23561-1
History
The earliest documented design of pressure vessels was described in 1495 in the book by Leonardo da Vinci, the Codex Madrid I, in which containers of pressurized air were theorized to lift heavy weights underwater. However, vessels resembling those used today did not come about until the 1800s, when steam was generated in boilers, helping to spur the Industrial Revolution. However, with poor material quality and manufacturing techniques, along with improper knowledge of design, operation and maintenance, there was a large number of damaging and often deadly explosions associated with these boilers and pressure vessels, with a death occurring on a nearly daily basis in the United States. Local provinces and states in the US began enacting rules for constructing these vessels after some particularly devastating vessel failures occurred, killing dozens of people at a time, which made it difficult for manufacturers to keep up with the varied rules from one location to another. The first pressure vessel code was developed starting in 1911 and released in 1914, starting the ASME Boiler and Pressure Vessel Code (BPVC).
In an early effort to design a tank capable of withstanding pressures up to , a diameter tank was developed in 1919 that was spirally-wound with two layers of high tensile strength steel wire to prevent sidewall rupture, and the end caps longitudinally reinforced with lengthwise high-tensile rods. The need for high pressure and temperature vessels for petroleum refineries and chemical plants gave rise to vessels joined with welding instead of rivets (which were unsuitable for the pressures and temperatures required) and in the 1920s and 1930s the BPVC included welding as an acceptable means of construction; welding is the main means of joining metal vessels today.
There have been many advances in the field of pressure vessel engineering: advanced non-destructive examination such as phased array ultrasonic testing and radiography; new material grades with increased corrosion resistance and higher strength; new ways to join materials such as explosion welding and friction stir welding; and advanced theories and means of more accurately assessing the stresses encountered in vessels, such as finite element analysis, allowing vessels to be built more safely and efficiently. Pressure vessels in the USA require BPVC stamping, but the BPVC is not just a domestic code; many other countries have adopted it as their official code. There are, however, other official codes in some countries, such as Japan, Australia, Canada, Britain, and other countries in the European Union. Nearly all recognize the inherent potential hazards of pressure vessels and the need for standards and codes regulating their design and construction.
Gallery
Alternatives
Natural gas storage
Gas holder
Depending on the application and local circumstances, alternatives to pressure vessels exist. Examples can be seen in domestic water collection systems, where the following may be used:
Gravity-controlled systems, which typically consist of an unpressurized water tank at an elevation higher than the point of use. Pressure at the point of use is the result of the hydrostatic pressure caused by the elevation difference; a fixed pressure is produced per foot of water head (see the hydrostatic sketch after this list). A municipal water supply or pumped water is typically around .
Inline pump controllers or pressure-sensitive pumps.
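A minimal sketch of the hydrostatic relation behind gravity-fed systems, p = ρgh; the 10 m head used below is an assumed example.

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def head_pressure_kpa(head_m: float) -> float:
    """Gauge pressure (kPa) produced by a column of water of the given height."""
    return RHO_WATER * G * head_m / 1000.0

print(round(head_pressure_kpa(10.0), 1))  # a tank 10 m above the tap gives ~98.1 kPa (~14 psi)
```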
In nuclear reactors, pressure vessels are primarily used to keep the coolant (water) liquid at high temperatures to increase Carnot efficiency. Other coolants can be kept at high temperatures with much less pressure, explaining the interest in molten salt reactors, lead cooled fast reactors and gas cooled reactors. However, the benefits of not needing a pressure vessel or one of less pressure are in part compensated by drawbacks unique to each alternative approach.
See also
- a small, inexpensive, disposable metal gas cylinder for providing pneumatic power
– a device for measuring leaf water potentials
or Knock-out drum
Notes
References
Sources
A.C. Ugural, S.K. Fenster, Advanced Strength and Applied Elasticity, 4th ed.
E.P. Popov, Engineering Mechanics of Solids, 1st ed.
Megyesy, Eugene F. "Pressure Vessel Handbook, 14th Edition." PV Publishing, Inc. Oklahoma City, OK
Further reading
Megyesy, Eugene F. (2008, 14th ed.) Pressure Vessel Handbook. PV Publishing, Inc.: Oklahoma City, Oklahoma, US. www.pressurevesselhandbook.com Design handbook for pressure vessels based on the ASME code.
External links
Use of pressure vessels in oil and gas industry
Basic formulas for thin walled pressure vessels, with examples
Educational Excel spreadsheets for ASME head, shell and nozzle designs
ASME boiler and pressure vessel website
Journal of Pressure Vessel Technology
EU Pressure Equipment Directive website
EU Simple Pressure Vessel Directive
EU classification
Pressure vessel attachments
Image of a carbon-fiber composite gas cylinder, showing construction details
Image of a carbon-fiber composite oxygen cylinder for an industrial breathing set
Gas technologies | Pressure vessel | [
"Physics",
"Chemistry",
"Engineering"
] | 7,064 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
636,427 | https://en.wikipedia.org/wiki/Mahler%27s%20compactness%20theorem | In mathematics, Mahler's compactness theorem, proved by Kurt Mahler, is a foundational result on lattices in Euclidean space, characterising sets of lattices that are 'bounded' in a certain definite sense. Looked at another way, it explains the ways in which a lattice could degenerate (go off to infinity) in a sequence of lattices. In intuitive terms it says that this is possible in just two ways: becoming coarse-grained with a fundamental domain that has ever larger volume; or containing shorter and shorter vectors. It is also called his selection theorem, following an older convention used in naming compactness theorems, because they were formulated in terms of sequential compactness (the possibility of selecting a convergent subsequence).
Let X be the space GL_n(ℝ)/GL_n(ℤ) that parametrises lattices in ℝⁿ, with its quotient topology. There is a well-defined function Δ on X, which is the absolute value of the determinant of a matrix – this is constant on the cosets, since an invertible integer matrix has determinant 1 or −1.
Mahler's compactness theorem states that a subset Y of X is relatively compact if and only if Δ is bounded on Y, and there is a neighbourhood N of 0 in ℝⁿ such that for all Λ in Y, the only lattice point of Λ in N is 0 itself.
The assertion of Mahler's theorem is equivalent to the compactness of the space of unit-covolume lattices in ℝⁿ whose systole is greater than or equal to ε, for any fixed ε > 0.
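The two criteria – bounded covolume and a uniform lower bound on nonzero lattice vectors – can be illustrated numerically; the following sketch (not part of the article's sources) checks them for a hypothetical degenerating family of 2-dimensional unit-covolume lattices.

```python
import itertools
import numpy as np

def covolume(basis: np.ndarray) -> float:
    """Covolume of the lattice spanned by the rows of `basis`."""
    return abs(np.linalg.det(basis))

def shortest_vector(basis: np.ndarray, search: int = 10) -> float:
    """Brute-force length of the shortest nonzero lattice vector (small search box)."""
    best = float("inf")
    for coeffs in itertools.product(range(-search, search + 1), repeat=len(basis)):
        if any(coeffs):
            best = min(best, float(np.linalg.norm(np.array(coeffs) @ basis)))
    return best

# Lattices with bases diag(t, 1/t): the covolume stays 1 (Δ bounded), but the shortest
# vector shrinks as t -> 0, violating the second condition, so this family is not
# relatively compact in the space of lattices.
for t in (1.0, 0.5, 0.1):
    B = np.array([[t, 0.0], [0.0, 1.0 / t]])
    print(t, covolume(B), round(shortest_vector(B), 3))
```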
Mahler's compactness theorem was generalized to semisimple Lie groups by David Mumford; see Mumford's compactness theorem.
References
William Andrew Coppel (2006), Number theory, p. 418.
Geometry of numbers
Discrete groups
Compactness theorems
Theorems in number theory | Mahler's compactness theorem | [
"Mathematics"
] | 381 | [
"Compactness theorems",
"Geometry of numbers",
"Theorems in topology",
"Theorems in number theory",
"Mathematical problems",
"Mathematical theorems",
"Number theory"
] |
636,555 | https://en.wikipedia.org/wiki/Fluvastatin | Fluvastatin is a member of the statin drug class, used to treat hypercholesterolemia and to prevent cardiovascular disease.
It was patented in 1982 and approved for medical use in 1994. It is on the World Health Organization's List of Essential Medicines.
Adverse effects
Adverse effects are comparable to other statins. Common are nausea, indigestion, insomnia and headache. Myalgia (muscle pain), and rarely rhabdomyolysis, characteristic side effects for statins, can also occur.
Interactions
Contrary to lovastatin, simvastatin and atorvastatin, fluvastatin has no relevant interactions with drugs that inhibit the liver enzyme CYP3A4, and a generally lower potential for interactions than most other statins. Fluconazole, a potent inhibitor of CYP2C9, does increase fluvastatin levels.
Pharmacology
Mechanism of action
Fluvastatin works by blocking the liver enzyme HMG-CoA reductase, which facilitates an important step in cholesterol synthesis.
Pharmacodynamics
In a Cochrane systematic review the dose-related effects of fluvastatin on blood lipids were determined. Over the dose range of 10 to 80 mg/day total cholesterol was reduced by 10.7% to 24.9%, LDL cholesterol by 15.2% to 34.9%, and triglycerides by 3% to 17.5%.
Pharmacokinetics
The drug is quickly and almost completely (98%) absorbed from the gut. Food intake slows down absorption, but does not decrease it. Due to its first-pass effect, bioavailability is lower: about 24–30% according to different sources. Over 98% of the substance is bound to plasma proteins.
Several cytochrome P450 enzymes (mainly CYP2C9, but also CYP3A4 and CYP2C8) are involved in the metabolism of fluvastatin, which makes it less liable to interactions than most other statins. The main metabolite is inactive and is called "N-desisopropyl propionic acid" in the literature.
93–95% of the drug is excreted via the feces, less than 2% of which in form of the original substance.
Names
Fluvastatin is the INN. Brand names include Lescol, Canef and Vastin.
Research
Data from the Cholesterol Treatment Trialists' (CTT) publication was used to determine the effects of fluvastatin, atorvastatin and rosuvastatin on LDL cholesterol lowering and reduction of myocardial infarction. In two RCTs an average dose of 72 mg/day fluvastatin reduced LDL cholesterol by 31.9%, and reduced myocardial infarction, relative risk, 0.68 (95% CI 0.55 to 0.85) as compared to placebo. In five RCTs a mean atorvastatin dose of 26 mg/day reduced LDL cholesterol by 44.0% and reduced myocardial infarction, relative risk, 0.67 (95% CI 0.58 to 0.77) as compared to placebo. In four RCTs a mean rosuvastatin dose of 16 mg/day reduced LDL cholesterol by 48.8% and reduced myocardial infarction, relative risk, 0.82 (95% CI 0.73 to 0.93) as compared to placebo. Thus despite reducing LDL cholesterol by a much lesser amount with fluvastatin than atorvastatin and rosuvastatin, fluvastatin reduced myocardial infarction similarly to atorvastatin and to a greater degree than rosuvastatin.
References
Carboxylic acids
Diols
Indoles
4-Fluorophenyl compounds
Statins
Isopropyl compounds
Drugs developed by Novartis
Hydroxy acids
Secondary alcohols | Fluvastatin | [
"Chemistry"
] | 850 | [
"Carboxylic acids",
"Functional groups"
] |
637,102 | https://en.wikipedia.org/wiki/Plant%20physiology | Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants.
Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy and stomata function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.
Aims
The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.
First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while other compounds are used to attract pollinators or herbivores to spread ripe seeds.
Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from cells of animals, and which lead to major differences in the way that plant life behaves and responds differently from animal life. For example, plant cells have a cell wall which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.
Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.
Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.
Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
Biochemistry of plants
The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary.
Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.
Constituent elements
Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey.
The following tables list element nutrients essential to plants. Uses within plants are generalized.
Pigments
Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.
Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occur in plants with anthocyanins. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in plants which possess them, but there is some preliminary evidence that they may have fungicidal properties.
Signals and regulators
Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.
Plant hormones
Plant hormones, known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, that occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant and production is not limited to specific locations.
Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth; affecting processes in plants from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death.
The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.
Photomorphogenesis
While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis). The use of light to control structural development is called photomorphogenesis, and is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light.
Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.
The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.
Photoperiodism
Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.
Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (long night) does not flower if a flash of phytochrome activating light is used on the plant during the night.
Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia (Euphorbia pulcherrima).
Environmental physiology
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.
Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with the Pressure bomb) and the stress of drought or inundation, exchange of gases with the atmosphere, as well as the cycling of nutrients such as nitrogen and carbon.
Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.
While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack pain receptors, nerves, or a brain, and, by extension, consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants such as the venus flytrap or touch-me-not are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding these abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since plants lack any nervous system. The primary reason for this is that, unlike members of the animal kingdom whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death.
Tropisms and nastic movements
Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sun light, is called a tropism. A response to a nondirectional stimulus, such as temperature or humidity, is a nastic movement.
Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and production of one or more plant hormones.
Nastic movements result from differential cell growth (e.g. epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g., nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus fly trap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
Plant disease
Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms.
Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.
One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.
History
Early history
Francis Bacon published one of the first plant physiology experiments in 1627 in the book, Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water.
Stephen Hales is considered the Father of Plant Physiology for the many experiments in the 1727 book, Vegetable Staticks; though Julius von Sachs unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.
Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb nutrients readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production and as a hobby.
Economic applications
Food production
In horticulture and agriculture along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include: climatic requirements, fruit drop, nutrition, ripening, fruit set. The production of food crops also hinges on the study of plant physiology covering such topics as optimal planting and harvesting times and post harvest storage of plant products for human consumption and the production of secondary products like drugs and cosmetics.
Crop physiology steps back and looks at a field of plants as a whole, rather than looking at each plant individually. Crop physiology looks at how plants respond to each other and how to maximize results like food production through determining things like optimal planting density.
See also
Biomechanics
Hyperaccumulator
Phytochemistry
Plant anatomy
Plant morphology
Plant secondary metabolism
Branches of botany
References
Further reading
Lincoln Taiz, Eduardo Zeiger, Ian Max Møller, Angus Murphy: Fundamentals of Plant Physiology. Sinauer, 2018.
Branches of botany | Plant physiology | [
"Biology"
] | 4,337 | [
"Plant physiology",
"Plants",
"Branches of botany"
] |
637,240 | https://en.wikipedia.org/wiki/Round-trip%20engineering | Round-trip engineering (RTE) in the context of model-driven architecture is a functionality of software development tools that synchronizes two or more related software artifacts, such as, source code, models, configuration files, documentation, etc. between each other. The need for round-trip engineering arises when the same information is present in multiple artifacts and when an inconsistency may arise in case some artifacts are updated. For example, some piece of information was added to/changed in only one artifact (source code) and, as a result, it became missing in/inconsistent with the other artifacts (in models).
Overview
Round-trip engineering is closely related to traditional software engineering disciplines: forward engineering (creating software from specifications), reverse engineering (creating specifications from existing software), and reengineering (understanding existing software and modifying it). Round-trip engineering is often wrongly defined as simply supporting both forward and reverse engineering. In fact, the key characteristic of round-trip engineering that distinguishes it from forward and reverse engineering is the ability to synchronize existing artifacts that evolved concurrently by incrementally updating each artifact to reflect changes made to the other artifacts. Furthermore, forward engineering can be seen as a special instance of RTE in which only the specification is present and reverse engineering can be seen as a special instance of RTE in which only the software is present. Many reengineering activities can also be understood as RTE when the software is updated to reflect changes made to the previously reverse engineered specification.
Types
Various books describe two types of RTE:
partial or uni-directional RTE: changes made to a higher level representation of a code and model are reflected in lower level, but not otherwise; the latter might be allowed, but with limitations that may not affect higher-level abstractions
full or bi-directional RTE: regardless of changes, both higher and lower-level code and model representations are synchronized if any of them altered
Auto synchronization
Another characteristic of round-trip engineering is automatic update of the artifacts in response to automatically detected inconsistencies. In that sense, it is different from forward- and reverse engineering which can be both manual (traditionally) and automatic (via automatic generation or analysis of the artifacts). The automatic update can be either instantaneous or on-demand. In instantaneous RTE, all related artifacts are immediately updated after each change made to one of them. In on-demand RTE, authors of the artifacts may concurrently update the artifacts (even in a distributed setting) and at some point choose to execute matching to identify inconsistencies and choose to propagate some of them and reconcile potential conflicts.
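A toy sketch of on-demand synchronization in Python (purely illustrative – real RTE tools work on parsed models and abstract syntax trees, not sets of names): two artifacts that both record a class's fields are compared, the inconsistencies are detected, and the missing entries are propagated in both directions.

```python
# Toy artifacts: fields recorded in a "model" and fields found in the "code".
model_fields = {"name", "email"}
code_fields = {"name", "email", "created_at"}   # the code evolved independently

def diff(a: set, b: set):
    """Inconsistencies between two artifacts: what each one is missing."""
    return b - a, a - b

missing_in_model, missing_in_code = diff(model_fields, code_fields)

# On-demand round trip: propagate incrementally in both directions
# instead of regenerating either artifact from scratch.
model_fields |= missing_in_model
code_fields |= missing_in_code
print(model_fields == code_fields)   # True once synchronized
```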
Iterative approach
Round-trip engineering may involve an iterative development process. After the model has been synchronized with revised code, the developer is still free to choose the best way to work – making further modifications to the code or making changes to the model. Synchronization can be performed in either direction at any time, and the cycle can be repeated as many times as necessary.
Software
Many commercial tools and research prototypes support this form of RTE; a 2007 book lists Rational Rose, Micro Focus Together, ESS-Model, BlueJ, and Fujaba among those capable, with Fujaba said to be capable to also identify design patterns.
Limitations
A 2005 book on Visual Studio notes, for instance, that a common problem with RTE tools is that the reverse-engineered model is not the same as the original one, unless the tools are aided by laborious annotations left in the source code. The behavioral parts of UML impose even more challenges for RTE.
Usually, UML class diagrams are supported to some degree; however, certain UML concepts, such as associations and containment do not have straightforward representations in many programming languages which limits the usability of the created code and accuracy of code analysis/reverse engineering (e.g., containment is hard to recognize in the code).
A more tractable form of round-trip engineering is implemented in the context of framework application programming interfaces (APIs), whereby a model describing the usage of a framework API by an application is synchronized with that application's code. In this setting, the API prescribes all correct ways the framework can be used in applications, which allows precise and complete detection of API usages in the code as well as creation of useful code implementing correct API usages. Two prominent RTE implementations in this category are framework-specific modeling languages and Spring Roo (Java).
Round-trip engineering is critical for maintaining consistency among multiple models and between the models and the code in Object Management Group's (OMG) model-driven architecture. OMG proposed the QVT (query/view/transformation) standard to handle the model transformations required for MDA. To date, a few implementations of the standard have been created.
Controversies
Code generation controversy
Code generation (forward engineering) from models means that the user abstractly models solutions, captured by model data, and an automated tool then derives from the models parts or all of the source code for the software system. In some tools, the user can provide a skeleton of the program source code, in the form of a source code template in which predefined tokens are replaced with program source code parts during the code generation process.
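A minimal sketch of the token-replacement idea described above; the template, token names and model values are all hypothetical.

```python
from string import Template

# User-supplied skeleton source; the $-tokens are replaced during generation.
code_template = Template(
    "class $class_name:\n"
    "    def __init__(self, $fields):\n"
    "        pass\n"
)

model = {"class_name": "Customer", "fields": "name, email"}
print(code_template.substitute(model))
```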
The UML diagram specification (if used for MDA) has been criticized for lacking the detail needed to contain the same information as the program source. Some developers even claim that "the Code is the design".
Disadvantages
There is a serious risk that the generated code will rapidly diverge from the model, that the reverse-engineered model will lose its correspondence with the code, or that a mix of these two problems will arise as a result of repeated reengineering cycles.
Regarding the behavioral/dynamic part of UML, features like the statechart diagram have no direct equivalents in programming languages. Their translation during code generation results in ordinary programming statements that may be missing or misinterpreted; if such code is edited and imported back, the result may be a different or incomplete model. The same applies to code snippets used in the code generation stage for pattern implementations and user-specific logic: once intermixed, they may not be easily reverse-engineered back.
There is also a general lack of advanced tooling for modelling comparable to that of modern IDEs (for testing, debugging, navigation, etc.) for general-purpose programming languages and domain-specific languages.
Examples in software engineering
Perhaps the most common form of round-trip engineering is synchronization between UML (Unified Modeling Language) models and the corresponding source code and entity–relationship diagrams in data modelling and database modelling.
Round-trip engineering based on Unified Modeling Language (UML) needs three basic tools for software development:
Source Code Editor;
UML Editor for the Attributes and Methods;
Visualisation of UML structure
References
Programming tools
Reverse engineering | Round-trip engineering | [
"Engineering"
] | 1,426 | [
"Reverse engineering"
] |
637,242 | https://en.wikipedia.org/wiki/Mokume-gane | Mokume-gane is a Japanese metalworking procedure which produces a mixed-metal laminate with distinctive layered patterns; the term is also used to refer to the resulting laminate itself. The term translates closely to 'wood grain metal' or 'wood eye metal' and describes the way metal takes on the appearance of natural wood grain. Mokume-gane fuses several layers of differently coloured precious metals together to form a sandwich of alloys called a "billet." The billet is then manipulated in such a way that a pattern resembling wood grain emerges over its surface. Numerous ways of working create diverse patterns. Once the metal has been rolled into a sheet or bar, several techniques are used to produce a range of effects.
has been used to create many artistic objects. Though the technique was first developed for production of decorative sword fittings, the craft is today mostly used in the production of jewelry and hollowware.
History
Origins
First developed in 17th-century Japan, was originally used for swords. As the customary Japanese sword stopped serving as a weapon and became largely a status symbol, a demand arose for elaborate decorative handles and sheaths.
To meet this demand, Denbei Shoami (1651–1728), a master metalworker from Akita prefecture, invented the process. He initially called his product , as the technique in its simplest form resembled , a type of carved lacquerwork with alternating layers of red and black. Other historical names for it were , , and .
The early components of were relatively soft metals and alloys (gold, copper, silver, , , and ) which would form liquid phase diffusion bonds with one another without completely melting. This was useful in the traditional techniques of fusing and soldering the layers together.
Over time, the practice of faded. The katana industry dried up in the late 19th century, with the Meiji Restoration returning ruling power to the emperor, following the dissolution of the shogunate government and the end of the samurai class. The public display of swords as a sign of samurai status was outlawed. After this, the few metalsmiths who practiced along with most other sword related artisans largely transferred their skills to create other objects.
Adoption of in the West
Tiffany & Co's silver division under the direction of Edward C. Moore began to experiment with techniques around 1877, and at the Paris exposition of 1878, Tiffany's grand prize-winning display of Moore's "Japanesque" silver wares included a magnificent "Conglomerate Vase" with asymmetrical panels of . Moore and Tiffany's silver smiths continued to develop its popular techniques in preparation for the Paris exposition of 1889, where it displayed a vast array of Japanesque silver, using ever more complex alloys of , and , along with gold and silver, to make laminates of up to twenty-four layers. Tiffany's display again won the grand prize for silver wares, and the company continued to produce its Japanesque silver with techniques up into the 20th century.
20th-21st century development
By the mid 20th century, had fallen into heavy obscurity. Japan's movement away from traditional craftwork, paired with the great difficulty of mastering , had brought artisans to the brink of extinction. It reached a point where only scholars and collectors of metalwork were aware of the technique. It was not until the 1970s, when Hiroko Sato Pijanowski – who learned the craft from Norio Tamagawa – that the craft was reignited in the public eye, as Hiroko and her husband Eugene Pijanowski brought the craft of back to the United States and began teaching it to their students.
Present day
Today, jewelry, flatware, hollowware, spinning tops and other artistic objects are made using .
Modern processes are highly controlled and include a compressive force on the billet. This has allowed the technique to include many nontraditional components such as titanium, platinum, iron, bronze, brass, nickel silver, and various colors of karat gold including yellow, white, sage, and rose hues as well as sterling silver. At the Santa Fe Symposium, a major annual gathering of jewelers from around the world, there have been several papers presented on new, more predictable, and more economic, methods of producing materials, along with new possibilities for laminating metals such as the use of friction-stir welding.
Techniques
Liquid phase fusion (historic)
In liquid phase fusion, metal sheets were stacked and carefully heated; the solid billet of simple stripes could be forged and carved to increase the pattern's complexity. Successful lamination using this process requires a highly skilled smith with a great deal of experience. Bonding in the traditional process is achieved when some or all of the alloys in the stack are heated to the point of becoming partially molten (above solidus) this liquid alloy is what fuses the layers together. Careful heat control and skillful forging are required for this process.
Soldering (brazing)
In attempting to recreate the appearance of traditional , some artisans tried brazing layers together. The sheets were soldered using silver solder or some other brazing alloy. This technique joined the metals, but is difficult to perfect, particularly on larger sheets. Flux inclusions could be trapped or bubbles could form. Commonly, imperfections need to be cut out, and the metal re-soldered. Ultimately the brazed sheets do not display the ductility and work-ability of diffusion bonded material.
Solid-state bonding (contemporary)
The modernized process of solid-state bonding typically uses a controlled atmosphere in a temperature-controlled furnace. Mechanical aids such as a hydraulic press or torque plates (bolted clamps) are also typically used to apply compressive force on the billet during lamination. These permit lower-temperature solid-state diffusion between the interleaved layers, which in turn allows the inclusion of non-traditional materials.
Development of the pattern
After the fusion of the layers, the surface of the billet is cut with a chisel to expose lower layers, then flattened. This cutting and flattening process is repeated many times to develop intricate patterns.
Coloring
To increase the contrast between the laminate layers, many items are colored by the application of a patina (a controlled corrosion layer) to accentuate or even totally change the colors of the metal's surface.
Traditional Japanese patination
One example of traditional Japanese patination for mokume-gane uses a complex copper verdigris compound produced specifically for use as a patina. The piece to be patinated is prepared, then immersed in a boiling patination solution until it reaches the desired color, and each element of a compound piece may be transformed to a different color. Historically, a paste of ground daikon radish was also used to prepare the work for the patina: the paste was applied immediately before the piece was boiled, to protect the surface against tarnish and uneven coloring.
Similar laminates
In an accidental but parallel development, Sheffield plate was developed in England. It follows a similar principle of bonded layers without the use of solder, but it typically had 2–3 layers, whereas mokume-gane could have many more.
See also
References
External links
"Can you make copper & nickel Damascus?!? Mokume-gane," You Tube video showing the attempt of making a combined metal of copper and nickel, https://www.youtube.com/watch?v=8XgDHIx9LvQ (April 18, 2018), Retrieved December 15, 2018
Composite materials
Alloys
Metalworking
Artworks in metal | Mokume-gane | [
"Physics",
"Chemistry"
] | 1,523 | [
"Composite materials",
"Materials",
"Chemical mixtures",
"Alloys",
"Matter"
] |
637,510 | https://en.wikipedia.org/wiki/Bipolar%20violation | A bipolar violation, bipolarity violation, or BPV, is a violation of the bipolar encoding rules where two pulses of the same polarity occur without an intervening pulse of the opposite polarity. This indicates an error in the transmission of the signal.
T-carrier and E-carrier signals are transmitted using a scheme called bipolar encoding, a.k.a. Alternate Mark Inversion (AMI), where a ONE is represented by a pulse and a ZERO is represented by no pulse. Pulses (which represent ones) always alternate in polarity, so that if, for example, two positive pulses are received in succession, the receiver knows that an error occurred (a violation), meaning that one or more bits were either added to or deleted from the original signal.
Reliable transmission of data using this scheme requires a regular stream of pulses; too many zero bits in succession can cause a loss of synchronization between transmitter and receiver. To ensure that this is always present, there exist a number of modified AMI codes which use judiciously placed bipolar violations to encode long strings of consecutive zeroes.
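The alternation rule and its violation can be illustrated in a few lines of code. The following Python sketch (not part of the original article) encodes a bit stream with AMI and flags bipolar violations on the receive side; the function names and the sample bit stream are illustrative assumptions.

```python
def ami_encode(bits):
    """Alternate Mark Inversion: a ZERO maps to 0; ONEs alternate between +1 and -1."""
    out, last_polarity = [], -1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            last_polarity = -last_polarity   # each mark flips polarity
            out.append(last_polarity)
    return out

def find_bipolar_violations(pulses):
    """Return indices where two successive pulses share the same polarity."""
    violations, last_polarity = [], None
    for i, p in enumerate(pulses):
        if p == 0:
            continue                          # zeros carry no pulse
        if p == last_polarity:
            violations.append(i)              # same polarity twice in a row: a BPV
        last_polarity = p
    return violations

line = ami_encode([1, 0, 1, 1, 0, 1])         # [+1, 0, -1, +1, 0, -1]
line[5] = +1                                  # corrupt the last pulse's polarity
print(find_bipolar_violations(line))          # [5]
```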
References
Error detection and correction
Line codes | Bipolar violation | [
"Engineering"
] | 227 | [
"Error detection and correction",
"Reliability engineering"
] |
637,733 | https://en.wikipedia.org/wiki/Pockels%20effect | In optics, the Pockels effect, or Pockels electro-optic effect, is a directionally-dependent linear variation in the refractive index of an optical medium that occurs in response to the application of an electric field. It is named after the German physicist Friedrich Carl Alwin Pockels, who studied the effect in 1893. The non-linear counterpart, the Kerr effect, causes changes in the refractive index at a rate proportional to the square of the applied electric field. In optical media, the Pockels effect causes changes in birefringence that vary in proportion to the strength of the applied electric field.
The Pockels effect occurs in crystals that lack inversion symmetry, such as monopotassium phosphate (, abbr. KDP), potassium dideuterium phosphate (, abbr. KD*P or DKDP), lithium niobate (), beta-barium borate (BBO), barium titanate (BTO) and in other non-centrosymmetric media such as electric-field poled polymers or glasses. The Pockels effect has been elucidated through extensive study of electro-optic properties in materials like KDP.
Pockels cells
The key component of a Pockels cell is a non-centrosymmetric single crystal with an optic axis whose refractive index is controlled by an external electric field. In other words, the Pockels effect is the basis of the operation of Pockels cells. By controlling the refractive index, the optical retardance of the crystal is altered, so the polarization state of the incident light beam is changed. Therefore, Pockels cells are used as voltage-controlled wave plates, as well as in other photonics applications (see applications below). Pockels cells are divided into two configurations depending on the crystals' electro-optic properties: longitudinal and transverse.
Longitudinal Pockels cells operate with the electric field applied along the crystal optic axis or along the incident beam propagation. Such crystals include KDP, KD*P, and ADP. Electrodes are coated as transparent metal oxide films on the crystal faces through which the beam propagates, or as metal rings (usually made of gold) around the crystal body. Terminals for voltage application are in contact with the electrodes. The optical retardance Δφ for longitudinal Pockels cells is proportional to the cube of the ordinary refractive index no, to the electro-optic constant r63 (units of m/V), and to the applied voltage V, and inversely proportional to the incident beam wavelength λ0. For example, the half-wave voltage is approximately 7.6 kV for a KDP crystal with no = 1.51 and r63 = 10.6×10⁻¹² m/V at λ0, for Δφ = π. The advantage of using longitudinal Pockels cells is that the voltage required for quarter-wave or half-wave retardance does not depend on crystal length or diameter.
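As a rough check of the figures quoted above, the commonly used longitudinal-cell relation Δφ = (2π/λ0)·no³·r63·V can be evaluated numerically. The sketch below is illustrative and not from the article; in particular, the 546 nm wavelength is an assumption, since the article leaves λ0 unspecified.

```python
import math

# Assumed parameters: values quoted in the text for KDP; wavelength is a guess.
n_o = 1.51            # ordinary refractive index
r63 = 10.6e-12        # electro-optic coefficient, m/V
lam0 = 546e-9         # assumed incident wavelength, m (not stated in the article)

def retardance(voltage):
    """Longitudinal-cell retardance: (2*pi/lam0) * n_o**3 * r63 * V."""
    return 2 * math.pi * n_o**3 * r63 * voltage / lam0

# Half-wave voltage: the voltage giving a retardance of pi
v_half = lam0 / (2 * n_o**3 * r63)
print(f"half-wave voltage ≈ {v_half / 1e3:.1f} kV")   # ≈ 7.5 kV, near the quoted 7.6 kV
print(f"retardance at that voltage ≈ {retardance(v_half) / math.pi:.2f} π")
```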
Transverse Pockels cells operate with electric field being applied perpendicular to beam propagation. Crystals used in transverse Pockels cells include BBO, LiNbO3, CdTe, ZnSe, and CdSe.
The long sides of the crystal are coated with electrodes. Optical retardance Δφ for transverse Pockels cells is similar to that of longitudinal Pockels cells but it is dependent on crystal dimensions. The quarter wave or half wave voltage requirements increase with crystal aperture size, but the requirements can be reduced by lengthening the crystal.
Two or more crystals can be incorporated into a transverse Pockels cell. One reason is to reduce the voltage requirement by extending the overall length of the Pockels cell. Another reason is that KDP is uniaxial and possesses two electro-optic constants, r63 for the longitudinal configuration and r41 for the transverse configuration. A transverse Pockels cell that uses KDP (or one of its isomorphs) consists of two crystals in opposite orientation, which together give a zero-order waveplate when the voltage is turned off. This is often not perfect and drifts with temperature. But the mechanical alignment of the crystal axis is not so critical and is often done by hand without screws; while misalignment leads to some energy in the wrong ray (either e or o; for example, horizontal or vertical), in contrast to the longitudinal case, the loss is not amplified through the length of the crystal.
Alignment of the crystal axis with the ray axis is critical, regardless of configuration. Misalignment leads to birefringence and to a large phase shift across the long crystal. This leads to polarization rotation if the alignment is not exactly parallel or perpendicular to the polarization.
Dynamics within the cell
Because of the high relative dielectric constant of εr ≈ 36 inside the crystal, changes in the electric field propagate at a speed of only c/6. Fast non-fiber optic cells are thus embedded into a matched transmission line. Putting it at the end of a transmission line leads to reflections and doubled switching time. The signal from the driver is split into parallel lines that lead to both ends of the crystal. When they meet in the crystal, their voltages add up.
Pockels cells for fiber optics may employ a traveling wave design to reduce current requirements and increase speed.
Usable crystals also exhibit the piezoelectric effect to some degree (RTP has the lowest; BBO and lithium niobate the highest). After a voltage change, sound waves start propagating from the sides of the crystal toward the middle. This is important not for pulse pickers, but for boxcar windows. The guard space between the light and the faces of the crystal needs to be larger for longer holding times.
Behind the sound wave, the crystal remains deformed in the equilibrium position corresponding to the high electric field, which increases the polarization. As the polarized volume grows, the electric field in the crystal in front of the wave increases linearly, or else the driver has to supply a constant leakage current.
The driver electronics
A Pockels cell is, by design, a capacitor, and it often requires high voltages to change the polarization state of the laser beam and thereby operate effectively as a switchable waveplate. The voltage required depends on the type of Pockels cell, the wavelength of the light, and the size of the crystal, but it is typically on the order of 1–10 kV. Pockels cell drivers provide this high voltage in the form of very fast pulses, which typically have rise times of less than 10 nanoseconds.
There are basically two types of drivers: a quick or Q drive which has a fast rise time, then slowly decays. A Pockels cell that uses a Q-drive is sometimes referred to as a Q-switch. The other type of driver is referred to as a regenerative or R drive. R drives will have a fast rise time and a fast fall time. The driver's output pulse width can be from nanoseconds to microseconds long, depending on the application. The type of drive and its repetition rate will depend on the laser and the intended application.
Applications
Pockels cells are used in a variety of scientific and technical applications. A Pockels cell, combined with a polarizer, can be used for switching between initial polarization state and half-wave phase retardance, creating a fast shutter capable of "opening" and "closing" in nanoseconds. The same technique can be used to impress information on the beam by modulating the rotation between 0° and 90°; the exiting beam's intensity, when viewed through the polarizer, contains an amplitude-modulated signal. This modulated signal can be used for time-resolved electric field measurements when a crystal is exposed to an unknown electric field.
Pockels cells are used as Q-switches to generate short, high-intensity laser pulses. The Pockels cell prevents optical amplification by introducing a polarization-dependent loss in the laser cavity. This allows the gain medium to build up a high population inversion. When the gain medium has the desired population inversion, the Pockels cell is switched "open", and a short, high-energy laser pulse is created. Q-switched lasers are used in a variety of applications, such as medical aesthetics, metrology, manufacturing, and holography.
Pulse picking is another application that uses a Pockels cell. A pulse picker is typically composed of an oscillator, an electro-optic modulator, amplifiers, a high-voltage driver, and a frequency-doubling modulator along with a Pockels cell. By synchronized electro-optic switching, the Pockels cell picks individual pulses out of the laser's pulse train while blocking the rest.
Pockels cells are also used in regenerative amplifiers, chirped pulse amplification, and cavity dumping to let optical power in and out of lasers and optical amplifiers.
Pockels cells can be used for quantum key distribution by polarizing photons.
Pockels cells in conjunction with other EO elements can be combined to form electro-optic probes.
A Pockels cell was used by MCA Disco-Vision (DiscoVision) engineers in the optical videodisc mastering system. Light from an argon-ion laser was passed through the Pockels cell to create pulse modulations corresponding to the original FM video and audio signals to be recorded on the master videodisc. MCA used the Pockels cell in videodisc mastering until the sale to Pioneer Electronics. To increase the quality of the recordings, MCA patented a Pockels cell stabilizer that reduced the second-harmonic distortion that could be created by the Pockels cell during mastering. MCA used either a DRAW (Direct Read After Write) mastering system or a photoresist system. The DRAW system was originally preferred, since it didn't require clean-room conditions during disc recording and allowed instant quality checking during mastering. The original single-sided test pressings from 1976/77 were mastered with the DRAW system as were the "educational", non-feature titles at the format's release in December 1978.
Pockels cells are used in two-photon microscopy to adjust the transmitted laser intensity at a time scale of microseconds.
In recent years, Pockels cells have been employed at the National Ignition Facility at Lawrence Livermore National Laboratory. A Pockels cell in each of the 192 laser beamlines acts as an optical trap before the beam exits through an amplifier. The beams from all 192 lasers eventually converge onto a single target of deuterium-tritium fuel in the hope of triggering a fusion reaction.
See also
Electro-optic modulator
Acousto-optic modulator
Kerr effect
References
Nonlinear optics
Polarization (waves)
Quantum information science | Pockels effect | [
"Physics"
] | 2,231 | [
"Polarization (waves)",
"Astrophysics"
] |
14,824,044 | https://en.wikipedia.org/wiki/Trace%20identity | In mathematics, a trace identity is any equation involving the trace of a matrix.
Properties
Trace identities are invariant under simultaneous conjugation.
Uses
They are frequently used in the invariant theory of matrices to find the generators and relations of the ring of invariants, and therefore are useful in answering questions similar to that posed by Hilbert's fourteenth problem.
Examples
The Cayley–Hamilton theorem says that every square matrix satisfies its own characteristic polynomial. This also implies that all n×n matrices A satisfy A^n − c1 A^(n−1) + c2 A^(n−2) − ⋯ + (−1)^n cn I = 0, where the coefficients ck are given by the elementary symmetric polynomials of the eigenvalues of A.
All square matrices satisfy
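The displayed identities above were lost in extraction, but the Cayley–Hamilton statement can be checked numerically. The following sketch is an illustration, not part of the article; the random test matrix and the use of numpy's poly routine are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Coefficients of the characteristic polynomial; for a matrix argument np.poly
# returns [1, -e1, e2, -e3], the signed elementary symmetric polynomials of
# the eigenvalues.
coeffs = np.poly(A)

# Evaluate p(A) = A^3 - e1*A^2 + e2*A - e3*I ; by Cayley-Hamilton it is zero.
n = A.shape[0]
p_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(p_of_A, np.zeros_like(A)))   # True, up to floating-point error
```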
See also
References
.
Invariant theory
Linear algebra | Trace identity | [
"Physics",
"Mathematics"
] | 130 | [
"Symmetry",
"Group actions",
"Invariant theory",
"Linear algebra",
"Algebra"
] |
365,001 | https://en.wikipedia.org/wiki/Hypnic%20jerk | A hypnic jerk, hypnagogic jerk, sleep start, sleep twitch, myoclonic jerk, or night start is a brief and sudden involuntary contraction of the muscles of the body which occurs when a person is beginning to fall asleep, often causing the person to jump and awaken suddenly for a moment. Hypnic jerks are one form of involuntary muscle twitches called myoclonus.
Physically, hypnic jerks resemble the "jump" experienced by a person when startled, sometimes accompanied by a falling sensation. Hypnic jerks are associated with a rapid heartbeat, quickened breathing, sweating, and sometimes "a peculiar sensory feeling of 'shock' or 'falling into the void'". They can also be accompanied by a vivid dream experience or hallucination. A higher occurrence is reported in people with irregular sleep schedules. When they are particularly frequent and severe, hypnic jerks have been reported as a cause of sleep-onset insomnia.
Hypnic jerks are common physiological phenomena. Around 70% of people experience them at least once in their lives with 10% experiencing them daily. They are benign and do not cause any neurological sequelae.
Causes
According to the American Academy of Sleep Medicine (AASM), there is a wide range of potential causes, including anxiety, stimulants like caffeine and nicotine, stress, and strenuous activities in the evening. It also may be facilitated by fatigue or sleep deprivation. However, most hypnic jerks occur essentially at random in healthy people. Nevertheless, these repeated, intensifying twitches can cause anxiety in some individuals and a disruption to their sleep onset.
Sometimes, hypnic jerks are mistaken for other forms of movement during sleep. For example, hypnic jerks can be confused with restless legs syndrome, periodic limb movement disorder, hypnagogic foot tremor, rhythmic movement disorder, and hereditary or essential startle syndromes, including hyperekplexia. However, several features help to distinguish hypnic jerks from these other conditions: hypnic jerks arise only at sleep onset, and they occur without any rhythmicity or periodicity of the movements and EMG bursts. Other pertinent clinical history also helps to differentiate them.
This physiological phenomenon can also be mistaken for myoclonic seizures, but it can be distinguished by several criteria, such as the fact that hypnic jerks occur only at sleep onset and that the EEG remains normal and constant. In addition, unlike seizures, hypnic jerks involve no tongue biting, urinary incontinence, or postictal confusion. The phenomenon can therefore be distinguished from other, more serious conditions.
The causes of hypnic jerk are yet unclear and under study. None of the several theories that have attempted to explain it have been fully accepted.
One hypothesis posits that the hypnic jerk is a form of reflex, initiated in response to normal bodily events during the lead-up to the first stages of sleep, including a decrease in blood pressure and the relaxation of muscle tissue. Another theory postulates that the body mistakes the sense of relaxation that is felt when falling asleep as a sign that the body is falling. As a consequence, it causes a jerk to wake the sleeper up so they can catch themselves. A researcher at the University of Colorado suggested that a hypnic jerk could be "an archaic reflex to the brain's misinterpretation of muscle relaxation with the onset of sleep as a signal that a sleeping primate is falling out of a tree. The reflex may also have had selective value by having the sleeper readjust or review his or her sleeping position in a nest or on a branch in order to assure that a fall did not occur", but evidence is lacking.
During an epilepsy and intensive care study, the lack of a preceding spike discharge measured on an epilepsy monitoring unit, along with the presence only at sleep onset, helped differentiate hypnic jerks from epileptic myoclonus.
According to a study on sleep disturbances in the Journal of Neural Transmission, a hypnic jerk occurs during the non-rapid eye movement sleep cycle and is an "abrupt muscle action flexing movement, generalized or partial and asymmetric, which may cause arousal, with an illusion of falling". Hypnic jerks are more frequent in childhood with 4 to 7 per hour in the age range from 8 to 12 years old, and they decrease toward 1 or 2 per hour by 65 to 80 years old.
Treatment
There are ways to reduce hypnic jerks, including reducing consumption of stimulants such as nicotine or caffeine, avoiding physical exertion prior to sleep, and consuming sufficient magnesium.
Some medication can also help to reduce or eliminate the hypnic jerks. For example, low-dose clonazepam at bedtime may make the twitches disappear over time.
In addition, some people may develop a fixation on these hypnic jerks leading to increased anxiety, worrying about the disruptive experience. This increased anxiety and fatigue increases the likelihood of experiencing these jerks, resulting in a positive feedback loop.
See also
References
Sleep disorders | Hypnic jerk | [
"Biology"
] | 1,073 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
365,262 | https://en.wikipedia.org/wiki/Parasitic%20drag | Parasitic drag, also known as profile drag, is a type of aerodynamic drag that acts on any object when the object is moving through a fluid. Parasitic drag is defined as the combination of form drag and skin friction drag.
It is named as such because it is not useful, in contrast with lift-induced drag which is created when an airfoil generates lift. All objects experience parasitic drag, regardless of whether they generate lift. Parasitic drag comprises all types of drag except lift-induced drag, and the total drag on an aircraft or other object which generates lift is the sum of parasitic drag and lift-induced drag.
Form drag
Form drag arises because of the shape of the object. The general size and shape of the body are the most important factors in form drag; bodies with a larger presented cross-section will have a higher drag than thinner bodies; sleek ("streamlined") objects have lower form drag. Form drag follows the drag equation, meaning that it increases with the square of the velocity, and thus becomes more important for high-speed aircraft.
Form drag depends on the longitudinal section of the body. A prudent choice of body profile is essential for a low drag coefficient. Streamlines should be continuous, and separation of the boundary layer with its attendant vortices should be avoided.
Form drag includes interference drag, caused by the mixing of airflow streams. For example, where the wing and fuselage meet at the wing root, two airstreams merge into one. This mixing can cause eddy currents, turbulence, or restrict smooth airflow. Interference drag is greater when two surfaces meet at perpendicular angles, and can be minimised by the use of fairings.
Wave drag, also known as supersonic wave drag or compressibility drag, is a component of form drag caused by shock waves generated when an aircraft is moving at transonic and supersonic speeds.
Form drag is a type of pressure drag, a term which also includes lift-induced drag. Form drag is pressure drag due to separation.
Skin friction drag
Skin friction drag arises from the friction of the fluid against the "skin" of the object that is moving through it. Skin friction arises from the interaction between the fluid and the skin of the body, and is directly related to the wetted surface, the area of the surface of the body that is in contact with the fluid. Air in contact with a body will stick to the body's surface and that layer will tend to stick to the next layer of air and that in turn to further layers, hence the body is dragging some amount of air with it. The force required to drag an "attached" layer of air with the body is called skin friction drag. Skin friction drag imparts some momentum to a mass of air as it passes through it and that air applies a retarding force on the body. As with other components of parasitic drag, skin friction follows the drag equation and rises with the square of the velocity.
Skin friction is caused by viscous drag in the boundary layer around the object. The boundary layer at the front of the object is usually laminar and relatively thin, but becomes turbulent and thicker towards the rear. The position of the transition point from laminar to turbulent flow depends on the shape of the object. There are two ways to decrease friction drag: the first is to shape the moving body so that laminar flow is possible. The second is to increase the length and decrease the cross-section of the moving object as much as practicable. To do so, a designer can consider the fineness ratio, the length of the aircraft divided by its diameter at the widest point (L/D), which is usually kept near 6:1 for subsonic flows. Increasing the length increases the Reynolds number (Re). Since Re appears in the denominator of the skin friction coefficient relation, increasing it (within the laminar range) reduces the total friction drag. Decreasing the cross-sectional area also reduces the drag force on the body, because the disturbance to the airflow is smaller.
The skin friction coefficient, Cf, is defined by Cf = τw / q,
where τw is the local wall shear stress and q is the free-stream dynamic pressure. For boundary layers without a pressure gradient in the x direction, it is related to the momentum thickness θ as Cf = 2 dθ/dx.
For comparison, the turbulent empirical relation known as the one-seventh power law (derived by Theodore von Kármán) is:
where is the Reynolds number.
For a laminar flow over a plate, the skin friction coefficient can be determined using the formula:
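The displayed formulas above were lost in extraction. As an illustration only, the sketch below evaluates standard flat-plate average skin-friction correlations (a Blasius-type laminar form and a one-seventh-power-law turbulent form); treating these as the article's intended formulas, and the example Reynolds number, are assumptions.

```python
import math

def cf_laminar(re):
    """Average skin friction coefficient, laminar flat plate (Blasius-type)."""
    return 1.328 / math.sqrt(re)

def cf_turbulent(re):
    """Average skin friction coefficient, turbulent flat plate (one-seventh power law)."""
    return 0.074 / re ** 0.2

re = 5e5  # example Reynolds number near the typical transition region
print(f"laminar   Cf ≈ {cf_laminar(re):.5f}")   # ≈ 0.00188
print(f"turbulent Cf ≈ {cf_turbulent(re):.5f}")  # ≈ 0.00536
```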
See also
NACA duct
Jet engine ram drag
Skin friction line
References
Drag (physics) | Parasitic drag | [
"Chemistry"
] | 941 | [
"Drag (physics)",
"Fluid dynamics"
] |
365,811 | https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur%20game | In general topology, set theory and game theory, a Banach–Mazur game is a topological game played by two players, trying to pin down elements in a set (space). The concept of a Banach–Mazur game is closely related to the concept of Baire spaces. This game was the first infinite positional game of perfect information to be studied. It was introduced by Stanisław Mazur as problem 43 in the Scottish book, and Mazur's questions about it were answered by Banach.
Definition
Let be a non-empty topological space, a fixed subset of and a family of subsets of that have the following properties:
Each member of has non-empty interior.
Each non-empty open subset of contains a member of .
Players, and alternately choose elements from to form a sequence
wins if and only if
Otherwise, wins.
This is called a general Banach–Mazur game and denoted by
Properties
has a winning strategy if and only if is of the first category in (a set is of the first category or meagre if it is the countable union of nowhere-dense sets).
If is a complete metric space, has a winning strategy if and only if is comeager in some non-empty open subset of
If has the Baire property in , then is determined.
The siftable and strongly-siftable spaces introduced by Choquet can be defined in terms of stationary strategies in suitable modifications of the game. Let denote a modification of where is the family of all non-empty open sets in and wins a play if and only if
Then is siftable if and only if has a stationary winning strategy in
A Markov winning strategy for in can be reduced to a stationary winning strategy. Furthermore, if has a winning strategy in , then has a winning strategy depending only on two preceding moves. It is still an unsettled question whether a winning strategy for can be reduced to a winning strategy that depends only on the last two moves of .
is called weakly -favorable if has a winning strategy in . Then, is a Baire space if and only if has no winning strategy in . It follows that each weakly -favorable space is a Baire space.
Many other modifications and specializations of the basic game have been proposed: for a thorough account of these, refer to [1987].
The most common special case arises when and consist of all closed intervals in the unit interval. Then wins if and only if and wins if and only if . This game is denoted by
A simple proof: winning strategies
It is natural to ask for what sets does have a winning strategy in . Clearly, if is empty, has a winning strategy, therefore the question can be informally rephrased as how "small" (respectively, "big") does (respectively, the complement of in ) have to be to ensure that has a winning strategy. The following result gives a flavor of how the proofs used to derive the properties in the previous section work:
Proposition. has a winning strategy in if is countable, is T1, and has no isolated points.
Proof. Index the elements of X as a sequence: Suppose has chosen if is the non-empty interior of then is a non-empty open set in so can choose Then chooses and, in a similar fashion, can choose that excludes . Continuing in this way, each point will be excluded by the set so that the intersection of all will not intersect .
The assumptions on are key to the proof: for instance, if is equipped with the discrete topology and consists of all non-empty subsets of , then has no winning strategy if (as a matter of fact, her opponent has a winning strategy). Similar effects happen if is equipped with indiscrete topology and
A stronger result relates winning strategies to sets of the first category.
Proposition. has a winning strategy in if and only if is meagre.
This does not imply that has a winning strategy if is not meagre. In fact, if is a complete metric space, then has a winning strategy if and only if there is some such that is a comeagre subset of It may be the case that neither player has a winning strategy: let be the unit interval and be the family of closed intervals in the unit interval. The game is determined if the target set has the property of Baire, i.e. if it differs from an open set by a meagre set (but the converse is not true). Assuming the axiom of choice, there are subsets of the unit interval for which the Banach–Mazur game is not determined.
See also
Choquet game
References
[1957] Oxtoby, J.C. The Banach–Mazur game and Banach category theorem, Contribution to the Theory of Games, Volume III, Annals of Mathematical Studies 39 (1957), Princeton, 159–163
[1987] Telgársky, R. J. Topological Games: On the 50th Anniversary of the Banach–Mazur Game, Rocky Mountain J. Math. 17 (1987), pp. 227–276.
[2003] Julian P. Revalski The Banach–Mazur game: History and recent developments, Seminar notes, Pointe-a-Pitre, Guadeloupe, France, 2003–2004
External links
Topological games
General topology
Descriptive set theory
Determinacy | Banach–Mazur game | [
"Mathematics"
] | 1,096 | [
"General topology",
"Topological games",
"Game theory",
"Topology",
"Determinacy"
] |
365,876 | https://en.wikipedia.org/wiki/Distribution%20function%20%28physics%29 | In molecular kinetic theory in physics, a system's distribution function is a function of seven variables, f(x, y, z, t; vx, vy, vz), which gives the number of particles per unit volume in single-particle phase space. It is the number of particles per unit volume having approximately the velocity (vx, vy, vz) near the position (x, y, z) at time t. The usual normalization of the distribution function is
where N is the total number of particles and n is the number density of particles – the number of particles per unit volume, or the density divided by the mass of individual particles.
A distribution function may be specialised with respect to a particular set of dimensions. E.g. take the quantum mechanical six-dimensional phase space, and multiply by the total space volume, to give the momentum distribution, i.e. the number of particles in the momentum phase space having approximately the momentum .
Particle distribution functions are often used in plasma physics to describe wave–particle interactions and velocity-space instabilities. Distribution functions are also used in fluid mechanics, statistical mechanics and nuclear physics.
The basic distribution function uses the Boltzmann constant and temperature with the number density to modify the normal distribution:
Related distribution functions may allow bulk fluid flow, in which case the velocity origin is shifted, so that the exponent's numerator is , where is the bulk velocity of the fluid. Distribution functions may also feature non-isotropic temperatures, in which each term in the exponent is divided by a different temperature.
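The displayed Maxwellian above was lost in extraction. The sketch below evaluates the standard drifting Maxwellian as an illustration; the specific form and all parameter values are assumptions, not taken from the article.

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
m = 1.6726e-27           # particle mass (proton), kg
n = 1e19                 # number density, m^-3
T = 1e4                  # temperature, K
u = np.array([5e4, 0.0, 0.0])   # bulk flow velocity, m/s

def maxwellian(v):
    """Drifting Maxwellian f(v) = n*(m/(2*pi*kB*T))**1.5 * exp(-m*|v-u|^2/(2*kB*T))."""
    norm = n * (m / (2 * np.pi * kB * T)) ** 1.5
    return norm * np.exp(-m * np.sum((v - u) ** 2) / (2 * kB * T))

print(maxwellian(u))                 # peak value, at the bulk flow velocity
print(maxwellian(u + [2e4, 0, 0]))   # smaller value away from the peak
```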
Plasma theories such as magnetohydrodynamics may assume the particles to be in thermodynamic equilibrium. In this case, the distribution function is Maxwellian. This distribution function allows fluid flow and different temperatures in the directions parallel to, and perpendicular to, the local magnetic field. More complex distribution functions may also be used, since plasmas are rarely in thermal equilibrium.
The mathematical analogue of a distribution is a measure; the time evolution of a measure on a phase space is the topic of study in dynamical systems.
References
Statistical mechanics
Dynamical systems | Distribution function (physics) | [
"Physics",
"Mathematics"
] | 404 | [
"Statistical mechanics stubs",
"Statistical mechanics",
"Mechanics",
"Dynamical systems"
] |
365,920 | https://en.wikipedia.org/wiki/Point-to-point%20%28telecommunications%29 | In telecommunications, a point-to-point connection refers to a communications connection between two communication endpoints or nodes. An example is a telephone call, in which one telephone is connected with one other, and what is said by one caller can only be heard by the other. This is contrasted with a point-to-multipoint or broadcast connection, in which many nodes can receive information transmitted by one node. Other examples of point-to-point communications links are leased lines and microwave radio relay.
The term is also used in computer networking and computer architecture to refer to a wire or other connection that links only two computers or circuits, as opposed to other network topologies such as buses or crossbar switches which can connect many communications devices.
Point-to-point is sometimes abbreviated as P2P. This usage of P2P is distinct from P2P meaning peer-to-peer in the context of file sharing networks or other data-sharing protocols between peers.
Basic data link
A traditional point-to-point data link is a communications medium with exactly two endpoints and no data or packet formatting. The host computers at either end take full responsibility for formatting the data transmitted between them. The connection between the computer and the communications medium was generally implemented through an RS-232 or similar interface. Computers in close proximity may be connected by wires directly between their interface cards.
When connected at a distance, each endpoint would be fitted with a modem to convert analog telecommunications signals into a digital data stream. When the connection uses a telecommunications provider, the connection is called a dedicated, leased, or private line. The ARPANET used leased lines to provide point-to-point data links between its packet-switching nodes, which were called Interface Message Processors.
Modern links
With the exception of passive optical networks, modern Ethernet is exclusively point-to-point on the physical layer – any cable only connects two devices. The term point-to-point telecommunications can also mean a wireless data link between two fixed points. The wireless communication is typically bi-directional and either time-division multiple access (TDMA) or channelized. This can be a microwave relay link consisting of a transmitter which transmits a narrow beam of microwaves with a parabolic dish antenna to a second parabolic dish at the receiver. It also includes technologies such as lasers which transmit data modulated on a light beam. These technologies require an unobstructed line of sight between the two points and thus are limited by the visual horizon to distances of about .
Networking
In a local network, repeater hubs or switches provide basic connectivity. A hub provides a point-to-multipoint (or simply multipoint) circuit in which all connected client nodes share the network bandwidth. A switch on the other hand provides a series of point-to-point circuits, via microsegmentation, which allows each client node to have a dedicated circuit and the added advantage of having full-duplex connections.
From the OSI model's layer perspective, both switches and repeater hubs provide point-to-point connections on the physical layer. However, on the data link layer, a repeater hub provides point-to-multipoint connectivity – each frame is forwarded to all nodes – while a switch provides virtual point-to-point connections – each unicast frame is only forwarded to the destination node.
Within many switched telecommunications systems, it is possible to establish a permanent circuit. One example might be a telephone in the lobby of a public building, which is programmed to ring only the number of a telephone dispatcher. "Nailing down" a switched connection saves the cost of running a physical circuit between the two points. The resources in such a connection can be released when no longer needed, for example, a television circuit from a parade route back to the studio.
See also
IP shuffling
Notes
References
Network topology
Telecommunication services | Point-to-point (telecommunications) | [
"Mathematics"
] | 790 | [
"Network topology",
"Topology"
] |
365,935 | https://en.wikipedia.org/wiki/Point-to-multipoint%20communication | In telecommunications, point-to-multipoint communication (P2MP, PTMP or PMP) is communication which is accomplished via a distinct type of one-to-many connection, providing multiple paths from a single location to multiple locations.
Point-to-multipoint telecommunications is typically used in wireless Internet and IP telephony via gigahertz radio frequencies. P2MP systems have been designed with and without a return channel from the multiple receivers. A central antenna or antenna array broadcasts to several receiving antennas and the system uses a form of time-division multiplexing to allow for the return channel traffic.
Modern point-to-multipoint links
In contemporary usage, the term point-to-multipoint wireless communications relates to fixed wireless data communications for Internet or voice over IP via radio or microwave frequencies in the gigahertz range.
Point-to-multipoint is the most popular approach for wireless communications that have a large number of nodes, end destinations, or end users. Point-to-multipoint generally assumes there is a central base station to which remote subscriber units or customer premises equipment (CPE) (a term originally used in the wired telephone industry) are connected over the wireless medium. Connections between the base station and subscriber units can be either line-of-sight or, for lower-frequency radio systems, non-line-of-sight where link budgets permit. Generally, lower frequencies can offer non-line-of-sight connections. Various software planning tools can be used to determine the feasibility of potential connections using topographic data as well as link budget simulation. Point-to-multipoint links are often installed to reduce infrastructure cost and to increase the number of connected CPEs.
Point-to-multipoint wireless networks employing directional antennas are affected by the hidden node problem (also called the hidden terminal problem) if they employ a CSMA/CA medium access control protocol. The negative impact of the hidden node problem can be mitigated by using a time-division multiple access (TDMA) based protocol or a polling protocol rather than CSMA/CA.
The telecommunications signal in a point-to-multipoint system is typically bi-directional, TDMA or channelized. Systems using frequency-division duplexing (FDD) offer full-duplex connections between base station and remote sites, and time-division duplex (TDD) systems offer half-duplex connections.
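As a toy illustration of why TDMA avoids the collision issues mentioned above, the following sketch (not from the article) assigns fixed upstream time slots to subscriber units in a round-robin frame; all names and frame parameters are made-up assumptions.

```python
subscribers = ["CPE-1", "CPE-2", "CPE-3"]
SLOT_MS = 5                     # duration of one upstream slot, ms
FRAME_SLOTS = len(subscribers)  # one slot per subscriber per frame

def frame_schedule(frame_index):
    """Return (start_ms, owner) pairs for one TDMA frame; slots never overlap."""
    base = frame_index * FRAME_SLOTS * SLOT_MS
    return [(base + i * SLOT_MS, cpe) for i, cpe in enumerate(subscribers)]

for start, owner in frame_schedule(0):
    print(f"t={start:3d} ms  ->  {owner} transmits")
```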
Point-to-multipoint systems can be implemented in licensed, semi-licensed or unlicensed frequency bands depending on the specific application. point-to-point and point-to-multipoint links are very popular in the wireless industry and when paired with other high-capacity wireless links or technologies such as free space optics (FSO) can be referred to as backhaul.
The base station may have a single omnidirectional antenna or multiple sector antennas, the latter of which allowing greater range and capacity.
See also
Broadcasting (networking)
Local Multipoint Distribution Service
Multichannel Multipoint Distribution Service
Wireless access point
References
Telecommunication services
Wireless networking
Network topology | Point-to-multipoint communication | [
"Mathematics",
"Technology",
"Engineering"
] | 630 | [
"Network topology",
"Wireless networking",
"Topology",
"Computer networks engineering"
] |
366,023 | https://en.wikipedia.org/wiki/Wear | Wear is the damaging, gradual removal or deformation of material at solid surfaces. Causes of wear can be mechanical (e.g., erosion) or chemical (e.g., corrosion). The study of wear and related processes is referred to as tribology.
Wear in machine elements, together with other processes such as fatigue and creep, causes functional surfaces to degrade, eventually leading to material failure or loss of functionality. Thus, wear has large economic relevance as first outlined in the Jost Report. Abrasive wear alone has been estimated to cost 1–4% of the gross national product of industrialized nations.
Wear of metals occurs by plastic displacement of surface and near-surface material and by detachment of particles that form wear debris. The particle size may vary from millimeters to nanometers. This process may occur by contact with other metals, nonmetallic solids, flowing liquids, solid particles or liquid droplets entrained in flowing gasses.
The wear rate is affected by factors such as type of loading (e.g., impact, static, dynamic), type of motion (e.g., sliding, rolling), temperature, and lubrication, in particular by the process of deposition and wearing out of the boundary lubrication layer. Depending on the tribosystem, different wear types and wear mechanisms can be observed.
Wear types and mechanisms
Types of wear are identified by the relative motion, the nature of the disturbance at the worn surface (the "mechanism"), and whether it affects a self-regenerative layer or the base layer.
Wear mechanisms are the physical disturbance. For example, the mechanism of adhesive wear is adhesion. Wear mechanisms and/or sub-mechanisms frequently overlap and occur in a synergistic manner, producing a greater rate of wear than the sum of the individual wear mechanisms.
Adhesive wear
Adhesive wear can be found between surfaces during frictional contact and generally refers to unwanted displacement and attachment of wear debris and material compounds from one surface to another. Two adhesive wear types can be distinguished:
Adhesive wear is caused by relative motion, "direct contact" and plastic deformation which create wear debris and material transfer from one surface to another.
Cohesive adhesive forces hold two surfaces together even though they are separated by a measurable distance, with or without any actual transfer of material.
Generally, adhesive wear occurs when two bodies slide over or are pressed into each other, which promote material transfer. This can be described as plastic deformation of very small fragments within the surface layers. The asperities or microscopic high points (surface roughness) found on each surface affect the severity of how fragments of oxides are pulled off and added to the other surface, partly due to strong adhesive forces between atoms, but also due to accumulation of energy in the plastic zone between the asperities during relative motion.
The type of mechanism and the amplitude of surface attraction varies between different materials but are amplified by an increase in the density of "surface energy". Most solids will adhere on contact to some extent. However, oxidation films, lubricants and contaminants naturally occurring generally suppress adhesion, and spontaneous exothermic chemical reactions between surfaces generally produce a substance with low energy status in the absorbed species.
Adhesive wear can lead to an increase in roughness and the creation of protrusions (i.e., lumps) above the original surface. In industrial manufacturing, this is referred to as galling, which eventually breaches the oxidized surface layer and connects to the underlying bulk material, enhancing the possibility for a stronger adhesion and plastic flow around the lump.
A simple model for the wear volume V due to adhesive wear can be described by:
V = K W L / H
where W is the load, K is the wear coefficient, L is the sliding distance, and H is the hardness.
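The simple adhesive-wear relation above can be evaluated directly. The sketch below is illustrative; the example input values are made-up assumptions, not data from the article.

```python
def adhesive_wear_volume(load_N, wear_coefficient, sliding_distance_m, hardness_Pa):
    """Worn volume in m^3 for the simple adhesive wear model V = K*W*L/H."""
    return wear_coefficient * load_N * sliding_distance_m / hardness_Pa

# Assumed example values: 100 N load, K = 1e-4, 1 km of sliding, hardness 2 GPa.
V = adhesive_wear_volume(load_N=100.0, wear_coefficient=1e-4,
                         sliding_distance_m=1000.0, hardness_Pa=2e9)
print(f"worn volume ≈ {V * 1e9:.2f} mm^3")   # ≈ 5.00 mm^3
```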
Abrasive wear
Abrasive wear occurs when a hard rough surface slides across a softer surface. ASTM International defines it as the loss of material due to hard particles or hard protuberances that are forced against and move along a solid surface.
Abrasive wear is commonly classified according to the type of contact and the contact environment. The type of contact determines the mode of abrasive wear. The two modes of abrasive wear are known as two-body and three-body abrasive wear. Two-body wear occurs when the grits or hard particles remove material from the opposite surface. The common analogy is that of material being removed or displaced by a cutting or plowing operation. Three-body wear occurs when the particles are not constrained, and are free to roll and slide down a surface. The contact environment determines whether the wear is classified as open or closed. An open contact environment occurs when the surfaces are sufficiently displaced to be independent of one another.
There are a number of factors which influence abrasive wear and hence the manner of material removal. Several different mechanisms have been proposed to describe the manner in which the material is removed. Three commonly identified mechanisms of abrasive wear are:
Plowing
Cutting
Fragmentation
Plowing occurs when material is displaced to the side, away from the wear particles, resulting in the formation of grooves that do not involve direct material removal. The displaced material forms ridges adjacent to grooves, which may be removed by subsequent passage of abrasive particles.
Cutting occurs when material is separated from the surface in the form of primary debris, or microchips, with little or no material displaced to the sides of the grooves. This mechanism closely resembles conventional machining.
Fragmentation occurs when material is separated from a surface by a cutting process and the indenting abrasive causes localized fracture of the wear material. These cracks then freely propagate locally around the wear groove, resulting in additional material removal by spalling.
Abrasive wear can be measured as loss of mass by the Taber Abrasion Test according to ISO 9352 or ASTM D 4060.
The wear volume for single-abrasive wear, , can be described by:
where is the load, is the shape factor of an asperity (typically ~ 0.1), is the degrees of wear by an asperity (typically 0.1 to 1.0), is the wear coefficient, is the sliding distance, and is the hardness.
Surface fatigue
Surface fatigue is a process in which the surface of a material is weakened by cyclic loading, which is one type of general material fatigue. Fatigue wear is produced when the wear particles are detached by cyclic crack growth of microcracks on the surface. These microcracks are either superficial cracks or subsurface cracks.
Fretting wear
Fretting wear is the repeated cyclical rubbing between two surfaces. Over a period of time, fretting will remove material from one or both surfaces in contact. It occurs typically in bearings, although most bearings have their surfaces hardened to resist the problem. Another problem occurs when cracks form in either surface, known as fretting fatigue. This is the more serious of the two phenomena because it can lead to catastrophic failure of the bearing. An associated problem occurs when the small particles removed by wear are oxidized in air. The oxides are usually harder than the underlying metal, so wear accelerates as the harder particles abrade the metal surfaces further. Fretting corrosion acts in the same way, especially when water is present. Unprotected bearings on large structures like bridges can suffer serious degradation in behaviour, especially when salt is used during the winter to de-ice the highways carried by the bridges. The problem of fretting corrosion was involved in the Silver Bridge tragedy and the Mianus River Bridge accident.
Erosive wear
Erosive wear can be defined as an extremely short sliding motion and is executed within a short time interval. Erosive wear is caused by the impact of particles of solid or liquid against the surface of an object. The impacting particles gradually remove material from the surface through repeated deformations and cutting actions. It is a widely encountered mechanism in industry. Due to the nature of the conveying process, piping systems are prone to wear when abrasive particles have to be transported.
The rate of erosive wear is dependent upon a number of factors. The material characteristics of the particles, such as their shape, hardness, impact velocity and impingement angle are primary factors along with the properties of the surface being eroded. The impingement angle is one of the most important factors and is widely recognized in literature. For ductile materials, the maximum wear rate is found when the impingement angle is approximately 30°, whilst for non-ductile materials the maximum wear rate occurs when the impingement angle is normal to the surface. A detailed theoretical analysis of dependency of the erosive wear on the inclination angle and material properties is provided in.
For a given particle morphology, the erosion rate E can be fit with a power-law dependence on velocity:
E = k v^n
where k is a constant, v is the velocity, and n is a velocity exponent. n is typically between 2 and 2.5 for metals and between 2.5 and 3 for ceramics.
Corrosion and oxidation wear
Corrosion and oxidation wear occurs both in lubricated and dry contacts. The fundamental cause are chemical reactions between the worn material and the corroding medium. Wear caused by a synergistic action of tribological stresses and corrosion is also called tribocorrosion.
Impact Wear
Impact wear is caused by contact between two bodies. Unlike erosive wear, impact wear always occurs at the same, well-defined place. If the impact is repeated, it usually occurs with constant kinetic energy at the moment of impact. The frequency of impacts can vary. Wear can occur on both bodies, but usually one body has significantly higher hardness and toughness, and its wear is neglected.
Other Types of Wear
Other, less common types of wear are cavitation and diffusive wear.
Wear stages
Under nominal operation conditions, the wear rate normally changes in three different stages:
Primary stage or early run-in period, where surfaces adapt to each other and the wear-rate might vary between high and low.
Secondary stage or mid-age process, where steady wear can be observed. Most of the component's operational life is spent in this stage.
Tertiary stage or old-age period, where surfaces are subjected to rapid failure due to a high rate of wear.
The wear rate is strongly influenced by the operating conditions and the formation of tribofilms. The secondary stage is shortened with increasing severity of environmental conditions, such as high temperatures, strain rates and stresses.
So-called wear maps, demonstrating wear rate under different operation condition, are used to determine stable operation points for tribological contacts. Wear maps also show dominating wear modes under different loading conditions.
In explicit wear tests simulating industrial conditions between metallic surfaces, there is no clear chronological distinction between the different wear stages, due to large overlaps and symbiotic relations between the various friction mechanisms. Surface engineering and treatments are used to minimize wear and extend the component's working life.
Wear testing
Several standard test methods exist for different types of wear to determine the amount of material removal during a specified time period under well-defined conditions. ASTM International Committee G-2 standardizes wear testing for specific applications, which are periodically updated. The Society for Tribology and Lubrication Engineers (STLE) has documented a large number of frictional, wear and lubrication tests. Standardized wear tests are used to create comparative material rankings for a specific set of test parameter as stipulated in the test description. To obtain more accurate predictions of wear in industrial applications it is necessary to conduct wear testing under conditions simulating the exact wear process.
An attrition test is a test that is carried out to measure the resistance of a granular material to wear.
Modeling of wear
The Reye–Archard–Khrushchov wear law is the classic wear prediction model.
Measuring wear
Wear coefficient
The wear coefficient is a physical coefficient used to measure, characterize and correlate the wear of materials.
Lubricant analysis
Lubricant analysis is an alternative, indirect way of measuring wear. Here, wear is detected by the presence of wear particles in a liquid lubricant. To gain further insights into the nature of the particles, chemical (such as XRF, ICP-OES), structural (such as ferrography) or optical analysis (such as light microscopy) can be performed.
See also
Tribometer — Equipment used to measure friction and wear
References
Further reading
Bowden, Tabor: Friction and Lubrication of Solids (Oxford:Clarendon Press 1950).
Kleis I. and Kulu P.: Solid Particle Erosion. Springer-Verlag, London, 2008, 206 pp.
Zum Gahr K.-H.: Microstructure and wear of materials, Elsevier, Amsterdam, 1987, 560 pp.
Jones J. R.: Lubrication, Friction, and Wear, NASA-SP-8063, 1971, 75 pp.
S. C. Lim. Recent Development in Wear Mechanism Maps. Trib. Intl. 1998; 31; 87–97.
H.C. Meng and K. C Ludema. Wear 1995; 183; 443–457.
R. Bosman and D. J. Schipper. Wear 2012; 280; 54–62.
M. W. Akram, K. Polychronopoulou, A. A. Polycarpou. Trib. Int. 2013; 57; 92–100.
P. J. Blau, Tribosystem Analysis - A Practical Approach to the Diagnosis of Wear Problems. CRC Press, 2016.
External links
University of Miskolc: Wear and wear mechanism
Materials degradation
Tribology | Wear | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,824 | [
"Tribology",
"Materials science",
"Surface science",
"Mechanical engineering",
"Materials degradation"
] |
366,136 | https://en.wikipedia.org/wiki/Pontryagin%20duality | In mathematics, Pontryagin duality is a duality between locally compact abelian groups that allows generalizing Fourier transform to all such groups, which include the circle group (the multiplicative group of complex numbers of modulus one), the finite abelian groups (with the discrete topology), and the additive group of the integers (also with the discrete topology), the real numbers, and every finite-dimensional vector space over the reals or a -adic field.
The Pontryagin dual of a locally compact abelian group is the locally compact abelian topological group formed by the continuous group homomorphisms from the group to the circle group with the operation of pointwise multiplication and the topology of uniform convergence on compact sets. The Pontryagin duality theorem establishes Pontryagin duality by stating that any locally compact abelian group is naturally isomorphic with its bidual (the dual of its dual). The Fourier inversion theorem is a special case of this theorem.
The subject is named after Lev Pontryagin who laid down the foundations for the theory of locally compact abelian groups and their duality during his early mathematical works in 1934. Pontryagin's treatment relied on the groups being second-countable and either compact or discrete. This was improved to cover the general locally compact abelian groups by Egbert van Kampen in 1935 and André Weil in 1940.
Introduction
Pontryagin duality places in a unified context a number of observations about functions on the real line or on finite abelian groups:
Suitably regular complex-valued periodic functions on the real line have Fourier series and these functions can be recovered from their Fourier series;
Suitably regular complex-valued functions on the real line have Fourier transforms that are also functions on the real line and, just as for periodic functions, these functions can be recovered from their Fourier transforms; and
Complex-valued functions on a finite abelian group have discrete Fourier transforms, which are functions on the dual group, which is a (non-canonically) isomorphic group. Moreover, any function on a finite abelian group can be recovered from its discrete Fourier transform (a short numeric sketch of this case follows the list).
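As a concrete illustration of the last observation, the following numeric sketch (not from the article) treats the characters of Z/nZ, computes the discrete Fourier transform of a function against them, and recovers the function by Fourier inversion; the value of n and the test function are assumptions.

```python
import numpy as np

n = 8
x = np.arange(n)
f = np.cos(2 * np.pi * 3 * x / n) + 0.5          # an arbitrary test function on Z/nZ

# Character table: chi[k, x] = exp(2*pi*i*k*x/n); rows index the dual group element k.
chi = np.exp(2j * np.pi * np.outer(np.arange(n), x) / n)

f_hat = chi.conj() @ f / n                        # Fourier coefficients on the dual group
f_back = chi.T @ f_hat                            # inversion: sum_k f_hat(k) * chi_k(x)

print(np.allclose(f_back, f))                     # True: f is recovered from its transform
```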
The theory, introduced by Lev Pontryagin and combined with the Haar measure introduced by John von Neumann, André Weil and others depends on the theory of the dual group of a locally compact abelian group.
It is analogous to the dual vector space of a vector space: a finite-dimensional vector space and its dual vector space are not naturally isomorphic, but the endomorphism algebra (matrix algebra) of one is isomorphic to the opposite of the endomorphism algebra of the other: via the transpose. Similarly, a group and its dual group are not in general isomorphic, but their endomorphism rings are opposite to each other: . More categorically, this is not just an isomorphism of endomorphism algebras, but a contravariant equivalence of categories – see .
Definition
A topological group is a locally compact group if the underlying topological space is locally compact and Hausdorff; a topological group is abelian if the underlying group is abelian.
Examples of locally compact abelian groups include finite abelian groups, the integers (both for the discrete topology, which is also induced by the usual metric), the real numbers, the circle group T (both with their usual metric topology), and also the p-adic numbers (with their usual p-adic topology).
For a locally compact abelian group G, the Pontryagin dual is the group Ĝ of continuous group homomorphisms from G to the circle group T. That is,
Ĝ = Hom(G, T), the set of continuous homomorphisms χ : G → T, under pointwise multiplication.
The Pontryagin dual Ĝ is usually endowed with the topology given by uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G to T).
For example, the dual of the integers (with the discrete topology) is the circle group, the dual of the circle group is the integers, the dual of the real numbers is again the real numbers, and the dual of a finite abelian group is (non-canonically) isomorphic to the group itself.
Pontryagin duality theorem
The Pontryagin duality theorem states that there is a canonical isomorphism between any locally compact abelian group G and its double dual (Ĝ)^. Canonical means that there is a naturally defined map ev_G : G → (Ĝ)^; more importantly, the map should be functorial in G. For an element x of G and a multiplicative character χ of the group G, the canonical isomorphism is defined on x as follows:
ev_G(x)(χ) = χ(x).
That is, each group element x is identified with the evaluation character on the dual. This is strongly analogous to the canonical isomorphism between a finite-dimensional vector space and its double dual, V ≅ V**, and it is worth mentioning that any vector space V is an abelian group. If G is a finite abelian group, then G ≅ Ĝ, but this isomorphism is not canonical. Making this statement precise (in general) requires thinking about dualizing not only on groups, but also on maps between the groups, in order to treat dualization as a functor and prove the identity functor and the dualization functor are not naturally equivalent. Also the duality theorem implies that for any group (not necessarily finite) the dualization functor is an exact functor.
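For a finite cyclic group these statements can be checked by direct computation. The sketch below is an illustrative aside (Python with NumPy; the modulus n = 12 and the names are arbitrary choices, not notation from the text): it lists the characters of Z/n, verifies that they form a group of order n under pointwise multiplication (so the dual is again cyclic of order n, a non-canonical isomorphism), and verifies that the evaluation map into the double dual is injective.

```python
import numpy as np

n = 12  # the finite cyclic group Z/n, written additively

def chi(k, x):
    """The k-th character of Z/n: a homomorphism from Z/n into the circle group."""
    return np.exp(2j * np.pi * k * x / n)

# each chi(k, .) is a homomorphism into the circle group
for k in range(n):
    for a in range(n):
        for b in range(n):
            assert np.isclose(chi(k, (a + b) % n), chi(k, a) * chi(k, b))

# pointwise multiplication of characters: chi_j * chi_k = chi_{(j+k) mod n},
# so the dual group is again cyclic of order n
xs = np.arange(n)
for j in range(n):
    for k in range(n):
        assert np.allclose(chi(j, xs) * chi(k, xs), chi((j + k) % n, xs))

# the evaluation map x |-> (chi |-> chi(x)) sends distinct group elements to
# distinct characters of the dual, so it is injective (hence bijective here)
evaluations = {tuple(np.round(chi(xs, x), 8)) for x in range(n)}
assert len(evaluations) == n
```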
Pontryagin duality and the Fourier transform
Haar measure
One of the most remarkable facts about a locally compact group G is that it carries an essentially unique natural measure, the Haar measure, which allows one to consistently measure the "size" of sufficiently regular subsets of G. "Sufficiently regular subset" here means a Borel set; that is, an element of the σ-algebra generated by the compact sets. More precisely, a right Haar measure on a locally compact group G is a countably additive measure μ defined on the Borel sets of G which is right invariant in the sense that μ(Ax) = μ(A) for x an element of G and A a Borel subset of G, and which also satisfies some regularity conditions (spelled out in detail in the article on Haar measure). Except for positive scaling factors, a Haar measure on G is unique.
The Haar measure on G allows us to define the notion of integral for (complex-valued) Borel functions defined on the group. In particular, one may consider various Lp spaces associated to the Haar measure μ. Specifically, L^p_μ(G) denotes the space of Borel functions f on G for which the integral of |f(x)|^p over G with respect to μ is finite.
Note that, since any two Haar measures on G are equal up to a scaling factor, this L^p-space is independent of the choice of Haar measure and thus perhaps could be written as L^p(G). However, the L^p-norm on this space depends on the choice of Haar measure, so if one wants to talk about isometries it is important to keep track of the Haar measure being used.
Fourier transform and Fourier inversion formula for L1-functions
The dual group of a locally compact abelian group is used as the underlying space for an abstract version of the Fourier transform. If f is in L¹(G), then the Fourier transform is the function f̂ on Ĝ defined by
f̂(χ) = ∫_G f(x) χ(x)* dμ(x)   (with * denoting complex conjugation),
where the integral is relative to the Haar measure μ on G. This is also denoted (Ff)(χ). Note the Fourier transform depends on the choice of Haar measure. It is not too difficult to show that the Fourier transform of an L¹ function on G is a bounded continuous function on Ĝ which vanishes at infinity.
The inverse Fourier transform of an integrable function g on Ĝ is given by
ǧ(x) = ∫_Ĝ g(χ) χ(x) dν(χ),
where the integral is relative to the Haar measure ν on the dual group Ĝ. The measure ν on Ĝ that appears in the Fourier inversion formula is called the dual measure to μ and may be denoted μ̂.
The various Fourier transforms can be classified in terms of their domain and transform domain (the group and dual group) as follows (note that is Circle group):
As an example, suppose G = R^n, so we can think about Ĝ as R^n by the pairing ⟨x, ξ⟩ = e^(i x·ξ). If μ is the Lebesgue measure on Euclidean space, we obtain the ordinary Fourier transform on R^n, and the dual measure needed for the Fourier inversion formula is ν = (2π)^(−n) μ. If we want to get a Fourier inversion formula with the same measure on both sides (that is, since we can think about R^n as its own dual space we can ask for ν to equal μ) then we need to use
μ = ν = (2π)^(−n/2) × (Lebesgue measure).
However, if we change the way we identify R^n with its dual group, by using the pairing
⟨x, ξ⟩ = e^(2πi x·ξ),
then Lebesgue measure on R^n is equal to its own dual measure. This convention minimizes the number of factors of 2π that show up in various places when computing Fourier transforms or inverse Fourier transforms on Euclidean space. (In effect it limits the 2π only to the exponent rather than as a pre-factor outside the integral sign.) Note that the choice of how to identify R^n with its dual group affects the meaning of the term "self-dual function", which is a function on R^n equal to its own Fourier transform: using the classical pairing ⟨x, ξ⟩ = e^(i x·ξ) the function e^(−x^2/2) is self-dual. But using the pairing ⟨x, ξ⟩ = e^(2πi x·ξ), which keeps the pre-factor as unity, makes e^(−πx^2) self-dual instead. This second definition for the Fourier transform has the advantage that it maps the multiplicative identity to the convolution identity, which is useful as L¹ is a convolution algebra. See the next section on the group algebra. In addition, this form is also necessarily isometric on L² spaces. See below at Plancherel and L2 Fourier inversion theorems.
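The effect of the two pairing conventions on self-dual functions can be checked numerically. The sketch below is an illustrative aside (Python with NumPy; the truncated grid and the simple Riemann sum are arbitrary numerical choices): it verifies that e^(−x^2/2) is reproduced by the (2π)^(−1/2)-normalized transform built from the classical pairing, while e^(−πx^2) is reproduced by the transform built from the 2π-in-the-exponent pairing with plain Lebesgue measure.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 40001)   # wide, fine grid; the Gaussians are ~0 at the ends
dx = x[1] - x[0]

def ft_classical(f, xi):
    # pairing e^{i x xi} with the symmetric measure (2*pi)^(-1/2) dx
    return (f(x) * np.exp(-1j * xi * x)).sum() * dx / np.sqrt(2.0 * np.pi)

def ft_2pi_pairing(f, xi):
    # pairing e^{2*pi*i x xi} with plain Lebesgue measure dx
    return (f(x) * np.exp(-2j * np.pi * xi * x)).sum() * dx

g1 = lambda t: np.exp(-t**2 / 2.0)       # expected self-dual in the first convention
g2 = lambda t: np.exp(-np.pi * t**2)     # expected self-dual in the second convention

for xi in (0.0, 0.5, 1.0, 2.0):
    assert abs(ft_classical(g1, xi) - g1(xi)) < 1e-6
    assert abs(ft_2pi_pairing(g2, xi) - g2(xi)) < 1e-6
print("both Gaussians are numerically self-dual under their respective conventions")
```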
Group algebra
The space L¹(G) of integrable functions on a locally compact abelian group G is an algebra, where multiplication is convolution: the convolution of two integrable functions f and g is defined as
(f ∗ g)(x) = ∫_G f(x − y) g(y) dμ(y).
This algebra is referred to as the Group Algebra of G. By the Fubini–Tonelli theorem, the convolution is submultiplicative with respect to the L¹ norm, making L¹(G) a Banach algebra. The Banach algebra L¹(G) has a multiplicative identity element if and only if G is a discrete group, namely the function that is 1 at the identity and zero elsewhere. In general, however, it has an approximate identity, which is a net (or generalized sequence) (e_i) indexed on a directed set I such that f ∗ e_i → f.
The Fourier transform takes convolution to multiplication, i.e. it is a homomorphism of abelian Banach algebras (of norm ≤ 1):
(f ∗ g)^ = f̂ · ĝ.
In particular, to every group character χ on G corresponds a unique multiplicative linear functional on the group algebra defined by
f ↦ f̂(χ).
It is an important property of the group algebra that these exhaust the set of non-trivial (that is, not identically zero) multiplicative linear functionals on the group algebra; see section 34 of . This means the Fourier transform is a special case of the Gelfand transform.
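The convolution theorem for a finite cyclic group can be verified directly. The following sketch is an illustrative aside (Python with NumPy; the group order and the random test functions are arbitrary choices): it convolves two functions in the group algebra of Z/n, using counting measure as the Haar measure, and checks that the Fourier transform turns the convolution into a pointwise product.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.normal(size=n)
g = rng.normal(size=n)

# convolution in the group algebra of Z/n (counting measure as Haar measure)
conv = np.array([sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)])

# Fourier transform on Z/n: f_hat(k) = sum_x f(x) * conj(chi_k(x))
chi = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
f_hat = f @ chi.conj()
g_hat = g @ chi.conj()
conv_hat = conv @ chi.conj()

# the transform takes convolution to pointwise multiplication
assert np.allclose(conv_hat, f_hat * g_hat)
```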
Plancherel and L2 Fourier inversion theorems
As we have stated, the dual group of a locally compact abelian group is a locally compact abelian group in its own right and thus has a Haar measure, or more precisely a whole family of scale-related Haar measures.
Since the complex-valued continuous functions of compact support on G are L²-dense, there is a unique extension of the Fourier transform from that space to a unitary operator
F : L²_μ(G) → L²_ν(Ĝ),
and we have the formula
∫_G |f(x)|² dμ(x) = ∫_Ĝ |f̂(χ)|² dν(χ)   for all f in L²(G).
Note that for non-compact locally compact groups the space does not contain , so the Fourier transform of general -functions on is "not" given by any kind of integration formula (or really any explicit formula). To define the Fourier transform one has to resort to some technical trick such as starting on a dense subspace like the continuous functions with compact support and then extending the isometry by continuity to the whole space. This unitary extension of the Fourier transform is what we mean by the Fourier transform on the space of square integrable functions.
The dual group also has an inverse Fourier transform in its own right; it can be characterized as the inverse (or adjoint, since it is unitary) of the Fourier transform. This is the content of the Fourier inversion formula which follows.
In the case G = T (the circle group) the dual group is naturally isomorphic to the group of integers and the Fourier transform specializes to the computation of coefficients of Fourier series of periodic functions.
If G is a finite group, we recover the discrete Fourier transform. Note that this case is very easy to prove directly.
Bohr compactification and almost-periodicity
One important application of Pontryagin duality is the following characterization of compact abelian topological groups: a locally compact abelian group G is compact if and only if the dual group Ĝ is discrete, and G is discrete if and only if Ĝ is compact.
That G being compact implies Ĝ is discrete, or that G being discrete implies that Ĝ is compact, is an elementary consequence of the definition of the compact-open topology on Ĝ and does not need Pontryagin duality. One uses Pontryagin duality to prove the converses.
The Bohr compactification is defined for any topological group G, regardless of whether G is locally compact or abelian. One use made of Pontryagin duality between compact abelian groups and discrete abelian groups is to characterize the Bohr compactification of an arbitrary abelian locally compact topological group. The Bohr compactification of G is Ĥ, where H has the group structure of Ĝ, but given the discrete topology. Since the inclusion map
ι : H → Ĝ
is continuous and a homomorphism, the dual morphism
G ≅ (Ĝ)^ → Ĥ
is a morphism into a compact group which is easily shown to satisfy the requisite universal property.
Categorical considerations
Pontryagin duality can also profitably be considered functorially. In what follows, LCA is the category of locally compact abelian groups and continuous group homomorphisms. The dual group construction G ↦ Ĝ is a contravariant functor LCA → LCA, represented (in the sense of representable functors) by the circle group T as Ĝ = Hom(G, T). In particular, the double dual functor G ↦ (Ĝ)^ is covariant.
A categorical formulation of Pontryagin duality then states that the natural transformation between the identity functor on LCA and the double dual functor is an isomorphism. Unwinding the notion of a natural transformation, this means that the maps ev_G : G → (Ĝ)^ are isomorphisms for any locally compact abelian group G, and these isomorphisms are functorial in G. This isomorphism is analogous to the double dual of finite-dimensional vector spaces (a special case, for real and complex vector spaces).
An immediate consequence of this formulation is another common categorical formulation of Pontryagin duality: the dual group functor is an equivalence of categories from LCA to LCAop.
The duality interchanges the subcategories of discrete groups and compact groups. If R is a ring and G is a left R–module, the dual group Ĝ will become a right R–module; in this way we can also see that discrete left R–modules will be Pontryagin dual to compact right R–modules. The ring End(G) of endomorphisms in LCA is changed by duality into its opposite ring (change the multiplication to the other order). For example, if G is an infinite cyclic discrete group, Ĝ is a circle group: the former has endomorphism ring isomorphic to the integers, so this is true also of the latter.
Generalizations
Generalizations of Pontryagin duality are constructed in two main directions: for commutative topological groups that are not locally compact, and for noncommutative topological groups. The theories in these two cases are very different.
Dualities for commutative topological groups
When G is a Hausdorff abelian topological group, the group Ĝ with the compact-open topology is a Hausdorff abelian topological group and the natural mapping from G to its double-dual (Ĝ)^ makes sense. If this mapping is an isomorphism, it is said that G satisfies Pontryagin duality (or that G is a reflexive group, or a reflective group). This has been extended in a number of directions beyond the case that G is locally compact.
In particular, Samuel Kaplan showed in 1948 and 1950 that arbitrary products and countable inverse limits of locally compact (Hausdorff) abelian groups satisfy Pontryagin duality. Note that an infinite product of locally compact non-compact spaces is not locally compact.
Later, in 1975, Rangachari Venkataraman showed, among other facts, that every open subgroup of an abelian topological group which satisfies Pontryagin duality itself satisfies Pontryagin duality.
More recently, Sergio Ardanza-Trevijano and María Jesús Chasco have extended the results of Kaplan mentioned above. They showed that direct and inverse limits of sequences of abelian groups satisfying Pontryagin duality also satisfy Pontryagin duality if the groups are metrizable or -spaces but not necessarily locally compact, provided some extra conditions are satisfied by the sequences.
However, there is a fundamental aspect that changes if we want to consider Pontryagin duality beyond the locally compact case. Elena Martín-Peinador proved in 1995 that if G is a Hausdorff abelian topological group that satisfies Pontryagin duality, and the natural evaluation pairing
G × Ĝ → T, (x, χ) ↦ χ(x),
is (jointly) continuous, then G is locally compact. As a corollary, all non-locally compact examples of Pontryagin duality are groups where the pairing is not (jointly) continuous.
Another way to generalize Pontryagin duality to wider classes of commutative topological groups is to endow the dual group with a bit different topology, namely the topology of uniform convergence on totally bounded sets. The groups satisfying the identity under this assumption are called stereotype groups. This class is also very wide (and it contains locally compact abelian groups), but it is narrower than the class of reflective groups.
Pontryagin duality for topological vector spaces
In 1952 Marianne F. Smith noticed that Banach spaces and reflexive spaces, being considered as topological groups (with the additive group operation), satisfy Pontryagin duality. Later B. S. Brudovskiĭ, William C. Waterhouse and K. Brauner showed that this result can be extended to the class of all quasi-complete barreled spaces (in particular, to all Fréchet spaces). In the 1990s Sergei Akbarov gave a description of the class of the topological vector spaces that satisfy a stronger property than the classical Pontryagin reflexivity, namely, the identity
(X^⋆)^⋆ ≅ X,
where X^⋆ means the space of all linear continuous functionals on X endowed with the topology of uniform convergence on totally bounded sets in X (and (X^⋆)^⋆ means the dual to X^⋆ in the same sense). The spaces of this class are called stereotype spaces, and the corresponding theory found a series of applications in Functional analysis and Geometry, including the generalization of Pontryagin duality for non-commutative topological groups.
Dualities for non-commutative topological groups
For non-commutative locally compact groups G the classical Pontryagin construction stops working for various reasons, in particular, because the characters don't always separate the points of G, and the irreducible representations of G are not always one-dimensional. At the same time it is not clear how to introduce multiplication on the set of irreducible unitary representations of G, and it is not even clear whether this set is a good choice for the role of the dual object for G. So the problem of constructing duality in this situation requires complete rethinking.
Theories built to date are divided into two main groups: the theories where the dual object has the same nature as the source one (like in the Pontryagin duality itself), and the theories where the source object and its dual differ from each other so radically that it is impossible to count them as objects of one class.
The second type theories were historically the first: soon after Pontryagin's work Tadao Tannaka (1938) and Mark Krein (1949) constructed a duality theory for arbitrary compact groups known now as the Tannaka–Krein duality. In this theory the dual object for a group is not a group but a category of its representations .
The theories of the first type appeared later and the key example for them was the duality theory for finite groups. In this theory the category of finite groups is embedded by the operation of taking the group algebra (over the complex numbers) into the category of finite-dimensional Hopf algebras, so that the Pontryagin duality functor turns into the operation of taking the dual vector space (which is a duality functor in the category of finite-dimensional Hopf algebras).
In 1973 Leonid I. Vainerman, George I. Kac, Michel Enock, and Jean-Marie Schwartz built a general theory of this type for all locally compact groups. From the 1980s the research in this area was resumed after the discovery of quantum groups, to which the constructed theories began to be actively transferred. These theories are formulated in the language of C*-algebras, or Von Neumann algebras, and one of its variants is the recent theory of locally compact quantum groups.
One of the drawbacks of these general theories, however, is that in them the objects generalizing the concept of a group are not Hopf algebras in the usual algebraic sense. This deficiency can be corrected (for some classes of groups) within the framework of duality theories constructed on the basis of the notion of envelope of topological algebra.
See also
Peter–Weyl theorem
Cartier duality
Stereotype space
Notes
Citations
References
Harmonic analysis
Duality theories
Theorems in analysis
Fourier analysis
Lp spaces | Pontryagin duality | [
"Mathematics"
] | 4,167 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry",
"Mathematical problems",
"Mathematical theorems"
] |
366,208 | https://en.wikipedia.org/wiki/Typed%20lambda%20calculus | A typed lambda calculus is a typed formalism that uses the lambda-symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here, typability usually captures desirable properties of the program (e.g., the program will not cause a memory access violation).
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of certain classes of categories. For example, the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).
Kinds of typed lambda calculi
Various typed lambda calculi have been studied. The simply typed lambda calculus has only one type constructor, the arrow →, and its only types are basic types and function types A → B. System T extends the simply typed lambda calculus with a type of natural numbers and higher-order primitive recursion; in this system all functions provably recursive in Peano arithmetic are definable. System F allows polymorphism by using universal quantification over all types; from a logical perspective it can describe all functions that are provably total in second-order logic. Lambda calculi with dependent types are the base of intuitionistic type theory, the calculus of constructions and the logical framework (LF), a pure lambda calculus with dependent types. Based on work by Berardi on pure type systems, Henk Barendregt proposed the Lambda cube to systematize the relations of pure typed lambda calculi (including simply typed lambda calculus, System F, LF and the calculus of constructions).
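To make the arrow-type discipline of the simply typed lambda calculus concrete, here is a minimal sketch of a type checker in Python (the class names, the helper type_of, and the example terms are illustrative choices, not a standard implementation): an application is well typed only when the function has an arrow type whose domain matches the argument's type, so a term such as λx:A. x x is rejected.

```python
from dataclasses import dataclass

# Types: base types and arrow (function) types
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

# Terms: variables, lambda abstractions, applications
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    var_type: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def type_of(term, ctx=None):
    """Return the type of a simply typed lambda term, or raise TypeError."""
    ctx = ctx or {}
    if isinstance(term, Var):
        if term.name not in ctx:
            raise TypeError(f"unbound variable {term.name}")
        return ctx[term.name]
    if isinstance(term, Lam):
        body_type = type_of(term.body, {**ctx, term.var: term.var_type})
        return Arrow(term.var_type, body_type)
    if isinstance(term, App):
        fn_type = type_of(term.fn, ctx)
        arg_type = type_of(term.arg, ctx)
        if not isinstance(fn_type, Arrow) or fn_type.dom != arg_type:
            raise TypeError("ill-typed application")
        return fn_type.cod
    raise TypeError("unknown term")

# identity on a base type A:   \x:A. x   has type  A -> A
A = Base("A")
assert type_of(Lam("x", A, Var("x"))) == Arrow(A, A)

# self-application  \x:A. x x  is rejected, as expected in the simply typed calculus
try:
    type_of(Lam("x", A, App(Var("x"), Var("x"))))
except TypeError:
    pass
```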
Some typed lambda calculi introduce a notion of subtyping, i.e. if A is a subtype of B, then all terms of type A also have type B. Typed lambda calculi with subtyping include the simply typed lambda calculus with conjunctive types and System F<:.
All the systems mentioned so far, with the exception of the untyped lambda calculus, are strongly normalizing: all computations terminate. Therefore, they cannot describe all Turing-computable functions. As another consequence they are consistent as a logic, i.e. there are uninhabited types. There exist, however, typed lambda calculi that are not strongly normalizing. For example the dependently typed lambda calculus with a type of all types (Type : Type) is not normalizing due to Girard's paradox. This system is also the simplest pure type system, a formalism which generalizes the Lambda cube. Systems with explicit recursion combinators, such as Plotkin's "Programming language for Computable Functions" (PCF), are not normalizing, but they are not intended to be interpreted as a logic. Indeed, PCF is a prototypical, typed functional programming language, where types are used to ensure that programs are well-behaved but not necessarily that they are terminating.
Applications to programming languages
In computer programming, the routines (functions, procedures, methods) of strongly typed programming languages closely correspond to typed lambda expressions.
See also
Kappa calculus—an analogue of typed lambda calculus which excludes higher-order functions
Notes
Further reading
Brandl, Helmut (2022). Calculus of Constructions / Typed Lambda Calculus
Lambda calculus
Logic in computer science
Theory of computation
Type theory | Typed lambda calculus | [
"Mathematics"
] | 818 | [
"Mathematical structures",
"Logic in computer science",
"Mathematical logic",
"Mathematical objects",
"Type theory"
] |
366,555 | https://en.wikipedia.org/wiki/Biomolecule | A biomolecule or biological molecule is loosely defined as a molecule produced by a living organism and essential to one or more typically biological processes. Biomolecules include large macromolecules such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A general name for this class of material is biological materials. Biomolecules are an important element of living organisms. They are often endogenous, i.e. produced within the organism, but organisms usually also need exogenous biomolecules, for example certain nutrients, to survive.
Biomolecules and their reactions are studied in biology and its subfields of biochemistry and molecular biology. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts.
The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
Types of biomolecules
A diverse range of biomolecules exist, including:
Small molecules:
Lipids, fatty acids, glycolipids, sterols, monosaccharides
Vitamins
Hormones, neurotransmitters
Metabolites
Monomers, oligomers and polymers:
Nucleosides and nucleotides
Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T).
Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides.
Both DNA and RNA are polymers, consisting of long, linear molecules assembled by polymerase enzymes from repeating structural units, or monomers, of mononucleotides. DNA uses the deoxynucleotides C, G, A, and T, while RNA uses the ribonucleotides (which have an extra hydroxyl(OH) group on the pentose ring) C, G, A, and U. Modified bases are fairly common (such as with methyl groups on the base ring), as found in ribosomal RNA or transfer RNAs or for discriminating the new from old strands of DNA after replication.
Each nucleotide is made of an acyclic nitrogenous base, a pentose and one to three phosphate groups. They contain carbon, nitrogen, oxygen, hydrogen and phosphorus. They serve as sources of chemical energy (adenosine triphosphate and guanosine triphosphate), participate in cellular signaling (cyclic guanosine monophosphate and cyclic adenosine monophosphate), and are incorporated into important cofactors of enzymatic reactions (coenzyme A, flavin adenine dinucleotide, flavin mononucleotide, and nicotinamide adenine dinucleotide phosphate).
DNA and RNA structure
DNA structure is dominated by the well-known double helix formed by Watson-Crick base-pairing of C with G and A with T. This is known as B-form DNA, and is overwhelmingly the most favorable and common state of DNA; its highly specific and stable base-pairing is the basis of reliable genetic information storage. DNA can sometimes occur as single strands (often needing to be stabilized by single-strand binding proteins) or as A-form or Z-form helices, and occasionally in more complex 3D structures such as the crossover at Holliday junctions during DNA replication.
RNA, in contrast, forms large and complex 3D tertiary structures reminiscent of proteins, as well as the loose single strands with locally folded regions that constitute messenger RNA molecules. Those RNA structures contain many stretches of A-form double helix, connected into definite 3D arrangements by single-stranded loops, bulges, and junctions. Examples are tRNA, ribosomes, ribozymes, and riboswitches. These complex structures are facilitated by the fact that RNA backbone has less local flexibility than DNA but a large set of distinct conformations, apparently because of both positive and negative interactions of the extra OH on the ribose. Structured RNA molecules can do highly specific binding of other molecules and can themselves be recognized specifically; in addition, they can perform enzymatic catalysis (when they are known as "ribozymes", as initially discovered by Tom Cech and colleagues).
Saccharides
Monosaccharides are the simplest form of carbohydrates with only one simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-. Similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides are the trioses, tetroses, pentoses (such as ribose and deoxyribose), hexoses (such as glucose, fructose, and galactose), and heptoses. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration.
Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose.
Polysaccharides are polymerized monosaccharides, or complex carbohydrates. They have multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides.
A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides. It successfully discriminated three brands of orange juice beverage. The resulting change in fluorescence intensity of the sensing films is directly related to the saccharide concentration.
Lignin
Lignin is a complex polyphenolic macromolecule composed mainly of beta-O4-aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin which occurs via free radical coupling reactions in which there is no preference for either configuration at a chiral center.
Lipid
Lipids (oleaginous) are chiefly fatty acid esters, and are the basic building blocks of biological membranes. Another biological role is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol) and one to three non-polar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14–24 carbon groups long, but always contain an even number of carbon atoms.
For lipids present in biological membranes, the hydrophilic head is from one of three classes:
Glycolipids, whose heads contain an oligosaccharide with 1-15 saccharide residues.
Phospholipids, whose heads contain a positively charged group that is linked to the tail by a negatively charged phosphate group.
Sterols, whose heads contain a planar steroid ring, for example, cholesterol.
Other lipids include prostaglandins and leukotrienes which are both 20-carbon fatty acyl units synthesized from arachidonic acid.
They are also known as eicosanoids.
Amino acids
Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline which is not actually an amino acid).
Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle. Only two amino acids other than the standard twenty are known to be incorporated into proteins during translation, in certain organisms:
Selenocysteine is incorporated into some proteins at a UGA codon, which is normally a stop codon.
Pyrrolysine is incorporated into some proteins at a UAG codon. For instance, in some methanogens in enzymes that are used to produce methane.
Besides those used in protein synthesis, other biologically important amino acids include carnitine (used in lipid transport within a cell), ornithine, GABA and taurine.
Protein structure
The particular series of amino acids that form a protein is known as that protein's primary structure. This sequence is determined by the genetic makeup of the individual. It specifies the order of side-chain groups along the linear polypeptide "backbone".
Proteins have two types of well-classified, frequently occurring elements of local structure defined by a particular pattern of hydrogen bonds along the backbone: alpha helix and beta sheet. Their number and arrangement is called the secondary structure of the protein. Alpha helices are regular spirals stabilized by hydrogen bonds between the backbone CO group (carbonyl) of one amino acid residue and the backbone NH group (amide) of the i+4 residue. The spiral has about 3.6 amino acids per turn, and the amino acid side chains stick out from the cylinder of the helix. Beta pleated sheets are formed by backbone hydrogen bonds between individual beta strands each of which is in an "extended", or fully stretched-out, conformation. The strands may lie parallel or antiparallel to each other, and the side-chain direction alternates above and below the sheet. Hemoglobin contains only helices, natural silk is formed of beta pleated sheets, and many enzymes have a pattern of alternating helices and beta-strands. The secondary-structure elements are connected by "loop" or "coil" regions of non-repetitive conformation, which are sometimes quite mobile or disordered but usually adopt a well-defined, stable arrangement.
The overall, compact, 3D structure of a protein is termed its tertiary structure or its "fold". It is formed as result of various attractive forces like hydrogen bonding, disulfide bridges, hydrophobic interactions, hydrophilic interactions, van der Waals force etc.
When two or more polypeptide chains (either of identical or of different sequence) cluster to form a protein, quaternary structure of protein is formed. Quaternary structure is an attribute of polymeric (same-sequence chains) or heteromeric (different-sequence chains) proteins like hemoglobin, which consists of two "alpha" and two "beta" polypeptide chains.
Apoenzymes
An apoenzyme (or, generally, an apoprotein) is the protein without any small-molecule cofactors, substrates, or inhibitors bound. It is often important as an inactive storage, transport, or secretory form of a protein. This is required, for instance, to protect the secretory cell from the activity of that protein.
Apoenzymes become active enzymes on addition of a cofactor. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either prosthetic groups, which are tightly bound to an enzyme, or coenzymes, which are released from the enzyme's active site during the reaction.
Isoenzymes
Isoenzymes, or isozymes, are multiple forms of an enzyme, with slightly different protein sequence and closely similar but usually not identical functions. They are either products of different genes, or else different products of alternative splicing. They may either be produced in different organs or cell types to perform the same function, or several isoenzymes may be produced in the same cell type under differential regulation to suit the needs of changing development or environment. LDH (lactate dehydrogenase) has multiple isozymes, while fetal hemoglobin is an example of a developmentally regulated isoform of a non-enzymatic protein. The relative levels of isoenzymes in blood can be used to diagnose problems in the organ of secretion .
See also
Biomolecular engineering
List of biomolecules
Metabolism
Multi-state modeling of biomolecules
References
External links
Society for Biomolecular Sciences provider of a forum for education and information exchange among professionals within drug discovery and related disciplines.
Molecules
Biochemistry
Organic compounds | Biomolecule | [
"Physics",
"Chemistry",
"Biology"
] | 2,979 | [
"Molecular physics",
"Natural products",
"Biochemistry",
"Molecules",
"Organic compounds",
"Physical objects",
"Biomolecules",
"Molecular biology",
"Structural biology",
"nan",
"Atoms",
"Matter"
] |
366,719 | https://en.wikipedia.org/wiki/Applied%20probability | Applied probability is the application of probability theory to statistical problems and other scientific and engineering domains.
Scope
Much research involving probability is done under the auspices of applied probability. However, while such research is motivated (to some degree) by applied problems, it is usually the mathematical aspects of the problems that are of most interest to researchers (as is typical of applied mathematics in general).
Applied probabilists are particularly concerned with the application of stochastic processes, and probability more generally, to the natural, applied and social sciences, including biology, physics (including astronomy), chemistry, medicine, computer science and information technology, and economics.
Another area of interest is in engineering: particularly in areas of uncertainty, risk management, probabilistic design, and Quality assurance.
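As a small illustration of the probabilistic-design setting mentioned above, the following Monte Carlo sketch (a hypothetical Python example; the load and strength distributions and their parameters are invented purely for illustration) estimates the probability that a random load exceeds a random strength.

```python
import random

# Monte Carlo estimate of a failure probability in probabilistic design:
# a component "fails" when a random load exceeds a random strength.
random.seed(42)
trials = 100_000
failures = 0
for _ in range(trials):
    load = random.gauss(mu=100.0, sigma=15.0)      # applied load (hypothetical units)
    strength = random.gauss(mu=150.0, sigma=20.0)  # component strength
    if load > strength:
        failures += 1
print(f"estimated failure probability: {failures / trials:.4f}")
```

With these particular made-up parameters the estimate comes out near the exact normal-difference value of about 0.023.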
History
Having initially been defined at a symposium of the American Mathematical Society in the late 1950s, the term "applied probability" was popularized by Maurice Bartlett through the name of a Methuen monograph series he edited, Applied Probability and Statistics. The area did not have an established outlet until 1964, when the Journal of Applied Probability came into existence through the efforts of Joe Gani.
See also
Areas of application:
Ruin theory
Statistical physics
Stoichiometry and modelling chemical reactions
Ecology, particularly population modelling
Evolutionary biology
Optimization in computer science
Telecommunications
Reliability engineering
Quality control
Options pricing in economics
Ewens's sampling formula in population genetics
Operations research
Gaming mathematics
Stochastic processes:
Markov chain
Poisson process
Brownian motion and other diffusion processes
Queueing theory
Renewal theory
Additional information and resources
Applied Probability Trust
INFORMS Institute for Operations Research and the Management Sciences
References
Further reading
Baeza-Yates, R. (2005) Recent advances in applied probability, Springer.
Blake, I.F. (1981) Introduction to Applied Probability, Wiley.
External links
The Applied Probability Trust. | Applied probability | [
"Mathematics"
] | 368 | [
"Applied mathematics",
"Applied probability"
] |
368,319 | https://en.wikipedia.org/wiki/Inertial%20electrostatic%20confinement | Inertial electrostatic confinement, or IEC, is a class of fusion power devices that use electric fields to confine the plasma rather than the more common approach using magnetic fields found in magnetic confinement fusion (MCF) designs. Most IEC devices directly accelerate their fuel to fusion conditions, thereby avoiding energy losses seen during the longer heating stages of MCF devices. In theory, this makes them more suitable for using alternative aneutronic fusion fuels, which offer a number of major practical benefits and makes IEC devices one of the more widely studied approaches to fusion.
IEC devices were the very first fusion products to reach the commercial market in 2000, as neutron generators. A company called NSD-Gradel developed a compact IEC device that fused ions and created neutrons and sold the product for several hundred thousand dollars.
As the negatively charged electrons and positively charged ions in the plasma move in different directions in an electric field, the field has to be arranged in some fashion so that the two particles remain close together. Most IEC designs achieve this by pulling the electrons or ions across a potential well, beyond which the potential drops and the particles continue to move due to their inertia. Fusion occurs in this lower-potential area when ions moving in different directions collide. Because the motion provided by the field creates the energy levels needed for fusion, not random collisions with the rest of the fuel, the bulk of the plasma does not have to be hot and the systems as a whole work at much lower temperatures and energy levels than MCF devices.
One of the simpler IEC devices is the fusor, which consists of two concentric metal wire spherical grids. When the grids are charged to a high voltage, the fuel gas ionizes. The field between the two then accelerates the fuel inward, and when it passes the inner grid the field drops and the ions continue inward toward the center. If they impact with another ion they may undergo fusion. If they do not, they travel out of the reaction area into the charged area again, where they are re-accelerated inward. Overall the physical process is similar to the colliding beam fusion, although beam devices are linear instead of spherical. Other IEC designs, like the polywell, differ largely in the arrangement of the fields used to create the potential well.
A number of detailed theoretical studies have pointed out that the IEC approach is subject to a number of energy loss mechanisms that are not present if the fuel is evenly heated, or "Maxwellian". These loss mechanisms appear to be greater than the rate of fusion in such devices, meaning they can never reach fusion breakeven and thus be used for power production. These mechanisms are more powerful when the atomic mass of the fuel increases, which suggests IEC also does not have any advantage with aneutronic fuels. Whether these critiques apply to specific IEC devices remains highly contentious.
Mechanism
For every volt that an ion is accelerated across, its kinetic energy gain corresponds to an increase of temperature of 11,604 kelvins (K). For example, a typical magnetic confinement fusion plasma is 15 keV, which corresponds to 170 megakelvin (MK). An ion with a charge of one can reach this temperature by being accelerated across a 15,000 V drop. This sort of voltage is easily achieved in common electrical devices; a typical cathode-ray tube operates in this range.
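The conversion quoted above is simple arithmetic; the short sketch below (illustrative Python, using the approximate factor of 11,604.5 K per electronvolt) reproduces the roughly 170 MK figure for a singly charged ion accelerated across 15,000 V.

```python
# Kinetic energy gained by a singly charged ion falling through a potential
# drop, expressed as an equivalent temperature (1 eV ~ 11,604.5 K).
EV_TO_KELVIN = 11_604.5

def volts_to_kelvin(volts, charge_number=1):
    """Equivalent temperature for an ion of the given charge accelerated across `volts` volts."""
    return charge_number * volts * EV_TO_KELVIN

print(volts_to_kelvin(15_000))   # ~1.74e8 K, i.e. about 170 MK, matching the text
```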
In fusors, the voltage drop is made with a wire cage. However high conduction losses occur in fusors because most ions fall into the cage before fusion can occur. This prevents current fusors from ever producing net power.
History
1930s
Mark Oliphant adapts Cockcroft and Walton's particle accelerator at the Cavendish Laboratory to create tritium and helium-3 by nuclear fusion.
1950s
Three researchers at LANL including Jim Tuck first explored the idea, theoretically, in a 1959 paper. The idea had been proposed by a colleague. The concept was to capture electrons inside a positive cage. The electrons would accelerate the ions to fusion conditions.
Other concepts were being developed which would later merge into the IEC field. These include the publication of the Lawson criterion by John D. Lawson in 1957 in England. This puts on minimum criteria on power plant designs which do fusion using hot Maxwellian plasma clouds. Also, work exploring how electrons behave inside the biconic cusp, done by Harold Grad group at the Courant Institute in 1957. A biconic cusp is a device with two alike magnetic poles facing one another (i.e. north-north). Electrons and ions can be trapped between these.
1960s
In his work with vacuum tubes, Philo Farnsworth observed that electric charge would accumulate in regions of the tube. Today, this effect is known as the multipactor effect. Farnsworth reasoned that if ions were concentrated high enough they could collide, and fuse. In 1962, he filed a patent on a design using a positive inner cage to concentrate plasma, in order to achieve nuclear fusion. During this time, Robert L. Hirsch joined the Farnsworth Television labs and began work on what became the fusor. Hirsch patented the design in 1966 and published the design in 1967. The Hirsch machine was a 17.8 cm diameter machine with 150 kV voltage drop across it and used ion beams to help inject material.
Simultaneously, a key plasma physics text was published by Lyman Spitzer at Princeton in 1963. Spitzer took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma. Meanwhile, magnetic mirror theory and direct energy conversion were developed by Richard F. Post's group at LLNL. A magnetic mirror or magnetic bottle is similar to a biconic cusp except that the poles are reversed.
1980s
In 1980 Robert W. Bussard developed a cross between a fusor and magnetic mirror, the polywell. The idea was to confine a non-neutral plasma using magnetic fields. This would, in turn, attract ions. This idea had been published previously, notably by Oleg Lavrentiev in Russia. Bussard patented the design and received funding from Defense Threat Reduction Agency, DARPA and the US Navy to develop the idea.
1990s
Bussard and Nicholas Krall published theory and experimental results in the early nineties. In response, Todd Rider at MIT, under Lawrence Lidsky developed general models of the device. Rider argued that the device was fundamentally limited. That same year, 1995, William Nevins at LLNL published a criticism of the polywell. Nevins argued that the particles would build up angular momentum, causing the dense core to degrade.
In the mid-nineties, Bussard publications prompted the development of fusors at the University of Wisconsin–Madison and at the University of Illinois at Urbana–Champaign. Madison's machine was first built in 1995. George H. Miley's team at Illinois built a 25 cm fusor which has produced 10^7 neutrons using deuterium gas and discovered the "star mode" of fusor operation in 1994. The following year, the first "US-Japan Workshop on IEC Fusion" was conducted. This is now the premier conference for IEC researchers. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler Aerospace under the name FusionStar. In the late nineties, hobbyist Richard Hull began building amateur fusors in his home. In March 1999, he achieved a neutron rate of 10^5 neutrons per second. Hull and Paul Schatzkin started fusor.net in 1998. Through this open forum, a community of amateur fusioneers have done nuclear fusion using homemade fusors.
2000s
Despite demonstration in 2000 of 7200 hours of operation without degradation at high input power as a sealed reaction chamber with automated control the FusionStar project was canceled and the company NSD Ltd was founded. The spherical FusionStar technology was then further developed as a linear geometry system with improved efficiency and higher neutron output by NSD Ltd. which became NSD-Fusion GmbH in 2005.
In early 2000, Alex Klein developed a cross between a polywell and ion beams. Using Gabor lensing, Dr. Klein attempted to focus plasma into non-neutral clouds for fusion. He founded FP generation, which in April 2009 raised $3 million in financing from two venture funds. The company developed the MIX and Marble machine, but ran into technical challenges and closed.
In response to Riders' criticisms, researchers at LANL reasoned that a plasma oscillating could be at local thermodynamic equilibrium; this prompted the POPS and Penning trap machines. At this time, MIT researchers became interested in fusors for space propulsion and powering space vehicles. Specifically, researchers developed fusors with multiple inner cages. In 2005, Greg Piefer founded Phoenix Nuclear Labs to develop the fusor into a neutron source for the mass production of medical isotopes.
Robert Bussard began speaking openly about the Polywell in 2006. He attempted to generate interest in the research, before passing away from multiple myeloma in 2007. His company was able to raise over ten million in funding from the US Navy in 2008 and 2009.
2010s
Bussard's publications prompted the University of Sydney to start research into electron trapping in polywells in 2010. The group has explored theory, modeled devices, built devices, measured trapping and simulated trapping. These machines were all low power and cost and all had a small beta ratio. In 2010, Carl Greninger founded the northwest nuclear consortium, an organization which teaches nuclear engineering principles to high school students, using a 60 kvolt fusor. In 2012, Mark Suppes received attention, for a fusor. Suppes also measured electron trapping inside a polywell. In 2013, the first IEC textbook was published by George H. Miley.
2020s
Avalanche Energy is a start-up with about $51 million in venture/DOD funding that is working on small (tens of centimetres), modular, fusion batteries producing 5 kWe. They are targeting 600 kV for their device to achieve certain design goals. Their Orbitron concept electrostatically (magnetron-augmented) confines ions orbiting around a high voltage (hundreds of kV) cathode in a high vacuum environment (p < 10^−8 Torr) surrounded by one or two anode shells separated by a dielectric. Concerns include breakdown of the vacuum/dielectric and insulator surface flashover. Permanent magnet/electromagnet magnetic field generators are arranged coaxially around the anode. The magnetic field strength is targeted to exceed a Hull cut-off condition, ranging from 50-4,000 kV. Candidate ions include protons (m/z=1), deuterium (m/z=2), tritium (m/z=3), lithium-6 (m/z=6), and boron-11 (m/z=11). Recent progress includes successful testing of a 300 kV bushing.
Designs with cage
Fusor
The best known IEC device is the fusor. This device typically consists of two wire cages inside a vacuum chamber. These cages are referred to as grids. The inner cage is held at a negative voltage against the outer cage. A small amount of fusion fuel is introduced (deuterium gas being the most common). The voltage between the grids causes the fuel to ionize. The positive ions fall down the voltage drop toward the negative inner cage. As they accelerate, the electric field does work on the ions, accelerating them to fusion conditions. If these ions collide, they can fuse. Fusors can also use ion guns rather than electric grids. Fusors are popular with amateurs, because they can easily be constructed, can regularly produce fusion and are a practical way to study nuclear physics. Fusors have also been used as a commercial neutron generator for industrial applications.
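To get a feel for the speeds involved, the kinetic energy gained by an ion falling through the inter-grid potential, qV = ½mv², fixes its speed at the center. The sketch below (illustrative Python; the 30 kV grid voltage is an arbitrary example, not a figure from the text) evaluates this for a deuteron.

```python
import math

Q_E = 1.602176634e-19     # elementary charge, C
M_D = 3.3435837768e-27    # deuteron mass, kg

def ion_speed(volts, charge_number=1, mass=M_D):
    """Speed of an ion after falling through `volts` volts (non-relativistic)."""
    return math.sqrt(2 * charge_number * Q_E * volts / mass)

print(f"deuteron speed after a 30 kV drop: {ion_speed(30_000):.2e} m/s")  # ~1.7e6 m/s
```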
No fusor has come close to producing a significant amount of fusion power. They can be dangerous if proper care is not taken because they require high voltages and can produce harmful radiation (neutrons and X-rays). Often, ions collide with the cages or wall. This conducts energy away from the device limiting its performance. In addition, collisions heat the grids, which limits high-power devices. Collisions also spray high-mass ions into the reaction chamber, pollute the plasma, and cool the fuel.
POPS
In examining nonthermal plasma, workers at LANL realized that scattering was more likely than fusion. This was due to the coulomb scattering cross section being larger than the fusion cross section. In response they built POPS, a machine with a wire cage, where ions are moving at steady-state, or oscillating around. Such plasma can be at local thermodynamic equilibrium. The ion oscillation is predicted to maintain the equilibrium distribution of the ions at all times, which would eliminate any power loss due to Coulomb scattering, resulting in a net energy gain. Working off this design, researchers in Russia simulated the POPS design using particle-in-cell code in 2009. This reactor concept becomes increasingly efficient as the size of the device shrinks. However, very high transparencies (>99.999%) are required for successful operation of the POPS concept. To this end S. Krupakar Murali et al. suggested that carbon nanotubes can be used to construct the cathode grids. This is also the first (suggested) application of carbon nanotubes directly in any fusion reactor.
Designs with fields
Several schemes attempt to combine magnetic confinement and electrostatic fields with IEC. The goal is to eliminate the inner wire cage of the fusor, and the resulting problems.
Polywell
The polywell uses a magnetic field to trap electrons. When electrons or ions move into a dense field, they can be reflected by the magnetic mirror effect. A polywell is designed to trap electrons in the center, with a dense magnetic field surrounding them. This is typically done using six electromagnets in a box. Each magnet is positioned so their poles face inward, creating a null point in the center. The electrons trapped in the center form a "virtual electrode". Ideally, this electron cloud accelerates ions to fusion conditions.
Penning trap
A Penning trap uses both an electric and a magnetic field to trap particles, a magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially.
In a Penning trap fusion reactor, first the magnetic and electric fields are turned on. Then, electrons are emitted into the trap, caught and measured. The electrons form a virtual electrode similar to that in a polywell, described above. These electrons are intended to then attract ions, accelerating them to fusion conditions.
In the 1990s, researchers at LANL built a Penning trap to do fusion experiments. Their device (PFX) was a small (millimeters) and low power (one fifth of a tesla, less than ten thousand volts) machine.
Marble
MARBLE (multiple ambipolar recirculating beam line experiment) was a device which moved electrons and ions back and forth in a line. Particle beams were reflected using electrostatic optics. These optics made static voltage surfaces in free space. Such surfaces reflect only particles with a specific kinetic energy, while higher-energy particles can traverse these surfaces unimpeded, although not unaffected. Electron trapping and plasma behavior was measured by Langmuir probe. Marble kept ions on orbits that do not intersect grid wires—the latter also improves the space charge limitations by multiple nesting of ion beams at several energies. Researchers encountered problems with ion losses at the reflection points. Ions slowed down when turning, spending much time there, leading to high conduction losses.
MIX
The multipole ion-beam experiment (MIX) accelerated ions and electrons into a negatively charged electromagnet. Ions were focused using Gabor lensing. Researcher had problems with a very thin ion-turning region very close to a solid surface where ions could be conducted away.
Magnetically insulated
Devices have been proposed where the negative cage is magnetically insulated from the incoming plasmas.
General criticism
In 1995, Todd Rider critiqued all fusion power schemes using plasma systems not at thermodynamic equilibrium. Rider assumed that plasma clouds at equilibrium had the following properties:
They were quasineutral, where the positives and negatives are equally mixed together.
They had evenly mixed fuel.
They were isotropic, meaning that its behavior was the same in any given direction.
The plasma had a uniform energy and temperature throughout the cloud.
The plasma was an unstructured Gaussian sphere.
Rider argued that if such system was sufficiently heated, it could not be expected to produce net power, due to high X-ray losses.
Other fusion researchers such as Nicholas Krall, Robert W. Bussard, Norman Rostoker, and Monkhorst disagreed with this assessment. They argue that the plasma conditions inside IEC machines are not quasineutral and have non-thermal energy distributions. Because the electron has a mass and diameter much smaller than the ion, the electron temperature can be several orders of magnitude different than the ions. This may allow the plasma to be optimized, whereby cold electrons would reduce radiation losses and hot ions would raise fusion rates.
Thermalization
The primary problem that Rider has raised is the thermalization of ions. Rider argued that, in a quasineutral plasma where all the positives and negatives are distributed equally, the ions will interact. As they do, they exchange energy, causing their energy to spread out (in a Wiener process) heading to a bell curve (or Gaussian function) of energy. Rider focused his arguments within the ion population and did not address electron-to-ion energy exchange or non-thermal plasmas.
This spreading of energy causes several problems. One problem is making more and more cold ions, which are too cold to fuse. This would lower output power. Another problem is higher energy ions which have so much energy that they can escape the machine. This lowers fusion rates while raising conduction losses, because as the ions leave, energy is carried away with them.
Radiation
Rider estimated that once the plasma is thermalized the radiation losses would outpace any amount of fusion energy generated. He focused on a specific type of radiation: X-ray radiation. A particle in a plasma will radiate light anytime it speeds up or slows down. This can be estimated using the Larmor formula. Rider estimated this for D–T (deuterium–tritium fusion), D–D (deuterium fusion), and D–He3 (deuterium–helium 3 fusion), and concluded that breakeven operation with any fuel except D–T is difficult.
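For reference, the non-relativistic Larmor formula P = q²a²/(6πε₀c³) can be evaluated directly. The sketch below (illustrative Python; the acceleration value is an arbitrary example, not a figure from Rider's analysis) computes the power radiated by a single electron.

```python
import math

# Larmor formula: power radiated by an accelerating point charge,
# P = q^2 * a^2 / (6 * pi * epsilon_0 * c^3)   (SI units, non-relativistic)
Q_E = 1.602176634e-19      # electron charge, C
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
C = 2.99792458e8           # speed of light, m/s

def larmor_power(charge, acceleration):
    return charge**2 * acceleration**2 / (6 * math.pi * EPS0 * C**3)

# illustrative (hypothetical) acceleration of an electron in a strong field
a = 1.0e20  # m/s^2
print(f"radiated power for one electron: {larmor_power(Q_E, a):.3e} W")
```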
Core focus
In 1995, Nevins argued that such machines would need to expend a great deal of energy maintaining ion focus in the center. The ions need to be focused so that they can find one another, collide, and fuse. Over time the positive ions and negative electrons would naturally intermix because of electrostatic attraction. This causes the focus to be lost. This is core degradation. Nevins argued mathematically, that the fusion gain (ratio of fusion power produced to the power required to maintain the non-equilibrium ion distribution function) is limited to 0.1 assuming that the device is fueled with a mixture of deuterium and tritium.
The core focus problem was also identified in fusors by Tim Thorson at the University of Wisconsin–Madison during his 1996 doctoral work. Charged ions would have some motion before they started accelerating in the center. This motion could be a twisting motion, where the ion had angular momentum, or simply a tangential velocity. This initial motion causes the cloud in the center of the fusor to be unfocused.
Brillouin limit
In 1945, Columbia University professor Léon Brillouin suggested that there is a limit to how many electrons one can pack into a given volume. This limit, commonly referred to as the Brillouin limit or Brillouin density, is
n = B²/(2μ₀mc²)
where B is the magnetic field, μ₀ the permeability of free space, m the mass of the confined particles, and c the speed of light. This may limit the charge density inside IEC devices.
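As a rough numerical illustration, the sketch below evaluates the expression above for electrons in an assumed 1 tesla field; both the field strength and the choice of electrons are arbitrary examples.

import math

MU_0 = 4e-7 * math.pi     # permeability of free space, H/m
M_E = 9.1093837015e-31    # electron mass, kg
C = 2.99792458e8          # speed of light, m/s

def brillouin_density(b_field, mass):
    """Brillouin density limit n = B^2 / (2*mu_0*m*c^2), in particles per cubic metre."""
    return b_field**2 / (2 * MU_0 * mass * C**2)

print(f"n_B = {brillouin_density(1.0, M_E):.2e} electrons per m^3 at B = 1 T")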
Commercial applications
Since fusion reactions generate neutrons, the fusor has been developed into a family of compact, sealed-reaction-chamber neutron generators for a wide range of applications that need moderate neutron output rates at a moderate price. Very high output neutron sources may be used to make products such as molybdenum-99 and nitrogen-13, medical isotopes used in nuclear medicine imaging such as PET scans.
Devices
Government and commercial
Los Alamos National Laboratory – Researchers developed the POPS and Penning trap devices.
Turkish Atomic Energy Authority – In 2013 this team built a fusor at the Saraykoy Nuclear Research and Training Center in Turkey. This fusor can achieve deuterium fusion, producing neutrons.
ITT Corporation – Hirsch's original machine was a 17. diameter machine with a voltage drop across it. This machine used ion beams.
Phoenix Nuclear Labs has developed a commercial neutron source based on a fusor, achieving a steady neutron output from the deuterium–deuterium fusion reaction over 132 hours of continuous operation.
Energy Matter Conversion Inc. – A company in Santa Fe that has developed large, high-powered polywell devices for the US Navy.
NSD-Gradel-Fusion sealed IEC neutron generators for DD (2.5 MeV) or DT (14 MeV) with a range of maximum outputs are manufactured by Gradel sárl in Luxembourg.
Atomic Energy Organization of Iran – Researchers at Shahid Beheshti University in Iran have built a fusor that can produce neutrons at 80 kilovolts using deuterium gas.
Avalanche Energy has received $5 million in venture capital to build their prototype.
CPP-IPR in India has achieved a significant milestone by developing India's first Inertial Electrostatic Confinement Fusion (IECF) neutron source. The device is capable of reaching a potential of −92 kV and can generate a neutron yield of up to 10⁷ neutrons per second by deuterium fusion. The primary objective of this program is to advance portable and handheld neutron sources with both linear and spherical geometries.
Universities
Tokyo Institute of Technology has four IEC devices of different shapes: a spherical machine, a cylindrical device, a co-axial double cylinder and a magnetically assisted device.
University of Wisconsin–Madison – A group at Wisconsin–Madison has operated several large devices since 1995.
University of Illinois at Urbana–Champaign – The fusion studies laboratory has built a ~25 cm fusor which has produced neutrons using deuterium gas.
Massachusetts Institute of Technology – For his doctoral thesis in 2007, Carl Dietrich built a fusor and studied its potential use in spacecraft propulsion. Also, Thomas McGuire studied multiple well fusors for applications in spaceflight.
University of Sydney has built several IEC devices and also low-power, low-beta-ratio polywells. The first was constructed of Teflon rings and was about the size of a coffee cup. The second has a full casing about 12 inches in diameter, with metal rings.
Eindhoven Technical University
Amirkabir University of Technology and the Atomic Energy Organization of Iran have investigated the effect of strong pulsed magnetic fields on the neutron production rate of an IEC device. Their study showed that a 1–2 tesla pulsed magnetic field can increase the discharge current and neutron production rate more than tenfold relative to ordinary operation.
The Institute of Space Systems at the University of Stuttgart is developing IEC devices for plasma physics research, and as an electric propulsion device, the IECT (Inertial Electrostatic Confinement Thruster).
See also
Fusor
List of fusion experiments
Northwest Nuclear Consortium
Philo Farnsworth
Phoenix Nuclear Labs
Polywell
Robert Bussard
Taylor Wilson
Patents
P.T. Farnsworth, , June 1966 (Electric discharge — Nuclear interaction)
P.T. Farnsworth, . June 1968 (Method and apparatus)
Hirsch, Robert, . September 1970 (Apparatus)
Hirsch, Robert, . September 1970 (Generating apparatus — Hirsch/Meeks)
Hirsch, Robert, . October 1970 (Lithium-Ion source)
Hirsch, Robert, . April 1972 (Reduce plasma leakage)
Hirsch, Robert, . May 1972 (Electrostatic containment)
R.W. Bussard, "Method and apparatus for controlling charged particles", , May 1989 (Method and apparatus — Magnetic grid fields)
R.W. Bussard, "Method and apparatus for creating and controlling nuclear fusion reactions", , November 1992 (Method and apparatus — Ion acoustic waves)
S.T. Brookes, "Nuclear fusion reactor", UK patent GB2461267, May 2012
T.V. Stanko, "Nuclear fusion device", UK patent GB2545882, July 2017
References
External links
Polywell Fusion: Electrostatic Fusion in a Magnetic Cusp, talk at Microsoft Research
University of Wisconsin-Madison IEC homepage
IEC Overview
From Proceedings of the 1999 Fusion Summer Study (Snowmass, Colorado):
Summary of Physics Aspects of Some Emerging Concepts
Inertial-Electrostatic Confinement (IEC) of a Fusion Plasma with Grids
Fusion from Television? (American Scientist Magazine, July-August 1999)
Should Google Go Nuclear? Clean, cheap, nuclear power (no, really)
NSD-Gradel-Fusion (Luxembourg)
Fusion power
| Inertial electrostatic confinement | [
"Physics",
"Chemistry"
] | 5,262 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
368,389 | https://en.wikipedia.org/wiki/Ultrafiltration | Ultrafiltration (UF) is a variety of membrane filtration in which forces such as pressure or concentration gradients lead to a separation through a semipermeable membrane. Suspended solids and solutes of high molecular weight are retained in the so-called retentate, while water and low molecular weight solutes pass through the membrane in the permeate (filtrate). This separation process is used in industry and research for purifying and concentrating macromolecular (10³–10⁶ Da) solutions, especially protein solutions.
Ultrafiltration is not fundamentally different from microfiltration. Both separate based on size exclusion or particle capture. It is fundamentally different from membrane gas separation, which separates based on different amounts of absorption and different rates of diffusion. Ultrafiltration membranes are defined by the molecular weight cut-off (MWCO) of the membrane used. Ultrafiltration is applied in cross-flow or dead-end mode.
Applications
Industries such as chemical and pharmaceutical manufacturing, food and beverage processing, and wastewater treatment employ ultrafiltration in order to recycle flow or add value to later products. Blood dialysis also utilizes ultrafiltration.
Drinking water
Ultrafiltration can be used for the removal of particulates and macromolecules from raw water to produce potable water. It has been used to either replace existing secondary (coagulation, flocculation, sedimentation) and tertiary filtration (sand filtration and chlorination) systems employed in water treatment plants or as standalone systems in isolated regions with growing populations. When treating water with high suspended solids, UF is often integrated into the process, utilising primary (screening, flotation, filtration) and some secondary treatments as pre-treatment stages. UF processes are currently preferred over traditional treatment methods for the following reasons:
No chemicals required (aside from cleaning)
Constant product quality regardless of feed quality
Compact plant size
Capable of exceeding regulatory standards of water quality, achieving 90–100% pathogen removal
UF processes are currently limited by the high cost incurred due to membrane fouling and replacement. Additional pretreatment of feed water is required to prevent excessive damage to the membrane units.
In many cases UF is used for pre filtration in reverse osmosis (RO) plants to protect the RO membranes.
Protein concentration
UF is used extensively in the dairy industry, particularly in the processing of cheese whey to obtain whey protein concentrate (WPC) and lactose-rich permeate. In a single stage, a UF process is able to concentrate the whey to 10–30 times the concentration of the feed.
The original alternative to membrane filtration of whey was steam heating followed by drum drying or spray drying. The product of these methods had limited applications due to its granulated texture and insolubility. Existing methods also suffered from inconsistent product composition and high capital and operating costs, and the excessive heat used in drying would often denature some of the proteins.
Compared to traditional methods, UF processes used for this application:
Are more energy efficient
Have consistent product quality, 35–80% protein product depending on operating conditions
Do not denature proteins as they use moderate operating conditions
The potential for fouling is widely discussed, being identified as a significant contributor to decline in productivity. Cheese whey contains high concentrations of calcium phosphate which can potentially lead to scale deposits on the membrane surface. As a result, substantial pretreatment must be implemented to balance pH and temperature of the feed to maintain solubility of calcium salts.
Other applications
Filtration of effluent from paper pulp mill
Cheese manufacture, see ultrafiltered milk
Removal of some bacteria from milk
Process and waste water treatment
Enzyme recovery
Fruit juice concentration and clarification
Dialysis and other blood treatments
Desalting and solvent-exchange of proteins (via diafiltration)
Laboratory grade manufacturing
Radiocarbon dating of bone collagen
Recovery of electrodeposition paints
Treatment of oil and latex emulsions
Recovery of lignin compounds in spent pulping liquors
Principles
The basic operating principle of ultrafiltration uses a pressure-induced separation of solutes from a solvent through a semipermeable membrane. The relationship between the applied pressure on the solution to be separated and the flux through the membrane is most commonly described by the Darcy equation:
J = ΔP / (μ R_t),
where J is the flux (flow rate per membrane area), ΔP is the transmembrane pressure (the pressure difference between the feed and permeate streams), μ is the solvent viscosity and R_t is the total resistance (the sum of the membrane and fouling resistances).
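A minimal numerical sketch of this relation follows; the inputs are assumed, order-of-magnitude values for a UF membrane treating water rather than data for any particular module.

def permeate_flux(tmp_pa, viscosity_pa_s, total_resistance_per_m):
    """Permeate flux J (m/s) from transmembrane pressure (Pa), solvent viscosity (Pa*s)
    and total resistance (1/m), i.e. J = TMP / (mu * R_t)."""
    return tmp_pa / (viscosity_pa_s * total_resistance_per_m)

tmp = 2.0e5        # 2 bar transmembrane pressure, Pa (assumed)
mu = 1.0e-3        # viscosity of water near 20 C, Pa*s
r_total = 5.0e12   # membrane plus fouling resistance, 1/m (assumed)

j = permeate_flux(tmp, mu, r_total)
print(f"J = {j:.1e} m/s, or about {j * 3600 * 1000:.0f} L per m^2 per hour")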
Membrane fouling
Concentration polarization
When filtration occurs, the local concentration of rejected material at the membrane surface increases and can become saturated. In UF, this increased ion concentration can develop an osmotic pressure on the feed side of the membrane, which reduces the effective TMP (transmembrane pressure) of the system and therefore the permeation rate. The growing concentrated layer at the membrane wall also decreases the permeate flux by increasing the resistance to flow, which reduces the driving force for solvent transport through the membrane surface. Concentration polarization (CP) affects almost all of the available membrane separation processes. In RO, the solutes retained at the membrane layer result in a higher osmotic pressure than in the bulk stream, so higher pressures are required to overcome it. Concentration polarization plays a more dominant role in ultrafiltration than in microfiltration because of the membrane's smaller pore size. Concentration polarization differs from fouling in that it has no lasting effect on the membrane itself and can be reversed by relieving the TMP. It does, however, have a significant effect on many types of fouling.
Types of fouling
Types of Foulants
Foulants of UF membranes fall into the following four categories:
biological substances
macromolecules
particulates
ions
Particulate deposition
The following models describe the mechanisms of particulate deposition on the membrane surface and in the pores:
Standard blocking: macromolecules are uniformly deposited on pore walls
Complete blocking: membrane pore is completely sealed by a macromolecule
Cake formation: accumulated particles or macromolecules form a fouling layer on the membrane surface, in UF this is also known as a gel layer
Intermediate blocking: when macromolecules deposit into pores or onto already blocked pores, contributing to cake formation
Scaling
As a result of concentration polarization at the membrane surface, increased ion concentrations may exceed solubility thresholds and precipitate on the membrane surface. These inorganic salt deposits can block pores causing flux decline, membrane degradation and loss of production. The formation of scale is highly dependent on factors affecting both solubility and concentration polarization including pH, temperature, flow velocity and permeation rate.
Biofouling
Microorganisms will adhere to the membrane surface forming a gel layer – known as biofilm. The film increases the resistance to flow, acting as an additional barrier to permeation. In spiral-wound modules, blockages formed by biofilm can lead to uneven flow distribution and thus increase the effects of concentration polarization.
Membrane arrangements
Depending on the shape and material of the membrane, different modules can be used for ultrafiltration process. Commercially available designs in ultrafiltration modules vary according to the required hydrodynamic and economic constraints as well as the mechanical stability of the system under particular operating pressures. The main modules used in industry include:
Tubular modules
The tubular module design uses polymeric membranes cast on the inside of plastic or porous paper components, with diameters typically in the range of 5–25 mm and lengths from 0.6 to 6.4 m. Multiple tubes are housed in a PVC or steel shell. The feed of the module is passed through the tubes, accommodating radial transfer of permeate to the shell side. This design allows for easy cleaning; however, the main drawbacks are low permeability, high volume hold-up within the membrane and low packing density.
Hollow fibre
This design is conceptually similar to the tubular module with a shell-and-tube arrangement. A single module can consist of 50 to thousands of hollow fibres, which are therefore self-supporting, unlike the tubular design. The diameter of each fibre ranges from 0.2–3 mm, with the feed flowing in the tube and the product permeate collected radially on the outside. The advantage of self-supporting membranes is the ease with which they can be cleaned, due to their ability to be backflushed. Replacement costs, however, are high, as one faulty fibre will require the whole bundle to be replaced. Considering the tubes are of small diameter, this design also makes the system prone to blockage.
Spiral-wound modules
These are composed of a combination of flat membrane sheets separated by a thin meshed spacer material which serves as a porous plastic screen support. The sheets are rolled around a central perforated tube and fitted into a tubular steel pressure vessel casing. The feed solution passes over the membrane surface and the permeate spirals into the central collection tube. Spiral-wound modules are a compact and cheap alternative in ultrafiltration design, offer a high volumetric throughput and can also be easily cleaned. However, the design is limited by its thin channels, where feed solutions with suspended solids can partially block the membrane pores.
Plate and frame
This uses a membrane placed on a flat plate separated by a mesh-like material. The feed is passed through the system, from which permeate is separated and collected from the edge of the plate. Channel length can range from 10–60 cm and channel heights from 0.5–1.0 mm. This module provides low volume hold-up, relatively easy replacement of the membrane and the ability to feed viscous solutions because of the low channel height, unique to this particular design.
Process characteristics
The process characteristics of a UF system are highly dependent on the type of membrane used and its application. Manufacturers' specifications of the membrane tend to limit the process to the following typical specifications:
Process design considerations
When designing a new membrane separation facility or considering its integration into an existing plant, there are many factors which must be considered. For most applications a heuristic approach can be applied to determine many of these characteristics to simplify the design process. Some design areas include:
Pre-treatment
Treatment of feed prior to the membrane is essential to prevent damage to the membrane and minimize the effects of fouling which greatly reduce the efficiency of the separation. Types of pre-treatment are often dependent on the type of feed and its quality. For example, in wastewater treatment, household waste and other particulates are screened. Other types of pre-treatment common to many UF processes include pH balancing and coagulation. Appropriate sequencing of each pre-treatment phase is crucial in preventing damage to subsequent stages. Pre-treatment can even be employed simply using dosing points.
Membrane specifications
Material
Most UF membranes use polymer materials (polysulfone, polypropylene, cellulose acetate, polylactic acid) however ceramic membranes are used for high temperature applications.
Pore size
A general rule for choice of pore size in a UF system is to use a membrane with a pore size one tenth that of the particle size to be separated. This limits the number of smaller particles entering the pores and adsorbing to the pore surface. Instead they block the entrance to the pores allowing simple adjustments of cross-flow velocity to dislodge them.
Operation strategy
Flowtype
UF systems can either operate with cross-flow or dead-end flow. In dead-end filtration the flow of the feed solution is perpendicular to the membrane surface. On the other hand, in cross flow systems the flow passes parallel to the membrane surface. Dead-end configurations are more suited to batch processes with low suspended solids as solids accumulate at the membrane surface therefore requiring frequent backflushes and cleaning to maintain high flux. Cross-flow configurations are preferred in continuous operations as solids are continuously flushed from the membrane surface resulting in a thinner cake layer and lower resistance to permeation.
Flow velocity
Flow velocity is especially critical for hard water or liquids containing suspensions in preventing excessive fouling. Higher cross-flow velocities can be used to enhance the sweeping effect across the membrane surface therefore preventing deposition of macromolecules and colloidal material and reducing the effects of concentration polarization. Expensive pumps are however required to achieve these conditions.
Flow temperature
To avoid excessive damage to the membrane, it is recommended to operate a plant at the temperature specified by the membrane manufacturer. In some instances however temperatures beyond the recommended region are required to minimise the effects of fouling. Economic analysis of the process is required to find a compromise between the increased cost of membrane replacement and productivity of the separation.
Pressure
Pressure drops over multi-stage separation can result in a drastic decline in flux performance in the latter stages of the process. This can be improved using booster pumps to increase the TMP in the final stages. This will incur a greater capital and energy cost which will be offset by the improved productivity of the process. With a multi-stage operation, retentate streams from each stage are recycled through the previous stage to improve their separation efficiency.
Multi-stage, multi-module
Multiple stages in series can be applied to achieve higher purity permeate streams. Due to the modular nature of membrane processes, multiple modules can be arranged in parallel to treat greater volumes.
Post-treatment
Post-treatment of the product streams is dependent on the composition of the permeate and retentate and its end-use or government regulation. In cases such as milk separation both streams (milk and whey) can be collected and made into useful products. Additional drying of the retentate will produce whey powder. In the paper mill industry, the retentate (non-biodegradable organic material) is incinerated to recover energy and permeate (purified water) is discharged into waterways. It is essential for the permeate water to be pH balanced and cooled to avoid thermal pollution of waterways and altering its pH.
Cleaning
Cleaning of the membrane is done regularly to prevent the accumulation of foulants and reverse the degrading effects of fouling on permeability and selectivity.
Regular backwashing is often conducted every 10 min for some processes to remove cake layers formed on the membrane surface. By pressurising the permeate stream and forcing it back through the membrane, accumulated particles can be dislodged, improving the flux of the process. Backwashing is limited in its ability to remove more complex forms of fouling such as biofouling, scaling or adsorption to pore walls.
These types of foulants require chemical cleaning to be removed. The common types of chemicals used for cleaning are:
Acidic solutions for the control of inorganic scale deposits
Alkali solutions for removal of organic compounds
Biocides or disinfection such as chlorine or peroxide when bio-fouling is evident
When designing a cleaning protocol it is essential to consider:
Cleaning time – Adequate time must be allowed for chemicals to interact with foulants and permeate into the membrane pores. However, if the process is extended beyond its optimum duration it can lead to denaturation of the membrane and deposition of removed foulants. The complete cleaning cycle including rinses between stages may take as long as 2 hours to complete.
Aggressiveness of chemical treatment – With a high degree of fouling it may be necessary to employ aggressive cleaning solutions to remove fouling material. However, in some applications this may not be suitable if the membrane material is sensitive, leading to enhanced membrane ageing.
Disposal of cleaning effluent – The release of some chemicals into wastewater systems may be prohibited or regulated therefore this must be considered. For example, the use of phosphoric acid may result in high levels of phosphates entering water ways and must be monitored and controlled to prevent eutrophication.
Summary of common types of fouling and their respective chemical treatments
New developments
In order to increase the life-cycle of membrane filtration systems, energy efficient membranes are being developed in membrane bioreactor systems. Technology has been introduced which allows the power required to aerate the membrane for cleaning to be reduced whilst still maintaining a high flux level. Mechanical cleaning processes have also been adopted using granulates as an alternative to conventional forms of cleaning; this reduces energy consumption and also reduces the area required for filtration tanks.
Membrane properties have also been enhanced to reduce fouling tendencies by modifying surface properties. This can be noted in the biotechnology industry where membrane surfaces have been altered in order to reduce the amount of protein binding. Ultrafiltration modules have also been improved to allow for more membrane for a given area without increasing its risk of fouling by designing more efficient module internals.
Current pre-treatment for seawater desalination uses ultrafiltration modules that have been designed to withstand high temperatures and pressures while occupying a smaller footprint. Each module vessel is self-supported, resistant to corrosion, and accommodates easy removal and replacement of the module without the cost of replacing the vessel itself.
See also
List of wastewater treatment technologies
References
External links
Filtration techniques
Water treatment
Membrane technology | Ultrafiltration | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,521 | [
"Separation processes",
"Water treatment",
"Water pollution",
"Filtration techniques",
"Membrane technology",
"Filtration",
"Environmental engineering",
"Water technology"
] |
368,621 | https://en.wikipedia.org/wiki/Sphere%20packing | In geometry, a sphere packing is an arrangement of non-overlapping spheres within a containing space. The spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. However, sphere packing problems can be generalised to consider unequal spheres, spaces of other dimensions (where the problem becomes circle packing in two dimensions, or hypersphere packing in higher dimensions) or to non-Euclidean spaces such as hyperbolic space.
A typical sphere packing problem is to find an arrangement in which the spheres fill as much of the space as possible. The proportion of space filled by the spheres is called the packing density of the arrangement. As the local density of a packing in an infinite space can vary depending on the volume over which it is measured, the problem is usually to maximise the average or asymptotic density, measured over a large enough volume.
For equal spheres in three dimensions, the densest packing uses approximately 74% of the volume. A random packing of equal spheres generally has a density around 63.5%.
Classification and terminology
A lattice arrangement (commonly called a regular arrangement) is one in which the centers of the spheres form a very symmetric pattern which needs only n vectors to be uniquely defined (in n-dimensional Euclidean space). Lattice arrangements are periodic. Arrangements in which the spheres do not form a lattice (often referred to as irregular) can still be periodic, but also aperiodic (properly speaking non-periodic) or random. Because of their high degree of symmetry, lattice packings are easier to classify than non-lattice ones. Periodic lattices always have well-defined densities.
Regular packing
Dense packing
In three-dimensional Euclidean space, the densest packing of equal spheres is achieved by a family of structures called close-packed structures. One method for generating such a structure is as follows. Consider a plane with a compact arrangement of spheres on it. Call it A. For any three neighbouring spheres, a fourth sphere can be placed on top in the hollow between the three bottom spheres. If we do this for half of the holes in a second plane above the first, we create a new compact layer. There are two possible choices for doing this, call them B and C. Suppose that we chose B. Then one half of the hollows of B lies above the centers of the balls in A and one half lies above the hollows of A which were not used for B. Thus the balls of a third layer can be placed either directly above the balls of the first one, yielding a layer of type A, or above the holes of the first layer which were not occupied by the second layer, yielding a layer of type C. Combining layers of types A, B, and C produces various close-packed structures.
Two simple arrangements within the close-packed family correspond to regular lattices. One is called cubic close packing (or face-centred cubic, "FCC"), where the layers are alternated in the ABCABC... sequence. The other is called hexagonal close packing ("HCP"), where the layers are alternated in the ABAB... sequence. But many layer stacking sequences are possible (ABAC, ABCBA, ABCBAC, etc.), and still generate a close-packed structure. In all of these arrangements each sphere touches 12 neighboring spheres, and the average density is
π/(3√2) ≈ 0.74048.
In 1611, Johannes Kepler conjectured that this is the maximum possible density amongst both regular and irregular arrangements—this became known as the Kepler conjecture. Carl Friedrich Gauss proved in 1831 that these packings have the highest density amongst all possible lattice packings. In 1998, Thomas Callister Hales, following the approach suggested by László Fejes Tóth in 1953, announced a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof. On 10 August 2014, Hales announced the completion of a formal proof using automated proof checking, removing any doubt.
Other common lattice packings
Some other lattice packings are often found in physical systems. These include the cubic lattice with a density of π/6 ≈ 0.5236, the hexagonal lattice with a density of π/(3√3) ≈ 0.6046, and the tetrahedral lattice with a density of π√3/16 ≈ 0.3401.
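These figures follow from the geometry of touching spheres in a unit cell, as the short sketch below illustrates; the lattice constant is arbitrary and the cell descriptions are standard crystallographic conventions rather than anything specific to this article.

import math

def density(spheres_per_cell, radius, cell_volume):
    """Fraction of the unit cell occupied by spheres."""
    return spheres_per_cell * (4.0 / 3.0) * math.pi * radius**3 / cell_volume

a = 1.0  # lattice constant, arbitrary units

# Simple cubic: 1 sphere per cell, spheres touch along a cell edge.
print("cubic:      ", density(1, a / 2, a**3))
# Simple hexagonal: 1 sphere per cell of volume (sqrt(3)/2)*a^3 with c = a.
print("hexagonal:  ", density(1, a / 2, math.sqrt(3) / 2 * a**3))
# Tetrahedral (diamond cubic): 8 spheres per cell, touching along 1/4 of a body diagonal.
print("tetrahedral:", density(8, a * math.sqrt(3) / 8, a**3))
# Face-centred cubic (close packing): 4 spheres per cell, touching along a face diagonal.
print("fcc:        ", density(4, a * math.sqrt(2) / 4, a**3))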
Jammed packings with a low density
Packings where all spheres are constrained by their neighbours to stay in one location are called rigid or jammed. The strictly jammed (mechanically stable even as a finite system) regular sphere packing with the lowest known density is a diluted ("tunneled") fcc crystal with a density of only π√2/9 ≈ 0.4937. The loosest known regular jammed packing has a density of approximately 0.0555.
Irregular packing
If we attempt to build a densely packed collection of spheres, we will be tempted to always place the next sphere in a hollow between three packed spheres. If five spheres are assembled in this way, they will be consistent with one of the regularly packed arrangements described above. However, the sixth sphere placed in this way will render the structure inconsistent with any regular arrangement. This results in the possibility of a random close packing of spheres which is stable against compression. Vibration of a random loose packing can result in the arrangement of spherical particles into regular packings, a process known as granular crystallisation. Such processes depend on the geometry of the container holding the spherical grains.
When spheres are randomly added to a container and then compressed, they will generally form what is known as an "irregular" or "jammed" packing configuration when they can be compressed no more. This irregular packing will generally have a density of about 64%. Recent research predicts analytically that it cannot exceed a density limit of 63.4%. This situation is unlike the case of one or two dimensions, where compressing a collection of 1-dimensional or 2-dimensional spheres (that is, line segments or circles) will yield a regular packing.
Hypersphere packing
The sphere packing problem is the three-dimensional version of a class of ball-packing problems in arbitrary dimensions. In two dimensions, the equivalent problem is packing circles on a plane. In one dimension it is packing line segments into a linear universe.
In dimensions higher than three, the densest lattice packings of hyperspheres are known up to 8 dimensions. Very little is known about irregular hypersphere packings; it is possible that in some dimensions the densest packing may be irregular. Some support for this conjecture comes from the fact that in certain dimensions (e.g. 10) the densest known irregular packing is denser than the densest known regular packing.
In 2016, Maryna Viazovska announced a proof that the E8 lattice provides the optimal packing (regardless of regularity) in eight-dimensional space, and soon afterwards she and a group of collaborators announced a similar proof that the Leech lattice is optimal in 24 dimensions. This result built on and improved previous methods which showed that these two lattices are very close to optimal.
The new proofs involve using the Laplace transform of a carefully chosen modular function to construct a radially symmetric function f such that f and its Fourier transform f̂ both equal 1 at the origin, and both vanish at all other points of the optimal lattice, with f negative outside the central sphere of the packing and f̂ positive. Then, the Poisson summation formula for f is used to compare the density of the optimal lattice with that of any other packing. Before the proof had been formally refereed and published, mathematician Peter Sarnak called the proof "stunningly simple" and wrote that "You just start reading the paper and you know this is correct."
Another line of research in high dimensions is trying to find asymptotic bounds for the density of the densest packings. It is known that for large n, the densest lattice in dimension n has density between cn·2^−n (for some constant c) and 2^−(0.599 + o(1))n. Conjectural bounds lie in between. In a 2023 preprint, Marcelo Campos, Matthew Jenssen, Marcus Michelen and Julian Sahasrabudhe improved the lower bound of the maximal density to order n log n · 2^−n; among their techniques they make use of the Rödl nibble.
Unequal sphere packing
Many problems in the chemical and physical sciences can be related to packing problems where more than one size of sphere is available. Here there is a choice between separating the spheres into regions of close-packed equal spheres, or combining the multiple sizes of spheres into a compound or interstitial packing. When many sizes of spheres (or a distribution) are available, the problem quickly becomes intractable, but some studies of binary hard spheres (two sizes) are available.
When the second sphere is much smaller than the first, it is possible to arrange the large spheres in a close-packed arrangement, and then arrange the small spheres within the octahedral and tetrahedral gaps. The density of this interstitial packing depends sensitively on the radius ratio, but in the limit of extreme size ratios, the smaller spheres can fill the gaps with the same density as the larger spheres filled space. Even if the large spheres are not in a close-packed arrangement, it is always possible to insert some smaller spheres of up to 0.29099 of the radius of the larger sphere.
When the smaller sphere has a radius greater than 0.41421 of the radius of the larger sphere, it is no longer possible to fit into even the octahedral holes of the close-packed structure. Thus, beyond this point, either the host structure must expand to accommodate the interstitials (which compromises the overall density), or rearrange into a more complex crystalline compound structure. Structures are known which exceed the close packing density for radius ratios up to 0.659786.
Upper bounds for the density that can be obtained in such binary packings have also been obtained.
In many chemical situations such as ionic crystals, the stoichiometry is constrained by the charges of the constituent ions. This additional constraint on the packing, together with the need to minimize the Coulomb energy of interacting charges leads to a diversity of optimal packing arrangements.
The upper bound for the density of a strictly jammed sphere packing with any set of radii is 1; an example of such a packing of spheres is the Apollonian sphere packing. The lower bound for such a sphere packing is 0; an example is the Dionysian sphere packing.
Hyperbolic space
Although the concept of circles and spheres can be extended to hyperbolic space, finding the densest packing becomes much more difficult. In a hyperbolic space there is no limit to the number of spheres that can surround another sphere (for example, Ford circles can be thought of as an arrangement of identical hyperbolic circles in which each circle is surrounded by an infinite number of other circles). The concept of average density also becomes much more difficult to define accurately. The densest packings in any hyperbolic space are almost always irregular.
Despite this difficulty, K. Böröczky gives a universal upper bound for the density of sphere packings of hyperbolic n-space where n ≥ 2. In three dimensions the Böröczky bound is approximately 85.327613%, and is realized by the horosphere packing of the order-6 tetrahedral honeycomb with Schläfli symbol {3,3,6}. In addition to this configuration at least three other horosphere packings are known to exist in hyperbolic 3-space that realize the density upper bound.
Touching pairs, triplets, and quadruples
The contact graph of an arbitrary finite packing of unit balls is the graph whose vertices correspond to the packing elements and whose two vertices are connected by an edge if the corresponding two packing elements touch each other. The cardinality of the edge set of the contact graph gives the number of touching pairs, the number of 3-cycles in the contact graph gives the number of touching triplets, and the number of tetrahedra in the contact graph gives the number of touching quadruples. In general, for a contact graph associated with a sphere packing in n dimensions, the cardinality of the set of n-simplices in the contact graph gives the number of touching (n + 1)-tuples in the sphere packing. In the case of 3-dimensional Euclidean space, non-trivial upper bounds on the number of touching pairs, triplets, and quadruples were proved by Karoly Bezdek and Samuel Reid at the University of Calgary.
The problem of finding the arrangement of n identical spheres that maximizes the number of contact points between the spheres is known as the "sticky-sphere problem". The maximum is known for n ≤ 11, and only conjectural values are known for larger n.
Other spaces
Sphere packing on the corners of a hypercube (with the spheres defined by Hamming distance) corresponds to designing error-correcting codes: if the spheres have radius t, then their centers are codewords of a (2t + 1)-error-correcting code. Lattice packings correspond to linear codes. There are other, subtler relationships between Euclidean sphere packing and error-correcting codes. For example, the binary Golay code is closely related to the 24-dimensional Leech lattice.
For further details on these connections, see the book Sphere Packings, Lattices and Groups by Conway and Sloane.
See also
Close-packing of equal spheres
Apollonian sphere packing
Finite sphere packing
Hermite constant
Inscribed sphere
Kissing number
Sphere-packing bound
Random close pack
Cylinder sphere packing
Sphere packing in a sphere
References
Bibliography
External links
Dana Mackenzie (May 2002) "A fine mess" (New Scientist)
A non-technical overview of packing in hyperbolic space.
"Kugelpackungen (Sphere Packing)" (T. E. Dorozinski)
"3D Sphere Packing Applet" Sphere Packing java applet
"Densest Packing of spheres into a sphere" java applet
"Database of sphere packings" (Erik Agrell)
Discrete geometry
Crystallography
Packing problems
Spheres | Sphere packing | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,861 | [
"Discrete mathematics",
"Packing problems",
"Discrete geometry",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Mathematical problems"
] |
12,144,610 | https://en.wikipedia.org/wiki/Increment%20theorem | In nonstandard analysis, a field of mathematics, the increment theorem states the following: Suppose a function y = f(x) is differentiable at x and that Δx is infinitesimal. Then
Δy = f′(x) Δx + ε Δx
for some infinitesimal ε, where
Δy = f(x + Δx) − f(x).
If Δx ≠ 0 then we may write
Δy/Δx = f′(x) + ε,
which implies that Δy/Δx ≈ f′(x), or in other words that Δy/Δx is infinitely close to f′(x), or f′(x) is the standard part of Δy/Δx.
A similar theorem exists in standard calculus. Again assume that y = f(x) is differentiable, but now let Δx be a nonzero standard real number. Then the same equation
Δy = f′(x) Δx + ε Δx
holds with the same definition of ε, but instead of ε being infinitesimal, we have ε → 0 as Δx → 0
(treating x and f as given so that ε is a function of Δx alone).
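The standard-calculus version can be checked numerically. The sketch below uses f(x) = x² at x = 1.5, both arbitrary choices, and shows ε shrinking as Δx does while Δy = f′(x)Δx + εΔx holds exactly.

def f(x):
    return x * x

def fprime(x):
    return 2 * x

x = 1.5
for dx in (0.1, 0.01, 0.001):
    dy = f(x + dx) - f(x)
    eps = dy / dx - fprime(x)          # error term in dy = f'(x)*dx + eps*dx
    print(f"dx = {dx:<6}  eps = {eps:.6f}")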
See also
Nonstandard calculus
Elementary Calculus: An Infinitesimal Approach
Abraham Robinson
Taylor's theorem
References
Howard Jerome Keisler: Elementary Calculus: An Infinitesimal Approach. First edition 1976; 2nd edition 1986. This book is now out of print. The publisher has reverted the copyright to the author, who has made the 2nd edition available in .pdf format for download at http://www.math.wisc.edu/~keisler/calc.html
Theorems in calculus
Nonstandard analysis | Increment theorem | [
"Mathematics"
] | 239 | [
"Theorems in mathematical analysis",
"Theorems in calculus",
"Calculus",
"Mathematical objects",
"Infinity",
"Nonstandard analysis",
"Mathematics of infinitesimals",
"Model theory"
] |
12,146,395 | https://en.wikipedia.org/wiki/Proximity%20effect%20%28electromagnetism%29 | In electromagnetics, proximity effect is a redistribution of electric current occurring in nearby parallel electrical conductors carrying alternating current (AC), caused by magnetic effects. In adjacent conductors carrying AC current in the same direction, it causes the current in the conductor to concentrate on the side away from the nearby conductor. In conductors carrying AC current in opposite directions, it causes the current in the conductor to concentrate on the side adjacent to the nearby conductor. Proximity effect is caused by eddy currents induced within a conductor by the time-varying magnetic field of the other conductor, by electromagnetic induction. For example, in a coil of wire carrying alternating current with multiple turns of wire lying next to each other, the current in each wire will be concentrated in a strip on each side of the wire facing away from the adjacent wires. This "current crowding" effect causes the current to occupy a smaller effective cross-sectional area of the conductor, increasing current density and AC electrical resistance of the conductor. The concentration of current on the side of the conductor gets larger with increasing frequency, so proximity effect causes adjacent wires carrying the same current to have more resistance at higher frequencies.
Explanation
A changing magnetic field will influence the distribution of an electric current flowing within an electrical conductor, by electromagnetic induction. When an alternating current (AC) flows through a conductor, it creates an associated alternating magnetic field around it. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. The result is that the current is concentrated in the areas of the conductor farthest away from nearby conductors carrying current in the same direction.
The proximity effect can significantly increase the AC resistance of adjacent conductors when compared to their resistance with a DC current. The effect increases with frequency. At higher frequencies, the AC resistance of a conductor can easily exceed ten times its DC resistance.
Example: two parallel wires
The cause of proximity effect can be seen from the accompanying drawings of two parallel wires next to each other carrying alternating current (AC). The righthand wire in each drawing has the top part transparent to show the currents inside the metal. Each drawing depicts a point in the alternating current cycle when the current is increasing.
Currents in the same direction
In the first drawing the current in both wires is in the same direction. The current in the lefthand wire creates a circular magnetic field which passes through the other wire. From the right hand rule the field lines pass through the wire in an upward direction. From Faraday's law of induction, when the time-varying magnetic field is increasing, it creates a circular current within the wire around the magnetic field lines in a clockwise direction. These are called eddy currents.
On the lefthand side nearest to the other wire (1) the eddy current is in the opposite direction to the main current in the wire, so it subtracts from the main current, reducing it. On the righthand side (2) the eddy current is in the same direction as the main current so it adds to it, increasing it. The net effect is to redistribute the current in the cross section of the wire into a thin strip on the side facing away from the other wire. The current distribution is shown by the red arrows and color gradient (3) on the cross section, with blue areas indicating low current and green, yellow, and red indicating higher current.
The same argument shows that the current in the lefthand wire is also concentrated into a strip on the far side away from the other wire.
In an alternating current the currents in the wire are increasing for half the time and decreasing half the time. When the current in the wires begins to decrease, the eddy currents reverse direction, which reverses the current redistribution.
Currents in opposite directions
In the second drawing, the alternating current in the wires is in opposite directions; in the lefthand wire it is into the page and in the righthand wire it is out of the page. This is the case in AC electrical power cables, which have two wires in which the current direction is always opposite. In this case, since the current is opposite, from the right hand rule the magnetic field created by the lefthand wire is directed downward through the righthand wire, instead of upward as in the other drawing. From Faraday's law the circular eddy currents are directed in a counterclockwise direction.
On the lefthand side nearest to the other wire (1) the eddy current is now in the same direction as the main current, so it adds to the main current, increasing it. On the righthand side (2) the eddy current is in the opposite direction to the main current, reducing it. In contrast to the previous case, the net effect is to redistribute the current into a thin strip on the side adjacent to the other wire.
Effects
The additional resistance increases power losses which, in power circuits, can generate undesirable heating. Proximity and skin effect significantly complicate the design of efficient transformers and inductors operating at high frequencies, used for example in switched-mode power supplies.
In radio frequency tuned circuits used in radio equipment, proximity and skin effect losses in the inductor reduce the Q factor, broadening the bandwidth. To minimize this, special construction is used in radio frequency inductors. The winding is usually limited to a single layer, and often the turns are spaced apart to separate the conductors. In multilayer coils, the successive layers are wound in a crisscross pattern to avoid having wires lying parallel to one another; these are sometimes referred to as "basket-weave" or "honeycomb" coils. Since the current flows on the surface of the conductor, high frequency coils are sometimes silver-plated, or made of litz wire.
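As a small illustration of the effect on a tuned circuit, the sketch below evaluates Q = ωL/R for a fixed inductance as the effective series resistance rises; all component values are arbitrary assumptions.

import math

L = 10e-6   # inductance, H (assumed)
f = 7.0e6   # operating frequency, Hz (assumed)
omega = 2 * math.pi * f

for r_ac in (0.5, 1.0, 2.0):            # effective series resistance, ohms
    print(f"R_AC = {r_ac:.1f} ohm -> Q = {omega * L / r_ac:.0f}")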
Dowell method for determination of losses
This one-dimensional method for transformers assumes the wires have rectangular cross-section, but can be applied approximately to circular wire by treating it as square with the same cross-sectional area.
The windings are divided into 'portions', each portion being a group of layers which contains one position of zero MMF. For a transformer with a separate primary and secondary winding, each winding is a portion. For a transformer with interleaved (or sectionalised) windings, the innermost and outermost sections are each one portion, while the other sections are each divided into two portions at the point where zero MMF occurs.
The total resistance of a portion is given by
RDC is the DC resistance of the portion
Re(·) is the real part of the expression in brackets
m is the number of layers in the portion; this should be an integer
ω is the angular frequency of the current
ρ is the resistivity of the conductor material
Nl is the number of turns per layer
a is the width of a square conductor
b is the width of the winding window
h is the height of a square conductor
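Dowell's result is commonly quoted as an AC-to-DC resistance ratio F_R for a portion of m layers, written in terms of the conductor height h measured in porosity-corrected skin depths; the sketch below evaluates that widely quoted form. The exact normalisation here is an assumption rather than a transcription of the expression referred to above, and the component values are arbitrary examples.

import math

def dowell_fr(m, h, nl, a, b, rho, omega, mu0=4e-7 * math.pi):
    """AC/DC resistance ratio of a winding portion with m layers (commonly quoted Dowell form).
    Multiply by R_DC to estimate R_AC."""
    eta = nl * a / b                                  # porosity (layer fill) factor
    delta = math.sqrt(2 * rho / (omega * mu0 * eta))  # skin depth corrected for porosity
    d = h / delta                                     # conductor height in corrected skin depths
    term1 = (math.sinh(2 * d) + math.sin(2 * d)) / (math.cosh(2 * d) - math.cos(2 * d))
    term2 = (math.sinh(d) - math.sin(d)) / (math.cosh(d) + math.cos(d))
    return d * (term1 + (2.0 * (m * m - 1.0) / 3.0) * term2)

# Assumed example: a 4-layer copper winding at 100 kHz
fr = dowell_fr(m=4, h=0.5e-3, nl=20, a=0.5e-3, b=12e-3,
               rho=1.68e-8, omega=2 * math.pi * 100e3)
print(f"R_AC / R_DC = {fr:.1f}")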
Squared-field-derivative method
This can be used for round wire or litz wire transformers or inductors with multiple windings of arbitrary geometry with arbitrary current waveforms in each winding. The diameter of each strand should be less than 2 δ. It also assumes the magnetic field is perpendicular to the axis of the wire, which is the case in most designs.
Find values of the B field due to each winding individually. This can be done using a simple magnetostatic FEA model where each winding is represented as a region of constant current density, ignoring individual turns and litz strands.
Produce a matrix, D, from these fields. D is a function of the geometry and is independent of the current waveforms.
is the field due to a unit current in winding j
is the spatial average over the region of winding j
is the number of turns in winding j, for litz wire this is the product of the number of turns and the strands per turn.
is the average length of a turn
is the wire or strand diameter
is the resistivity of the wire
AC power loss in all windings can be found using D, and expressions for the instantaneous current in each winding:
Total winding power loss is then found by combining this value with the DC loss,
The method can be generalized to multiple windings.
See also
Skin effect
External links
Skin Effect, Proximity Effect, and Litz Wire Electromagnetic effects
Skin and Proximity Effects and HiFi Cables
Reading
Terman, F.E. Radio Engineers' Handbook, McGraw-Hill 1943—details electromagnetic proximity and skin effects
References
Electromagnetism | Proximity effect (electromagnetism) | [
"Physics"
] | 1,703 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
12,146,531 | https://en.wikipedia.org/wiki/Proximity%20effect%20%28superconductivity%29 | Proximity effect or Holm–Meissner effect is a term used in the field of superconductivity to describe phenomena that occur when a superconductor (S) is placed in contact with a "normal" (N) non-superconductor. Typically the critical temperature of the superconductor is suppressed and signs of weak superconductivity are observed in the normal material over mesoscopic distances. The proximity effect has been known since the pioneering work by R. Holm and W. Meissner, who observed zero resistance in SNS pressed contacts, in which two superconducting metals are separated by a thin film of a non-superconducting (i.e. normal) metal. The discovery of the supercurrent in SNS contacts is sometimes mistakenly attributed to Brian Josephson's 1962 work, yet the effect was known long before his publication and was understood as the proximity effect.
Origin of the effect
Electrons in the superconducting state of a superconductor are ordered in a very different way than in a normal metal: they are paired into Cooper pairs. Furthermore, electrons in a material cannot be said to have a definite position because of momentum–position complementarity. In solid state physics one generally chooses a momentum-space basis, and all electron states are filled with electrons up to the Fermi surface in a metal, or up to the gap edge energy in a superconductor.
Because of the nonlocality of the electrons in metals, the properties of those electrons cannot change infinitely quickly. In a superconductor, the electrons are ordered as superconducting Cooper pairs; in a normal metal, the electron order is gapless (single-electron states are filled up to the Fermi surface). If the superconductor and normal metal are brought together, the electron order in one system cannot change infinitely abruptly into the other order at the boundary. Instead, the paired state in the superconducting layer is carried over to the normal metal, where the pairing is destroyed by scattering events, causing the Cooper pairs to lose their coherence. For very clean metals, such as copper, the pairing can persist over distances of hundreds of micrometres.
Conversely, the (gapless) electron order present in the normal metal is also carried over to the superconductor in that the superconducting gap is lowered near the interface.
The microscopic model describing this behavior in terms of single electron processes is called Andreev reflection. It describes how electrons in one material take on the order of the neighboring layer by taking into account interface transparency and the states (in the other material) from which the electrons can scatter.
Overview
As a contact effect, the proximity effect is closely related to thermoelectric phenomena like the Peltier effect or the formation of p–n junctions in semiconductors. The proximity-effect enhancement of superconductivity in the normal material is largest when the normal material is a metal with a large diffusivity rather than an insulator (I). Proximity-effect suppression of the critical temperature in a spin-singlet superconductor is largest when the normal material is ferromagnetic, as the presence of the internal magnetic field weakens superconductivity (Cooper pair breaking).
Research
The study of S/N, S/I and S/S' (S' is lower superconductor) bilayers and multilayers has been a particularly active area of superconducting proximity effect research. The behavior of the compound structure in the direction parallel to the interface differs from that perpendicular to the interface. In type II superconductors exposed to a magnetic field parallel to the interface, vortex defects will preferentially nucleate in the N or I layers and a discontinuity in behavior is observed when an increasing field forces them into the S layers. In type I superconductors, flux will similarly first penetrate N layers. Similar qualitative changes in behavior do not occur when a magnetic field is applied perpendicular to the S/I or S/N interface. In S/N and S/I multilayers at low temperatures, the long penetration depths and coherence lengths of the Cooper pairs will allow the S layers to maintain a mutual, three-dimensional quantum state. As temperature is increased, communication between the S layers is destroyed resulting in a crossover to two-dimensional behavior. The anisotropic behavior of S/N, S/I and S/S' bilayers and multilayers has served as a basis for understanding the far more complex critical field phenomena observed in the highly anisotropic cuprate high-temperature superconductors.
Recently the Holm–Meissner proximity effect was observed in graphene by the Morpurgo research group. The experiments were done on nanometer-scale devices made of single graphene layers with superimposed superconducting electrodes made of 10 nm titanium and 70 nm aluminum films. Aluminum is a superconductor, and it is responsible for inducing superconductivity into the graphene. The distance between the electrodes was in the range between 100 nm and 500 nm. The proximity effect is manifested by the observation of a supercurrent, i.e. a current flowing through the graphene junction with zero voltage across the junction. By using the gate electrodes the researchers showed that the proximity effect occurs both when the carriers in the graphene are electrons and when the carriers are holes. The critical current of the devices was above zero even at the Dirac point.
Abrikosov vortex and proximity effect
It has been shown that a quantum vortex with a well-defined core can exist in a rather thick normal metal proximized with a superconductor.
See also
References
Superconductivity of Metals and Alloys by P.G. de Gennes, , a textbook which devotes significant space to the superconducting proximity effect (called "boundary effect" in the book).
Superconductivity | Proximity effect (superconductivity) | [
"Physics",
"Materials_science",
"Engineering"
] | 1,235 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
12,151,116 | https://en.wikipedia.org/wiki/C-myc%20mRNA | C-myc mRNA is a type of mRNA that serves as a template for the MYC protein which is implicated in the rapid growth of cancer cells. This mRNA is a topic of ongoing research to investigate the viability of preventing cancer growth by cleaving or degrading the c-myc mRNA.
See also
C-myc
References
RNA
Molecular biology | C-myc mRNA | [
"Chemistry",
"Biology"
] | 75 | [
"Biochemistry",
"Molecular biology"
] |
12,154,122 | https://en.wikipedia.org/wiki/GigaCrete | GigaCrete refers to a family of green building products based on proprietary non-silica, non-toxic, non-combustible, cementitious, mineral-based binders combined with filler material. GigaCrete building materials do not contain silica-based sands or Portland cement. GigaCrete products are manufactured by a privately held company, GigaCrete, Inc., whose factory headquarters are located in Las Vegas, Nevada.
History
GigaCrete was invented in the early 2000s by British-born architect and industrial designer Andrew C. Dennis.
Green Building Products
PlasterMax
GigaCrete PlasterMax, a LEED-qualified interior wall coating, is listed as a green building material and interior finish by Architectural Record.
GigaCrete PlasterMax is a fire-resistance rated interior wall coating for insulating concrete forms (ICF), thereby providing a fire-rated green alternative to gypsum drywall over ICF. When applied over an expanded polystyrene (EPS) foam facade of stacked ICF blocks, PlasterMax bonds to the foam and forms a surface that resists abrasion and abuse. PlasterMax can be applied over drywall.
StuccoMax
GigaCrete StuccoMax is an exterior water-resistant green stucco product, an inorganic mixture of mineral binders and limestone sand. Like PlasterMax, StuccoMax bonds with expanded polystyrene (EPS) and forms a surface resistant to abrasion and abuse.
BallistiCrete
GigaCrete BallistiCrete is a green protective interior plaster rated NIJ Level III and NIJ Level IV (armor piercing) in tests conducted by Intertek's H.P. White Laboratory, an accredited ballistics and ballistics-resistance laboratory.
Controversy in Rio de Janeiro
In April 2017, Marcelo Crivella, mayor of Rio de Janeiro, announced a plan to fortify 400 schools with BallistiCrete. The schools are located in areas of the city allegedly dominated by drug traffickers. Critics of Crivella's plan argued that school buildings, made resistant to incoming gunfire by the application of BallistiCrete, could be seized by bandits and used as armored fortresses in wars between gangs or clashes with police.
GigaHouse
GigaHouse refers to GigaCrete's steel-framed, insulated-panel building system designed to be finished using GigaCrete exterior and interior plasters. External claddings can be added to a GigaHouse.
On October 15, 2020, Bloomberg News reported: "A Nevada company called GigaCrete manufactures panels made with expanded polystyrene insulation foam that slot into steel frames to form walls. Once assembled at a building site, the exterior and interior surfaces are coated with a proprietary non-combustible material that resists temperatures up to 1,700°F (927°C). GigaCrete structures have also been rated to withstand wind speeds of 245 miles per hour (394 kilometers per hour)."
Miami-Dade County Notice of Acceptance (NOA)
On May 30, 2019, the Miami-Dade County Product Control Section issued a Notice of Acceptance (NOA #19-0326.04) in respect to the GigaCrete Exterior Wall Panel System and Large and Small Missile Impact Resistance, thereby designating said system as complying with the High Velocity Hurricane Zone (HVHZ) of the Florida Building Code.
On December 10, 2020, the Miami-Dade County Product Control Section issued a superseding Notice of Acceptance (NOA #20-0922.04) in respect to the GigaCrete Exterior Wall Panel System and Large and Small Missile Impact Resistance, thereby designating said system as complying with the High Velocity Hurricane Zone (HVHZ) of the Florida Building Code.
FAA
In February 2015, the U.S. Federal Aviation Administration (FAA) issued a solicitation for "Design & Construction of 5 Duplexes at Kaibab National Forest, Tusayan Ranger Station, Tusayan, Arizona." In section 02, Scope of Work, the solicitation states, "The housing design/construction shall be GigaHouse by Giga Crete or equal," and further states, "Each stem wall must use the GigaCrete mortar-less joint CMU system; or equal."
On August 4, 2015, pursuant to said solicitation, the FAA awarded the contract to Koo Design-Build, Inc. of Scottsdale, Arizona in the amount of US$1,085,100.00.
East Bay Revitalization
East Bay Revitalization, Inc., a California nonprofit organization, sponsored the construction of an energy-efficient Accessory Dwelling Unit (ADU), a.k.a. tiny house, in Richmond, California. The unit was completed in September 2016 and built using GigaCrete's GigaHouse system and materials technology. In February 2017, sponsor EBR announced commencement of construction of a GigaHouse adjacent to the Richmond ADU.
Habitat for Humanity of Sonoma County (Sonoma Wildfire Cottage Initiative)
Habitat for Humanity of Sonoma County, California, is one of 1,400 affiliates of Habitat for Humanity International.
On June 6, 2018, Habitat for Humanity of Sonoma County announced a Sonoma Wildfire Cottage Initiative, a pilot project of temporary cottages showcasing a variety of innovative construction technologies for the purpose of evaluation. Three firms were selected to participate in the test, viz., Connect Homes, West Coast SIPs, and Giga Crete/Presidio Realty Advisors.
As reported on June 13, 2018, in Builder magazine, the then-interim CEO of Habitat for Humanity of Sonoma County, Mr. John Kennedy, chairman of the board, stated, “The devastating October (2017) wildfires destroyed over 5,200 homes in Sonoma County, which made our housing crisis dramatically worse. This pilot project helps us quickly evaluate a variety of technologies while simultaneously helping families in dire need of stable temporary housing."
Construction for the pilot program commenced October 12, 2018, on the Medtronic Fountaingrove campus in Santa Rosa, California.
On August 16, 2019, the North Bay Business Journal reported, "The first five 'wildfire cottages' built for survivors of the October 2017 blazes that destroyed thousands of Santa Rosa homes were dedicated Friday on land donated by Medtronic during a ceremony honoring the many partners involved."
Awards
Popular Mechanics
In 2007, Popular Mechanics magazine awarded a Best in Green Design to panels made with GigaCrete hydraulic cement and waste materials.
U.S. Department of Energy Builders Challenge
In 2009, Next Gen 09 LLC, in partnership with the U.S. Department of Energy Builders Challenge program, built a high-performance ICF demonstration home outside Las Vegas, Nevada. A score of 70 or less on the Energy Smart Home Scale qualifies for the Builders Challenge. The Pittsburgh Post-Gazette, citing GigaCrete PlasterMax as the interior wall finish of the NextGen home, wrote: "[PlasterMax is] a mineral-based hydraulic cement made with recycled waste materials such as fly ash. Sprayed over ordinary drywall and then troweled smooth, it's lighter than conventional concrete and also won't shrink or crack; it's also bullet and blast resistant."
ICF Builder Magazine
The 2016 ICF Builder Award Winner for Best in Class Small Residential is an ICF home erected in Corte Madera, California. GigaCrete interior and exterior plasters were applied directly over ICF wall surfaces.
References
External links
GigaCrete, Inc. company website
Building materials
Companies based in Las Vegas | GigaCrete | [
"Physics",
"Engineering"
] | 1,566 | [
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
8,998,654 | https://en.wikipedia.org/wiki/Killer-cell%20immunoglobulin-like%20receptor | Killer-cell immunoglobulin-like receptors (KIRs) are a family of type I transmembrane glycoproteins expressed on the plasma membrane of natural killer (NK) cells and a minority of T cells. In humans, they are encoded in the leukocyte receptor complex (LRC) on chromosome 19q13.4; the KIR region is approximately 150 kilobases and contains 14 loci, including 7 protein-coding genes (some duplicated) and 2 pseudogenes.
They regulate the killing function of these cells by interacting with major histocompatibility (MHC) class I molecules, which are expressed on all nucleated cell types. KIR receptors can distinguish between MHC I allelic variants, which allows them to detect virally infected cells or transformed cells. KIRs are paired receptors, meaning some have activating and others have inhibitory functions; most KIRs are inhibitory: their recognition of MHC molecules suppresses the cytotoxic activity of their NK cell.
A limited number of KIRs are activating: their recognition of MHC molecules activates the cytotoxic activity of their cell. Initial expression of KIRs on NK cells is stochastic, but NK cells undergo an educational process as they mature that alters the KIR expression to maximize the balance between effective defense and self-tolerance. KIR's role in killing unhealthy self-cells and not killing healthy self-cells, involves them in protection against and propensity to viral infection, autoimmune disease, and cancer. KIR molecules are polymorphic: their gene sequences differ greatly across individuals. They are also polygenic so that it is rare for two unrelated individuals to possess the same KIR genotype.
Unlike T lymphocytes, resting NK cells use preformed lytic granules to kill target cells, implying a rapid cytolytic effect that requires a finely regulated control mechanism. The ability to spare normal tissues, but not transformed cells, is termed the "missing self" hypothesis. This phenomenon is determined by MHC class I–specific inhibitory receptors that functionally dominate the triggering potentials induced by activating receptors. Thus, NK cells use a complex array of inhibitory or activating receptor/ligand interactions, the balance of which regulates NK cell function and cytolytic activity. Receptors displaying this function evolved during phylogenesis following the rapid evolution of genes coding for MHC class I molecules. Thus, in primates and a few other species, evolved MHC class I–inhibitory receptors belong to the KIR immunoglobulin superfamily, while in rodents and other species the same function is under the control of type II integral transmembrane glycoproteins, structurally characterized as disulfide-linked homodimers belonging to the Ly49 protein family.
Function
Role in natural killer cells
Natural killer (NK) cells are a type of lymphocyte cell involved in the innate immune system's response to viral infection and tumor transformation of host cells. Like T cells, NK cells have many qualities characteristic of the adaptive immune system, including the production of “memory” cells that persist following encounter with antigens and the ability to create a secondary recall response. Unlike T cells, NK cell receptors are germline encoded, and therefore do not require somatic gene rearrangements. Because NK cells target self cells, they have an intricate mechanism by which they differentiate self and non-self cells in order to minimize the destruction of healthy cells and maximize the destruction of unhealthy cells.
Natural killer cell cytolysis of target cells and cytokine production is controlled by a balance of inhibitory and activating signals, which are facilitated by NK cell receptors. NK cell inhibitory receptors are part of either the immunoglobulin-like (IgSF) superfamily or the C-type lectin-like receptor (CTLR) superfamily. Members of the IgSF family include the human killer cell immunoglobulin-like receptor (KIR) and the Immunoglobulin-like transcripts (ILT). CTLR inhibitory receptors include the CD94/NKG2A and the murine Ly49, which is probably analogous to the human KIR.
Role in T cells
KIR and CD94 (CTLR) receptors are expressed by 5% of peripheral blood T cells.
Nomenclature and classification
KIR receptors are named based on the number of their extracellular Ig-like domains (2D or 3D) and on the length of their cytoplasmic tail (long (L), short (S), or pseudogene (P)). The number following the L, S, or P (in the case of a pseudogene) differentiates KIR receptors with the same number of extracellular domains and the same tail length. Finally, an asterisk followed by a number indicates the allelic variant.
Single substitutions, insertions, or deletions in the genetic material that encodes KIR receptors changes the site of termination for the gene, causing the cytoplasmic tail to be long or short, depending on the site of the stop codon. These single nucleotide alterations in the nucleotide sequence fundamentally alter KIR function. With the exception of KIR2DL4, which has both activating and inhibitory capabilities, KIR receptors with long cytoplasmic tails are inhibitory and those with short tails are activating.
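As an illustration of this naming scheme, the minimal sketch below decomposes names such as KIR2DL4 or KIR3DS1*013 programmatically; the regular expression and helper names are assumptions made for illustration, not part of the official nomenclature rules.

import re

# Names encode: number of Ig-like domains (2D/3D), tail type (L/S/P), a serial number
# (optionally with A/B as in KIR2DL5A), and an optional allele after an asterisk.
KIR_PATTERN = re.compile(r"^KIR(?P<domains>[23])D(?P<tail>[LSP])(?P<number>\d+[AB]?)(?:\*(?P<allele>\w+))?$")

def parse_kir(name):
    m = KIR_PATTERN.match(name)
    if not m:
        raise ValueError(f"not a recognised KIR name: {name}")
    tail = m.group("tail")
    return {
        "ig_domains": int(m.group("domains")),
        "tail": {"L": "long", "S": "short", "P": "pseudogene"}[tail],
        # Long tails are generally inhibitory and short tails activating,
        # with KIR2DL4 having both roles (see text above).
        "function": {"L": "inhibitory", "S": "activating", "P": "pseudogene"}[tail],
        "number": m.group("number"),
        "allele": m.group("allele"),
    }

print(parse_kir("KIR2DL4"))
print(parse_kir("KIR3DS1*013"))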
Receptor types
Inhibitory receptors
Inhibitory receptors recognize self-MHC class I molecules on target self cells, causing the activation of signaling pathways that stop the cytolytic function of NK cells. Self-MHC class I molecules are always expressed under normal circumstances. According to the missing-self hypothesis, inhibitory KIR receptors recognize the downregulation of MHC class I molecules in virally infected or transformed self cells, causing these receptors to stop sending the inhibitory signal, which then leads to the lysis of these unhealthy cells. Because natural killer cells target virally infected host cells and tumor cells, inhibitory KIR receptors are important in facilitating self-tolerance.
KIR inhibitory receptors signal through their immunoreceptor tyrosine-based inhibitory motif (ITIM) in their cytoplasmic domain. When inhibitory KIR receptors bind to a ligand, their ITIMs are tyrosine phosphorylated and protein tyrosine phosphatases, including SHP-1, are recruited. Inhibition occurs early in the activation signaling pathway, likely through the interference of the pathway by these phosphatases.
Activating receptors
Activating receptors recognize ligands that indicate host cell aberration, including induced-self antigens (which are markers of infected self cells and include MICA, MICB, and ULBP, all of which are related to MHC class 1 molecules), altered-self antigens (MHC class I antigens laden with foreign peptide), and/or non-self (pathogen encoded molecules). The binding of activating KIR receptors to these molecules causes the activation of signaling pathways that cause NK cells to lyse virally infected or transformed cells.
Activating receptors do not have the immunoreceptor tyrosine-based inhibitory motif (ITIM) characteristic of inhibitory receptors, and instead contain a positively charged lysine or arginine residue in their transmembrane domain (with the exception of KIR2DL4) that helps to bind DAP12, an adaptor molecule containing a negatively charged residue as well as immunoreceptor tyrosine-based activation motifs (ITAM). Activating KIR receptors include KIR2DS and KIR3DS.
Much less is known about activating receptors than about inhibitory receptors. A significant proportion of the human population lacks activating KIR receptors on the surface of their NK cells, as a result of truncated variants of KIR2DS4 and 2DL4 that are not expressed on the cell surface in individuals who are heterozygous for the KIR group A haplotype. This suggests that a lack of activating KIR receptors is not severely detrimental, likely because other families of activating NK cell surface receptors that bind MHC class I molecules are probably expressed in individuals with this phenotype. Because little is known about the function of activating KIR receptors, however, it is possible that they have an important function of which we are not yet aware.
Activating receptors have lower affinity for their ligands than do inhibitory receptors. Although the purpose of this difference in affinity is unknown, it is possible that the cytolysis of target cells occurs preferentially under conditions in which the expression of stimulating MHC class I molecules on target cells is high, which may occur during viral infection. This difference, which is also present in Ly49, the murine homolog to KIR, tips the balance towards self-tolerance.
Expression
Activating and inhibitory KIR receptors are expressed on NK cells in patchy, variegated combinations, leading to distinct NK cells. The IgSF and CTLR superfamily inhibitory receptors expressed on the surface of NK cells are each expressed on a subset of NK cells in such a way that not all classes of inhibitory NK cell receptors are expressed on each NK cell, but there is some overlap. This creates unique repertories of NK cells, increasing the specificity with which NK cells recognize virally-infected and transformed self-cells. Expression of KIR receptors is determined primarily by genetic factors, but recent studies have found that epigenetic mechanisms also play a role in KIR receptor expression. Activating and inhibitory KIR receptors that recognize the same class I MHC molecule are mostly not expressed by the same NK cell. This pattern of expression is beneficial in that target cells that lack inhibitory MHC molecules but express activating MHC molecules are extremely sensitive to cytolysis.
Although initial expression of inhibitory and activating receptors on NK cells appears to be stochastic, there is an education process based on MHC class I alleles expressed by the host that determines the final repertoire of NK receptor expression. This process of education is not well understood. Different receptor genes are expressed primarily independently of other receptor genes, which substantiates the idea that initial expression of receptors is stochastic. Receptors are not expressed entirely independently of each other, however, which supports the idea that there is an education process that reduces the amount of randomness associated with receptor expression. Further, once an NK receptor gene is activated in a cell, its expression is maintained for many cell generations. It appears that some proportion of NK cells are developmentally immature and therefore lack inhibitory receptors, making them hyporesponsive to target cells. In the human fetal liver, KIR and CD94 receptors are already expressed by NK cells, indicating that at least some KIR receptors are present in fetal NK cells, although more studies are needed to substantiate this idea. Although the induction of NK receptor expression is not fully understood, one study found that human progenitor cells cultured in vitro with cytokines developed into NK cells, and many of these cells expressed CD94/NKG2A receptors, a CTLR receptor. Moreover, there was little to no KIR receptor expression in these cells, so additional signals are clearly required for KIR induction.
The balance between effective defense and self-tolerance is important to the functioning of NK cells. It is thought that NK cell self-tolerance is regulated by the educational process of receptor expression described above, although the exact mechanism is not known. The “at least one” hypothesis is an attractive, though not yet fully substantiated, hypothesis that tries to explain the way in which self-tolerance is regulated in the education process. This hypothesis posits that the NK cell repertoire is regulated so that at least one inhibitory receptor (either of the IgSF or CTLR superfamily) is present on every NK cell, which would ensure self-tolerance. Effective defense requires an opposing pattern of receptor expression. The co-expression of many MHC-specific receptors by NK cells is disfavored, likely because cells that co-express receptors are less able to attack virally infected or transformed cells that have down-regulated or lost one MHC molecule compared to NK cells that co-express receptors to a lesser degree. Minimization of co-expression, therefore, is important for mounting an effective defense by maximizing the sensitivity of response.
Structure
Gene structure
The KIR gene cluster spans approximately 150 kb and is located in the leukocyte receptor complex (LRC) on human chromosome 19q13.4. KIR genes have 9 exons, which correspond closely to the KIR receptor protein domains (leader, D0, D1, D2, stem, transmembrane, and cytosolic domains). Furthermore, the promoter regions of the KIR genes share greater than 90% sequence identity, which indicates that there is similar transcriptional regulation of KIR genes.
The human killer cell immunoglobulin-like receptors superfamily (which share 35–50% sequence identity and the same fold as KIR) includes immunoglobulin-like transcripts (ILT, also known as leukocyte immunoglobulin-like receptors (LIRs)), leukocyte-associated Ig-like receptors (LAIR), paired Ig-like receptors (PIR), and gp49. Moreover, it has been reported that between 12 and 17 KIR receptors have been identified. There was a single ancestral gene from which all extant KIR receptor genes arose via duplications, recombinations, and mutations, and all KIR receptors share more than 90% sequence identity.
Genes
two domains, long cytoplasmic tail: KIR2DL1, KIR2DL2, KIR2DL3, KIR2DL4, KIR2DL5A, KIR2DL5B
two domains, short cytoplasmic tail: KIR2DS1, KIR2DS2, KIR2DS3, KIR2DS4, KIR2DS5
three domains, long cytoplasmic tail: KIR3DL1, KIR3DL2, KIR3DL3
three domains, short cytoplasmic tail: KIR3DS1
Protein structure
NK cell receptors bind directly to the MHC class I molecules on the surface of target cells. Human killer cell immunoglobulin-like receptors recognize the α1 and α2 domains of class I human leukocyte antigens (HLA-A, -B, and –C), which are the human versions of MHCs. Position 44 in the D1 domain of KIR receptors and position 80 in HLA-C are important for the specificity of KIR-HLA binding.
Diversity
Allelic diversity
All but two KIR genes (KIR2DP1 and KIR3DL3) have multiple alleles, with KIR3DL2 and KIR3DL1 having the most variations (12 and 11, respectively). In total, as of 2012 there were 614 known KIR nucleotide sequences encoding 321 distinct KIR proteins. Further, inhibitory receptors are more polymorphic than activating receptors. The great majority (69%) of substitutions in the KIR DNA sequence are nonsynonymous, and 31% are synonymous. The ratio of nonsynonymous to synonymous substitutions (dN/dS) is greater than one for every KIR and every KIR domain, indicating that positive selection is occurring. Further, the 5′ exons, which encode the leader peptide and the Ig-like domains, have a larger proportion of nonsynonymous substitutions than do the 3′ exons, which encode the stem, transmembrane region, and the cytoplasmic tail. This indicates that stronger selection is occurring on the 5′ exons, which encodes the extracellular part of the KIR that binds to the MHC. There is, therefore, evidence of strong selection on the KIR ligand binding sites, which is consistent with the high specificity of the KIR ligand binding site, as well as the rapid evolution of class I MHC molecules and viruses.
Genotype and haplotype diversity
Human genomes differ in their number of KIR genes, in their proportion of inhibitory versus activating genes, and in their allelic variations of each gene. As a result of these polygenic and polymorphic variations, less than 2% of unrelated individuals have the same KIR genotype, and ethnic populations have broadly different KIR genotype frequencies. This remarkable diversity likely reflects the pressure from rapidly evolving viruses. 30 distinct haplotypes have been classified, all of which can be broadly characterized as group A or group B haplotypes. The group A haplotype has a fixed set of genes, namely KIR3DL3, 2DL3, 2DP1, 2DL1, 3DP1, 2DL4, 3DL1, 2DS4, and 3DL2. Group B haplotypes encompass all other haplotypes, and therefore have a variable set of genes, including several genes absent from group A, such as KIR2DS1, 2DS2, 2DS3, 2DS5, 2DL2, 2DL5, and 3DS1. Because group B has both gene and allelic diversity (compared to just allelic diversity in group A), group B is even more diverse than group A. Four KIR genes (2DL4, 3DL2, 3DL3, and 3DP1) are present in nearly all KIR haplotypes and as a result are known as framework genes. Inheritance of maternal and paternal haplotypes results in further diversity of individual KIR genotypes.
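As a toy illustration of this group A / group B distinction (not a clinical KIR-typing method; the gene names follow the fixed group A content given above), one can check whether a detected gene set is consistent with group A homozygosity:

# Fixed gene content of the group A haplotype, per the text above.
GROUP_A_GENES = {
    "KIR3DL3", "KIR2DL3", "KIR2DP1", "KIR2DL1", "KIR3DP1",
    "KIR2DL4", "KIR3DL1", "KIR2DS4", "KIR3DL2",
}

def consistent_with_group_a_homozygote(genotype):
    # True only if the detected gene content matches the fixed group A set exactly;
    # any additional gene implies at least one group B haplotype.
    return set(genotype) == GROUP_A_GENES

example = ["KIR3DL3", "KIR2DL3", "KIR2DP1", "KIR2DL1", "KIR3DP1",
           "KIR2DL4", "KIR3DL1", "KIR2DS4", "KIR3DL2", "KIR2DS2"]  # extra KIR2DS2
print(consistent_with_group_a_homozygote(example))  # False: indicates a group B haplotype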
Group A has only one activating KIR receptor, whereas group B contains many activating KIR receptors; as a result, group B haplotype carriers mount a stronger response to virally infected and transformed cells. During the long migrations out of Africa made by the ancestors of peoples indigenous to India, Australia, and the Americas, activating KIR receptors became advantageous, and these populations accordingly acquired more of them.
A study of the genotypes of 989 individuals representing eight distinct populations found 111 distinct KIR genotypes. Individuals with the most frequent genotype, which comprised 27% of the individuals studied, are homozygous for the group A haplotype. The remaining 110 KIR genotypes found in this study are either group A and group B heterozygotes or group B homozygotes (who are indistinguishable from heterozygotes by genotype alone). 41% (46) of the genotypes identified were found in only one individual, and 90% of individuals had the same 40 genotypes. Clearly, there is extensive diversity in human KIR genotypes, which allows for rapid evolution in response to rapidly evolving viruses.
Role in disease
Genotypes dominated by inhibitory KIR receptors are likely susceptible to infection and reproductive disorders but protective against autoimmune diseases, whereas genotypes dominated by activating KIR receptors are likely susceptible to autoimmunity but protective against viral infection and cancer. The relationship between inhibitory and stimulatory KIR genotype dominance, however, is more complicated than this, because diseases are so diverse and have so many different causes, and immune activation or de-activation may not be protective or harmful at every stage of disease. KIR2DS2 or 2DS1, which are activating receptors, are strongly correlated with most autoimmune diseases, which is logical because activating receptors induce signaling pathways that lead to cytolysis of target cells. Another activating receptor, KIR3DS1, is protective against hepatitis C virus infection, is associated with slower AIDS progression, and is associated with cervical cancer, which is linked to particular strains of HPV. It is likely that KIR3DS1 is associated with cervical cancer despite its stimulatory nature because cervical tumors generally associate with localized inflammation.
As a drug target
1-7F9 is a human monoclonal antibody that binds to KIR2DL1/2DL3. The very similar antibody lirilumab is intended for the treatment of cancers, e.g. leukemia.
Use of KIRs in CAR T Cell Therapy
The Killer-cell immunoglobulin-like receptors (KIR) are being explored as an alternative activation method in CAR T cell therapy. Unlike the traditional approach that utilizes T cell receptors, incorporating KIRs into CAR T cells aims to exploit the cytotoxic properties and regulatory functions of natural killer (NK) cells. This method is under investigation for its potential to enhance the targeting and destruction of cancer cells, with the goal of addressing limitations seen in current CAR T cell therapies, such as off-target effects and resistance. Research is ongoing to determine the effectiveness and safety of using KIR-based activation in CAR T cell treatments.
See also
NK-92, a natural killer cell line that does not express KIR
References
External links
http://www.KIRtyping.info
The KIR Gene Cluster, NIH Bookshelf, PDF
Immunoglobulin superfamily
Receptors | Killer-cell immunoglobulin-like receptor | [
"Chemistry"
] | 4,503 | [
"Receptors",
"Signal transduction"
] |
8,998,943 | https://en.wikipedia.org/wiki/845%20%28vacuum%20tube%29 | The 845 power triode is a radio transmitting vacuum tube which can also be used as an audio amplifier and modulation tube. Typically, the plate is machined from solid graphite in order to accommodate high power dissipation (up to 100 watts) and voltage. Some current production 845 tubes have metal plates.
The 845 tube has a bayonet mount and thoriated filaments which glow like lightbulbs when powered up. The glass envelope is about 2-5/16" in diameter and 6 inches tall, with a total tube height of about 7-7/8 inches. It was first released by RCA in 1931. It saw extensive use in RCA AM radio transmitters.
References
External links
845 @ The National Valve Museum
KRLA Broadcast History
Vacuum tubes | 845 (vacuum tube) | [
"Physics"
] | 162 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
9,000,484 | https://en.wikipedia.org/wiki/Quantum%20amplifier | In physics, a quantum amplifier is an amplifier that uses quantum mechanical methods to amplify a signal; examples include the active elements of lasers and optical amplifiers.
The main properties of the quantum amplifier are its amplification coefficient and uncertainty. These parameters are not independent; the higher the amplification coefficient, the higher the uncertainty (noise). In the case of lasers, the uncertainty corresponds to the amplified spontaneous emission of the active medium. The unavoidable noise of quantum amplifiers is one of the reasons for the use of digital signals in optical communications and can be deduced from the fundamentals of quantum mechanics.
Introduction
An amplifier increases the amplitude of whatever goes through it. While classical amplifiers take in classical signals, quantum amplifiers take in quantum signals, such as coherent states. This does not necessarily mean that the output is a coherent state; indeed, typically it is not. The form of the output depends on the specific amplifier design. Besides amplifying the intensity of the input, quantum amplifiers can also increase the quantum noise present in the signal.
Exposition
The physical electric field in a paraxial single-mode pulse can be approximated as a superposition of modes. The electric field of a single mode can be expressed in terms of the spatial coordinate vector (with z giving the direction of propagation), the polarization vector of the pulse, the wave number in the z direction, and the annihilation operator of the photon in the mode.
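A standard single-mode expression consistent with these definitions, written only as an illustrative sketch (the overall normalization is omitted and \omega denotes the mode frequency, both assumptions here), is

E(\vec{x}, t) \propto \vec{e}\left( \hat{a}\, e^{\,i(kz - \omega t)} + \hat{a}^{\dagger}\, e^{-i(kz - \omega t)} \right),

where \vec{e} is the polarization vector, k the wave number along z, and \hat{a} the annihilation operator of the mode.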
The analysis of the noise in the system is made with respect to the mean value of the annihilation operator. To obtain the noise, one solves for the real and imaginary parts of the projection of the field onto a given mode. Spatial coordinates do not appear in the solution.
Assume that the mean value of the initial field is given. Physically, the initial state corresponds to the coherent pulse at the input of the optical amplifier; the final state corresponds to the output pulse. The amplitude-phase behavior of the pulse must be known, although only the quantum state of the corresponding mode is important. The pulse may be treated in terms of a single-mode field.
A quantum amplifier is a unitary transform acting on the initial state and producing the amplified state; this is the description of the quantum amplifier in the Schrödinger representation.
The amplification depends on the mean value of the field operator and on its dispersion. A coherent state is a state with minimal uncertainty; when the state is transformed, the uncertainty may increase, and this increase can be interpreted as noise in the amplifier. The gain is defined through the change of the mean value of the field operator.
The transformation can also be written in the Heisenberg representation, in which the changes are attributed to the amplification of the field operator: the operator evolves under the unitary transform while the state vector remains unchanged, and the gain is expressed through the transformed operator.
In general, the gain may be complex, and it may depend on the initial state. For laser applications, the amplification of coherent states is important. Therefore, it is usually assumed that the initial state is a coherent state characterized by a complex-valued initial parameter, the eigenvalue of the annihilation operator. Even with such a restriction, the gain may depend on the amplitude or phase of the initial field.
In the following, the Heisenberg representation is used; all brackets are assumed to be evaluated with respect to the initial coherent state.
The noise of the amplifier is characterized by the increase of the uncertainty of the field due to amplification. Since the uncertainty of the field operator in a coherent state does not depend on its parameter, this quantity shows how much the output field differs from a coherent state.
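Stated compactly, as a sketch under the assumption that the gain is referred to the mean field amplitude and the noise to the growth of its dispersion (\hat{U} denotes the unitary transform, \hat{A} the field operator, and |\mathrm{init}\rangle, |\mathrm{fin}\rangle the initial and final states):

|\mathrm{fin}\rangle = \hat{U}\,|\mathrm{init}\rangle, \qquad G = \frac{\langle \mathrm{fin}|\hat{A}|\mathrm{fin}\rangle}{\langle \mathrm{init}|\hat{A}|\mathrm{init}\rangle}, \qquad \text{noise} \sim \langle \Delta\hat{A}^{\dagger}\Delta\hat{A}\rangle_{\mathrm{fin}} - \langle \Delta\hat{A}^{\dagger}\Delta\hat{A}\rangle_{\mathrm{init}}, \quad \Delta\hat{A} = \hat{A} - \langle\hat{A}\rangle .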
Linear phase-invariant amplifiers
Linear phase-invariant amplifiers may be described as follows. Assume that the unitary operator amplifies in such a way that the input and the output are related by a linear equation involving two c-number coefficients and a creation operator characterizing the amplifier; without loss of generality, the two coefficients may be assumed to be real. The commutator of the field operators is invariant under the unitary transformation, and from the unitarity it follows that the operator characterizing the amplifier satisfies the canonical commutation relations for operators with Bose statistics. This fixes the relation between the two c-numbers.
Hence, the phase-invariant amplifier acts by introducing an additional mode to the field, with a large amount of stored energy, behaving as a boson. Calculating the gain and the noise of this amplifier, one finds
and
The coefficient is sometimes called the intensity amplification coefficient. The noise of the linear phase-invariant amplifier is given by . The gain can be dropped by splitting the beam; the estimate above gives the minimal possible noise of the linear phase-invariant amplifier.
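For reference, the standard textbook relations for such a phase-insensitive linear amplifier, given here as an illustrative sketch (\hat{A} is the input operator, \hat{B} an auxiliary bosonic mode, and c, s real coefficients), are

\hat{A}_{\mathrm{out}} = c\,\hat{A} + s\,\hat{B}^{\dagger}, \qquad [\hat{A}_{\mathrm{out}}, \hat{A}_{\mathrm{out}}^{\dagger}] = [\hat{A}, \hat{A}^{\dagger}] = 1 \;\Rightarrow\; c^{2} - s^{2} = 1 ,

so the intensity gain is G = c^{2} and the added noise scales as s^{2} = c^{2} - 1 = G - 1; for large gain this corresponds to at least half a quantum of added noise referred to the input.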
The linear amplifier has an advantage over the multi-mode amplifier: if several modes of a linear amplifier are amplified by the same factor, the noise in each mode is determined independently; that is, modes in a linear quantum amplifier are independent.
To obtain a large amplification coefficient with minimal noise, one may use homodyne detection, constructing a field state with known amplitude and phase, corresponding to the linear phase-invariant amplifier. The uncertainty principle sets the lower bound of quantum noise in an amplifier. In particular, the output of a laser system and the output of an optical generator are not coherent states.
Nonlinear amplifiers
Nonlinear amplifiers do not have a linear relation between their input and output. The minimum noise of a nonlinear amplifier cannot be much smaller than that of an idealized linear amplifier. This limit is determined by the derivatives of the mapping function; a larger derivative implies an amplifier with greater uncertainty. Examples include most lasers, which are nearly linear amplifiers operating close to their threshold and thus exhibit large uncertainty and nonlinear operation. As with linear amplifiers, they may preserve the phase and keep the uncertainty low, but there are exceptions; these include parametric oscillators, which amplify while shifting the phase of the input.
References
Further reading
Quantum optics
Amplifiers | Quantum amplifier | [
"Physics",
"Technology"
] | 1,207 | [
"Amplifiers",
"Quantum optics",
"Quantum mechanics"
] |
9,003,300 | https://en.wikipedia.org/wiki/Illumina%2C%20Inc. | Illumina, Inc. is an American biotechnology company, headquartered in San Diego, California. Incorporated on April 1, 1998, Illumina develops, manufactures, and markets integrated systems for the analysis of genetic variation and biological function. The company provides a line of products and services that serves the sequencing, genotyping and gene expression, and proteomics markets, and serves more than 155 countries.
Illumina's customers include genomic research centers, pharmaceutical companies, academic institutions, clinical research organizations, and biotechnology companies.
History
Illumina was founded in April 1998 by David Walt, Larry Bock, John Stuelpnagel, Anthony Czarnik, and Mark Chee. While working with CW Group, a venture-capital firm, Bock and Stuelpnagel uncovered what would become Illumina's BeadArray technology at Tufts University and negotiated an exclusive license to that technology. In 1999, Illumina acquired Spyder Instruments (founded by Michal Lebl, Richard Houghten, and Jutta Eichler) for their technology of high-throughput synthesis of oligonucleotides. Illumina completed its initial public offering in July 2000.
Illumina began offering single nucleotide polymorphism (SNP) genotyping services in 2001 and launched its first system, the Illumina BeadLab, in 2002, using GoldenGate Genotyping technology. Illumina currently offers microarray-based products and services for an expanding range of genetic analysis sequencing, including SNP genotyping, gene expression, and protein analysis. Illumina's technologies are used by a broad range of academic, government, pharmaceutical, biotechnology, and other leading institutions around the globe.
On January 26, 2007, the company completed the acquisition of the British company Solexa, Inc. for ~$650M. Solexa was founded in June 1998 by Shankar Balasubramanian and David Klenerman to develop and commercialize genome-sequencing technology invented by the founders at the University of Cambridge. Solexa, Inc. was formed in 2005 when Solexa Ltd completed a reverse merger with Lynx Therapeutics of Hayward, California.
Illumina also uses DNA colony sequencing technology, invented in 1997 by Pascal Mayer and Laurent Farinelli and acquired by Solexa in 2004 from Manteia Predictive Medicine. It is used to perform a range of analyses, including whole genome resequencing, gene-expression analysis, and small ribonucleic acid (sRNA) analysis.
In June 2009, Illumina announced the launch of their own Personal Full Genome Sequencing Service at a depth of 30X.
Until 2010, Illumina sold only instruments that were labeled "for research use only"; in early 2010, Illumina obtained FDA approval for its BeadXpress system to be used in clinical tests. This was part of the company's strategy at the time to open its own CLIA lab and begin offering clinical genetic testing itself.
Illumina acquired Epicentre Biotechnologies, based in Madison, Wisconsin, on January 11, 2011. On January 25, 2012, Hoffmann-La Roche made an unsolicited bid to buy Illumina for $44.50 per share or about $5.7 billion. Roche tried other tactics, including raising its offer (to $51.00, for about $6.8 billion). Illumina rejected the offer, and Roche abandoned the offer in April.
In 2014, the company announced a multimillion-dollar product, HiSeq X Ten. In January 2014, Illumina already held 70% of the market for genome-sequencing machines. Illumina machines accounted for more than 90% of all DNA data produced. In 2020, the company invested in the acquisition of the pre-commercial firm Enancio, which had developed a DNA data compression algorithm specifically targeting Illumina data capable of reducing storage footprint by 80% (e.g. 50 GB compressed to 10 GB).
On July 5, 2016, Jay Flatley, who had been CEO since 1999, assumed the role of executive chairman of the board of directors. Francis deSouza, who had been president of the company since 2013, took on the additional role of CEO.
In late 2015, Illumina spun off the company Grail, focused on blood testing for cancer tumors in the bloodstream. In 2017, Grail planned to raise $1 billion in its second round of financing; its $100 million series A funding included investments from Bill Gates and Jeff Bezos, with Illumina maintaining a 20% stake in Grail. Grail ran a blood-test trial with over 120,000 women during scheduled mammogram visits in the states of Minnesota and Wisconsin, as well as a partnership with the Mayo Clinic. Grail uses Illumina sequencing technology for its tests and planned to roll them out by 2019. In September 2020, Illumina announced a proposed cash and stock deal to acquire Grail for $8 billion.
In November 2018, Illumina proposed the acquisition of Pacific Biosciences for $8.00 per share or around $1.2 billion in total. In December 2019, the Federal Trade Commission (FTC) sued to block the acquisition. The proposed deal was abandoned on January 2, 2020, with Illumina paying Pacific a $98 million termination fee.
In March 2021, the FTC sued to block Illumina's $7.1 billion vertical merger with Grail. In July 2021, the European Commission opened an in-depth investigation into the Grail acquisition by Illumina. While both investigations were still active, Illumina publicly announced on August 18, 2021, that it had completed its acquisition of Grail. The FTC urged Illumina to "unwind" the merger shortly after, and in October 2021, the European Commission ordered Illumina to keep Grail a separate company and adopted interim measures to prevent harm to competition, or face penalty payments of up to 5% of its average daily turnover and/or fines of up to 10% of its annual worldwide turnover under Articles 15 and 14 of the EU Merger Regulation, respectively. In September 2022, a US administrative judge ruled against the FTC's efforts to prevent the acquisition on antitrust grounds. In April 2023, the FTC ordered Grail to be divested by Illumina. In July 2023, the European Commission imposed a €432 million ($476 million) penalty on Illumina for closing the Grail acquisition without EU approval.
In September 2022, Illumina launched the NovaSeq X and NovaSeq X Plus. The NovaSeq X Plus can sequence 20,000 genomes per year, compared to 7,500 per year for Illumina's previous machines, and can generate up to 16 Tb of data per run. The series includes redeveloped reagents, dyes, and polymerases which can be shipped at ambient temperature.
In June 2023, deSouza resigned as CEO of Illumina, and was replaced by interim CEO Charles Dadswell, the company's general counsel. Also in June 2023, Hologic CEO Stephen Macmillan was named non-executive Chairman of the Board of Directors.
In September 2023, Agilent Technologies' senior vice president Jacob Thaysen was appointed CEO.
In October 2023, the European Commission ordered Grail to be divested from Illumina within the next twelve months. Illumina said it would explore a third-party sale or a capital markets transaction if it failed to win its ongoing challenge in court. In June 2024, Illumina completed the spin-off of Grail, keeping only a minority stake of 14.5%. In September 2024, Illumina's 2022 appeal against the European Commission was decided in its favour, with the court finding the merger to be outside the Commission's jurisdiction; with that decision annulled, Illumina considered the fine void.
Acquisition history
The following is an illustration of the company's mergers, acquisitions, spin-offs and historical predecessors:
Illumina, Inc.
Spyder Instruments (Acq 1999)
CyVera, Inc. (Acq 2005)
Solexa, Inc. (Acq 2007)
Solexa Ltd (Merged 2005)
Lynx Therapeutics Inc. (Merged 2005)
Avantome Inc. (Acq 2008)
Helixis, Inc. (Acq 2010)
Epicentre Biotechnologies (Acq 2011)
BlueGnome (Acq 2012)
Verinata Health, Inc. (Acq 2013)
Advanced Liquid Logic (Acq 2013)
NextBio (Acq 2013)
Myraqa (Acq 2014)
GenoLogics Life Sciences Software Inc. (Acq 2015)
Conexio Genomics (Acq 2016)
Edico Genome (Acq 2018)
Enancio (Acq 2020)
BlueBee (Acq 2020)
Emedgene (Acq 2021)
IDbyDNA (Acq 2022)
Partek Inc. (Acq 2023)
Fluent (Acq 2024)
Products
DNA sequencing
Illumina sells a number of high-throughput DNA sequencing systems, also known as DNA sequencers, based on technology developed by Solexa. The technology features bridge amplification to generate clusters and reversible terminators for sequence determination. The technology behind these sequencing systems involves ligation of fragmented DNA to a chip, followed by primer addition and sequential fluorescent dNTP incorporation and detection.
Depending on the kit used, according to the company, the MiSeq Series generates up to 25 million reads per run. With dual flow cells, the NextSeq 2000 generates up to 2.4 billion single reads per run and the NovaSeq X Series generates up to 52 billion single reads per run. Illumina uses next-generation sequencing, which is far faster and more efficient than traditional Sanger sequencing. Illumina sequencers perform short-read sequencing and are image based, utilizing Illumina dye sequencing. This technology has a higher accuracy than long-read sequencing.
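To give a rough sense of what such read counts imply, the sketch below estimates mean sequencing depth from reads per run; the read length (150 bp) and genome size (~3.1 Gb) are assumptions chosen only to illustrate the arithmetic.

def mean_coverage(reads, read_length_bp, genome_size_bp):
    # Mean depth = total sequenced bases divided by genome size.
    return reads * read_length_bp / genome_size_bp

HUMAN_GENOME_BP = 3.1e9  # assumed human genome size in base pairs
print(round(mean_coverage(25e6, 150, HUMAN_GENOME_BP), 2))  # MiSeq-scale run: ~1.21x
print(round(mean_coverage(52e9, 150, HUMAN_GENOME_BP)))     # NovaSeq X-scale run: ~2516x, shared across many samples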
Flow cells
Illumina sequencing happens within the flow cells. These flow cells are small and are housed in the flow cell compartment. Flow cell clustering happens when a denatured DNA sample is placed in a flow cell. Primers already in the flow cell channel capture and bind to the ends of the short denatured DNA fragments. Then, DNA polymerase is added and the DNA building blocks are introduced, resulting in a newly synthesized strand anchored to the bottom of the flow cell. Next, the original template strand is washed away; the newly synthesized strand then bends over and binds to an adjacent primer on the flow-cell surface, forming a bridge. DNA polymerase and building blocks are introduced again, forming a new strand. These steps are repeated until about 1,000 copies are made in a cluster.
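A toy model of this cluster-generation step: if each bridge-amplification cycle roughly doubles the number of copies on the surface, about 1,000 copies are reached after roughly 10 cycles. The per-cycle doubling and the efficiency parameter below are idealizations used only for illustration.

def cycles_to_cluster(target_copies=1000, efficiency=1.0):
    # Each existing strand templates about `efficiency` new copies per cycle.
    copies, cycles = 1.0, 0
    while copies < target_copies:
        copies *= 1 + efficiency
        cycles += 1
    return cycles, int(copies)

print(cycles_to_cluster())                # (10, 1024) with perfect doubling
print(cycles_to_cluster(efficiency=0.9))  # a few more cycles if amplification is imperfect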
Litigation
Czarnik suit against Illumina
In 2005, co-founder and former Chief Scientific Officer Anthony Czarnik sued Illumina. In the case, Czarnik v. Illumina Inc., the trial court granted Illumina's motion to dismiss in part but allowed Czarnik's correction of inventorship claims to continue.
Cornell University and Life Technologies suit against Illumina
In 2010, Cornell University and Life Technologies filed a lawsuit against Illumina, alleging that its microarray products infringed on eight patents held by the university and exclusively licensed to the start-up. The case was settled in April 2017 without any finding of fault. In September 2017 both parties asked to have the settlement reviewed, with Cornell accusing both Illumina and Life Technologies of misrepresentation and fraud. Cornell claimed that ThermoFisher had promised to settle the suit with Illumina and asked for the Markman wording to be dropped so that it could file a subsequent suit involving other patents invented at Cornell. Instead of filing the suit, ThermoFisher and Illumina settled another lawsuit in California and secretly sublicensed those very same patents. In 2018, Dr. Monib Zirvi filed a lawsuit in the Southern District of New York against Illumina and some of its key employees, claiming that they knowingly incorporated ideas and ZipCode DNA sequences invented in the Barany Lab into Illumina's patent applications. The suit was dismissed, but only after Illumina and its attorneys argued that some of the alleged instances of IP misappropriation were “storm warnings” and thus that the statutes of limitations had run out on those particular claims. Dr. Monib Zirvi also filed a FOIA case in New Jersey in 2020 for unredacted copies of key NIH grants that Illumina filed early in its existence. William Noon, an in-house attorney at Illumina, had filed a FOIA request for 4 of these key grants as well in January 2015.
Patent infringement suits
Illumina was a party in a patent lawsuit against competitor Ariosa Diagnostics. The litigation began in 2012 with Verinata Health filing suit against Ariosa. Illumina joined the suit after acquiring Verinata in 2013. Ariosa subsequently brought a counterclaim against Illumina. The trial court granted summary judgment in favor of Ariosa, but the United States Court of Appeals for the Federal Circuit reversed. Ariosa initially pursued an appeal to the Supreme Court of the United States, but the two parties resolved the dispute before the Court decided whether to take the case.
In February 2016, Illumina filed a lawsuit against Oxford Nanopore Technologies. Illumina claimed that Oxford Nanopore infringed its patents on the use of a biological nanopore, Mycobacterium smegmatis porinA (MspA), for sequencing systems. In August 2016 the parties settled their lawsuit.
In February 2020, Illumina filed a patent infringement suit against BGI relating to its "CoolMPS" sequencing products. In return BGI has filed patent infringement lawsuits for violation of federal antitrust and California unfair competition laws, claiming use of "fraudulent behavior" to obtain or enforce sequencing patents that it has asserted against BGI, preventing the firm from entering the US market. However, in May 2022, Illumina was ordered to pay $333 million to a U.S. unit of BGI in California for infringing two patents of DNA-sequencing systems. The jury of the case also said that Illumina willfully infringed the patents, and that their former accusation of BGI's infringement was invalid.
On May 6, 2022, a jury in the U.S. District Court for the District of Delaware rendered a verdict that Illumina willfully infringed two patents owned by Complete Genomics, and awarded approximately $334 million to CGI in past damages. The jury also invalidated three patents owned by Illumina.
Trade secrets suit against Eltoukhy and Talasaz
In March 2022, Illumina sued Helmy Eltoukhy and Amir Talasaz, the co-founders of Guardant, over stealing trade secrets. Guardant called the lawsuit "frivolous and retaliatory" and framed it as a response to its concerns about the Illumina-Grail merger. Guardant also claimed the lawsuit was filed in order to suppress competition in the marketplace.
References
External links
Companies based in San Diego
Health care companies based in California
Biotechnology companies of the United States
Genomics companies
American companies established in 1998
Biotechnology companies established in 1998
1998 establishments in California
Microarrays
DNA sequencing
2000 initial public offerings
Companies listed on the Nasdaq
Companies in the S&P 400 | Illumina, Inc. | [
"Chemistry",
"Materials_science",
"Biology"
] | 3,225 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques",
"DNA sequencing"
] |
9,005,332 | https://en.wikipedia.org/wiki/Octadecanoid%20pathway | The octadecanoid pathway is a biosynthetic pathway for the production of the phytohormone jasmonic acid (JA), an important hormone for induction of defense genes. JA is synthesized from alpha-linolenic acid, which can be released from the plasma membrane by certain lipase enzymes. For example, in the wound defense response, phospholipase C will cause the release of alpha-linolenic acid for JA synthesis.
In the first step, alpha-linolenic acid is oxidized by the enzyme lipoxygenase. This forms 13-hydroperoxylinolenic acid, which is then modified by a dehydrase and undergoes cyclization by allene oxide cyclase to form 12-oxo-phytodienoic acid. This undergoes reduction and three rounds of beta oxidation to form jasmonic acid.
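The sequence of steps just described can be summarized as an ordered list of (substrate, step, product) triples; the representation below is only an illustrative data-structure sketch, with enzyme and metabolite names taken from the text above.

OCTADECANOID_STEPS = [
    ("alpha-linolenic acid", "lipoxygenase (oxidation)", "13-hydroperoxylinolenic acid"),
    ("13-hydroperoxylinolenic acid", "dehydrase + allene oxide cyclase (cyclization)", "12-oxo-phytodienoic acid"),
    ("12-oxo-phytodienoic acid", "reduction + three rounds of beta oxidation", "jasmonic acid"),
]

def describe(steps):
    # Print each conversion in pathway order.
    for substrate, step, product in steps:
        print(f"{substrate} --[{step}]--> {product}")

describe(OCTADECANOID_STEPS)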
Footnotes
References
Metabolic pathways | Octadecanoid pathway | [
"Chemistry",
"Biology"
] | 192 | [
"Biochemistry",
"Biotechnology stubs",
"Biochemistry stubs",
"Metabolic pathways",
"Metabolism"
] |
9,007,520 | https://en.wikipedia.org/wiki/ISO/IEC%2080000 | ISO/IEC 80000, Quantities and units, is an international standard describing the International System of Quantities (ISQ). It was developed and promulgated jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It serves as a style guide for using physical quantities and units of measurement, formulas involving them, and their corresponding units, in scientific and educational documents for worldwide use. The ISO/IEC 80000 family of standards was completed with the publication of the first edition of Part 1 in November 2009.
Overview
By 2021, ISO/IEC 80000 comprised 13 parts, two of which (parts 6 and 13) were developed by IEC and the remaining 11 by ISO, with a further three parts (15, 16, and 17) under development. Part 14 was withdrawn.
Subject areas
By 2021 the 80000 standard had 13 published parts. A description of each part is available online, with the complete parts for sale.
Part 1: General
ISO 80000-1:2022 revised ISO 80000-1:2009, which replaced ISO 31-0:1992 and ISO 1000:1992.
This document gives general information and definitions concerning quantities, systems of quantities, units, quantity and unit symbols, and coherent unit systems, especially the International System of Quantities (ISQ).
The descriptive text of this part is available online.
According to the standard, symbols for quantities are "generally single letters from the Latin or Greek alphabet" and are "written in italic (sloping) type". Examples include
density of heat flow rate: q = Φ / A
electric current density: J = I / A
magnetic flux density: B = Φ / A
Part 2: Mathematics
ISO 80000-2:2019 revised ISO 80000-2:2009, which superseded ISO 31-11.
It specifies mathematical symbols, explains their meanings, and gives verbal equivalents and applications. The descriptive text of this part is available online.
Part 3: Space and time
ISO 80000-3:2019 revised ISO 80000-3:2006, which supersedes ISO 31-1 and ISO 31-2.
It gives names, symbols, definitions and units for quantities of space and time. The descriptive text of this part is available online.
A definition of the decibel, included in the original 2006 publication, was omitted in the 2019 revision, leaving ISO/IEC 80000 without a definition of this unit; a new part of the standard, IEC 80000-15 (Logarithmic and related quantities), is under development.
Part 4: Mechanics
ISO 80000-4:2019 revised ISO 80000-4:2006, which superseded ISO 31-3.
It gives names, symbols, definitions and units for quantities of mechanics. The descriptive text of this part is available online.
Part 5: Thermodynamics
ISO 80000-5:2019 revised ISO 80000-5:2007, which superseded ISO 31-4. It gives names, symbols, definitions and units for quantities of thermodynamics. The descriptive text of this part is available online.
Part 6: Electromagnetism
IEC 80000-6:2022 revised IEC 80000-6:2008, which superseded ISO 31-5 as well as IEC 60027-1. It gives names, symbols, and definitions for quantities and units of electromagnetism. The descriptive text of this part is available online.
Part 7: Light and radiation
ISO 80000-7:2019 revised ISO 80000-7:2008, which superseded ISO 31-6.
It gives names, symbols, definitions and units for quantities used for light and optical radiation in the wavelength range of approximately 1 nm to 1 mm. The descriptive text of this part is available online.
Part 8: Acoustics
ISO 80000-8:2020 revised ISO 80000-8:2007, which revised ISO 31-7:1992. It gives names, symbols, definitions, and units for quantities of acoustics. The descriptive text of this part is available online.
It has a foreword, an introduction, a scope, normative references (of which there are none), and terms and definitions. It includes definitions of sound pressure, sound power, and sound exposure, and their corresponding levels: sound pressure level, sound power level, and sound exposure level; a worked example of these level quantities follows the list below. It includes definitions of the following quantities:
logarithmic frequency range
static pressure
sound pressure
sound particle displacement
sound particle velocity
sound particle acceleration
volume flow rate, volume velocity
sound energy density
sound energy
sound power
sound intensity
sound exposure
characteristic impedance for longitudinal waves
acoustic impedance
sound pressure level
sound power level
sound exposure level
reverberation time
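As a worked illustration of two of the level quantities above, the sketch below computes sound pressure level and sound power level in decibels; the reference values used (20 µPa for sound pressure in air, 1 pW for sound power) are the conventional ones and are stated here as assumptions rather than quoted from the standard.

import math

def sound_pressure_level_db(p_rms_pa, p_ref_pa=20e-6):
    # L_p = 20 log10(p / p_ref), in decibels.
    return 20 * math.log10(p_rms_pa / p_ref_pa)

def sound_power_level_db(power_w, power_ref_w=1e-12):
    # L_W = 10 log10(P / P_ref), in decibels.
    return 10 * math.log10(power_w / power_ref_w)

print(round(sound_pressure_level_db(1.0), 1))  # 1 Pa rms -> ~94.0 dB
print(round(sound_power_level_db(1e-3), 1))    # 1 mW     -> 90.0 dB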
Part 13: Information science and technology
IEC 80000-13:2008 was published in 2008, reviewed and confirmed in 2022, and replaced subclauses 3.8 and 3.9 of IEC 60027-2:2005 as well as IEC 60027-3. It defines quantities and units used in information science and information technology, and specifies names and symbols for these quantities and units. It has a scope; normative references; names, definitions, and symbols; and prefixes for binary multiples.
Quantities defined in this standard are:
traffic intensity [A]: number of simultaneously busy resources in a particular pool of resources
traffic offered intensity [A0]: traffic intensity ... of the traffic that would have been generated by the users of a pool of resources if their use had not been limited by the size of the pool
traffic carried intensity [Y]: traffic intensity ... of the traffic served by a particular pool of resources
mean queue length [L, (Ω)]: time average of queue length
loss probability [B]: probability for losing a call attempt
waiting probability [W]: probability for waiting for a resource
call intensity, calling rate [λ]: number of call attempts over a specified time interval divided by the duration of this interval
completed call intensity [μ]: call intensity ... for the call attempts that result in the transmission of an answer signal
storage capacity, storage size [M]
equivalent binary storage capacity [Me]
transfer rate [r, (ν)]
period of data elements [T]
binary digit rate, bit rate [rb, rbit (νb, νbit)]
period of binary digits, bit period [Tb, Tbit]
equivalent binary digit rate, equivalent bit rate [re, (νe)]
modulation rate, line digit rate [rm, u]
quantizing distortion power [TQ]
carrier power [Pc, C]
signal energy per binary digit [Eb, Ebit]
error probability [P]
Hamming distance [dn]
clock frequency, clock rate [fcl]
decision content [Da]
information content [I(x)]
entropy [H]
maximum entropy [H0, (Hmax)]
relative entropy [Hr]
redundancy [R]
relative redundancy [r]
joint information content [I(x, y)]
conditional information content [I(x|y)]
conditional entropy, mean conditional information content, average conditional information content [H(X|Y)]
equivocation [H(X|Y)]
irrelevance [C]
transinformation content [T(x, y)]
mean transinformation content [T]
character mean entropy [H]
average information rate [H*]
character mean transinformation content [T]
average transinformation rate [T*]
channel capacity per character; channel capacity [C]
channel time capacity; channel capacity [C*]
The standard also includes definitions for units relating to information technology, such as the erlang (E), bit (bit), octet (o), byte (B), baud (Bd), shannon (Sh), hartley (Hart), and the natural unit of information (nat).
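A small example relating three of the information units just listed: the same entropy expressed in shannons (logarithm base 2), natural units of information (base e), and hartleys (base 10). The probability distribution is arbitrary and serves only to illustrate the unit conversions.

import math

def entropy(probs, base):
    # Shannon entropy in the unit implied by the logarithm base.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]
print(entropy(p, 2))       # 1.5 Sh
print(entropy(p, math.e))  # ~1.0397 nat   (1 Sh = ln 2 nat)
print(entropy(p, 10))      # ~0.4515 Hart  (1 Sh = lg 2 Hart)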
Clause 4 of the standard defines standard binary prefixes used to denote powers of 1024 as 1024¹ (kibi-), 1024² (mebi-), 1024³ (gibi-), 1024⁴ (tebi-), 1024⁵ (pebi-), 1024⁶ (exbi-), 1024⁷ (zebi-), and 1024⁸ (yobi-).
International System of Quantities
Part 1 of ISO 80000 introduces the International System of Quantities and describes its relationship with the International System of Units (SI). Specifically, its introduction states "The system of quantities, including the relations among the quantities used as the basis of the units of the SI, is named the International System of Quantities, denoted 'ISQ', in all languages." It further clarifies that "ISQ is simply a convenient notation to assign to the essentially infinite and continually evolving and expanding system of quantities and equations on which all of modern science and technology rests. ISQ is a shorthand notation for the 'system of quantities on which the SI is based'."
Units of the ISO and IEC 80000 series
The standard includes all SI units but is not limited to only SI units. Units that form part of the standard but not the SI include the units of information storage (bit and byte), units of entropy (shannon, natural unit of information and hartley), and the erlang (a unit of traffic intensity).
The standard includes all SI prefixes as well as the binary prefixes kibi-, mebi-, gibi-, etc., originally introduced by the International Electrotechnical Commission to standardise binary multiples of the byte such as the mebibyte (MiB), for 2²⁰ bytes, to distinguish them from their decimal counterparts such as the megabyte (MB), for precisely 1 million (10⁶) bytes. In the standard, the application of the binary prefixes is not limited to units of information storage. For example, a frequency 10 octaves above 1 hertz, i.e., 2¹⁰ Hz (1024 Hz), is 1 kibihertz (1 KiHz). These binary prefixes were first standardized in a 1999 addendum to IEC 60027-2. The harmonized IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005, which had defined the prefixes for binary multiples. The only significant change in IEC 80000-13 is the addition of explicit definitions for some quantities.
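A small sketch, with prefix tables and helper function of our own choosing, contrasting the decimal SI prefixes with the IEC binary prefixes for the same byte count:

```python
SI_PREFIXES  = [("k", 10**3), ("M", 10**6), ("G", 10**9), ("T", 10**12)]
IEC_PREFIXES = [("Ki", 2**10), ("Mi", 2**20), ("Gi", 2**30), ("Ti", 2**40)]

def format_bytes(n, prefixes):
    """Express a byte count using the largest prefix not exceeding it."""
    symbol, factor = "", 1
    for sym, fac in prefixes:
        if n >= fac:
            symbol, factor = sym, fac
    return f"{n / factor:.2f} {symbol}B"

n = 1_048_576  # 2**20 bytes
print(format_bytes(n, SI_PREFIXES))   # "1.05 MB"  (decimal megabyte)
print(format_bytes(n, IEC_PREFIXES))  # "1.00 MiB" (binary mebibyte)
```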
See also
International Vocabulary of Metrology
International System of Units
BIPM – publishes freely available information on SI units
NIST – official U.S. representative for SI; publishes freely available guide to use of SI
References
External links
BIPM SI Brochure
ISO TC12 standards – Quantities, units, symbols, conversion factors
NIST Special Publication 330 – The International System of Units
NIST Special Publication 811 – Guide for the Use of the International System of Units
Measurement
80000 | ISO/IEC 80000 | [
"Physics",
"Mathematics"
] | 2,230 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Physical properties"
] |
17,672,471 | https://en.wikipedia.org/wiki/MIKE%20SHE | MIKE SHE is an integrated hydrological modelling system for building and simulating surface water flow and groundwater flow. MIKE SHE can simulate the entire land phase of the hydrologic cycle and allows components to be used independently and customized to local needs. MIKE SHE emerged from Système Hydrologique Européen (SHE) as developed and extensively applied since 1977 onwards by a consortium of three European organizations: the Institute of Hydrology (the United Kingdom), SOGREAH (France) and DHI (Denmark). Since then, DHI has continuously invested resources into research and development of MIKE SHE. MIKE SHE can be used for the analysis, planning and management of a wide range of water resources and environmental problems related to surface water and groundwater, especially surface-water impact from groundwater withdrawal, conjunctive use of groundwater and surface water, wetland management and restoration, river basin management and planning, impact studies for changes in land use and climate.
The program is offered in both 32-bit and 64-bit versions for Microsoft Windows operating systems.
Other commonly used groundwater simulators
FEFLOW
MODFLOW
HydroGeoSphere
See also
Hydrological transport model
References
External links
DHI Water.Environment.Health
Integrated hydrologic modelling
Hydraulic engineering
Environmental engineering
Physical geography | MIKE SHE | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 248 | [
"Hydrology",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
17,674,024 | https://en.wikipedia.org/wiki/HBT%20%28explosive%29 | HBT (N,N′-bis(1H-tetrazol-5-yl)hydrazine; also known as 1,2-ditetrazolylhydrazine, 5,5′-hydrazinebistetrazole, or BTH) is a bistetrazole. It is an explosive approximately as powerful as HMX or CL-20, but it releases less toxic reaction products when detonated: ammonia and hydrogen cyanide. When combined with ADN or AN oxidizers, the amount of HCN produced by a deflagration may be reduced. The compound is thus considered by its advocates to be a more environmentally friendly explosive than traditional nitroamine-based explosives.
References
See also
1,1'-Azobis-1,2,3-triazole
G2ZT
Explosive chemicals
Hydrazines
Tetrazoles | HBT (explosive) | [
"Chemistry"
] | 269 | [
"Explosive chemicals",
"Functional groups",
"Hydrazines"
] |
17,683,121 | https://en.wikipedia.org/wiki/High-integrity%20pressure%20protection%20system | A high-integrity pressure protection system (HIPPS) is a type of safety instrumented system (SIS) designed to prevent over-pressurization of a plant, such as a chemical plant or oil refinery. The HIPPS will shut off the source of the high pressure before the design pressure of the system is exceeded, thus preventing loss of containment through rupture (explosion) of a line or vessel. Therefore, a HIPPS is considered as a barrier between a high-pressure and a low-pressure section of an installation.
Traditional systems
In traditional systems over-pressure is dealt with through relief systems. A relief system will open an alternative outlet for the fluids in the system once a set pressure is exceeded, to avoid further build-up of pressure in the protected system. This alternative outlet generally leads to a flare or venting system to safely dispose the excess fluids. A relief system aims at removing any excess inflow of fluids for safe disposal, where a HIPPS aims at stopping the inflow of excess fluids and containing them in the system.
Conventional relief systems have disadvantages such as release of (flammable and toxic) process fluids or their combustion products in the environment and often a large footprint of the installation. With increasing environmental awareness, relief systems are not always an acceptable solution. However, because of their simplicity, relatively low cost and wide availability, conventional relief systems are still often applied.
Advantages of HIPPS
HIPPS provides a solution to protect equipment in cases where:
high pressures and/or flow rates are processed
the environment is to be protected
the economic viability of a development needs improvement
the risk profile of the plant must be reduced
HIPPS is an instrumented safety system that is designed and built in accordance with the IEC 61508 and IEC 61511 standards.
The international standards IEC 61508 and 61511 refer to safety functions and Safety Instrumented Systems (SIS) when discussing a device to protect equipment, personnel and environment. Older standards use terms like safety shutdown systems, emergency shutdown systems or last layers of defence.
Components of HIPPS
A system that closes the source of over-pressure within a specified time with at least the same reliability as a safety relief valve is usually called a HIPPS. Such a HIPPS is a complete functional loop consisting of:
sensors, (or initiators) that detect the high pressure
a logic solver, which processes the input from the sensors to an output to the final element
final elements, that actually perform the corrective action in the field by bringing the process to a safe state. In case of a HIPPS this means shutting off the source of overpressure. The final element consists of a valve, actuator and solenoids.
Diagram
The scheme above presents three pressure transmitters (PT) connected to a logic solver. The solver decides, based on 2-out-of-3 (2oo3) voting, whether or not to activate the final elements, and the 1oo2 solenoid panel determines which valve is closed. The final elements here consist of two block valves that stop flow to the downstream facilities (right) to prevent them from exceeding their maximum allowable pressure. The operator of the plant is warned through a pressure alarm (PA) that the HIPPS was activated. A minimal sketch of the voting logic follows the list below.
This system has a high degree of redundancy:
failure of one of the three pressure transmitters will not compromise the HIPPS functionality, as two readings of high pressure are needed for activation.
failure of one of the two block valves will not compromise the HIPPS functionality, as the other valve will close on activation of the HIPPS.
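A minimal sketch of the 2oo3 voting and the command to both block valves described above; the trip setpoint, variable names, and dictionary structure are ours for illustration, not taken from any standard:

```python
def vote_2oo3(trip_flags):
    """2-out-of-3 voting: trip when at least two of the three pressure
    transmitters report high pressure."""
    assert len(trip_flags) == 3
    return sum(trip_flags) >= 2

def hipps_logic(pressures_bar, trip_setpoint_bar=100.0):
    """Return commands for the two block valves (True = close).

    Both valves are commanded to close on a trip; with a 1oo2 arrangement
    of final elements, either valve closing is enough to stop the flow."""
    trips = [p >= trip_setpoint_bar for p in pressures_bar]
    trip = vote_2oo3(trips)
    return {"valve_1_close": trip, "valve_2_close": trip, "alarm": trip}

# One failed-low transmitter does not defeat the protection:
print(hipps_logic([120.0, 115.0, 0.0]))  # still trips (2 of 3 read high)
print(hipps_logic([80.0, 85.0, 90.0]))   # normal operation, no trip
```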
One must not confine oneself to the above design as the only means of realizing a HIPPS. A HIPPS should always be thought of generically, as a means of isolating a source of high pressure, in a highly reliable manner, when downstream flow has been blocked. Whether that source is a pump (for liquids) or a gas compressor (for gases), the aim of the HIPPS is to shut down the pump or compressor creating the high-pressure condition reliably and safely.
Standards and design practices
The ever-increasing flow rates in combination with the environmental constraints initiated the widespread and rapid acceptance in the last decades of HIPPS as the ultimate protection system.
The International Electrotechnical Commission (IEC) has introduced the IEC 61508 and the IEC 61511 standards in 1998 and 2003. These are performance based, non-prescriptive, standards which provide a detailed framework and a life-cycle approach for the design, implementation and management of safety systems applicable to a variety of sectors with different levels of risk definition. These standards also apply to HIPPS.
The IEC 61508 mainly focuses on electrical/electronic/programmable safety-related systems. However it also provides a framework for safety-related systems based on other technologies including mechanical systems. The IEC 61511 is added by the IEC specifically for designers, integrators and users of safety instrumented systems and covers the other parts of the safety loop (sensors and final elements) in more detail.
The basis for the design of a safety instrumented system is the required Safety Integrity Level (SIL). The SIL is determined during the risk analysis of a plant or process and represents the required risk reduction. The SIS shall meet the requirements of the applicable SIL, which ranges from 1 to 4. The IEC standards define the requirements for each SIL over the lifecycle of the equipment, including design and maintenance. The SIL also defines a required probability of failure on demand (PFD) for the complete loop and architectural constraints for the loop and its different elements.
The requirements for a HIPPS should not be reduced to a PFD figure alone; the qualitative requirements and architectural constraints form an integral part of the requirements for an instrumented protection system such as a HIPPS.
The European standard EN 12186 (formerly DIN G491) and, more specifically, EN 14382 (formerly DIN 3381) have been used for the past decades for (mechanically) instrumented overpressure protection systems. These standards prescribe the requirements for over-pressure protection systems, and their components, in gas plants. Not only the response time and accuracy of the loop but also safety factors for over-sizing the actuator of the final element are dictated by these standards. Independent design verification and testing to prove compliance with EN 14382 is mandatory. Therefore, users often refer to this standard for HIPPS design.
References
External links
International Electrotechnical Commission
HIPPS SIL Certification – https://risknowlogy.com/risknowlogy-certification-program/hipps-sil-verification/
Safety Users Group – Functional Safety-Information Resources
Example piggable HIPPS
SIL and Functional Safety in a Nutshell – eBook introducing SIL and Functional Safety
Explosion protection
Safety engineering
Process safety | High-integrity pressure protection system | [
"Chemistry",
"Engineering"
] | 1,435 | [
"Systems engineering",
"Explosion protection",
"Safety engineering",
"Combustion engineering",
"Process safety",
"Explosions",
"Chemical process engineering"
] |
184,570 | https://en.wikipedia.org/wiki/Microbotics | Microbotics (or microrobotics) is the field of miniature robotics, in particular mobile robots with characteristic dimensions less than 1 mm. The term can also be used for robots capable of handling micrometer size components.
History
Microbots were born thanks to the appearance of the microcontroller in the last decade of the 20th century, and the appearance of microelectromechanical systems (MEMS) on silicon, although many microbots do not use silicon for mechanical components other than sensors. The earliest research and conceptual design of such small robots was conducted in the early 1970s in (then) classified research for U.S. intelligence agencies. Applications envisioned at that time included prisoner of war rescue assistance and electronic intercept missions. The underlying miniaturization support technologies were not fully developed at that time, so that progress in prototype development was not immediately forthcoming from this early set of calculations and concept design. As of 2008, the smallest microrobots use a scratch drive actuator.
The development of wireless connections, especially Wi-Fi (i.e. in household networks) has greatly increased the communication capacity of microbots, and consequently their ability to coordinate with other microbots to carry out more complex tasks. Indeed, much recent research has focused on microbot communication, including a 1,024 robot swarm at Harvard University that assembles itself into various shapes; and manufacturing microbots at SRI International for DARPA's "MicroFactory for Macro Products" program that can build lightweight, high-strength structures.
Microbots called xenobots have also been built using biological tissues instead of metal and electronics. Xenobots avoid some of the technological and environmental complications of traditional microbots as they are self-powered, biodegradable, and biocompatible.
Definitions
While the "micro" prefix has been used subjectively to mean "small", standardizing on length scales avoids confusion. Thus a nanorobot would have characteristic dimensions at or below 1 micrometer, or manipulate components in the 1 to 1000 nm size range. A microrobot would have characteristic dimensions less than 1 millimeter, a millirobot would have dimensions less than a centimeter, a mini-robot would have dimensions less than 10 cm, and a small robot would have dimensions less than 100 cm.
Many sources also describe robots larger than 1 millimeter as microbots or robots larger than 1 micrometer as nanobots.
Design considerations
The way microrobots move around is a function of their purpose and necessary size. At submicron sizes, the physical world demands rather bizarre ways of getting around. The Reynolds number for airborne robots is less than unity; the viscous forces dominate the inertial forces, so “flying” could use the viscosity of air, rather than Bernoulli's principle of lift. Robots moving through fluids may require rotating flagella like the motile form of E. coli. Hopping is stealthy and energy-efficient; it allows the robot to negotiate the surfaces of a variety of terrains. Pioneering calculations (Solem 1994) examined possible behaviors based on physical realities.
One of the major challenges in developing a microrobot is to achieve motion using a very limited power supply. The microrobots can use a small lightweight battery source like a coin cell or can scavenge power from the surrounding environment in the form of vibration or light energy. Microrobots are also now using biological motors as power sources, such as flagellated Serratia marcescens, to draw chemical power from the surrounding fluid to actuate the robotic device. These biorobots can be directly controlled by stimuli such as chemotaxis or galvanotaxis with several control schemes available. A popular alternative to an onboard battery is to power the robots using externally induced power. Examples include the use of electromagnetic fields, ultrasound and light to activate and control micro robots.
A 2022 study focused on a photo-biocatalytic approach for the "design of light-driven microrobots with applications in microbiology and biomedicine".
Locomotion of microrobots
Microrobots employ various locomotion methods to navigate through different environments, from solid surfaces to fluids. These methods are often inspired by biological systems and are designed to be effective at the micro-scale. Several factors need to be maximized (precision, speed, stability), and others have to be minimized (energy consumption, energy loss) in the design and operation of microrobot locomotion in order to guarantee accurate, effective, and efficient movement.
When describing the locomotion of microrobots, several key parameters are used to characterize and evaluate their movement, including stride length and cost of transport. A stride refers to a complete cycle of movement that includes all the steps or phases necessary for an organism or robot to move forward by repeating a specific sequence of actions. Stride length (𝞴s) is the distance covered by a microrobot in one complete cycle of its locomotion mechanism. Cost of transport (CoT) is the work required to move a unit of mass of a microrobot over a unit of distance.
Surface locomotion
Microrobots that use surface locomotion can move in a variety of ways, including walking, crawling, rolling, or jumping. These microrobots meet different challenges, such as gravity and friction. One of the parameters describing surface locomotion is the Froude number, defined as
Fr = v² / (g 𝞴s)
where v is the motion speed, g is the gravitational acceleration, and 𝞴s is the stride length. A microrobot with a low Froude number moves more slowly and more stably, as gravitational forces dominate, while a high Froude number indicates that inertial forces are more significant, allowing faster and potentially less stable movement.
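A small sketch evaluating the Froude number for a crawling microrobot; the example speed and stride length are illustrative, not measured values:

```python
def froude_number(speed, stride_length, g=9.81):
    """Froude number Fr = v**2 / (g * stride_length) for surface locomotion."""
    return speed ** 2 / (g * stride_length)

# Illustrative values for a millimetre-scale crawler:
v = 0.01       # 10 mm/s forward speed
lam_s = 0.002  # 2 mm stride length
print(froude_number(v, lam_s))  # ~0.005: gravity-dominated, slow but stable gait
```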
Crawling is one of the most typical surface locomotion types. The mechanisms employed by microrobots for crawling can differ but usually include the synchronized movement of multiple legs or appendages. The mechanism of the microrobots' movements is often inspired by animals such as insects, reptiles, and small mammals. An example of a crawling microrobot is RoBeetle. The autonomous microrobot weighs 88 milligrams (approximately the weight of three rice grains). The robot is powered by the catalytic combustion of methanol. The design relies on controllable NiTi-Pt–based catalytic artificial micromuscles with a mechanical control mechanism.
Other options for actuating microrobots' surface locomotion include magnetic, electromagnetic, piezoelectric, electrostatic, and optical actuation.
Swimming locomotion
Swimming microrobots are designed to operate in 3D through fluid environments, like biological fluids or water. To achieve effective movements, locomotion strategies are adopted from small aquatic animals or microorganisms, such as flagellar propulsion, pulling, chemical propulsion, jet propulsion, and tail undulation. Swimming microrobots, in order to move forward, must drive water backward.
Microrobots move in the low Reynolds number regime due to their small sizes and low operating speeds, as well as high viscosity of the fluids they navigate. At this level, viscous forces dominate over inertial forces. This requires a different approach in the design compared to swimming at the macroscale in order to achieve effective movements. The low Reynolds number also allows for accurate movements, which makes it good application in medicine, micro-manipulation tasks, and environmental monitoring.
The dominant viscous (Stokes) drag force Fdrag on the robot balances the propulsive force Fp generated by the swimming mechanism. Writing the equation of motion as
m dv/dt = Fp − b v,
steady swimming corresponds to Fp = b v, where b is the viscous drag coefficient, v is the motion speed, and m is the body mass.
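A small sketch estimating the Reynolds number of a swimming microrobot and the steady speed at which Stokes drag balances a given thrust; the sphere drag coefficient b = 6πμr and all the example numbers are assumptions for illustration:

```python
import math

def reynolds_number(density, speed, length, viscosity):
    """Re = rho * v * L / mu; well below 1 for typical microrobots in water."""
    return density * speed * length / viscosity

def steady_speed(propulsive_force, radius, viscosity):
    """Speed at which Stokes drag on a sphere (b = 6*pi*mu*r) balances F_p."""
    b = 6.0 * math.pi * viscosity * radius
    return propulsive_force / b

rho, mu = 1000.0, 1.0e-3  # water: density in kg/m^3, viscosity in Pa*s
r, v = 5e-6, 20e-6        # 5 um body radius, 20 um/s swimming speed
print(reynolds_number(rho, v, 2 * r, mu))  # ~2e-4, deep in the viscous regime
print(steady_speed(1e-12, r, mu))          # steady speed produced by a 1 pN thrust
```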
One of the examples of a swimming microrobot is a helical magnetic microrobot consisting of a spiral tail and a magnetic head body. This design is inspired by the flagellar motion of bacteria. By applying a magnetic torque to a helical microrobot within a low-intensity rotating magnetic field, the rotation can be transformed into linear motion. This conversion is highly effective in low Reynolds number environments due to the unique helical structure of the microrobot. By altering the external magnetic field, the direction of the spiral microrobot's motion can be easily reversed.
At Air-Fluid Interface locomotion
In the specific instance when microrobots are at the air-fluid interface, they can take advantage of surface tension and of forces arising from capillarity. At the point where air and a liquid, most often water, come together, an interface can be established that supports the weight of the microrobots through surface tension. Cohesion between molecules of the liquid creates surface tension, which forms an effective 'skin' over the water's surface, letting the microrobots float instead of sinking. Through such concepts, microrobots can perform specific locomotion functions, including climbing, walking, levitating, floating, or even jumping, by exploiting the characteristics of the air-fluid interface.
Due to the surface tension σ, the buoyancy force Fb and the curvature force Fc play the most important roles, particularly in deciding whether the microrobot will float or sink on the surface of the liquid. The robot is supported when these two forces together balance its weight, Fb + Fc ≥ mg.
Fb is obtained by integrating the hydrostatic pressure over the area of the body in contact with the water; Fc is obtained by integrating the curvature pressure over this area or, alternatively, the vertical component of the surface tension along the contact perimeter.
One example of a climbing, walking microrobot that utilizes air-fluid locomotion is the Harvard Ambulatory MicroRobot with Electroadhesion (HAMR-E). Its control system is designed to let the robot operate flexibly and maneuverably in challenging environments. It can move on horizontal, vertical, and inverted planes, which is made possible by the electro-adhesion system: electric fields create electrostatic attraction that lets the robot stick to and move across different surfaces. With four compliant electro-adhesive footpads, HAMR-E can safely grip and slide over various substrates, including glass, wood, and metal. The robot has a slim, fully posable body, making it easier to perform complex movements and keep its balance on any surface.
Flying locomotion
Flying microrobots are miniature robotic systems engineered to operate in the air by emulating the flight mechanisms of insects and birds. They must generate lift and thrust and control their movement at a scale where conventional aerodynamic theory must be modified. Active flight is the most energy-intensive mode of locomotion, as the microrobot must lift its body weight while propelling itself forward. To achieve this, these microrobots mimic the movement of insect wings and generate the airflow needed to produce lift and thrust. The miniaturized wings are actuated with piezoelectric materials, which offer fine control of wing kinematics and flight dynamics.
To estimate the aerodynamic power needed to hover with flapping wings, a momentum-theory relation of the following form is used:
P ≈ m g Vi, with Vi ≈ √(m g / (2 ρ Φ L²)),
where m is the body mass, L is the wing length, Φ represents the wing flapping amplitude in radians, ρ indicates the air density, and Vi corresponds to the induced air speed surrounding the body, a consequence of the wings' flapping and rotation movements. This relation illustrates that a small insect or robotic device must impart sufficient momentum to the surrounding air to counterbalance its own weight.
Examples of flying microrobots are the RoboBee and the DelFly Nimble, which emulate the flight dynamics of bees and fruit flies, respectively. The RoboBee, developed at Harvard University, is a miniature robot that takes off, flies, and lands like a bee and can move around confined spaces; envisioned uses include autonomous pollination and search operations for missing people and objects. The DelFly Nimble, developed at the Delft University of Technology, is one of the most agile micro aerial vehicles; thanks to its minimal weight and advanced control mechanisms it can reproduce the rapid maneuvers of a fruit fly.
Types and applications
Due to their small size, microbots are potentially very cheap, and could be used in large numbers (swarm robotics) to explore environments which are too small or too dangerous for people or larger robots. It is expected that microbots will be useful in applications such as looking for survivors in collapsed buildings after an earthquake or crawling through the digestive tract. What microbots lack in brawn or computational power, they can make up for by using large numbers, as in swarms of microbots. Bioinspired microrobots have emerged as a game-changing tool in the quest for precise drug delivery. These microscopic robots are designed to navigate the human body with a degree of precision previously unimaginable.
Potential applications with demonstrated prototypes include:
Medical microbots
For example, there are biocompatible microalgae-based microrobots for active drug delivery in the brain, lungs, and gastrointestinal tract, and magnetically guided engineered bacterial microbots for 'precision targeting' in cancer therapy; both have been tested in mice.
See also
Artificial intelligence
Claytronics
Microswimmer
Biohybrid microswimmer
Nanobiotechnology#Nanomedicine
References
Robotics
Microtechnology | Microbotics | [
"Materials_science",
"Engineering"
] | 2,758 | [
"Microtechnology",
"Materials science",
"Automation",
"Robotics",
"Micro robots"
] |
184,726 | https://en.wikipedia.org/wiki/Heat%20transfer | Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy, work, or the amount of heat.
Heat transfer is a process function (or path function), as opposed to functions of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only the net difference between the initial and final states of the process.
Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface.
In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, and that is also common in the language of laymen and everyday life.
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate the prediction of conversion from any one to the others.
Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat transfer. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes.
Mechanisms
The fundamental modes of heat transfer are:
Advection
Advection is the transport mechanism of a fluid from one location to another, and is dependent on motion and momentum of that fluid.
Conduction or diffusion
The transfer of energy between objects that are in physical contact. Thermal conductivity is the property of a material to conduct heat and is evaluated primarily in terms of Fourier's law for heat conduction.
Convection
The transfer of energy between an object and its environment, due to fluid motion. The average temperature is a reference for evaluating properties related to convective heat transfer.
Radiation
The transfer of energy by the emission of electromagnetic radiation.
Advection
By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle and heating a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula:
φq = v ρ cp ΔT
where
φq is the heat flux (W/m²),
ρ is the density (kg/m³),
cp is the heat capacity at constant pressure (J/(kg·K)),
ΔT is the difference in temperature (K),
v is the velocity (m/s).
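A small sketch evaluating this advective heat flux; the fluid properties and numbers below are illustrative values for water:

```python
def advective_heat_flux(velocity, density, cp, delta_t):
    """Heat flux (W/m^2) carried by a moving fluid: phi_q = v * rho * c_p * dT."""
    return velocity * density * cp * delta_t

# Water moving at 0.1 m/s while carrying a 5 K temperature difference:
print(advective_heat_flux(0.1, 1000.0, 4186.0, 5.0))  # ~2.09e6 W/m^2
```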
Conduction
On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer from one place to another place without the movement of particles is called conduction, such as when placing a hand on a cold glass of water—heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur since air is a poor conductor of heat. Steady-state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant so that after a time, the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady state conduction, the amount of heat entering a section is equal to amount of heat coming out, since the temperature change (a measure of heat energy) is zero. An example of steady state conduction is the heat flow through walls of a warm house on a cold day—inside the house is maintained at a high temperature and, outside, the temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall and the spatial distribution of temperature in the walls will be approximately constant over time.
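A small sketch of the wall example above as one-dimensional steady-state conduction through a plane wall (Fourier's law, Q = k A ΔT / L); the material properties and geometry are typical textbook values, not taken from the text:

```python
def conduction_heat_rate(k, area, thickness, t_inside, t_outside):
    """Steady one-dimensional conduction through a plane wall (Fourier's law):
    Q = k * A * (T_inside - T_outside) / L, in watts."""
    return k * area * (t_inside - t_outside) / thickness

# 10 m^2 of brick wall (k ~ 0.7 W/(m*K)), 0.2 m thick, 20 C inside, 0 C outside:
print(conduction_heat_rate(0.7, 10.0, 0.2, 20.0, 0.0))  # 700 W
```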
Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study.
Convection
The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion, as well. Another form of convection is forced convection. In this case, the fluid is forced to flow by using a pump, fan, or other mechanical means.
Convective heat transfer, or simply, convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially the transfer of heat via mass transfer. The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction) the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction.
Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current.
Convection-cooling
Convective cooling is sometimes described as Newton's law of cooling: the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings, Q = h A (T − Tenv), where h is the heat transfer coefficient and A is the exposed surface area.
However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply.
Convection vs. conduction
In a body of fluid that is heated from underneath its container, conduction, and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong.
The Rayleigh number (Ra) is the product of the Grashof (Gr) and Prandtl (Pr) numbers. It is a measure that determines the relative strength of conduction and convection:
Ra = Gr Pr = g Δρ L³ / (μ α) = g β ΔT L³ / (ν α)
where
g is the acceleration due to gravity,
ρ is the density, with Δρ being the density difference between the lower and upper ends,
μ is the dynamic viscosity,
α is the thermal diffusivity,
β is the volume thermal expansivity (sometimes denoted α elsewhere),
T is the temperature, with ΔT the temperature difference between the lower and upper ends,
ν is the kinematic viscosity, and
L is the characteristic length.
The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system.
The buoyancy force driving the convection is roughly g Δρ L³, so the corresponding pressure is roughly g Δρ L. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals μ V / L = μ / tconv, where V is the typical fluid velocity due to convection and tconv the order of its timescale. The conduction timescale, on the other hand, is of the order of tcond = L² / α.
Convection occurs when the Rayleigh number is above 1,000–2,000.
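A small sketch evaluating the Rayleigh number for a thin heated gas layer and comparing it with the convection threshold quoted above; the property values are approximate figures for air near room temperature:

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L**3 / (nu * alpha)."""
    return g * beta * delta_t * length ** 3 / (nu * alpha)

# Air layer, 1 cm thick, 5 K temperature difference (approximate air properties):
ra = rayleigh_number(g=9.81, beta=1 / 300.0, delta_t=5.0, length=0.01,
                     nu=1.5e-5, alpha=2.2e-5)
print(ra)         # ~5e2
print(ra > 2000)  # below the ~1,000-2,000 threshold: conduction dominates here
```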
Radiation
Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across vacuum or any transparent medium (solid or fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference.
When the objects and the distances separating them are large compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan-Boltzmann equation. For an object in vacuum, the equation is:
φq = ε σ T⁴
For radiative transfer between two objects, the equation is as follows:
φq = ε σ F (Ta⁴ − Tb⁴)
where
φq is the heat flux,
ε is the emissivity (unity for a black body),
σ is the Stefan–Boltzmann constant,
F is the view factor between the two surfaces a and b, and
Ta and Tb are the absolute temperatures (in kelvins or degrees Rankine) of the two objects.
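A small sketch evaluating the two expressions above: emission from a single surface into vacuum, and net exchange between two surfaces; the emissivity, view factor, and temperatures are illustrative:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiant_flux(emissivity, temperature):
    """Flux emitted by a surface into vacuum: phi = eps * sigma * T**4."""
    return emissivity * SIGMA * temperature ** 4

def net_exchange_flux(emissivity, view_factor, t_a, t_b):
    """Net radiative flux between surfaces a and b:
    phi = eps * sigma * F * (T_a**4 - T_b**4)."""
    return emissivity * SIGMA * view_factor * (t_a ** 4 - t_b ** 4)

print(radiant_flux(0.9, 500.0))                   # ~3.19e3 W/m^2
print(net_exchange_flux(0.9, 1.0, 500.0, 300.0))  # ~2.78e3 W/m^2
```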
The blackbody limit established by the Stefan-Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale or smaller than the dominant thermal wavelength. The study of these cases is called near-field radiative heat transfer.
Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation arriving within a narrow angle, i.e. coming from a source much smaller than its distance, can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower, where it turns water into steam during the day.
The temperature reachable at the target is limited by the temperature of the hot source of radiation (by the T⁴ law, radiation flows back from the target toward the source as the target heats up). The Sun, with a surface temperature of roughly 5,800 K, allows a small probe at the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France to reach roughly 3,000 °C (about 3,300 K).
Phase transition
A phase transition, or phase change, takes place in a thermodynamic system when matter passes from one phase or state of matter to another by heat transfer. Examples of phase changes are the melting of ice and the boiling of water.
The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation.
Phase transitions involve the four fundamental states of matter:
Solid – Deposition, freezing, and solid-to-solid transformation.
Liquid – Condensation and melting / fusion.
Gas – Boiling / evaporation, recombination/ deionization, and sublimation.
Plasma – Ionization.
Boiling
The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates resulting in an abrupt change in vapor volume.
In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB).
At similar standard atmospheric pressure and high temperatures, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF).
The Leidenfrost Effect demonstrates how nucleate boiling slows heat transfer due to gas bubbles on the heater's surface. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier".
Condensation
Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several types of condensation:
Homogeneous condensation, as during the formation of fog.
Condensation in direct contact with subcooled liquid.
Condensation on direct contact with a cooling wall of a heat exchanger: This is the most common mode used in industry: Dropwise condensation is difficult to sustain reliably; therefore, industrial equipment is normally designed to operate in filmwise condensation mode.
Melting
Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state.
Modeling approaches
Heat transfer can be modeled in various ways.
Heat equation
The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.).
Lumped system analysis
Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling.
System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady-state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may change over time.
In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
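A small sketch computing the Biot number and, for a small Biot number, the exponential cooling predicted by the lumped-capacitance model (Newton's law of cooling); the copper-sphere properties and convection coefficient are illustrative values:

```python
import math

def biot_number(h, char_length, k):
    """Bi = h * L_c / k; the lumped model is usually taken as valid for Bi < 0.1."""
    return h * char_length / k

def lumped_temperature(t0, t_env, h, area, rho, volume, cp, time):
    """T(t) = T_env + (T0 - T_env) * exp(-h*A*t / (rho*V*c_p))."""
    tau = rho * volume * cp / (h * area)  # thermal time constant, seconds
    return t_env + (t0 - t_env) * math.exp(-time / tau)

# A 1 cm radius copper sphere (k ~ 400 W/(m*K)) cooling in still air:
r = 0.01
volume = 4.0 / 3.0 * math.pi * r ** 3
area = 4.0 * math.pi * r ** 2
print(biot_number(h=15.0, char_length=volume / area, k=400.0))  # ~1e-4: lumped model applies
print(lumped_temperature(100.0, 20.0, 15.0, area, 8960.0, volume, 385.0, time=600.0))
```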
Climate models
Climate models study the radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice.
Engineering
Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, chemical engineering and power station engineering.
Insulation, radiance and resistance
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property and the measurement by which an object or material resists to heat flow (heat per time unit or thermal resistance) to temperature difference.
Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator.
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity=1 - emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature.
Devices
A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work.
A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it. It is based on the Peltier effect.
A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction.
Heat exchangers
A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.
A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
Applications
Architecture
Efficient energy use is the goal to reduce the amount of energy required in heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures. For instance, insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors.
Smart meter is a device that records electric energy consumption in intervals.
Thermal transmittance is the rate of transfer of heat through a structure divided by the difference in temperature across the structure. It is expressed in watts per square meter per kelvin, or W/(m²·K). Well-insulated parts of a building have a low thermal transmittance, whereas poorly insulated parts of a building have a high thermal transmittance.
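A small sketch applying this definition as a heat-flow calculation, Q = U · A · ΔT; the U-values below are illustrative figures for a well- and a poorly-insulated wall, not taken from the text:

```python
def heat_loss_rate(u_value, area, t_inside, t_outside):
    """Heat flow through a building element: Q = U * A * (T_in - T_out), in watts."""
    return u_value * area * (t_inside - t_outside)

# Same 10 m^2 wall area and 20 K temperature difference, two insulation levels:
print(heat_loss_rate(0.25, 10.0, 20.0, 0.0))  # 50 W  (low thermal transmittance)
print(heat_loss_rate(2.0, 10.0, 20.0, 0.0))   # 400 W (high thermal transmittance)
```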
Thermostat is a device to monitor and control temperature.
Climate engineering
Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases.
An alternative method is passive daytime radiative cooling, which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer with the extremely cold temperature of outer space (~2.7 K) to lower ambient temperatures while requiring zero energy input.
Greenhouse effect
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun.
Heat transfer in the human body
The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients which provides energy for the systems of the body. The human body must maintain a consistent internal temperature to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced to keep the internal temperature at a healthy level.
Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered.
To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the thickness of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid thickness can all be related to the Reynolds Number, a dimensionless number used in fluid mechanics to characterize the flow of fluids.
Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor which removes heat from the surface of the body. The rate of evaporation heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. Therefore, the maximum of heat transfer will occur when the skin is completely wet. The body continuously loses water by evaporation but the most significant amount of heat loss occurs during periods of increased physical activity.
Cooling techniques
Evaporative cooling
Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures, the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water in the air occurs; thus, there is no cooling effect.
Laser cooling
In quantum physics, laser cooling is used to cool atomic and molecular samples to temperatures near absolute zero (−273.15 °C, −459.67 °F) in order to observe unique quantum effects that occur only at such low temperatures.
Doppler cooling is the most common method of laser cooling.
Sympathetic cooling is a process in which particles of one type cool particles of another type. Typically, atomic ions that can be directly laser-cooled are used to cool nearby ions or atoms. This technique allows the cooling of ions and atoms that cannot be laser-cooled directly.
Magnetic cooling
Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect.
Radiative cooling
Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO2) at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere.
Thermal energy storage
Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity.
History
Newton's law of cooling
In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperatures (, "degrees of heat") between the body and its surroundings. The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same.
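A minimal sketch of the law as stated above, assuming a constant rate coefficient k and an assumed initial temperature; it simply evaluates the closed-form exponential solution of dT/dt = −k (T − T_env).

```python
# Minimal sketch of Newton's law of cooling, dT/dt = -k (T - T_env),
# evaluated via its closed-form solution. k and T0 are assumed values.
import math

def temperature(t, T0, T_env, k):
    """T(t) = T_env + (T0 - T_env) * exp(-k * t)."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

T0, T_env, k = 90.0, 20.0, 0.05  # deg C, deg C, 1/min (assumed)
for t in (0, 10, 30, 60):
    print(f"t = {t:2d} min  T = {temperature(t, T0, T_env, k):.1f} degC")
```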
Thermal conduction
In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true.
Thermal convection
In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference.
Thermal radiation
In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences.
Thermal conductivity of different metals
In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities.
Benjamin Thompson's experiments on heat transfer
During the years 1784 – 1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria, reorganizing the Bavarian army for the Prince-elector Charles Theodore among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim. During his years in Mannheim and later in Munich, Thompson made a large number of discoveries and inventions related to heat.
Conductivity experiments
"New Experiments upon Heat"
In 1785, Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. The fact that good electrical conductors are often also good heat conductors and vice versa must have been well known at the time, for Thompson mentions it in passing. He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air of various degrees of rarefaction, and a "Torricellian vacuum".
For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may—unbeknownst to Thompson—have been transferred more by radiation than by conduction.
After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air, "which of itself is reckoned among the worst", but found only a very small difference between common air and rarefied air. He also noted the great difference between dry air and moist air, and the great benefit this affords.
Temperature vs. sensible heat
Thompson concluded with some comments on the important difference between temperature and sensible heat.
Coining of the term "convection"
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:

This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.

Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
See also
Combined forced and natural convection
Heat capacity
Heat transfer enhancement
Heat transfer physics
Stefan–Boltzmann law
Thermal contact conductance
Thermal physics
Thermal resistance
Citations
References
External links
A Heat Transfer Textbook - (free download).
Thermal-FluidsPedia - An online thermal fluids encyclopedia.
Hyperphysics Article on Heat Transfer - Overview
Interseasonal Heat Transfer - a practical example of how heat transfer is used to heat buildings without burning fossil fuels.
Aspects of Heat Transfer, Cambridge University
Thermal-Fluids Central
Energy2D: Interactive Heat Transfer Simulations for Everyone
Chemical engineering
Mechanical engineering
Unit operations
Transport phenomena | Heat transfer | [
"Physics",
"Chemistry",
"Engineering"
] | 7,449 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Applied and interdisciplinary physics",
"Unit operations",
"Chemical engineering",
"Thermodynamics",
"Mechanical engineering",
"nan",
"Chemical process engineering"
] |
184,873 | https://en.wikipedia.org/wiki/Potassium%20ferricyanide | Potassium ferricyanide is the chemical compound with the formula K3[Fe(CN)6]. This bright red salt contains the octahedrally coordinated [Fe(CN)6]3− ion. It is soluble in water and its solution shows some green-yellow fluorescence. It was discovered in 1822 by Leopold Gmelin.
Preparation
Potassium ferricyanide is manufactured by passing chlorine through a solution of potassium ferrocyanide. Potassium ferricyanide separates from the solution:
2 K4[Fe(CN)6] + Cl2 → 2 K3[Fe(CN)6] + 2 KCl
Structure
Like other metal cyanides, solid potassium ferricyanide has a complicated polymeric structure. The polymer consists of octahedral [Fe(CN)6]3− centers crosslinked with K+ ions that are bound to the CN ligands. The K+---NCFe linkages break when the solid is dissolved in water.
Applications
The compound is used to harden iron and steel, in electroplating, dyeing wool, as a laboratory reagent, and as a mild oxidizing agent in organic chemistry.
Photography
Blueprint, cyanotype, toner
The compound has widespread use in blueprint drawing and in photography (the cyanotype process). Several photographic print toning processes involve the use of potassium ferricyanide. It is often used as a mild bleach at a concentration of 10 g/L to reduce film or print density.
Bleaching
Potassium ferricyanide was used as an oxidizing agent to remove silver from color negatives and positives during processing, a process called bleaching. Because potassium ferricyanide bleaches are environmentally unfriendly, short-lived, and capable of releasing hydrogen cyanide gas if mixed with high concentrations and volumes of acid, bleaches using ferric EDTA have been used in color processing since the 1972 introduction of the Kodak C-41 process. In color lithography, potassium ferricyanide is used to reduce the size of color dots without reducing their number, as a kind of manual color correction called dot etching.
Farmer's reducer
Ferricyanide is also used in black-and-white photography with sodium thiosulfate (hypo) to reduce the density of a negative or gelatin silver print where the mixture is known as Farmer's reducer; this can help offset problems from overexposure of the negative, or brighten the highlights in the print.
Reagent in organic synthesis
Potassium ferricyanide is used as an oxidant in organic chemistry. It serves as an oxidant for catalyst regeneration in Sharpless dihydroxylations.
Sensors and indicators
Potassium ferricyanide is also one of two compounds present in ferroxyl indicator solution (along with phenolphthalein) that turns blue (Prussian blue) in the presence of Fe2+ ions, and which can therefore be used to detect metal oxidation that will lead to rust. It is possible to calculate the number of moles of Fe2+ ions by using a colorimeter, because of the very intense color of Prussian blue.
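A hedged sketch of the kind of colorimetric calculation described above, using the Beer–Lambert law A = εlc; the absorbance, molar absorptivity, path length, and sample volume are assumed example values, not figures from the article.

```python
# Hedged sketch: converting a colorimeter absorbance reading into moles of
# Fe2+ via the Beer-Lambert law, A = epsilon * l * c. All numbers are
# assumed example values, not data from the article.
def concentration_from_absorbance(absorbance, epsilon, path_length_cm):
    """Return molar concentration c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_length_cm)

A = 0.45          # measured absorbance (assumed)
epsilon = 9800.0  # molar absorptivity of the blue complex, L/(mol*cm) (assumed)
l = 1.0           # cuvette path length, cm
volume_L = 0.010  # sample volume, 10 mL (assumed)

c = concentration_from_absorbance(A, epsilon, l)
print(f"c = {c:.2e} mol/L, n = {c * volume_L:.2e} mol Fe2+")
```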
In physiology experiments, potassium ferricyanide provides a means of increasing a solution's redox potential (E°' ~ 436 mV at pH 7). As such, it can oxidize reduced cytochrome c (E°' ~ 247 mV at pH 7) in isolated mitochondria. Sodium dithionite is usually used as a reducing chemical in such experiments (E°' ~ −420 mV at pH 7).
Potassium ferricyanide is used to determine the ferric reducing power of a sample (extract, chemical compound, etc.). Such a measurement is used to determine the antioxidant properties of a sample.
Potassium ferricyanide is a component of amperometric biosensors as an electron transfer agent replacing an enzyme's natural electron transfer agent such as oxygen as with the enzyme glucose oxidase. It is an ingredient in commercially available blood glucose meters for use by diabetics.
Other
Potassium ferricyanide is combined with potassium hydroxide (or sodium hydroxide as a substitute) and water to formulate Murakami's etchant. This etchant is used by metallographers to provide contrast between binder and carbide phases in cemented carbides.
Prussian blue
Prussian blue, the deep blue pigment in blue printing, is generated by the reaction of K3[Fe(CN)6] with ferrous (Fe2+) ions as well as K4[Fe(CN)6] with ferric salts.
In histology, potassium ferricyanide is used to detect ferrous iron in biological tissue. Potassium ferricyanide reacts with ferrous iron in acidic solution to produce the insoluble blue pigment, commonly referred to as Turnbull's blue or Prussian blue. To detect ferric (Fe3+) iron, potassium ferrocyanide is used instead in the Perls' Prussian blue staining method. The material formed in the Turnbull's blue reaction and the compound formed in the Prussian blue reaction are the same.
Safety
Potassium ferricyanide has low toxicity, its main hazard being that it is a mild irritant to the eyes and skin. However, under very strongly acidic conditions, highly toxic hydrogen cyanide gas is evolved, according to the equation:
6 H+ + [Fe(CN)6]3− → 6 HCN + Fe3+
For example, it will react with dilute sulfuric acid when heated, forming potassium sulfate, ferric sulfate and hydrogen cyanide.
2 K3[Fe(CN)6] + 6 H2SO4 → 3 K2SO4 + Fe2(SO4)3 + 12 HCN
This will not occur with concentrated sulfuric acid, as hydrolysis to formic acid and dehydration to carbon monoxide take place instead.
2 K3[Fe(CN)6] + 12 H2SO4 + 12 H2O → 3 K2SO4 + 6 (NH4)2SO4 + Fe2(SO4)3 + 12 CO
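For illustration only, a short stoichiometric calculation based on the dilute-acid equation above (6 mol HCN per mol of ferricyanide); the 10 g sample mass is an arbitrary assumption.

```python
# Illustrative stoichiometry for the dilute-acid reaction above:
# 2 K3[Fe(CN)6] + 6 H2SO4 -> 3 K2SO4 + Fe2(SO4)3 + 12 HCN,
# i.e. 6 mol HCN per mol of potassium ferricyanide.
M_FERRICYANIDE = 329.24  # g/mol, K3[Fe(CN)6]
M_HCN = 27.03            # g/mol

mass_sample_g = 10.0                           # assumed example sample
mol_ferricyanide = mass_sample_g / M_FERRICYANIDE
mol_hcn = 6 * mol_ferricyanide                 # ratio from the balanced equation
print(f"{mol_hcn:.3f} mol HCN, about {mol_hcn * M_HCN:.1f} g, could be evolved")
```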
See also
Ferricyanide
Ferrocyanide
Potassium ferrocyanide
References
Further reading
Studying redox reaction of Ferricyanide using Potentiostat Effect of different parameters using Cyclic Voltammetry
External links
International Chemical Safety Card 1132
National Pollutant Inventory – Cyanide compounds fact sheet
Potassium compounds
Iron(III) compounds
Cyano complexes
Iron complexes
Photographic chemicals
Oxidizing agents | Potassium ferricyanide | [
"Chemistry"
] | 1,415 | [
"Redox",
"Oxidizing agents"
] |
185,239 | https://en.wikipedia.org/wiki/Thermal%20radiation | Thermal radiation is electromagnetic radiation emitted by the thermal motion of particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. The emission of energy arises from a combination of electronic, molecular, and lattice oscillations in a material. Kinetic energy is converted to electromagnetism due to charge-acceleration or dipole oscillation. At room temperature, most of the emission is in the infrared (IR) spectrum, though above around 525 °C (977 °F) enough of it becomes visible for the matter to visibly glow. This visible glow is called incandescence. Thermal radiation is one of the fundamental mechanisms of heat transfer, along with conduction and convection.
The primary method by which the Sun transfers heat to the Earth is thermal radiation. This energy is partially absorbed and scattered in the atmosphere, the latter process being the reason why the sky is visibly blue. Much of the Sun's radiation transmits through the atmosphere to the surface where it is either absorbed or reflected.
Thermal radiation can be used to detect objects or phenomena normally invisible to the human eye. Thermographic cameras create an image by sensing infrared radiation. These images can represent the temperature gradient of a scene and are commonly used to locate objects at a higher temperature than their surroundings. In a dark environment where visible light is at low levels, infrared images can be used to locate animals or people due to their body temperature. Cosmic microwave background radiation is another example of thermal radiation.
Blackbody radiation is a concept used to analyze thermal radiation in idealized systems. This model applies if a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium. Planck's law describes the spectrum of blackbody radiation, and relates the radiative heat flux from a body to its temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Where blackbody radiation is not an accurate approximation, emission and absorption can be modeled using quantum electrodynamics (QED).
Overview
Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. It is present in all matter of nonzero temperature. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum.
The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum. If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so a black body has an emissivity of one.
Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter. The temperature determines the wavelength distribution of the electromagnetic radiation.
The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing).
History
Ancient Greece
Burning glasses are known to date back to about 700 BC. One of the first accurate mentions of burning glasses appears in Aristophanes's comedy, The Clouds, written in 423 BC. According to the Archimedes' heat ray anecdote, Archimedes is purported to have developed mirrors to concentrate heat rays in order to burn attacking Roman ships during the Siege of Syracuse (c. 213–212 BC), but no sources from the time have been confirmed. Catoptrics is a book attributed to Euclid on how to focus light in order to produce heat, but the book might have been written in 300 AD.
Renaissance
During the Renaissance, Santorio Santorio came up with one of the earliest thermoscopes. In 1612 he published his results on the heating effects from the Sun, and his attempts to measure heat from the Moon.
Earlier, in 1589, Giambattista della Porta reported on the heat felt on his face, emitted by a remote candle and facilitated by a concave metallic mirror. He also reported the cooling felt from a solid ice block. Della Porta's experiment would be replicated many times with increasing accuracy. It was replicated by astronomers Giovanni Antonio Magini and Christopher Heydon in 1603, and supplied instructions for Rudolf II, Holy Roman Emperor who performed it in 1611. In 1660, della Porta's experiment was updated by the Accademia del Cimento using a thermometer invented by Ferdinand II, Grand Duke of Tuscany.
Enlightenment
In 1761, Benjamin Franklin wrote a letter describing his experiments on the relationship between color and heat absorption. He found that darker color clothes got hotter when exposed to sunlight than lighter color clothes. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. He waited some time and then measured that the black pieces sank furthest into the snow of all the colors, indicating that they got the hottest and melted the most snow.
Caloric theory
Antoine Lavoisier considered that radiation of heat was concerned with the condition of the surface of a physical body rather than the material of which it was composed. Lavoisier described a poor radiator to be a substance with a polished or smooth surface, as its molecules lay in a plane closely bound together, thus creating a surface layer of caloric fluid which insulated the release of the rest within. He described a good radiator to be a substance with a rough surface, as only a small proportion of molecules held caloric within a given plane, allowing for greater escape from within. Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole.
In his first memoir, Augustin-Jean Fresnel responded to a view he extracted from a French translation of Isaac Newton's Optics. He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a body under illumination would increase indefinitely in heat.
In Marc-Auguste Pictet's famous experiment of 1790, it was reported that a thermometer detected a lower temperature when a set of mirrors were used to focus "frigorific rays" from a cold object.
In 1791, Pierre Prevost a colleague of Pictet, introduced the concept of radiative equilibrium, wherein all objects both radiate and absorb heat. When an object is cooler than its surroundings, it absorbs more heat than it emits, causing its temperature to increase until it reaches equilibrium. Even at equilibrium, it continues to radiate heat, balancing absorption and emission.
The discovery of infrared radiation is ascribed to astronomer William Herschel. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the calorific rays, beyond the red part of the spectrum, by an increase in the temperature recorded on a thermometer in that region.
Electromagnetic theory
At the end of the 19th century it was shown that the transmission of light or of radiant heat was allowed by the propagation of electromagnetic waves. Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths. All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with high frequencies. All bodies generate and receive electromagnetic waves at the expense of heat exchange.
In 1860, Gustav Kirchhoff published a mathematical description of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. This relation is known as Stefan–Boltzmann law.
Quantum theory
The microscopic theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck noted that energy was emitted in quanta whose size depends on the frequency of vibration, similarly to the wave theory. The energy E of an electromagnetic wave in vacuum is found by the expression E = hf, where h is the Planck constant and f is its frequency.
Bodies at higher temperatures emit radiation at higher frequencies with an increasing energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred as "radiation", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies on the nature of a surface and its temperature.
Radiation waves may travel in unusual patterns compared to conduction heat flow. Radiation allows waves to travel from a heated body through a cold non-absorbing or partially absorbing medium and reach a warmer body again. An example is the case of the radiation waves that travel from the Sun to the Earth.
Characteristics
Frequency
Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter as shown in the diagram at top.
The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range cause it to appear white to the human eye; it is white hot. Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law. In the diagram the peak value for each curve moves to the left as the temperature increases.
Relationship to temperature
The total radiation intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law. A kitchen oven, at a temperature about double room temperature on the absolute temperature scale (600 K vs. 300 K) radiates 16 times as much power per unit area. An object at the temperature of the filament in an incandescent light bulb—roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area.
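A quick numerical check of the fourth-power scaling quoted above; since only ratios are taken, the Stefan–Boltzmann constant cancels.

```python
# Quick check of the T^4 scaling described above (ratios only, so the
# Stefan-Boltzmann constant cancels out).
def power_ratio(t_hot, t_cold):
    """Ratio of power emitted per unit area at two absolute temperatures."""
    return (t_hot / t_cold) ** 4

print(power_ratio(600, 300))   # kitchen oven vs. room temperature: 16.0
print(power_ratio(3000, 300))  # incandescent filament vs. room: 10000.0
```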
As for photon statistics, thermal light obeys Super-Poissonian statistics.
Appearance
When the temperature of a body is high enough, its thermal radiation spectrum becomes strong enough in the visible range to visibly glow. The visible component of thermal radiation is sometimes called incandescence,
though this term can also refer to thermal radiation in general. The term derives from the Latin verb incandescere, 'to glow white'.
In practice, virtually all solid or liquid substances start to glow around 798 K (525 °C), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point. The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible.
Reciprocity
The rate of electromagnetic radiation emitted by a body at a given frequency is proportional to the rate that the body absorbs radiation at that frequency, a property known as reciprocity. Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization, and even coherence. It is therefore possible to have thermal radiation which is polarized, coherent, and directional; though polarized and coherent sources are fairly rare in nature.
Fundamental principles
Thermal radiation is one of the three principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. Other mechanisms are convection and conduction.
Electromagnetic waves
Thermal radiation is characteristically different from conduction and convection in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum. Thermal radiation is a type of electromagnetic radiation which is often modeled by the propagation of waves. These waves have the standard wave properties of frequency, f, and wavelength, λ, which are related by the equation c = λf, where c is the speed of light in the medium.
Irradiation
Thermal irradiation is the rate at which radiation is incident upon a surface per unit area. It is measured in watts per square meter. Irradiation can either be reflected, absorbed, or transmitted. The components of irradiation can then be characterized by the equation

α + ρ + τ = 1

where α, ρ and τ represent the absorptivity, reflectivity and transmissivity. These components are a function of the wavelength of the electromagnetic wave as well as the material properties of the medium.
Absorptivity and emissivity
The spectral absorptivity is equal to the spectral emissivity; this relation is known as Kirchhoff's law of thermal radiation. An object is called a black body if this holds for all frequencies and the following formula applies:

α = ε = 1
If objects appear white (reflective in the visual spectrum), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared – see the diagram at the left. Most household radiators are painted white, which is sensible given that they are not hot enough to radiate any significant amount of heat, and are not designed as thermal radiators at all – instead, they are actually convectors, and painting them matt black would make little difference to their efficacy. Acrylic and urethane based white paints have 93% blackbody radiation efficiency at room temperature (meaning the term "black body" does not always correspond to the visually perceived color of an object). These materials that do not follow the "black color = high emissivity/absorptivity" caveat will most likely have functional spectral emissivity/absorptivity dependence.
Only truly gray systems (relative equivalent emissivity/absorptivity and no directional transmissivity dependence in all control volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan-Boltzmann law. Encountering this "ideally calculable" situation is almost impossible (although common engineering procedures surrender the dependency of these unknown variables and "assume" this to be the case). Optimistically, these "gray" approximations will get close to real solutions, as most divergence from Stefan-Boltzmann solutions is very small (especially in most standard temperature and pressure lab controlled environments).
Reflectivity
Reflectivity deviates from the other properties in that it is bidirectional in nature. In other words, this property depends on the direction of the incident of radiation as well as the direction of the reflection. Therefore, the reflected rays of a radiation spectrum incident on a real surface in a specified direction forms an irregular shape that is not easily predictable. In practice, surfaces are often assumed to reflect either in a perfectly specular or a diffuse manner. In a specular reflection, the angles of reflection and incidence are equal. In diffuse reflection, radiation is reflected equally in all directions. Reflection from smooth and polished surfaces can be assumed to be specular reflection, whereas reflection from rough surfaces approximates diffuse reflection. In radiation analysis a surface is defined as smooth if the height of the surface roughness is much smaller relative to the wavelength of the incident radiation.
Transmissivity
A medium that experiences no transmission (τ = 0) is opaque, in which case absorptivity and reflectivity sum to unity: α + ρ = 1.
Radiation intensity
Radiation emitted from a surface can propagate in any direction from the surface. Irradiation can also be incident upon a surface from any direction. The amount of irradiation on a surface is therefore dependent on the relative orientation of both the emitter and the receiver. The parameter radiation intensity is used to quantify how much radiation makes it from one surface to another.
Radiation intensity is often modeled using a spherical coordinate system.
Emissive power
Emissive power is the rate at which radiation is emitted per unit area. It is a measure of heat flux. The total emissive power from a surface is denoted E and can be determined by integrating the radiation intensity over the hemisphere above the surface:

E = ∫ I cos θ dω

where the solid angle ω is in units of steradians and I is the total intensity.
The total emissive power can also be found by integrating the spectral emissive power E_λ over all possible wavelengths: E = ∫ E_λ dλ, where the integral runs from λ = 0 to ∞ and λ represents wavelength.
The spectral emissive power can also be determined from the spectral intensity by the same hemispherical integration, E_λ = ∫ I_λ cos θ dω, where both the spectral emissive power and the spectral intensity are functions of wavelength.
Blackbody radiation
A "black body" is a body which has the property of allowing all incident rays to enter without surface reflection and not allowing them to leave again.
Blackbodies are idealized surfaces that act as the perfect absorber and emitter. They serve as the standard against which real surfaces are compared when characterizing thermal radiation. A blackbody is defined by three characteristics:
A blackbody absorbs all incident radiation, regardless of wavelength and direction.
No surface can emit more energy than a blackbody for a given temperature and wavelength.
A blackbody is a diffuse emitter.
The Planck distribution
The spectral intensity of a blackbody, I_λ,b, was first determined by Max Planck. It is given by Planck's law per unit wavelength as:

I_λ,b(λ, T) = (2hc^2 / λ^5) · 1 / (exp(hc / (λ k_B T)) − 1)

This formula mathematically follows from calculation of the spectral distribution of energy in a quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. Planck's law shows that radiative energy increases with temperature, and explains why the peak of an emission spectrum shifts to shorter wavelengths at higher temperatures. It can also be found that energy emitted at shorter wavelengths increases more rapidly with temperature relative to longer wavelengths.
The equation is derived as an infinite sum over all possible frequencies in a semi-sphere region. The energy, , of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied.
Stefan-Boltzmann law
The Planck distribution can be used to find the spectral emissive power of a blackbody: since a blackbody is a diffuse emitter, E_λ,b = π I_λ,b.

The total emissive power of a blackbody is then calculated by integrating E_λ,b over all wavelengths. The solution of this integral yields a remarkably elegant equation for the total emissive power of a blackbody, the Stefan–Boltzmann law, which is given as

E_b = σ T^4

where σ is the Stefan–Boltzmann constant.
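As a rough numerical cross-check (not part of the original text), integrating the blackbody spectral emissive power π·I_λ,b over wavelength with a simple trapezoidal rule should reproduce σT^4; the temperature and wavelength grid below are arbitrary illustrative choices.

```python
# Numerical sanity check: integrating pi * I_lambda,b over wavelength with a
# simple trapezoidal rule should reproduce sigma * T^4.
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI units
sigma = 5.670e-8                          # W m^-2 K^-4

def spectral_emissive_power(lam, T):
    """pi * Planck spectral radiance per unit wavelength, W m^-2 per metre."""
    return (2 * math.pi * h * c**2 / lam**5) / (math.exp(h * c / (lam * kB * T)) - 1)

T = 1000.0
n = 20000
lams = [1e-7 + i * (1e-4 - 1e-7) / n for i in range(n + 1)]
vals = [spectral_emissive_power(l, T) for l in lams]
dlam = lams[1] - lams[0]
integral = sum((a + b) / 2 * dlam for a, b in zip(vals, vals[1:]))
print(integral, sigma * T**4)  # both close to 5.67e4 W/m^2
```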
Wien's displacement law
The wavelength for which the emission intensity is highest is given by Wien's displacement law as λ_max = b / T, where b ≈ 2.898 × 10^−3 m·K is the Wien displacement constant and T is the absolute temperature.
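Evaluating the law for two temperatures used elsewhere in this article (about 6000 K for the Sun's photosphere and about 300 K for Earth's surface) gives peaks in the visible and in the thermal infrared, respectively.

```python
# Evaluating Wien's displacement law, lambda_max = b / T, for two temperatures
# mentioned in the article.
b = 2.898e-3  # Wien displacement constant, m*K

for label, T in (("Sun's photosphere", 6000.0), ("Earth's surface", 300.0)):
    print(f"{label}: peak wavelength ~ {b / T * 1e6:.2f} micrometres")
```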
Emission from non-black surfaces
For surfaces which are not black bodies, one has to consider the (generally frequency dependent) emissivity factor ε. This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor: P = ε σ A T^4, where A is the radiating surface area and T its absolute temperature.
This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a grey body. For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, though in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well since the weight of the curve around the peak emission tends to dominate the integral.
Heat transfer between surfaces
Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings' requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors, which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy, boiler and furnace design and raytraced computer graphics.
The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface.
Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder.
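As a sketch of one such particular arrangement, the net flux between two large parallel grey plates can be written q = σ(T1^4 − T2^4)/(1/ε1 + 1/ε2 − 1), a standard textbook result; the temperatures and emissivities below are assumed example values.

```python
# Sketch: net radiative flux between two large parallel grey plates,
# q = sigma * (T1^4 - T2^4) / (1/eps1 + 1/eps2 - 1). Values are assumed.
sigma = 5.670e-8  # W m^-2 K^-4

def parallel_plate_flux(T1, T2, eps1, eps2):
    return sigma * (T1**4 - T2**4) / (1 / eps1 + 1 / eps2 - 1)

print(f"{parallel_plate_flux(500.0, 300.0, 0.8, 0.6):.0f} W/m^2")  # ~1600 W/m^2
```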
Applications
Thermal radiation is an important factor of many engineering applications, especially for those dealing with high temperatures.
Solar energy
Sunlight is the incandescence of the "white hot" surface of the Sun. Electromagnetic radiation from the sun has a peak wavelength of about 550 nm, and can be harvested to generate heat or electricity.
Thermal radiation can be concentrated on a tiny spot via reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy. Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water to the temperatures needed to raise steam.
A selective surface can be used when energy is being extracted from the sun. Selective surfaces are surfaces tuned to maximize the amount of energy they absorb from the sun's radiation while minimizing the amount of energy they lose to their own thermal radiation. Selective surfaces can also be used on solar collectors.
Incandescent light bulbs
The incandescent light bulb creates light by heating a filament to a temperature at which it emits significant visible thermal radiation. For a tungsten filament at a typical temperature of 3000 K, only a small fraction of the emitted radiation is visible, and the majority is infrared light. This infrared light does not help a person see, but still transfers heat to the environment, making incandescent lights relatively inefficient as a light source.
If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps.
More efficient light sources, such as fluorescent lamps and LEDs, do not function by incandescence.
Thermal comfort
Thermal radiation plays a crucial role in human comfort, influencing perceived temperature sensation. Various technologies have been developed to enhance thermal comfort, including personal heating and cooling devices.
The mean radiant temperature is a metric used to quantify the exchange of radiant heat between a human and their surrounding environment.
Personal heating
Radiant personal heaters are devices that convert energy into infrared radiation that are designed to increase a user's perceived temperature. They typically are either gas-powered or electric. In domestic and commercial applications, gas-powered radiant heaters can produce a higher heat flux than electric heaters which are limited by the amount of current that can be drawn through a circuit breaker.
Personal cooling
Personalized cooling technology is an example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics for personalized cooling applications have been proposed that enable infrared transmission to directly pass through clothing, while being opaque at visible wavelengths, allowing the wearer to remain cooler.
Windows
Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. To reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low emissivity coating can be placed on the interior of the surface. "Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow". Adding this coating limits the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside the window.
Spacecraft
Shiny metal surfaces have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft.
Since any electromagnetic radiation, including thermal radiation, conveys momentum as well as energy, thermal radiation also induces very small forces on the radiating or absorbing objects. Normally these forces are negligible, but they must be taken into account when considering spacecraft navigation. The Pioneer anomaly, where the motion of the craft slightly deviated from that expected from gravity alone, was eventually tracked down to asymmetric thermal radiation from the spacecraft. Similarly, the orbits of asteroids are perturbed since the asteroid absorbs solar radiation on the side facing the Sun, but then re-emits the energy at a different angle as the rotation of the asteroid carries the warm surface out of the Sun's view (the YORP effect).
Nanostructures
Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, e.g., for daytime radiative cooling of photovoltaic cells and buildings. These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of the outer space as a very low temperature heat sink.
Health and safety
Metabolic temperature regulation
In a practical, room-temperature setting, humans lose considerable energy due to infrared thermal radiation in addition to that lost by conduction to air (aided by concurrent convection, or other air movement like drafts). The heat energy lost is partially regained by absorbing heat radiation from walls or other surroundings. Human skin has an emissivity of very close to 1.0. A human, having a surface area of roughly 2 m2 and a temperature of about 307 K, continuously radiates approximately 1000 W. If people are indoors, surrounded by surfaces at 296 K, they receive back about 900 W from the walls, ceiling, and other surroundings, resulting in a net loss of about 100 W. These estimates are highly dependent on extrinsic variables, such as wearing clothes.
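The figures quoted above can be roughly reproduced with the Stefan–Boltzmann law, taking the emissivity as approximately 1; this is an illustrative back-of-the-envelope check, not a physiological model.

```python
# Back-of-the-envelope check of the figures above, taking emissivity ~ 1.
sigma = 5.670e-8                # W m^-2 K^-4
area = 2.0                      # m^2, approximate body surface area
T_skin, T_walls = 307.0, 296.0  # K

emitted = sigma * area * T_skin**4
absorbed = sigma * area * T_walls**4
print(f"emitted ~ {emitted:.0f} W, absorbed ~ {absorbed:.0f} W, "
      f"net loss ~ {emitted - absorbed:.0f} W")
```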
Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less. However, color makes little difference in the heat transfer between an object at everyday temperatures and its surroundings. This is because the dominant emitted wavelengths are not in the visible spectrum, but rather infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infra-red, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit.
Burns
Thermal radiation is a phenomenon that can burn skin and ignite flammable materials. The time to damage from exposure to thermal radiation is a function of the rate of delivery of the heat.
Near-field radiative heat transfer
At distances on the scale of the wavelength of a radiated electromagnetic wave or smaller, Planck's law is not accurate. For objects this small and close together, the quantum tunneling of EM waves has a significant impact on the rate of radiation.
A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface. For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence.
Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of the radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law predictions. This deviation is especially strong (up to several orders in magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs.
Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps, including wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law. Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications.
See also
Incandescence
Infrared photography
Interior radiation control coating
Heat transfer
Microwave Radiation
Planck radiation
Radiant cooling
Sakuma–Hattori equation
Thermal dose unit
View factor
References
Further reading
E.M. Sparrow and R.D. Cess. Radiation Heat Transfer. Hemisphere Publishing Corporation, 1978.
Kuenzer, C. and S. Dech (2013): Thermal Infrared Remote Sensing: Sensors, Methods, Applications (= Remote Sensing and Digital Image Processing 17). Dordrecht: Springer.
External links
Black Body Emission Calculator
Heat transfer
Atmospheric Radiation
Infrared Temperature Calibration 101
Electromagnetic radiation
Heat transfer
Thermodynamics
Temperature
Infrared | Thermal radiation | [
"Physics",
"Chemistry",
"Mathematics"
] | 6,631 | [
"Transport phenomena",
"Scalar physical quantities",
"Temperature",
"Heat transfer",
"Physical phenomena",
"Spectrum (physical sciences)",
"Physical quantities",
"Thermodynamic properties",
"Electromagnetic radiation",
"SI base quantities",
"Intensive quantities",
"Electromagnetic spectrum",
... |
185,259 | https://en.wikipedia.org/wiki/Germ%20theory%20of%20disease | The germ theory of disease is the currently accepted scientific theory for many diseases. It states that microorganisms known as pathogens or "germs" can cause disease. These small organisms, which are too small to be seen without magnification, invade humans, other animals, and other living hosts. Their growth and reproduction within their hosts can cause disease. "Germ" refers to not just a bacterium but to any type of microorganism, such as protists or fungi, or other pathogens that can cause disease, such as viruses, prions, or viroids. Diseases caused by pathogens are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease, and whether a potential host individual becomes infected when exposed to the pathogen. Pathogens are disease-carrying agents that can pass from one individual to another, both in humans and animals. Infectious diseases are caused by biological agents such as pathogenic microorganisms (viruses, bacteria, and fungi) as well as parasites.
Basic forms of germ theory were proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. However, such views were held in disdain in Europe, where Galen's miasma theory remained dominant among scientists and doctors.
By the early 19th century, the first vaccine, smallpox vaccination was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. A transitional period began in the late 1850s with the work of Louis Pasteur. This work was later extended by Robert Koch in the 1880s. By the end of that decade, the miasma theory was struggling to compete with the germ theory of disease. Viruses were initially discovered in the 1890s. Eventually, a "golden era" of bacteriology ensued, during which the germ theory quickly led to the identification of the actual organisms that cause many diseases.
Miasma theory
The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century; it is no longer accepted as a correct explanation for disease by the scientific community. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (Ancient Greek: μίασμα, "pollution"), a noxious form of "bad air" emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors.
Development of germ theory
Greece and Rome
In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others.
One theory of the spread of contagious diseases that were not spread by direct contact was that they were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem De rerum natura (On the Nature of Things, 1st century BC), the Roman poet Lucretius (c. 99 – c. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested.
The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases."
The Greek physician Galen (AD 129 – c. 216) speculated in his On Initial Causes that some patients might have "seeds of fever". In his On the Different Types of Fever, Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. And in his Epidemics, Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen.
The Middle Ages
A hybrid form of miasma and contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025). He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt.
During the early Middle Ages, Isidore of Seville (c. 560–636) mentioned "plague-bearing seeds" (pestifera semina) in his On the Nature of Things. Later, in 1345, Tommaso del Garbo (died 1370) of Bologna, Italy mentioned Galen's "seeds of plague" in his work Commentaria non-parum utilia in libros Galeni (Helpful commentaries on the books of Galen).
The 16th century Reformer Martin Luther appears to have had some idea of the contagion theory, commenting, "I have survived three plagues and visited several people who had two plague spots which I touched. But it did not hurt me, thank God. Afterwards when I returned home, I took up Margaret," (born 1534), "who was then a baby, and put my unwashed hands on her face, because I had forgotten; otherwise I should not have done it, which would have been tempting God." In 1546, Italian physician Girolamo Fracastoro published De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), a set of three books covering the nature of contagious diseases, categorization of major pathogens, and theories on preventing and treating these conditions. Fracastoro blamed "seeds of disease" that propagate through direct contact with an infected host, indirect contact with fomites, or through particles in the air.
The Early Modern Period
In 1668, Italian physician Francesco Redi published experimental evidence rejecting spontaneous generation, the theory that living creatures arise from nonliving matter. He observed that maggots only arose from rotting meat that was uncovered. When meat was left in jars covered by gauze, the maggots would instead appear on the gauze's surface, later understood as rotting meat's smell passing through the mesh to attract flies that laid eggs.
Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology, considered "the Father of Microbiology". Leeuwenhoek is said to be the first to see and describe bacteria in 1674, yeast cells, the teeming life in a drop of water (such as algae), and the circulation of blood corpuscles in capillaries. The word "bacteria" did not exist yet, so he called these microscopic living organisms "animalcules", meaning "little animals". He was able to isolate those "very little animalcules" from different sources, such as rainwater, pond and well water, and the human mouth and intestine.
Yet German Jesuit priest and scholar Athanasius Kircher (or "Kirchner", as it is often spelled) may have observed such microorganisms prior to this. One of his books written in 1646 contains a chapter in Latin, which reads in translation: "Concerning the wonderful structure of things in nature, investigated by microscope...who would believe that vinegar and milk abound with an innumerable multitude of worms." Kircher defined the invisible organisms found in decaying bodies, meat, milk, and secretions as "worms." His studies with the microscope led him to the belief, which he was possibly the first to hold, that disease and putrefaction, or decay were caused by the presence of invisible living bodies, writing that "a number of things might be discovered in the blood of fever patients." When Rome was struck by the bubonic plague in 1656, Kircher investigated the blood of plague victims under the microscope. He noted the presence of "little worms" or "animalcules" in the blood and concluded that the disease was caused by microorganisms.
Kircher was the first to attribute infectious disease to a microscopic pathogen, inventing the germ theory of disease, which he outlined in his Scrutinium Physico-Medicum, published in Rome in 1658. Kircher's conclusion that disease was caused by microorganisms was correct, although it is likely that what he saw under the microscope were in fact red or white blood cells and not the plague agent itself. Kircher also proposed hygienic measures to prevent the spread of disease, such as isolation, quarantine, burning clothes worn by the infected, and wearing facemasks to prevent the inhalation of germs. It was Kircher who first proposed that living beings enter and exist in the blood.
In the 18th century, more proposals were made, but they struggled to catch on. In 1700, physician Nicolas Andry argued that microorganisms he called "worms" were responsible for smallpox and other diseases. In 1720, Richard Bradley theorised that the plague and "all pestilential distempers" were caused by "poisonous insects", living creatures viewable only with the help of microscopes.
In 1762, the Austrian physician Marcus Antonius von Plenciz (1705–1786) published a book titled Opera medico-physica. It outlined a theory of contagion stating that specific animalcules in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalcules are and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community.
19th and 20th centuries
Agostino Bassi, Italy
During the early 19th century, driven by economic concerns over collapsing silk production, Italian entomologist Agostino Bassi researched a silkworm disease known as "muscardine" in French and "calcinaccio" or "mal del segno" in Italian, causing white fungal spots along the caterpillar. From 1835 to 1836, Bassi published his findings that fungal spores transmitted the disease between individuals. In recommending the rapid removal of diseased caterpillars and disinfection of their surfaces, Bassi outlined methods used in modern preventative healthcare. Italian naturalist Giuseppe Gabriel Balsamo-Crivelli named the causative fungal species after Bassi, currently classified as Beauveria bassiana.
Louis-Daniel Beauperthuy, France
In 1838 French specialist in tropical medicine Louis-Daniel Beauperthuy pioneered using microscopy in relation to diseases and independently developed a theory that all infectious diseases were due to parasitic infection with "animalcules" (microorganisms). With the help of his friend M. Adele de Rosseville, he presented his theory in a formal presentation before the French Academy of Sciences in Paris. By 1853, he was convinced that malaria and yellow fever were spread by mosquitos. He even identified the particular group of mosquitos that transmit yellow fever as the "domestic species" of "striped-legged mosquito", which can be recognised as Aedes aegypti, the actual vector. He published his theory in 1854 in the Gaceta Oficial de Cumana ("Official Gazette of Cumana"). His reports were assessed by an official commission, which discarded his mosquito theory.
Ignaz Semmelweis, Austria
Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, those attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its spread, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over a period of a year. Despite this evidence, he and his theories were rejected by most of the contemporary medical establishment.
Gideon Mantell, UK
Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts on Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life".
John Snow, UK
British physician John Snow is credited as a founder of modern epidemiology for studying the 1854 Broad Street cholera outbreak. Snow criticized the Italian anatomist Giovanni Maria Lancisi for his early 18th century writings that claimed swamp miasma spread malaria, rebutting that bad air from decomposing organisms was not present in all cases. In his 1849 pamphlet On the Mode of Communication of Cholera, Snow proposed that cholera spread through the fecal–oral route, replicating in human lower intestines.
In the book's second edition, published in 1855, Snow theorized that cholera was caused by cells smaller than human epithelial cells, a view later supported by Robert Koch's 1884 identification of the bacterial species Vibrio cholerae as the causative agent. In recognizing a biological origin, Snow recommended boiling and filtering water, setting the precedent for modern boil-water advisory directives.
Through a statistical analysis tying cholera cases to specific water pumps associated with the Southwark and Vauxhall Waterworks Company, which supplied sewage-polluted water from the River Thames, Snow showed that areas supplied by this company experienced fourteen times as many deaths as residents using Lambeth Waterworks Company pumps that obtained water from the upriver, cleaner Seething Wells. While Snow received praise for convincing the Board of Guardians of St James's Parish to remove the handles of contaminated pumps, he noted that the outbreak's cases were already declining as scared residents fled the region.
Louis Pasteur, France
During the mid-19th century, French microbiologist Louis Pasteur showed that treating the female genital tract with boric acid killed the microorganisms causing postpartum infections while avoiding damage to mucous membranes.
Building on Redi's work, Pasteur disproved spontaneous generation by constructing swan-neck flasks containing nutrient broth. Because the broth fermented only when the curved tubing was removed and the contents came into direct contact with outside air, Pasteur demonstrated that microorganisms must be carried in from the environment to colonize a site, rather than arising there spontaneously.
Similar to Bassi, Pasteur extended his research on germ theory by studying pébrine, a disease that causes brown spots on silkworms. While Swiss botanist Carl Nägeli discovered the fungal species Nosema bombycis in 1857, Pasteur applied the findings to recommend improved ventilation and screening of silkworm eggs, an early form of disease surveillance.
Robert Koch, Germany
In 1884, German bacteriologist Robert Koch published four criteria for establishing causality between specific microorganisms and diseases, now known as Koch's postulates:
The microorganism must be found in abundance in all organisms with the disease, but should not be found in healthy organisms.
The microorganism must be isolated from a diseased organism and grown in pure culture.
The cultured microorganism should cause disease when introduced into a healthy organism.
The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent.
During his lifetime, Koch recognized that the postulates were not universally applicable, such as asymptomatic carriers of cholera violating the first postulate. For this same reason, the third postulate specifies "should", rather than "must", because not all host organisms exposed to an infectious agent will acquire the infection, potentially due to differences in prior exposure to the pathogen. The second postulate is also limited: it was later discovered that viruses cannot be grown in pure cultures because they are obligate intracellular parasites. Similarly, pathogenic misfolded proteins, known as prions, only spread by transmitting their structure to other proteins, rather than self-replicating.
While Koch's postulates retain historical importance for emphasizing that correlation does not imply causation, many pathogens are accepted as causative agents of specific diseases without fulfilling all of the criteria. In 1988, American microbiologist Stanley Falkow published a molecular version of Koch's postulates to establish correlation between microbial genes and virulence factors.
Joseph Lister, UK
After reading Pasteur's papers on bacterial fermentation, British surgeon Joseph Lister recognized that compound fractures, involving bones breaking through the skin, were more likely to become infected due to exposure to environmental microorganisms. He recognized that carbolic acid could be applied to the site of injury as an effective antiseptic.
See also
Alexander Fleming
Cell theory
Cooties
Epidemiology
Germ theory denialism
History of emerging infectious diseases
Robert Hooke
Rudolf Virchow
Zymotic disease
References
Further reading
Baldwin, Peter. Contagion and the State in Europe, 1830-1930 (Cambridge UP, 1999), focus on cholera, smallpox and syphilis in Britain, France, Germany and Sweden.
Brock, Thomas D. Robert Koch. A Life in Medicine and Bacteriology (1988).
Dubos, René. Louis Pasteur: Free Lance of Science (1986)
Gaynes, Robert P. Germ Theory (ASM Press, 2023), pp. 143–205 online
Geison, Gerald L. The Private Science of Louis Pasteur (Princeton University Press, 1995) online
Hudson, Robert P. Disease and Its Control: The Shaping of Modern Thought (1983)
Lawrence, Christopher, and Richard Dixey. "Practising on Principle: Joseph Lister and the Germ Theories of Disease," in Medical Theory, Surgical Practice: Studies in the History of Surgery ed. by Christopher Lawrence (Routledge, 1992), pp. 153-215.
Magner, Lois N. A history of infectious diseases and the microbial world (2008) online
Magner, Lois N. A History of Medicine (1992) pp. 305–334. online
Nutton, Vivian. "The seeds of disease: an explanation of contagion and infection from the Greeks to the Renaissance." Medical history 27.1 (1983): 1-34. online
Porter, Roy. Blood and Guts: A Short History of Medicine (2004) online
Tomes, Nancy. The gospel of germs: Men, women, and the microbe in American life (Harvard University Press, 1999) online.
Tomes, Nancy. "Moralizing the microbe: the germ theory and the moral construction of behavior in the late-nineteenth-century antituberculosis movement." in Morality and health (Routledge, 2013) pp. 271-294.
Tomes, Nancy J. "American attitudes toward the germ theory of disease: Phyllis Allen Richmond revisited." Journal of the History of Medicine and Allied Sciences 52.1 (1997): 17-50. online
Winslow, Charles-Edward Amory. The Conquest of Epidemic Disease. A Chapter in the History of Ideas (1943) online.
External links
John Horgan, "Germ Theory" (2023)
Stephen T. Abedon, Germ Theory of Disease Supplemental Lecture (98/03/28 update), www.mansfield.ohio-state.edu
William C. Campbell, The Germ Theory Timeline, germtheorytimeline.info
Science's war on infectious diseases, www.creatingtechnology.org
Biology theories
Infectious diseases
Microbiology | Germ theory of disease | [
"Chemistry",
"Biology"
] | 4,316 | [
"Microbiology",
"Biology theories",
"Microscopy"
] |
185,384 | https://en.wikipedia.org/wiki/Quetiapine | Quetiapine, sold under the brand name Seroquel among others, is an atypical antipsychotic medication used in the treatment of schizophrenia, bipolar disorder, bipolar depression, and major depressive disorder. Despite being widely prescribed as a sleep aid due to its tranquillizing effects, the benefits of such use may not outweigh the risk of undesirable side effects. It is taken orally.
Common side effects include sedation, fatigue, weight gain, constipation, and dry mouth. Other side effects include low blood pressure with standing, seizures, a prolonged erection, high blood sugar, tardive dyskinesia, and neuroleptic malignant syndrome. In older people with dementia, its use increases the risk of death. Use in the third trimester of pregnancy may result in a movement disorder in the baby for some time after birth. Quetiapine is believed to work by blocking a number of receptors, including those for serotonin and dopamine.
Quetiapine was developed in 1985 and was approved for medical use in the United States in 1997. It is available as a generic medication. In 2022, it was the most prescribed antipsychotic and 82nd most commonly prescribed medication in the United States, with more than 8 million prescriptions. It is on the World Health Organization's List of Essential Medicines.
The drug has been reported to have superior efficacy to other existing antipsychotics for the treatment of bipolar disorder, followed by olanzapine and aripiprazole, in that order. Quetiapine is currently the only antipsychotic reported to be as effective as a standalone therapy for mixed manic-depressive mood swings as it is when used in combination with an SSRI antidepressant. However, quetiapine is less effective than clozapine, amisulpride, olanzapine, risperidone, and paliperidone, in that order, in alleviating psychotic symptoms or treating schizophrenia.
Medical uses
Quetiapine is primarily used to treat schizophrenia or bipolar disorder. Quetiapine targets both positive and negative symptoms of schizophrenia.
Schizophrenia
A 2013 Cochrane review compared quetiapine with typical antipsychotic agents.
In a 2013 comparison of 15 antipsychotics in effectiveness in treating schizophrenia, quetiapine demonstrated standard effectiveness. It was 13–16% more effective than ziprasidone, chlorpromazine, and asenapine and approximately as effective as haloperidol and aripiprazole.
There is tentative evidence of the benefit of quetiapine versus placebo in schizophrenia; however, definitive conclusions are not possible due to the high rate of attrition in trials (greater than 50%) and the lack of data on economic outcomes, social functioning, or quality of life.
It is debatable whether, as a class, typical or atypical antipsychotics are more effective. Both have equal drop-out and symptom relapse rates when typicals are used at low to moderate dosages. While quetiapine causes fewer extrapyramidal side effects, it produces more sleepiness and higher rates of dry mouth.
A Cochrane review comparing quetiapine to other atypical antipsychotic agents tentatively concluded that it may be less efficacious than olanzapine and risperidone; produce fewer movement-related side effects than paliperidone, aripiprazole, ziprasidone, risperidone and olanzapine; and produce weight gain similar to risperidone, clozapine and aripiprazole. They concluded that it produces suicide attempts, suicide, death, QTc prolongation, low blood pressure, tachycardia, sedation, gynaecomastia, galactorrhoea, menstrual irregularity and changes in white blood cell count at a rate similar to first-generation antipsychotics.
Bipolar disorder
In those with bipolar disorder, quetiapine is used to treat depressive episodes; acute manic episodes associated with bipolar I disorder (as either monotherapy or adjunct therapy to lithium; valproate or lamotrigine); acute mixed episodes; and maintenance treatment of bipolar I disorder (as adjunct therapy to lithium or divalproex).
Major depressive disorder
Quetiapine is effective when used by itself and when used along with other medications in major depressive disorder (MDD). However, sedation is often an undesirable side effect.
In the United States, the United Kingdom and Australia (while not subsidised by the Australian Pharmaceutical Benefits Scheme for treatment of MDD), quetiapine is licensed for use as an add-on treatment in MDD.
Alzheimer's disease
Quetiapine does not decrease agitation among people with Alzheimer's disease. Quetiapine worsens intellectual functioning in the elderly with dementia and therefore is not recommended.
Insomnia
The use of low doses of quetiapine for insomnia, while common, is not recommended; there is little evidence of benefit and concerns regarding adverse effects. A 2022 network meta-analysis of 154 double-blind, randomized controlled trials of drug therapies vs. placebo for insomnia in adults found that quetiapine did not demonstrate any short-term benefits in sleep quality. Quetiapine, specifically, had an effect size (standardized mean difference) against placebo for treatment of insomnia of 0.05 (95% CI, –1.21 to 1.11) at 4 weeks of treatment, with the certainty of evidence rated as very low. Doses of quetiapine used for insomnia have ranged from 12.5 to 800 mg, with low doses of 25 to 200 mg being the most typical. Regardless of the dose used, some of the more serious adverse effects may still occur at the lower dosing ranges, such as dyslipidemia and neutropenia. These safety concerns at low doses are corroborated by Danish observational studies that showed use of specifically low-dose quetiapine (prescriptions filled for tablet strengths >50 mg were excluded) was associated with an increased risk of major cardiovascular events as compared to use of Z-drugs, with most of the risk being driven by cardiovascular death. Laboratory data from an unpublished analysis of the same cohort also support the lack of dose-dependency of metabolic side effects, as new use of low-dose quetiapine was associated with a risk of increased fasting triglycerides at 1-year follow-up.
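For readers unfamiliar with the statistic quoted above, the standardized mean difference is, in general terms, the between-group difference in mean outcome divided by the pooled standard deviation; a minimal sketch of the usual definition (not specific to the cited meta-analysis) is:

```latex
\mathrm{SMD} \;=\; \frac{\bar{x}_{\text{drug}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
```

A value near 0, as reported here, indicates essentially no difference between drug and placebo on the sleep-quality outcome.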
Others
It is sometimes used off-label, often as an augmentation agent, to treat conditions such as Tourette syndrome, musical hallucinations and anxiety disorders.
Quetiapine and clozapine are the most widely used medications for the treatment of Parkinson's disease psychosis due to their relatively low extrapyramidal side-effect liability. Owing to the risks associated with clozapine (e.g. agranulocytosis, diabetes mellitus, etc.), clinicians often attempt treatment with quetiapine first, although the evidence to support quetiapine's use for this indication is significantly weaker than that of clozapine.
Adverse effects
Sources for incidence lists:
Very common (>10% incidence) adverse effects
Dry mouth
Dizziness
Headache
Somnolence (drowsiness; of 15 antipsychotics quetiapine causes the 5th most sedation. Extended release (XR) formulations tend to produce less sedation, dose-by-dose, than the immediate release formulations.)
Common (1–10% incidence) adverse effects
High blood pressure
Orthostatic hypotension
High pulse rate
High blood cholesterol
Elevated serum triglycerides
Abdominal pain
Constipation
Increased appetite
Vomiting
Increased liver enzymes
Backache
Asthenia
Insomnia
Lethargy
Tremor
Agitation
Nasal congestion
Pharyngitis
Fatigue
Pain
Dyspepsia (Indigestion)
Peripheral oedema
Dysphagia
Extrapyramidal disease: Quetiapine and clozapine are noted for their relative lack of extrapyramidal side effects.
Weight gain: SMD 0.43 kg when compared to placebo. Produces roughly as much weight gain as risperidone, less weight gain than clozapine, olanzapine and zotepine and more weight gain than ziprasidone, lurasidone, aripiprazole and asenapine. As with many other atypical antipsychotics, this action is likely due to its actions at the H1 histamine receptor and 5-HT2C receptor.
Rare (<1% incidence) adverse effects
Prolonged QT interval (had an odds ratio for prolonging the QT interval over placebo of 0.17)
Sudden cardiac death
Syncope
Diabetic ketoacidosis
Restless legs syndrome
Hyponatraemia, low blood sodium.
Jaundice, yellowing of the eyes, skin and mucous membranes due to an impaired ability of the body to clear bilirubin, a by product of haem breakdown.
Pancreatitis, pancreas swelling.
Agranulocytosis, a potentially fatal drop in white blood cell count.
Leukopenia, a drop in white blood cell count, not as severe as agranulocytosis.
Neutropenia, a drop in neutrophils, the immune cells that defend the body against bacterial infections.
Eosinophilia
Anaphylaxis, a potentially fatal allergic reaction.
Seizure
Hypothyroidism, underactive thyroid gland.
Myocarditis, swelling of the myocardium.
Cardiomyopathy
Hepatitis, swelling of the liver.
Suicidal ideation
Priapism. A prolonged and painful erection.
Stevens–Johnson syndrome. A potentially fatal skin reaction.
Neuroleptic malignant syndrome, a rare and potentially fatal complication of antipsychotic drug treatment. It is characterised by symptoms such as tremor, rigidity, hyperthermia, tachycardia, and mental status changes (e.g. confusion).
Tardive dyskinesia. A rare and often irreversible neurological condition characterised by involuntary movements of the face, tongue, lips and the rest of the body. It most commonly occurs after prolonged treatment with antipsychotics. It is believed to be particularly uncommon with atypical antipsychotics, especially quetiapine and clozapine.
Both typical and atypical antipsychotics can cause tardive dyskinesia. According to one study, rates are lower with the atypicals at 3.9% as opposed to the typicals at 5.5%. Although quetiapine and clozapine are atypical antipsychotics, switching to these atypicals is an option to minimize symptoms of tardive dyskinesia caused by other atypicals.
Weight gain can be a problem for some, with quetiapine causing more weight gain than fluphenazine, haloperidol, loxapine, molindone, olanzapine, pimozide, risperidone, thioridazine, thiothixene, trifluoperazine, and ziprasidone, but less than chlorpromazine, clozapine, perphenazine, and sertindole.
As with some other anti-psychotics, quetiapine may lower the seizure threshold, and should be taken with caution in combination with drugs such as bupropion.
Discontinuation
The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time.
There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in reoccurrence of the condition that is being treated. Rarely tardive dyskinesia can occur when the medication is stopped.
Pregnancy and lactation
Placental exposure is least for quetiapine compared to other atypical antipsychotics. The evidence is insufficient to rule out any risk to the foetus but available data suggests it is unlikely to result in any major foetal malformations. It is secreted in breast milk and hence quetiapine-treated mothers are advised not to breastfeed.
Abuse potential
In contrast to most other antipsychotic drugs, which tend to be somewhat aversive and often show problems with patient compliance with prescribed medication regimes, quetiapine is sometimes associated with drug misuse and abuse potential, for its hypnotic and sedative effects. It has a limited potential for misuse, usually only in individuals with a history of polysubstance abuse and/or mental illness, and especially in those incarcerated in prisons or secure psychiatric facilities where access to alternative intoxicants is more limited. To a significantly greater extent than other atypical antipsychotic drugs, quetiapine was found to be associated with drug-seeking behaviors, and to have standardised street prices and slang terms associated with it, either by itself or in combination with other drugs (such as "Q-ball" for the intravenous injection of quetiapine mixed with cocaine). The pharmacological basis for this distinction from other second generation antipsychotic drugs is unclear, though it has been suggested that quetiapine's comparatively lower dopamine receptor affinity and strong antihistamine activity might mean it could be regarded as more similar to sedating antihistamines in this context. While these issues have not been regarded as sufficient cause for placing quetiapine under increased legal controls, prescribers have been urged to show caution when prescribing quetiapine to individuals with characteristics that might place them at increased risk for drug misuse.
Overdose
Most instances of acute overdosage result in only sedation, hypotension and tachycardia, but cardiac arrhythmia, coma and death have occurred in adults. Serum or plasma quetiapine concentrations are usually in the 1–10 mg/L range in overdose survivors, while postmortem blood levels of 10–25 mg/L are generally observed in fatal cases. Reported non-toxic and toxic concentration ranges in postmortem blood overlap: levels up to around 0.8 mg/kg have been found without signs of toxicity, while toxicity has been reported at levels as low as 0.35 mg/kg.
Pharmacology
Pharmacodynamics
Quetiapine has the following pharmacological actions:
Dopamine D1, D2, D3, D4, and D5 receptor antagonist
Serotonin 5-HT1A receptor partial agonist, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT3, 5-HT6, and 5-HT7 receptor antagonist, and 5-HT1B, 5-HT1D, 5-HT1E, and 5-HT1F receptor ligand
α1- and α2-adrenergic receptor antagonist
Histamine H1 receptor antagonist
Muscarinic acetylcholine receptor antagonist
This means quetiapine is a dopamine, serotonin, and adrenergic antagonist, and a potent antihistamine with some anticholinergic properties. Quetiapine binds strongly to serotonin receptors; the drug acts as a partial agonist at 5-HT1A receptors and as an antagonist to all other serotonin receptors it has affinity for. Serial PET scans evaluating the D2 receptor occupancy of quetiapine have demonstrated that quetiapine very rapidly dissociates from the D2 receptor. Theoretically, this allows for normal physiological surges of dopamine to elicit normal effects in areas such as the nigrostriatal and tuberoinfundibular pathways, thus minimizing the risk of side-effects such as pseudo-parkinsonism as well as elevations in prolactin. Some of the antagonized receptors (serotonin, norepinephrine) are actually autoreceptors whose blockade tends to increase the release of neurotransmitters.
At very low doses, quetiapine acts primarily as a histamine receptor blocker (antihistamine) and α1-adrenergic blocker. When the dose is increased, quetiapine activates the adrenergic system and binds strongly to serotonin receptors and autoreceptors. At high doses, quetiapine starts blocking significant amounts of dopamine receptors. Due to the drug's sedating H1 activity, it is often prescribed at low doses for insomnia. While some feel that low doses of drugs with antihistamine effects like quetiapine and mirtazapine are safer than drugs associated with physical dependency or other risk factors, concern has been raised by some professionals that off-label prescribing has become too widespread due to underappreciated hazards.
When treating schizophrenia, antagonism of D2 receptor by quetiapine in the mesolimbic pathway relieves positive symptoms and antagonism of the 5-HT2A receptor in the frontal cortex of the brain may relieve negative symptoms and reduce severity of psychotic episodes. Quetiapine has fewer extrapyramidal side effects and is less likely to cause hyperprolactinemia when compared to other drugs used to treat schizophrenia, so is used as a first line treatment.
Pharmacokinetics
Peak levels of quetiapine occur 1.5 hours after a dose. The plasma protein binding of quetiapine is 83%. The major active metabolite of quetiapine is norquetiapine (N-desalkylquetiapine). Quetiapine has an elimination half-life of 6 or 7 hours. Its metabolite, norquetiapine, has a half-life of 9 to 12 hours. Quetiapine is excreted primarily via the kidneys (73%) and in feces (20%) after hepatic metabolism; the remainder (1%) is excreted as the drug in its unmetabolized form.
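As a rough illustration of what an elimination half-life of about 7 hours implies, the sketch below computes the fraction of a dose remaining under a simple first-order (exponential) elimination assumption; the function name and numbers are illustrative only, not a dosing model.

```python
def fraction_remaining(t_hours: float, half_life_hours: float = 7.0) -> float:
    """Fraction of drug remaining after t_hours, assuming first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# With a 7-hour half-life, roughly 9% of a dose remains after 24 hours.
print(round(fraction_remaining(24.0), 3))  # ~0.093
```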
Chemistry
Quetiapine is a tetracyclic compound and is closely related structurally to clozapine, olanzapine, loxapine, and other tetracyclic antipsychotics.
Synthesis
The synthesis of quetiapine begins with a dibenzothiazepinone. The lactam is first treated with phosphoryl chloride to produce a dibenzothiazepine. A nucleophilic substitution is used to introduce the sidechain.
History
Sustained-release
AstraZeneca submitted a new drug application for a sustained-release version of quetiapine in the United States, Canada, and the European Union in the second half of 2006 for treatment of schizophrenia.
In May 2007, the US FDA approved Seroquel XR for acute treatment of schizophrenia. During its 2007 Q2 earnings conference, AstraZeneca announced plans to launch Seroquel XR in the U.S. during August 2007. However, Seroquel XR has become available in U.S. pharmacies only after the FDA approved Seroquel XR for use as maintenance treatment for schizophrenia, in addition to acute treatment of the illness, on 16 November 2007. The company has not provided a reason for the delay of Seroquel XR's launch.
Health Canada approved sale of Seroquel XR on 27 September 2007.
In October 2008, the FDA approved Seroquel XR for the treatment of bipolar depression and bipolar mania.
In December 2008, Biovail announced that the FDA had accepted the company's ANDA to market its own version of sustained-release quetiapine. Biovail's sustained-release tablets will compete with AstraZeneca's Seroquel XR.
In December 2008, AstraZeneca notified shareholders that the FDA had asked for additional information on the company's application to expand the use of sustained-release quetiapine for treatment of depression.
Society and culture
Regulatory status
In the United States, the Food and Drug Administration (FDA) has approved quetiapine for the treatment of schizophrenia and of acute manic episodes associated with bipolar disorder (bipolar mania) and for treatment of bipolar depression. In 2009, quetiapine XR was approved as adjunctive treatment of major depressive disorder.
Quetiapine received its initial approval from the US FDA for the treatment of schizophrenia in 1997. In 2004, it received its second indication for the treatment of mania-associated bipolar disorder. In 2007 and 2008, studies were conducted on quetiapine's efficacy in treating generalized anxiety disorder and major depression.
Patent protection for the product ended in 2012; however, in a number of regions, the long-acting version remained under patent until 2017.
Lawsuits
In April 2010, the U. S. Department of Justice fined AstraZeneca $520 million for the company's aggressive marketing of Seroquel for off-label uses. According to the Department of Justice, "the company recruited doctors to serve as authors of articles that were ghostwritten by medical literature companies and about studies the doctors in question did not conduct. AstraZeneca then used those studies and articles as the basis for promotional messages about unapproved uses of Seroquel."
Multiple lawsuits have been filed in relation to quetiapine's side-effects, in particular, diabetes.
Approximately 10,000 lawsuits have been filed against AstraZeneca, alleging that quetiapine caused problems ranging from slurred speech and chronic insomnia to deaths.
Controversy
In 2004, a young man named Dan Markingson committed suicide in a controversial Seroquel clinical trial at the University of Minnesota while under an involuntary commitment order. A group of University of Minnesota bioethicists charged that the trial involved an alarming number of ethical violations.
Nurofen Plus tampering case
In August 2011, the UK's Medicines and Healthcare products Regulatory Agency (MHRA) issued a class-4 drug alert following reports that some batches of Nurofen Plus contained Seroquel XL tablets instead.
Following the issue of the Class-4 Drug Alert, Reckitt Benckiser (UK) Ltd received further reports of rogue blister strips in cartons of two additional batches of Nurofen Plus tablets. One of the new batches contained Seroquel XL 50 mg tablets and one contained the Pfizer product Neurontin 100 mg capsules.
Following discussions with the MHRA's Defective Medicines Report Centre (DMRC), Reckitt Benckiser (UK) Ltd decided to recall all remaining unexpired stock of Nurofen Plus tablets in any pack size, leading to a Class-1 Drug Alert. The contamination was later traced to in-store tampering by a customer.
References
External links
Alpha-1 blockers
Alpha-2 blockers
Antidepressants
Atypical antipsychotics
Drugs developed by AstraZeneca
Dibenzothiazepines
Ethers
Glycol ethers
H1 receptor antagonists
5-HT2A antagonists
Hypnotics
Mood stabilizers
Piperazines
Primary alcohols
Sedatives
Wikipedia medicine articles ready to translate
World Health Organization essential medicines | Quetiapine | [
"Chemistry",
"Biology"
] | 4,897 | [
"Hypnotics",
"Behavior",
"Functional groups",
"Organic compounds",
"Ethers",
"Sleep"
] |
185,427 | https://en.wikipedia.org/wiki/Function%20%28mathematics%29 | In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function.
Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept.
A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f(x) = x² + 1; in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f(x) = x² + 1, then f(4) = 4² + 1 = 17.
Given its domain and its codomain, a function is uniquely represented by the set of all pairs , called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane.
Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details.
Definition
A function from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function.
If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f(x). In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function.
A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y. One may write x ↦ f(x) instead of f, where the symbol ↦ (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming it. For example, the square function is the function x ↦ x².
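As an informal illustration, the sketch below models the square function on an assumed finite domain and codomain so that the defining condition, that each element of the domain is assigned exactly one element of the codomain, can be checked directly; the sets and names are illustrative, not part of the original text.

```python
# The square function on a small, assumed finite domain.
domain = {-2, -1, 0, 1, 2}
codomain = {0, 1, 4}

def square(x: int) -> int:
    return x * x

# Each element of the domain is assigned exactly one element of the codomain.
assert all(square(x) in codomain for x in domain)
```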
The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if is a real function, the determination of the domain of the function requires knowing the zeros of This is one of the reasons for which, in mathematical analysis, "a function may refer to a function having a proper subset of as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function.
The range or image of a function is the set of the images of all elements in the domain.
A function on a set means a function from the domain , without specifying a codomain. However, some authors use it as shorthand for saying that the function is .
Formal definition
The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets and is a subset of the set of all ordered pairs such that and The set of all these pairs is called the Cartesian product of and and denoted Thus, the above definition may be formalized as follows.
A function with domain and codomain is a binary relation between and that satisfies the two following conditions:
For every in there exists in such that
If and then
This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation):
A function is formed by three sets, the domain the codomain and the graph that satisfy the three following conditions.
Partial functions
Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation between X and Y such that, for every x in X, there is at most one y in Y such that (x, y) belongs to the relation.
Using functional notation, this means that, given x in X, either f(x) is in Y, or it is undefined.
The set of the elements x of X such that f(x) is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function.
In several areas of mathematics the term "function" refers to partial functions rather than to ordinary functions. This is typically the case when functions may be specified in a way that makes difficult or even impossible to determine their domain.
In calculus, a real-valued function of a real variable or real function is a partial function from the set of the real numbers to itself. Given a real function f, its multiplicative inverse x ↦ 1/f(x) is also a real function. The determination of the domain of definition of the multiplicative inverse of a (partial) function f amounts to computing the zeros of f, the values where the function is defined but not its multiplicative inverse.
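A small sketch of this idea, with illustrative names, is given below: the multiplicative inverse of a real function f is treated as a partial function whose domain of definition excludes the zeros of f.

```python
def multiplicative_inverse(f):
    """Return the partial function x -> 1/f(x), undefined at the zeros of f."""
    def g(x: float) -> float:
        fx = f(x)
        if fx == 0:
            raise ValueError(f"undefined at x = {x}: it is a zero of f")
        return 1.0 / fx
    return g

g = multiplicative_inverse(lambda x: x * x - 1.0)
print(g(2.0))  # 1/3 = 0.333...
# g(1.0) would raise: 1 is a zero of x**2 - 1, so it lies outside the domain of definition.
```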
Similarly, a function of a complex variable is generally a partial function with a domain of definition included in the set of the complex numbers. The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis.
In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether belongs to its domain of definition (see Halting problem).
Multivariate functions
A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed.
Formally, a function of variables is a function whose domain is a set of -tuples.
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in (x₁, …, xₙ). When using functional notation, one usually omits the parentheses surrounding tuples, writing f(x₁, …, xₙ) instead of f((x₁, …, xₙ)).
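A short sketch of the two equivalent views of a bivariate function, taking integer multiplication as in the text (the Python function names are illustrative):

```python
def multiply(x: int, y: int) -> int:
    """Multiplication written as f(x, y)."""
    return x * y

def multiply_pair(pair: tuple[int, int]) -> int:
    """The same function viewed as acting on a single ordered pair, f((x, y))."""
    x, y = pair
    return x * y

assert multiply(3, 4) == multiply_pair((3, 4)) == 12
```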
Given sets the set of all -tuples such that is called the Cartesian product of and denoted
Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain.
where the domain has the form
If all the are equal to the set of the real numbers or to the set of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables.
Notation
There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below.
Functional notation
The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f(x) or f(x₁, …, xₙ).
The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain, or an expression that can be evaluated to an element of the domain. The use of an unspecified variable x between parentheses is useful for defining a function explicitly, such as in "let f(x) = sin(x² + 1)".
When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write instead of .
Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "" for the sine function, in contrast to italic font for single-letter symbols.
The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let be a function". This is an abuse of notation that is useful for a simpler formulation.
Arrow notation
Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of is implied.
The domain and codomain can also be explicitly stated, for example:
This defines a function from the integers to the integers that returns the square of its input.
As a common application of the arrow notation, suppose f is a function in two variables, and we want to refer to a partially applied function x ↦ f(x, t₀) produced by fixing the second argument to the value t₀ without introducing a new function name. The map in question could be denoted x ↦ f(x, t₀) using the arrow notation. The expression x ↦ f(x, t₀) (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x₀, t₀) refers to the value of the function f at the point (x₀, t₀).
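The idea of fixing one argument of a two-variable function without naming a new function can be sketched as follows; the function f and the fixed value t0 = 0.0 are illustrative assumptions, not taken from the text.

```python
from functools import partial

def f(x: float, t: float) -> float:
    return x + 2 * t

t0 = 0.0
g = partial(f, t=t0)       # the map taking x to f(x, t0)
h = lambda x: f(x, t0)     # the same map, written inline

assert g(3.0) == h(3.0) == 3.0
```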
Index notation
Index notation may be used instead of functional notation. That is, instead of writing , one writes
This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element is called the th element of the sequence.
The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map (see above) would be denoted using index notation, if we define the collection of maps by the formula for all .
Dot notation
In the notation
the symbol does not represent any value; it is simply a placeholder, meaning that, if is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, may be replaced by any symbol, often an interpunct "". This may be useful for distinguishing the function from its value at .
For example, may stand for the function , and may stand for a function defined by an integral with variable upper bound: .
Specialized notations
There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above.
Functions of more than one variable
In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function can be defined as mapping any pair of real numbers to the sum of their squares, . Such a function is commonly written as and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as , .
Other terms
A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from to instead of group homomorphism from to ). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function.
Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map.
Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function.
Specifying a function
Given a function , by definition, to each element of the domain of the function , there is a unique element associated to it, the value of at . There are several ways to specify or describe how is related to , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function .
By listing function values
On a finite set, a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if the domain is a small finite set such as {1, 2, 3}, then a function can be defined simply by listing its value at each of these elements.
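Since the article's concrete values are not shown here, the following sketch uses assumed values to illustrate a function on a three-element domain given by simply listing its values.

```python
# A function on {1, 2, 3} specified by listing its value at each element (assumed values).
f = {1: 2, 2: 3, 3: 4}

assert f[2] == 3            # the value of the function at 2
assert set(f) == {1, 2, 3}  # its domain
```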
By a formula
Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain.
For example, in the above example, can be defined by the formula , for .
When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from to the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative.
For example, f(x) = √(1 + x²) defines a function whose domain is the whole set of real numbers, because 1 + x² is always positive if x is a real number. On the other hand, f(x) = √(1 − x²) defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.)
Functions can be classified by the nature of formulas that define them:
A quadratic function is a function that may be written where are constants.
More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, and are polynomial functions of .
A rational function is the same, with divisions also allowed, such as and
An algebraic function is the same, with th roots and roots of polynomials also allowed.
An elementary function is the same, with logarithms and exponential functions allowed.
Inverse and implicit functions
A function with domain and codomain , is bijective, if for every in , there is one and only one element in such that . In this case, the inverse function of is the function that maps to the element such that . For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers.
If a function is not bijective, it may occur that one can select subsets and such that the restriction of to is a bijection from to , and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval onto the interval , and its inverse function, called arccosine, maps onto . The other inverse trigonometric functions are defined similarly.
More generally, given a binary relation between two sets and , let be a subset of such that, for every there is some such that . If one has a criterion allowing selecting such a for every this defines a function called an implicit function, because it is implicitly defined by the relation .
For example, the equation of the unit circle, x² + y² = 1, defines a relation on real numbers. If −1 < x < 1, there are two possible values of y, one positive and one negative. For x = ±1, these two values both become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +1] and [−1, 0].
In this example, the equation can be solved in , giving but, in more complicated examples, this is impossible. For example, the relation defines as an implicit function of , called the Bring radical, which has as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and th roots.
The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point.
Using differential calculus
Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of that is 0 for . Another common example is the error function.
More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for .
Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging further the domain for including almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number.
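As a concrete sketch of defining a function by a power series, the exponential function can be approximated by partial sums of its series Σ xⁿ/n!; the example below is illustrative and uses only the standard library.

```python
import math

def exp_series(x: float, terms: int = 30) -> float:
    """Partial sum of the power series sum_{n >= 0} x**n / n! defining exp."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(abs(exp_series(1.0) - math.e) < 1e-12)  # True: the series converges to e at x = 1
```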
By recurrence
Functions whose domain are the nonnegative integers, known as sequences, are sometimes defined by recurrence relations.
The factorial function on the nonnegative integers (n ↦ n!) is a basic example, as it can be defined by the recurrence relation n! = n · (n − 1)! for n ≥ 1,
and the initial condition 0! = 1.
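A direct transcription of this recurrence into code (a minimal sketch) is:

```python
def factorial(n: int) -> int:
    """The factorial sequence defined by n! = n * (n - 1)! with the initial condition 0! = 1."""
    if n == 0:
        return 1                      # initial condition
    return n * factorial(n - 1)       # recurrence relation

assert [factorial(n) for n in range(6)] == [1, 1, 2, 6, 24, 120]
```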
Representing a function
A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts.
Graphs and plots
Given a function its graph is, formally, the set
In the frequent case where and are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element may be identified with a point having coordinates in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function
consisting of all points with coordinates for yields, when depicted in Cartesian coordinates, the well known parabola. If the same quadratic function with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates the plot obtained is Fermat's spiral.
Tables
A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function defined as f(x, y) = xy on a finite set of integers can be represented by the familiar multiplication table.
On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a table for the sine function might list its values, rounded to 6 decimal places, at a sequence of regularly spaced arguments.
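The following sketch illustrates this use of a table: sin(x) is estimated between tabulated points by linear interpolation, using a tiny illustrative table rather than the article's own.

```python
import math

# A tiny table of (x, sin(x)) values, rounded to 6 decimal places.
table = [(0.0, 0.000000), (0.5, 0.479426), (1.0, 0.841471)]

def interpolate(x: float) -> float:
    """Estimate sin(x) by linear interpolation between adjacent table entries."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the tabulated range")

print(round(interpolate(0.25), 3), round(math.sin(0.25), 3))  # 0.24 vs 0.247
```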
Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions.
Bar chart
A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element of the domain is represented by an interval of the -axis, and the corresponding value of the function, , is represented by a rectangle whose base is the interval corresponding to and whose height is (possibly negative, in which case the bar extends below the -axis).
General properties
This section describes general properties of functions, that are independent of specific properties of the domain and the codomain.
Standard functions
There are a number of standard functions that occur frequently:
For every set , there is a unique function, called the , or empty map, from the empty set to . The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function is not equal to if and only if , although their graphs are both the empty set.
For every set and every singleton set , there is a unique function from to , which maps every element of to . This is a surjection (see below) unless is the empty set.
Given a function the canonical surjection of onto its image is the function from to that maps to .
For every subset of a set , the inclusion map of into is the injective (see below) function that maps every element of to itself.
The identity function on a set , often denoted by , is the inclusion of into itself.
Function composition
Given two functions f : X → Y and g : Y → Z such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z defined by (g ∘ f)(x) = g(f(x)).
That is, the value of g ∘ f at x is obtained by first applying f to x to obtain f(x) and then applying g to the result to obtain g(f(x)). In this notation, the function that is applied first is always written on the right.
The composition is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. Even when both and satisfy these conditions, the composition is not necessarily commutative, that is, the functions and need not be equal, but may deliver different values for the same argument. For example, let and , then and agree just for
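A small sketch (with squaring and adding-one chosen here purely as illustrative functions) makes the non-commutativity concrete:

```python
def f(x):
    return x * x          # squaring

def g(x):
    return x + 1          # adding one

def compose(outer, inner):
    """Return the composition outer after inner, i.e. x -> outer(inner(x))."""
    return lambda x: outer(inner(x))

g_after_f = compose(g, f)     # x -> x**2 + 1
f_after_g = compose(f, g)     # x -> (x + 1)**2

print(g_after_f(2), f_after_g(2))   # 5 and 9: the two orders disagree
print(g_after_f(0), f_after_g(0))   # 1 and 1: they happen to agree at x = 0
```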
The function composition is associative in the sense that, if one of and is defined, then the other is also defined, and they are equal, that is, Therefore, it is usual to just write
The identity functions and are respectively a right identity and a left identity for functions from to . That is, if is a function with domain , and codomain , one has
Image and preimage
Let The image under of an element of the domain is . If is any subset of , then the image of under , denoted , is the subset of the codomain consisting of all images of elements of , that is,
The image of is the image of the whole domain, that is, . It is also called the range of , although the term range may also refer to the codomain.
On the other hand, the inverse image or preimage under of an element of the codomain is the set of all elements of the domain whose images under equal . In symbols, the preimage of is denoted by and is given by the equation
Likewise, the preimage of a subset of the codomain is the set of the preimages of the elements of , that is, it is the subset of the domain consisting of all elements of whose images belong to . It is denoted by and is given by the equation
For example, the preimage of under the square function is the set .
By definition of a function, the image of an element of the domain is always a single element of the codomain. However, the preimage of an element of the codomain may be empty or contain any number of elements. For example, if is the function from the integers to themselves that maps every integer to 0, then .
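For functions between finite sets, images and preimages can be computed by direct enumeration; the sketch below (with an arbitrarily chosen example function and sets) simply mirrors the definitions above:

```python
def image(f, A):
    """Image of the subset A of the domain under f."""
    return {f(x) for x in A}

def preimage(f, domain, B):
    """Preimage of the subset B of the codomain under f."""
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
domain = range(-3, 4)

print(image(square, {-2, -1, 0, 1, 2}))    # {0, 1, 4}
print(preimage(square, domain, {4}))       # {-2, 2}
print(preimage(square, domain, {3}))       # set(): a preimage may be empty
```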
If is a function, and are subsets of , and and are subsets of , then one has the following properties:
The preimage by of an element of the codomain is sometimes called, in some contexts, the fiber of under .
If a function has an inverse (see below), this inverse is denoted In this case may denote either the image by or the preimage by of . This is not a problem, as these sets are equal. The notation and may be ambiguous in the case of sets that contain some subsets as elements, such as In this case, some care may be needed, for example, by using square brackets for images and preimages of subsets and ordinary parentheses for images and preimages of elements.
Injective, surjective and bijective functions
Let be a function.
The function is injective (or one-to-one, or is an injection) if for every two different elements and of . Equivalently, is injective if and only if, for every the preimage contains at most one element. An empty function is always injective. If is not the empty set, then is injective if and only if there exists a function such that that is, if has a left inverse. Proof: If is injective, for defining , one chooses an element in (which exists as is supposed to be nonempty), and one defines by if and if Conversely, if and then and thus
The function is surjective (or onto, or is a surjection) if its range equals its codomain , that is, if, for each element of the codomain, there exists some element of the domain such that (in other words, the preimage of every is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then is surjective if and only if there exists a function such that that is, if has a right inverse. The axiom of choice is needed, because, if is surjective, one defines by where is an arbitrarily chosen element of
The function is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, is bijective if, for every the preimage contains exactly one element. The function is bijective if and only if it admits an inverse function, that is, a function such that and (Contrarily to the case of surjections, this does not require the axiom of choice; the proof is straightforward).
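For functions between finite sets these properties can be checked by brute force; the following sketch uses made-up sets and an example chosen only to echo the definitions:

```python
def is_injective(f, domain):
    values = [f(x) for x in domain]
    return len(values) == len(set(values))           # no two inputs share an image

def is_surjective(f, domain, codomain):
    return set(codomain) <= {f(x) for x in domain}   # every codomain element is hit

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

X, Y = {0, 1, 2}, {0, 1, 4}
square = lambda x: x * x
print(is_injective(square, X), is_surjective(square, X, Y), is_bijective(square, X, Y))
# True True True: restricted to X, squaring is a bijection onto Y
```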
Every function may be factorized as the composition of a surjection followed by an injection, where is the canonical surjection of onto and is the canonical injection of into . This is the canonical factorization of .
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement " maps onto " differs from " maps into ", in that the former implies that is surjective, while the latter makes no assertion about the nature of . In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical.
Restriction and extension
If is a function and is a subset of , then the restriction of to S, denoted , is the function from to defined by
for all in . Restrictions can be used to define partial inverse functions: if there is a subset of the domain of a function such that is injective, then the canonical surjection of onto its image is a bijection, and thus has an inverse function from to . One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval . The image of this restriction is the interval , and thus the restriction has an inverse function from to , which is called arccosine and is denoted .
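A small numerical sketch of this restriction idea (the sample points and the tolerance are arbitrary choices): on [0, π] the cosine is injective, so the standard library's acos recovers the argument there, while outside that interval it returns the representative in [0, π] instead.

```python
import math

# Restricted to [0, pi], the cosine is injective, so acos inverts it there.
for x in [0.0, 0.5, 1.5, 3.0]:
    assert abs(math.acos(math.cos(x)) - x) < 1e-9

# Outside the interval the restriction no longer applies:
print(math.acos(math.cos(5.0)))   # ~1.2832, i.e. 2*pi - 5, not 5
```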
Function restriction may also be used for "gluing" functions together. Let be the decomposition of as a union of subsets, and suppose that a function is defined on each such that for each pair of indices, the restrictions of and to are equal. Then this defines a unique function such that for all . This is the way that functions on manifolds are defined.
An extension of a function is a function such that is a restriction of . A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane.
Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function such that . Its domain is the set of all real numbers different from and its image is the set of all real numbers different from If one extends the real line to the projectively extended real line by including , one may extend to a bijection from the extended real line to itself by setting and .
In calculus
The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined.
Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis.
Real function
A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions.
The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval.
Functions enjoy pointwise operations, that is, if and are functions, their sum, difference and product are functions defined by
The domains of the resulting functions are the intersection of the domains of and . The quotient of two functions is defined similarly by
but the domain of the resulting function is obtained by removing the zeros of from the intersection of the domains of and .
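As a hedged sketch of these pointwise operations, a partial real function can be represented by an ordinary function together with a domain predicate (all names here are invented for the illustration); the quotient additionally removes the zeros of the denominator:

```python
def pointwise(op, f, f_dom, g, g_dom):
    """Combine two partial functions pointwise on the intersection of their domains."""
    def h(x):
        if not (f_dom(x) and g_dom(x)):
            raise ValueError("x is outside the domain of the combination")
        return op(f(x), g(x))
    return h

f, f_dom = (lambda x: x + 1), (lambda x: True)        # defined everywhere
g, g_dom = (lambda x: 1.0 / x), (lambda x: x != 0)    # defined for x != 0

prod = pointwise(lambda a, b: a * b, f, f_dom, g, g_dom)
quot = pointwise(lambda a, b: a / b, f, f_dom,
                 g, lambda x: g_dom(x) and g(x) != 0)  # also exclude zeros of g

print(prod(2.0))   # 1.5
print(quot(2.0))   # 6.0
```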
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function whose graph is a hyperbola, and whose domain is the whole real line except for 0.
The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for , is a differentiable function called the natural logarithm.
A real function is monotonic in an interval if the sign of does not depend on the choice of and in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function is monotonic in an interval , it has an inverse function, which is a real function with domain and image . This is how inverse trigonometric functions are defined in terms of trigonometric functions, where the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function.
Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. For example, the sine and the cosine functions are the solutions of the linear differential equation
such that
Vector-valued function
When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function.
Some vector-valued functions are defined on a subset of or other spaces that share geometric or topological properties of , such as manifolds. These vector-valued functions are given the name vector fields.
Function space
In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions.
Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
Multi-valued functions
Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting point there are several possible starting values for the function.
For example, in defining the square root as the inverse function of the square function, for any positive real number there are two choices for the value of the square root, one of which is positive and denoted and another which is negative and denoted These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive , one value for 0 and no value for negative .
In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps to a root of . For one may choose either for . By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval and the image is ; for the second one, the domain is and the image is ; for the last one, the domain is and the image is . As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of that has three values for , and only one value for and .
Usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets . There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy.
In the foundations of mathematics
The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions.
For example, the singleton set may be considered as a function Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions.
These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory, is an extension of the set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If is a set and is a function, then is a set.
In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus.
In computer science
In computer programming, a function is, in general, a piece of a computer program, which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. However, in many programming languages every subroutine is called a function, even when there is no output, and when the functionality consists simply of modifying some data in the computer memory.
Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions. For example, if_then_else is a function that takes three functions as arguments, and, depending on the result of the first function (true or false), returns the result of either the second or the third function. An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below).
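A hedged sketch of the if_then_else idea just described, written in Python for concreteness (the names are arbitrary): it takes three zero-argument functions and evaluates the second or the third depending on the result of the first.

```python
def if_then_else(cond, then_branch, else_branch):
    """Call then_branch or else_branch depending on the truth value returned by cond."""
    return then_branch() if cond() else else_branch()

x = 7
result = if_then_else(lambda: x % 2 == 0,
                      lambda: "even",
                      lambda: "odd")
print(result)   # odd
```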
Except for computer-language terminology, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. For giving a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the old ones being general recursive functions, lambda calculus and Turing machine. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function defines also the same functions.
General recursive functions are partial functions from integers to integers that can be defined from
constant functions,
successor, and
projection functions
via the operators
composition,
primitive recursion, and
minimization.
Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties:
a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, ...),
every sequence of symbols may be coded as a sequence of bits,
a bit sequence can be interpreted as the binary representation of an integer.
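To make the three operators above concrete, here is a small hedged sketch (the helper names are invented, and partiality shows up as possible non-termination of the minimization loop):

```python
def composition(f, *gs):
    """Composition operator: x -> f(g1(x), ..., gk(x))."""
    return lambda *x: f(*(g(*x) for g in gs))

def primitive_recursion(base, step):
    """h(0, x) = base(x);  h(n + 1, x) = step(n, h(n, x), x)."""
    def h(n, *x):
        acc = base(*x)
        for i in range(n):
            acc = step(i, acc, *x)
        return acc
    return h

def minimization(p):
    """Mu-operator: least n with p(n, x) == 0 (loops forever if no such n exists)."""
    def mu(*x):
        n = 0
        while p(n, *x) != 0:
            n += 1
        return n
    return mu

add = primitive_recursion(lambda x: x, lambda i, acc, x: acc + 1)  # add(n, x) = n + x
print(add(3, 4))                                                   # 7
```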
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (λ-terms), or applications of functions to terms. Terms are manipulated through some rules (the α-equivalence, the β-reduction, and the η-conversion), which are the axioms of the theory and may be interpreted as rules of computation.
In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus.
See also
Subpages
History of the function concept
List of types of functions
List of functions
Function fitting
Implicit function
Generalizations
Higher-order function
Homomorphism
Morphism
Microfunction
Distribution
Functor
Related topics
Associative array
Closed-form expression
Elementary function
Functional
Functional decomposition
Functional predicate
Functional programming
Parametric equation
Set function
Simple function
Notes
References
Sources
Further reading
External links
The Wolfram Functions – website giving formulae and visualizations of many mathematical functions
NIST Digital Library of Mathematical Functions
Basic concepts in set theory
Elementary mathematics | Function (mathematics) | [
"Mathematics"
] | 9,232 | [
"Functions and mappings",
"Mathematical analysis",
"Mathematical objects",
"Elementary mathematics",
"Basic concepts in set theory",
"Mathematical relations"
] |
185,732 | https://en.wikipedia.org/wiki/Quantum%20decoherence | Quantum decoherence is the loss of quantum coherence. Quantum decoherence has been studied to understand how quantum systems convert to systems which can be explained by classical mechanics. Beginning out of attempts to extend the understanding of quantum mechanics, the theory has developed in several directions and experimental studies have confirmed some of the key issues. Quantum computing relies on quantum coherence and is one of the primary practical applications of the concept.
Concept
In quantum mechanics, physical systems are described by a mathematical representation called a quantum state. Probabilities for the outcomes of experiments upon a system are calculated by applying the Born rule to the quantum state describing that system. Quantum states are either pure or mixed; pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system, i.e., that there exists a measurement for which one of the possible outcomes will occur with probability 1. In the absence of outside forces or interactions, a quantum state evolves unitarily over time. Consequently, a pure quantum state remains pure. However, if the system is not perfectly isolated, for example during a measurement, coherence is shared with the environment and appears to be lost with time, a process called quantum decoherence or environmental decoherence. The quantum coherence is not lost but rather mixed with many more degrees of freedom in the environment, analogous to the way energy appears to be lost by friction in classical mechanics when it has actually been converted into heat in the environment.
Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings. Viewed in isolation, the system's dynamics are non-unitary (although the combined system plus environment evolves in a unitary fashion). Thus the dynamics of the system alone are irreversible. As with any coupling, entanglements are generated between the system and environment. These have the effect of sharing quantum information with—or transferring it to—the surroundings.
History and interpretation
Relation to interpretation of quantum mechanics
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum physics might correspond to experienced reality. Decoherence calculations can be done in any interpretation of quantum mechanics, since those calculations are an application of the standard mathematical tools of quantum theory. However, the subject of decoherence has been closely related to the problem of interpretation throughout its history.
Decoherence has been used to understand the possibility of the collapse of the wave function in quantum mechanics. Decoherence does not generate actual wave-function collapse. It only provides a framework for apparent wave-function collapse, as the components of a quantum system entangle with other quantum systems within the same environment. That is, components of the wave function are decoupled from a coherent system and acquire phases from their immediate surroundings. A total superposition of the global or universal wavefunction still exists (and remains coherent at the global level), but its ultimate fate remains an interpretational issue.
With respect to the measurement problem, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, observation indicates that this mixture looks like a proper quantum ensemble in a measurement situation, as the measurements lead to the "realization" of precisely one state in the "ensemble".
The philosophical views of Werner Heisenberg and Niels Bohr have often been grouped together as the "Copenhagen interpretation", despite significant divergences between them on important points. In 1955, Heisenberg suggested that the interaction of a system with its surrounding environment would eliminate quantum interference effects. However, Heisenberg did not provide a detailed account of how this might transpire, nor did he make explicit the importance of entanglement in the process.
Origin of the concepts
Nevill Mott's solution to the iconic Mott problem in 1929 is considered in retrospect to be the first quantum decoherence work. It was cited by the first modern theoretical treatment.
Although he did not use the term, the concept of quantum decoherence was first introduced in 1951 by the American physicist David Bohm, who called it the "destruction of interference in the process of measurement". Bohm later used decoherence to handle the measurement process in the de Broglie-Bohm interpretation of quantum theory.
The significance of decoherence was further highlighted in 1970 by the German physicist H. Dieter Zeh, and it has been a subject of active research since the 1980s. Decoherence has been developed into a complete framework, but there is controversy as to whether it solves the measurement problem, as the founders of decoherence theory admit in their seminal papers.
The study of decoherence as a proper subject began in 1970, with H. Dieter Zeh's paper "On the Interpretation of Measurement in Quantum Theory". Zeh regarded the wavefunction as a physical entity, rather than a calculational device or a compendium of statistical information (as is typical for Copenhagen-type interpretations), and he proposed that it should evolve unitarily, in accord with the Schrödinger equation, at all times. Zeh was initially unaware of Hugh Everett III's earlier work, which also proposed a universal wavefunction evolving unitarily; he revised his paper to reference Everett after learning of Everett's "relative-state interpretation" through an article by Bryce DeWitt. (DeWitt was the one who termed Everett's proposal the many-worlds interpretation, by which name it is commonly known.) For Zeh, the question of how to interpret quantum mechanics was of key importance, and an interpretation along the lines of Everett's was the most natural. Partly because of a general disinterest among physicists for interpretational questions, Zeh's work remained comparatively neglected until the early 1980s, when two papers by Wojciech Zurek invigorated the subject. Unlike Zeh's publications, Zurek's articles were fairly agnostic about interpretation, focusing instead on specific problems of density-matrix dynamics. Zurek's interest in decoherence stemmed from furthering Bohr's analysis of the double-slit experiment in his reply to the Einstein–Podolsky–Rosen paradox, work he had undertaken with Bill Wootters, and he has since argued that decoherence brings a kind of rapprochement between Everettian and Copenhagen-type views.
Decoherence does not claim to provide a mechanism for some actual wave-function collapse; rather it puts forth a reasonable framework for the appearance of wave-function collapse. The quantum nature of the system is simply entangled into the environment so that a total superposition of the wave function still exists, but exists—at least for all practical purposes—beyond the realm of measurement. By definition, the claim that a merged but unmeasurable wave function still exists cannot be proven experimentally. Decoherence is needed to understand why a quantum system begins to obey classical probability rules after interacting with its environment (due to the suppression of the interference terms when applying Born's probability rules to the system).
Criticism of the adequacy of decoherence theory to solve the measurement problem has been expressed by Anthony Leggett.
Mechanisms
To examine how decoherence operates, an "intuitive" model is presented below. The model requires some familiarity with quantum theory basics. Analogies are made between visualizable classical phase spaces and Hilbert spaces. A more rigorous derivation in Dirac notation shows how decoherence destroys interference effects and the "quantum nature" of systems. Next, the density matrix approach is presented for perspective.
Phase-space picture
An N-particle system can be represented in non-relativistic quantum mechanics by a wave function , where each xi is a point in 3-dimensional space. This has analogies with the classical phase space. A classical phase space contains a real-valued function in 6N dimensions (each particle contributes 3 spatial coordinates and 3 momenta). A "quantum" phase space, on the other hand, involves a complex-valued function on a 3N-dimensional space. The position and momenta are represented by operators that do not commute, and the wave function lives in the mathematical structure of a Hilbert space. Aside from these differences, however, the rough analogy holds.
Different previously isolated, non-interacting systems occupy different phase spaces. Alternatively we can say that they occupy different lower-dimensional subspaces in the phase space of the joint system. The effective dimensionality of a system's phase space is the number of degrees of freedom present, which—in non-relativistic models—is 6 times the number of a system's free particles. For a macroscopic system this will be a very large dimensionality. When two systems (the environment being one system) start to interact, though, their associated state vectors are no longer constrained to the subspaces. Instead the combined state vector time-evolves along a path through the "larger volume", whose dimensionality is the sum of the dimensions of the two subspaces. The extent to which two vectors interfere with each other is a measure of how "close" they are to each other (formally, their overlap or Hilbert-space scalar product) in the phase space. When a system couples to an external environment, the dimensionality of, and hence "volume" available to, the joint state vector increases enormously. Each environmental degree of freedom contributes an extra dimension.
The original system's wave function can be expanded in many different ways as a sum of elements in a quantum superposition. Each expansion corresponds to a projection of the wave vector onto a basis. The basis can be chosen at will. Choosing an expansion where the resulting basis elements interact with the environment in an element-specific way, such elements will—with overwhelming probability—be rapidly separated from each other by their natural unitary time evolution along their own independent paths. After a very short interaction, there is almost no chance of further interference. The process is effectively irreversible. The different elements effectively become "lost" from each other in the expanded phase space created by coupling with the environment. In phase space, this decoupling is monitored through the Wigner quasi-probability distribution. The original elements are said to have decohered. The environment has effectively selected out those expansions or decompositions of the original state vector that decohere (or lose phase coherence) with each other. This is called "environmentally-induced superselection", or einselection. The decohered elements of the system no longer exhibit quantum interference between each other, as in a double-slit experiment. Any elements that decohere from each other via environmental interactions are said to be quantum-entangled with the environment. The converse is not true: not all entangled states are decohered from each other.
Any measuring device or apparatus acts as an environment, since at some stage along the measuring chain, it has to be large enough to be read by humans. It must possess a very large number of hidden degrees of freedom. In effect, the interactions may be considered to be quantum measurements. As a result of an interaction, the wave functions of the system and the measuring device become entangled with each other. Decoherence happens when different portions of the system's wave function become entangled in different ways with the measuring device. For two einselected elements of the entangled system's state to interfere, both the original system and the measuring device in both elements must significantly overlap, in the scalar product sense. If the measuring device has many degrees of freedom, it is very unlikely for this to happen.
As a consequence, the system behaves as a classical statistical ensemble of the different elements rather than as a single coherent quantum superposition of them. From the perspective of each ensemble member's measuring device, the system appears to have irreversibly collapsed onto a state with a precise value for the measured attributes, relative to that element. This provides one explanation of how the Born rule coefficients effectively act as probabilities as per the measurement postulate constituting a solution to the quantum measurement problem.
Dirac notation
Using Dirac notation, let the system initially be in the state
where the s form an einselected basis (environmentally induced selected eigenbasis), and let the environment initially be in the state . The vector basis of the combination of the system and the environment consists of the tensor products of the basis vectors of the two subsystems. Thus, before any interaction between the two subsystems, the joint state can be written as
where is shorthand for the tensor product . There are two extremes in the way the system can interact with its environment: either (1) the system loses its distinct identity and merges with the environment (e.g. photons in a cold, dark cavity get converted into molecular excitations within the cavity walls), or (2) the system is not disturbed at all, even though the environment is disturbed (e.g. the idealized non-disturbing measurement). In general, an interaction is a mixture of these two extremes that we examine.
System absorbed by environment
If the environment absorbs the system, each element of the total system's basis interacts with the environment such that
evolves into
and so
evolves into
The unitarity of time evolution demands that the total state basis remains orthonormal, i.e. the scalar or inner products of the basis vectors must vanish, since :
This orthonormality of the environment states is the defining characteristic required for einselection.
System not disturbed by environment
In an idealized measurement, the system disturbs the environment, but is itself undisturbed by the environment. In this case, each element of the basis interacts with the environment such that
evolves into the product
and so
evolves into
In this case, unitarity demands that
where was used. Additionally, decoherence requires, by virtue of the large number of hidden degrees of freedom in the environment, that
As before, this is the defining characteristic for decoherence to become einselection. The approximation becomes more exact as the number of environmental degrees of freedom affected increases.
Note that if the system basis were not an einselected basis, then the last condition is trivial, since the disturbed environment is not a function of , and we have the trivial disturbed environment basis . This would correspond to the system basis being degenerate with respect to the environmentally defined measurement observable. For a complex environmental interaction (which would be expected for a typical macroscale interaction) a non-einselected basis would be hard to define.
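For reference, the idealized non-disturbing interaction and the two conditions discussed above can be written out explicitly; this is a reconstruction in standard notation (the symbols are chosen here and may differ from those of the original formulas):

```latex
\[
  |s_i\rangle \otimes |\epsilon\rangle \;\longrightarrow\; |s_i\rangle \otimes |\epsilon_i\rangle ,
  \qquad
  \langle \epsilon_i | \epsilon_i \rangle = 1 \quad \text{(unitarity)},
  \qquad
  \langle \epsilon_i | \epsilon_j \rangle \approx 0 \;\; (i \neq j) \quad \text{(einselection)} .
\]
```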
Loss of interference and the transition from quantum to classical probabilities
The utility of decoherence lies in its application to the analysis of probabilities, before and after environmental interaction, and in particular to the vanishing of quantum interference terms after decoherence has occurred. If we ask what is the probability of observing the system making a transition from to before has interacted with its environment, then application of the Born probability rule states that the transition probability is the squared modulus of the scalar product of the two states:
where , , and etc.
The above expansion of the transition probability has terms that involve ; these can be thought of as representing interference between the different basis elements or quantum alternatives. This is a purely quantum effect and represents the non-additivity of the probabilities of quantum alternatives.
To calculate the probability of observing the system making a quantum leap from to after has interacted with its environment, then application of the Born probability rule states that we must sum over all the relevant possible states of the environment before squaring the modulus:
The internal summation vanishes when we apply the decoherence/einselection condition , and the formula simplifies to
If we compare this with the formula we derived before the environment introduced decoherence, we can see that the effect of decoherence has been to move the summation sign from inside of the modulus sign to outside. As a result, all the cross- or quantum interference-terms
have vanished from the transition-probability calculation. The decoherence has irreversibly converted quantum behaviour (additive probability amplitudes) to classical behaviour (additive probabilities).
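Written out in standard notation, and assuming expansions of the two states in the einselected basis with coefficients a_i and b_i (a reconstruction, with symbols chosen here), the probabilities before and after decoherence read:

```latex
\[
  P_{\text{before}}
    = \bigl|\langle \phi | \psi \rangle\bigr|^{2}
    = \Bigl|\sum_i b_i^{*} a_i\Bigr|^{2}
    = \sum_i \bigl|b_i^{*} a_i\bigr|^{2}
      + \sum_{i \neq j} b_i^{*} a_i\, b_j a_j^{*} ,
  \qquad
  P_{\text{after}} = \sum_i \bigl|b_i^{*} a_i\bigr|^{2} .
\]
```

The second sum contains exactly the cross terms that vanish after decoherence, leaving only the classically additive first sum.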
However, Ballentine shows that the significant impact of decoherence in reducing interference need not have significance for the transition of quantum systems to classical limits.
In terms of density matrices, the loss of interference effects corresponds to the diagonalization of the "environmentally traced-over" density matrix.
Density-matrix approach
The effect of decoherence on density matrices is essentially the decay or rapid vanishing of the off-diagonal elements of the partial trace of the joint system's density matrix, i.e. the trace, with respect to any environmental basis, of the density matrix of the combined system and its environment. The decoherence irreversibly converts the "averaged" or "environmentally traced-over" density matrix from a pure state to a reduced mixture; it is this that gives the appearance of wave-function collapse. Again, this is called "environmentally induced superselection", or einselection. The advantage of taking the partial trace is that this procedure is indifferent to the environmental basis chosen.
Initially, the density matrix of the combined system can be denoted as
where is the state of the environment.
Then if the transition happens before any interaction takes place between the system and the environment, the environment subsystem has no part and can be traced out, leaving the reduced density matrix for the system:
Now the transition probability will be given as
where , , and etc.
Now the case when transition takes place after the interaction of the system with the environment. The combined density matrix will be
To get the reduced density matrix of the system, we trace out the environment and employ the decoherence/einselection condition and see that the off-diagonal terms vanish (a result obtained by Erich Joos and H. D. Zeh in 1985):
Similarly, the final reduced density matrix after the transition will be
The transition probability will then be given as
which has no contribution from the interference terms
The density-matrix approach has been combined with the Bohmian approach to yield a reduced-trajectory approach, taking into account the system reduced density matrix and the influence of the environment.
Operator-sum representation
Consider a system S and environment (bath) B, which are closed and can be treated quantum-mechanically. Let and be the system's and bath's Hilbert spaces respectively. Then the Hamiltonian for the combined system is
where are the system and bath Hamiltonians respectively, is the interaction Hamiltonian between the system and bath, and are the identity operators on the system and bath Hilbert spaces respectively. The time-evolution of the density operator of this closed system is unitary and, as such, is given by
where the unitary operator is . If the system and bath are not entangled initially, then we can write . Therefore, the evolution of the system becomes
The system–bath interaction Hamiltonian can be written in a general form as
where is the operator acting on the combined system–bath Hilbert space, and are the operators that act on the system and bath respectively. This coupling of the system and bath is the cause of decoherence in the system alone. To see this, a partial trace is performed over the bath to give a description of the system alone:
is called the reduced density matrix and gives information about the system only. If the bath is written in terms of its set of orthogonal basis kets, that is, if it has been initially diagonalized, then . Computing the partial trace with respect to this (computational) basis gives
where are defined as the Kraus operators and are represented as (the index combines indices and ):
This is known as the operator-sum representation (OSR). A condition on the Kraus operators can be obtained by using the fact that ; this then gives
This restriction determines whether decoherence will occur or not in the OSR. In particular, when there is more than one term present in the sum for , then the dynamics of the system will be non-unitary, and hence decoherence will take place.
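As a hedged numerical sketch of the operator-sum representation (the channel and the parameter below are illustrative choices, not taken from the text): a single-qubit phase-damping channel has two Kraus operators, and applying the map rho -> sum_i A_i rho A_i† shrinks the off-diagonal coherences while leaving the populations unchanged, i.e. the dynamics are non-unitary precisely because more than one Kraus operator contributes.

```python
import numpy as np

p = 0.3                                    # illustrative dephasing probability
A0 = np.sqrt(1 - p) * np.eye(2)            # Kraus operators of a phase-damping channel
A1 = np.sqrt(p) * np.diag([1.0, -1.0])
assert np.allclose(A0.conj().T @ A0 + A1.conj().T @ A1, np.eye(2))  # completeness

psi = np.array([1.0, 1.0]) / np.sqrt(2)    # coherent superposition
rho = np.outer(psi, psi.conj())

rho_out = A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T
print(rho_out)
# populations stay at 0.5; off-diagonals shrink to 0.5 * (1 - 2p) = 0.2
```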
Semigroup approach
A more general consideration for the existence of decoherence in a quantum system is given by the master equation, which determines how the density matrix of the system alone evolves in time (see also the Belavkin equation for the evolution under continuous measurement). This uses the Schrödinger picture, where evolution of the state (represented by its density matrix) is considered. The master equation is
where is the system Hamiltonian along with a (possible) unitary contribution from the bath, and is the Lindblad decohering term. The Lindblad decohering term is represented as
The are basis operators for the M-dimensional space of bounded operators that act on the system Hilbert space and are the error generators. The matrix elements represent the elements of a positive semi-definite Hermitian matrix; they characterize the decohering processes and, as such, are called the noise parameters. The semigroup approach is particularly nice, because it distinguishes between the unitary and decohering (non-unitary) processes, which is not the case with the OSR. In particular, the non-unitary dynamics are represented by , whereas the unitary dynamics of the state are represented by the usual Heisenberg commutator. Note that when , the dynamical evolution of the system is unitary. The conditions for the evolution of the system density matrix to be described by the master equation are:
the evolution of the system density matrix is determined by a one-parameter semigroup
the evolution is "completely positive" (i.e. probabilities are preserved)
the system and bath density matrices are initially decoupled
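A minimal numerical sketch of such a master equation, assuming pure dephasing of a single qubit with H = 0 and a single Lindblad operator L = sqrt(gamma) * sigma_z (the rate, step size, and Euler integration scheme are arbitrary illustrative choices):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
gamma, dt, steps = 0.5, 0.01, 200          # illustrative rate and Euler time step

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # start in a coherent superposition

for _ in range(steps):
    # For L = sqrt(gamma) * sigma_z and H = 0 the Lindblad equation reduces to
    # d(rho)/dt = gamma * (sz rho sz - rho), because sz^dagger sz is the identity.
    rho = rho + dt * gamma * (sz @ rho @ sz - rho)

print(rho)
# populations remain 0.5; coherences decay roughly as exp(-2*gamma*t), here ~0.5*exp(-2)
```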
Non-unitary modelling examples
Decoherence can be modelled as a non-unitary process by which a system couples with its environment (although the combined system plus environment evolves in a unitary fashion). Thus the dynamics of the system alone, treated in isolation, are non-unitary and, as such, are represented by irreversible transformations acting on the system's Hilbert space . Since the system's dynamics are represented by irreversible representations, then any information present in the quantum system can be lost to the environment or heat bath. Alternatively, the decay of quantum information caused by the coupling of the system to the environment is referred to as decoherence. Thus decoherence is the process by which information of a quantum system is altered by the system's interaction with its environment (which form a closed system), hence creating an entanglement between the system and heat bath (environment). As such, since the system is entangled with its environment in some unknown way, a description of the system by itself cannot be made without also referring to the environment (i.e. without also describing the state of the environment).
Rotational decoherence
Consider a system of N qubits that is coupled to a bath symmetrically. Suppose this system of N qubits undergoes a rotation around the eigenstates of . Then under such a rotation, a random phase will be created between the eigenstates , of . Thus these basis qubits and will transform in the following way:
This transformation is performed by the rotation operator
Since any qubit in this space can be expressed in terms of the basis qubits, then all such qubits will be transformed under this rotation. Consider the th qubit in a pure state where . Before application of the rotation this state is:
This state will decohere, since it is not ‘encoded’ with (dependent upon) the dephasing factor . This can be seen by examining the density matrix averaged over the random phase :
where is a probability measure of the random phase, . Although not entirely necessary, let us assume for simplicity that this is given by the Gaussian distribution, i.e. , where represents the spread of the random phase. Then the density matrix computed as above is
Observe that the off-diagonal elements—the coherence terms—decay as the spread of the random phase, , increases over time (which is a realistic expectation). Thus the density matrices for each qubit of the system become indistinguishable over time. This means that no measurement can distinguish between the qubits, thus creating decoherence between the various qubit states. In particular, this dephasing process causes the qubits to collapse to one of the pure states in . This is why this type of decoherence process is called collective dephasing, because the mutual phases between all qubits of the N-qubit system are destroyed.
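A hedged numerical sketch of this dephasing picture for a single qubit (the amplitudes, phase spread, and sample count are arbitrary choices): averaging the density matrix over many random phases suppresses the off-diagonal coherence terms while leaving the populations untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)      # illustrative qubit amplitudes
sigma = 1.5                                # spread of the random phase

rhos = []
for phi in rng.normal(0.0, sigma, size=20000):
    psi = np.array([a, b * np.exp(1j * phi)])   # the qubit picks up a random phase
    rhos.append(np.outer(psi, psi.conj()))

rho_avg = np.mean(rhos, axis=0)
print(np.round(rho_avg, 3))
# populations stay at 0.5; off-diagonals shrink toward 0.5 * exp(-sigma**2 / 2) ≈ 0.16
```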
Depolarizing
Depolarizing is a non-unitary transformation on a quantum system which maps pure states to mixed states. This is a non-unitary process because any transformation that reverses this process will map states out of their respective Hilbert space thus not preserving positivity (i.e. the original probabilities are mapped to negative probabilities, which is not allowed). The 2-dimensional case of such a transformation would consist of mapping pure states on the surface of the Bloch sphere to mixed states within the Bloch sphere. This would contract the Bloch sphere by some finite amount and the reverse process would expand the Bloch sphere, which cannot happen.
Dissipation
Dissipation is a decohering process by which the populations of quantum states are changed due to entanglement with a bath. An example of this would be a quantum system that can exchange its energy with a bath through the interaction Hamiltonian. If the system is not in its ground state and the bath is at a temperature lower than that of the system's, then the system will give off energy to the bath, and thus higher-energy eigenstates of the system Hamiltonian will decohere to the ground state after cooling and, as such, will all be non-degenerate. Since the states are no longer degenerate, they are not distinguishable, and thus this process is irreversible (non-unitary).
Timescales
Decoherence represents an extremely fast process for macroscopic objects, since these are interacting with many microscopic objects, with an enormous number of degrees of freedom in their natural environment. The process is needed if we are to understand why we tend not to observe quantum behavior in everyday macroscopic objects and why we do see classical fields emerge from the properties of the interaction between matter and radiation for large amounts of matter. The time taken for off-diagonal components of the density matrix to effectively vanish is called the decoherence time. It is typically extremely short for everyday, macroscale processes. A modern basis-independent definition of the decoherence time relies on the short-time behavior of the fidelity between the initial and the time-dependent state or, equivalently, the decay of the purity.
Mathematical details
Assume for the moment that the system in question consists of a subsystem A being studied and the "environment" , and the total Hilbert space is the tensor product of a Hilbert space describing A and a Hilbert space describing , that is,
This is a reasonably good approximation in the case where A and are relatively independent (e.g. there is nothing like parts of A mixing with parts of or conversely). The point is, the interaction with the environment is for all practical purposes unavoidable (e.g. even a single excited atom in a vacuum would emit a photon, which would then go off). Let's say this interaction is described by a unitary transformation U acting upon . Assume that the initial state of the environment is , and the initial state of A is the superposition state
where and are orthogonal, and there is no entanglement initially. Also, choose an orthonormal basis for . (This could be a "continuously indexed basis" or a mixture of continuous and discrete indexes, in which case we would have to use a rigged Hilbert space and be more careful about what we mean by orthonormal, but that's an inessential detail for expository purposes.) Then, we can expand
and
uniquely as
and
respectively. One thing to realize is that the environment contains a huge number of degrees of freedom, a good number of them interacting with each other all the time. This makes the following assumption reasonable in a handwaving way, which can be shown to be true in some simple toy models. Assume that there exists a basis for such that and are all approximately orthogonal to a good degree if i ≠ j and the same thing for and and also for and for any i and j (the decoherence property).
This often turns out to be true (as a reasonable conjecture) in the position basis because how A interacts with the environment would often depend critically upon the position of the objects in A. Then, if we take the partial trace over the environment, we would find the density state is approximately described by
that is, we have a diagonal mixed state, there is no constructive or destructive interference, and the "probabilities" add up classically. The time it takes for U(t) (the unitary operator as a function of time) to display the decoherence property is called the decoherence time.
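A numerical sketch of the partial trace described above, for a two-level "environment" and made-up amplitudes: when the environment states labelling the two branches are orthogonal, the reduced density matrix of A comes out diagonal, and when they coincide the coherences survive.

```python
import numpy as np

# Joint state a|0>_A |e0>_E + b|1>_A |e1>_E, stored as a (system, environment) array.
a, b = np.sqrt(0.7), np.sqrt(0.3)          # made-up amplitudes
joint = np.zeros((2, 2), dtype=complex)
joint[0, 0], joint[1, 1] = a, b

# Partial trace over the environment: rho_A[i, j] = sum_e psi[i, e] * conj(psi[j, e]).
rho_A = joint @ joint.conj().T
print(np.round(rho_A.real, 3))             # diag(0.7, 0.3): no interference terms

# If the environment did not distinguish the branches (|e0> = |e1>), the
# off-diagonal terms would survive instead:
joint2 = np.zeros((2, 2), dtype=complex)
joint2[0, 0], joint2[1, 0] = a, b
print(np.round((joint2 @ joint2.conj().T).real, 3))   # off-diagonals ~0.458 remain
```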
Experimental observations
Quantitative measurement
The decoherence rate depends on a number of factors, including temperature or uncertainty in position, and many experiments have tried to measure it depending on the external environment.
The process of a quantum superposition gradually obliterated by decoherence was quantitatively measured for the first time by Serge Haroche and his co-workers at the École Normale Supérieure in Paris in 1996. Their approach involved sending individual rubidium atoms, each in a superposition of two states, through a microwave-filled cavity. The two quantum states both cause shifts in the phase of the microwave field, but by different amounts, so that the field itself is also put into a superposition of two states. Due to photon scattering on cavity-mirror imperfection, the cavity field loses phase coherence to the environment. Haroche and his colleagues measured the resulting decoherence via correlations between the states of pairs of atoms sent through the cavity with various time delays between the atoms.
In July 2011, researchers from University of British Columbia and University of California, Santa Barbara showed that applying high magnetic fields to single molecule magnets suppressed two of three known sources of decoherence. They were able to measure the dependence of decoherence on temperature and magnetic field strength.
Prevention
Concept
Decoherence causes the system to lose its quantumness, which invalidates the superposition principle and turns 'quantum' to 'classical'. It is a major challenge in quantum computing.
A real quantum system inevitably interacts with its surrounding environment, and this interaction shows up as noise in the physical process. The system is extremely sensitive to environmental noise, such as electromagnetic fields, temperature fluctuations, and other external perturbations, as well as to measurement, all of which lead to decoherence.
Decoherence is a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. They require that the coherence of states be preserved and that decoherence be managed, in order to actually perform quantum computation. Because of decoherence, the quantum process must be completed before the qubit state decays.
The physical quantity coherence time is defined as the time over which a quantum state maintains its superposition.
The purpose of protecting against decoherence is to extend the coherence time of quantum systems, which improves the stability of quantum information processing.
Methods and Tools
Researchers have developed many methods and tools to mitigate or eliminate the negative influences from decoherence. Several typical ways are listed below.
Isolation from Environment
The most basic and direct way to reduce decoherence is to prevent the quantum system from interacting with the environment by any type of isolation. Here are some typical examples of isolation methods.
High Vacuum: Placing qubits in an ultra-high vacuum environment to minimize interaction with air molecules.
Cryogenic Cooling: Operating quantum systems at extremely low temperatures to reduce thermal vibrations and noise.
Electromagnetic Shielding: Enclosing quantum systems in materials that block external electromagnetic fields - such as mu-metal or superconducting materials - reduces decoherence caused by unwanted electromagnetic interference.
Shielding Cosmic Rays: In August 2020, scientists reported that ionizing radiation from environmental radioactive materials and cosmic rays may substantially limit the coherence times of qubits if they are not shielded adequately, which may be critical for realizing fault-tolerant superconducting quantum computers in the future.
Better Materials: Fabricating qubits from special materials, like highly pure or isotopically enriched ones, to minimize intrinsic noise of the material, including noise from defects or nuclear spins.
Circuit Design: Optimizing for coherence when designing the layout of quantum circuits, similar to noise considerations in classical circuit design.
Mechanical and Optical Isolation: Using equipment like vibration isolation tables and acoustic isolation materials, reducing sources of mechanical noise, and shielding against external light—common in physical experiments.
Quantum Error Correction (QEC)
One of the most powerful tools for combating quantum decoherence is Quantum Error Correction (QEC). QEC schemes encode quantum information redundantly across multiple physical qubits, allowing for the detection and correction of errors without directly measuring the quantum state. These QEC protocols rely on the assumption that errors affect only a small fraction of qubits at any given time, enabling the detection and correction of errors through redundant encoding. Here are some representative QEC protocols.
Shor Code: One of the first quantum error correction codes, it encodes a single qubit into nine physical qubits to protect against both bit-flip and phase-flip errors.
Steane Code: A 7-qubit code that corrects an arbitrary error on any single qubit.
Surface Codes: A more scalable family of error-correction codes that uses a 2D lattice of qubits and has a high error threshold.
Bosonic Codes: A type of quantum error-correcting code designed specifically to protect quantum information in continuous-variable systems.
However, QEC comes at a significant cost: it requires a large number of physical qubits to encode a single logical qubit, and fault-tolerant error correction methods introduce additional computational overhead.
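To see why redundancy helps, consider the simplest case: three qubits protecting one logical bit against independent bit-flip errors, the error model underlying the quantum three-qubit bit-flip code (and, per block, part of the Shor code). The sketch below (Python) compares the error probability of a bare qubit with the logical error probability after majority-vote correction, whose closed form is p_L = 3p²(1 − p) + p³; the physical error probabilities are illustrative assumptions.

```python
import random

def logical_error_rate(p, trials=200_000, n=3):
    """Monte Carlo estimate of the logical error rate of an n-qubit
    repetition code under independent bit-flip errors of probability p.

    The encoded bit is lost when a majority of the physical qubits flip.
    """
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n))
        if flips > n // 2:
            failures += 1
    return failures / trials

random.seed(1)
for p in (0.01, 0.05, 0.10, 0.20):
    exact = 3 * p**2 * (1 - p) + p**3      # closed form for n = 3
    print(f"p = {p:.2f}   bare = {p:.3f}   "
          f"encoded = {logical_error_rate(p):.4f}   exact = {exact:.4f}")
```

For any physical error probability below 1/2 the encoded error rate is smaller than the bare one, and the advantage grows rapidly as p decreases; full QEC codes such as the Shor, Steane, and surface codes extend the same idea to phase-flip errors and to syndrome measurements that do not disturb the encoded state.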
Dynamical Decoupling
Dynamical Decoupling (DD) is another widely used quantum control technique for combating decoherence, especially for systems coupled to noisy environments. DD involves applying an external sequence of control pulses to the quantum system at strategically timed intervals to average out environmental interactions. In effect, the externally controlled pulses suppress the irreversible component of the system–environment interaction. Dynamical decoupling has been experimentally demonstrated in various systems, including trapped ions and superconducting qubits. Some representative sequences are listed below, followed by a minimal numerical sketch of the refocusing effect.
Spin Echo (SE): The simplest sequence, consisting of a single π-pulse that inverts the state of the system midway through the evolution and thereby refocuses dephasing caused by static or slowly varying noise.
Periodic Dynamical Decoupling (PDD): By applying control pulses periodically, PDD averages out the influence of the environment and decouples the qubit from it.
Carr-Purcell-Meiboom-Gill (CPMG) Sequence: CPMG is an extension of SE that applies a train of π-pulses at regular intervals.
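The sketch below (Python) illustrates the refocusing idea behind these sequences: each experimental 'shot' gives the qubit a quasi-static random frequency offset, and the ensemble-averaged coherence is compared for free evolution versus a single spin echo. The noise strengths and evolution time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_coherence(total_time, sigma, drift_sigma, shots=20_000, echo=False):
    """Average |<exp(i*phi)>| over many shots of quasi-static frequency noise.

    Each shot draws a detuning for the first half of the evolution and a
    slightly drifted detuning for the second half.  A spin echo (pi-pulse at
    the midpoint) makes the second-half phase subtract from the first, so the
    static part cancels and only the drift contributes to dephasing.
    """
    d1 = rng.normal(0.0, sigma, size=shots)              # first-half detuning (rad/s)
    d2 = d1 + rng.normal(0.0, drift_sigma, size=shots)   # second-half detuning
    half = total_time / 2
    phase = (d1 * half - d2 * half) if echo else (d1 * half + d2 * half)
    return abs(np.mean(np.exp(1j * phase)))

t, sigma, drift = 1e-3, 2_000.0, 200.0   # 1 ms evolution, assumed noise levels
print("free evolution:", round(ensemble_coherence(t, sigma, drift), 3))
print("spin echo     :", round(ensemble_coherence(t, sigma, drift, echo=True), 3))
```

The echo recovers almost all of the coherence because the dominant noise is static over one shot; only the small drift between the two halves survives. Multi-pulse sequences such as PDD and CPMG refocus the phase over ever shorter intervals and therefore also cancel noise that fluctuates during the evolution.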
See also
Dephasing
Dephasing rate SP formula
Einselection
Ghirardi–Rimini–Weber theory
H. Dieter Zeh
Interpretations of quantum mechanics
Objective-collapse theory
Partial trace
Photon polarization
Quantization
Quantum coherence
Quantum Darwinism
Quantum entanglement
Quantum superposition
Quantum Zeno effect
References
Further reading
Zurek, Wojciech H. (2003). "Decoherence and the transition from quantum to classical – REVISITED", (An updated version of PHYSICS TODAY, 44:36–44 (1991) article)
Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol. 351, pp. 111–116 (9 May 1991) and (same authors) The Duality in Matter and Light, Scientific American, pp. 56–61 (December 1994). Demonstrates that complementarity is enforced, and quantum interference effects destroyed, by irreversible object-apparatus correlations, and not, as was previously popularly believed, by Heisenberg's uncertainty principle itself.
Mario Castagnino, Sebastian Fortin, Roberto Laura and Olimpia Lombardi, A general theoretical framework for decoherence in open and closed systems, Classical and Quantum Gravity, 25, pp. 154002–154013, (2008). A general theoretical framework for decoherence is proposed, which encompasses formalisms originally devised to deal just with open or closed systems.
1970 introductions
Articles containing video clips
Decoherence | Quantum decoherence | [
"Physics"
] | 7,618 | [
"Quantum measurement",
"Quantum mechanics"
] |
185,748 | https://en.wikipedia.org/wiki/Metalloprotein | Metalloprotein is a generic term for a protein that contains a metal ion cofactor. A large proportion of all proteins are part of this category. For instance, at least 1000 human proteins (out of ~20,000) contain zinc-binding protein domains although there may be up to 3000 human zinc metalloproteins.
Abundance
It is estimated that approximately half of all proteins contain a metal. In another estimate, about one quarter to one third of all proteins are proposed to require metals to carry out their functions. Metalloproteins thus have many different roles in cells, serving for example as storage and transport proteins, enzymes, and signal-transduction proteins, and playing parts in infectious disease. The abundance of metal-binding proteins may be inherent to the amino acids that proteins use, as even artificial proteins without evolutionary history will readily bind metals.
Most metals in the human body are bound to proteins. For instance, the relatively high concentration of iron in the human body is mostly due to the iron in hemoglobin.
Coordination chemistry principles
In metalloproteins, metal ions are usually coordinated by nitrogen, oxygen or sulfur centers belonging to amino acid residues of the protein. These donor groups are often provided by side-chains on the amino acid residues. Especially important are the imidazole substituent in histidine residues, thiolate substituents in cysteine residues, and carboxylate groups provided by aspartate. Given the diversity of the metalloproteome, virtually all amino acid residues have been shown to bind metal centers. The peptide backbone also provides donor groups; these include deprotonated amides and the amide carbonyl oxygen centers. Lead(II) binding in natural and artificial proteins has been reviewed.
In addition to donor groups that are provided by amino acid residues, many organic cofactors function as ligands. Perhaps most famous are the tetradentate N4 macrocyclic ligands incorporated into the heme protein. Inorganic ligands such as sulfide and oxide are also common.
Storage and transport metalloproteins
These are metalloproteins that bind and carry species such as dioxygen, electrons, or metal ions themselves, rather than catalyzing their chemical transformation.
Oxygen carriers
Hemoglobin, which is the principal oxygen-carrier in humans, has four subunits in which the iron(II) ion is coordinated by the planar macrocyclic ligand protoporphyrin IX (PIX) and the imidazole nitrogen atom of a histidine residue. The sixth coordination site contains a water molecule or a dioxygen molecule. By contrast the protein myoglobin, found in muscle cells, has only one such unit. The active site is located in a hydrophobic pocket. This is important as without it the iron(II) would be irreversibly oxidized to iron(III). The equilibrium constant for the formation of HbO2 is such that oxygen is taken up or released depending on the partial pressure of oxygen in the lungs or in muscle. In hemoglobin the four subunits show a cooperativity effect that allows for easy oxygen transfer from hemoglobin to myoglobin.
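The practical effect of this cooperativity can be sketched with the Hill equation, the standard empirical description of oxygen binding. In the snippet below (Python), the P50 values (about 26 mmHg for hemoglobin and 2.8 mmHg for myoglobin) and the Hill coefficients (about 2.8 and 1) are typical textbook figures used purely for illustration, not values stated in this article.

```python
def saturation(pO2, p50, n):
    """Fractional O2 saturation from the Hill equation."""
    return pO2 ** n / (pO2 ** n + p50 ** n)

# Typical textbook parameters, used here only as illustrative assumptions.
HB = dict(p50=26.0, n=2.8)   # hemoglobin: cooperative, sigmoidal binding curve
MB = dict(p50=2.8,  n=1.0)   # myoglobin: non-cooperative, hyperbolic curve

for label, pO2 in (("lungs (~100 mmHg)", 100.0), ("working muscle (~20 mmHg)", 20.0)):
    print(f"{label:26s} Hb saturation {saturation(pO2, **HB):.2f}   "
          f"Mb saturation {saturation(pO2, **MB):.2f}")
```

At the low oxygen tension of working muscle, hemoglobin's sigmoidal curve has already released most of its oxygen while myoglobin remains nearly saturated, which is the hemoglobin-to-myoglobin transfer described above.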
In both hemoglobin and myoglobin it is sometimes incorrectly stated that the oxygenated species contains iron(III). It is now known that the diamagnetic nature of these species is because the iron(II) atom is in the low-spin state. In oxyhemoglobin the iron atom is located in the plane of the porphyrin ring, but in the paramagnetic deoxyhemoglobin the iron atom lies above the plane of the ring. This change in spin state is a cooperative effect due to the higher crystal field splitting and smaller ionic radius of Fe2+ in the oxyhemoglobin moiety.
Hemerythrin is another iron-containing oxygen carrier. The oxygen binding site is a binuclear iron center. The iron atoms are coordinated to the protein through the carboxylate side chains of a glutamate and aspartate and five histidine residues. The uptake of O2 by hemerythrin is accompanied by two-electron oxidation of the reduced binuclear center to produce bound peroxide (OOH−). The mechanism of oxygen uptake and release have been worked out in detail.
Hemocyanins carry oxygen in the blood of most mollusks, and some arthropods such as the horseshoe crab. They are second only to hemoglobin in the frequency of their biological use for oxygen transport. On oxygenation the two copper(I) atoms at the active site are oxidized to copper(II) and the dioxygen molecule is reduced to peroxide (O22−).
Chlorocruorin (as the larger carrier erythrocruorin) is an oxygen-binding hemeprotein present in the blood plasma of many annelids, particularly certain marine polychaetes.
Cytochromes
Oxidation and reduction reactions are not common in organic chemistry as few organic molecules can act as oxidizing or reducing agents. Iron(II), on the other hand, can easily be oxidized to iron(III). This functionality is used in cytochromes, which function as electron-transfer vectors. The presence of the metal ion allows metalloenzymes to perform functions such as redox reactions that cannot easily be performed by the limited set of functional groups found in amino acids. The iron atom in most cytochromes is contained in a heme group. The differences between those cytochromes lies in the different side-chains. For instance cytochrome a has a heme a prosthetic group and cytochrome b has a heme b prosthetic group. These differences result in different Fe2+/Fe3+ redox potentials such that various cytochromes are involved in the mitochondrial electron transport chain.
Cytochrome P450 enzymes perform the function of inserting an oxygen atom into a C−H bond, an oxidation reaction.
Rubredoxin
Rubredoxin is an electron-carrier found in sulfur-metabolizing bacteria and archaea. The active site contains an iron ion coordinated by the sulfur atoms of four cysteine residues forming an almost regular tetrahedron. Rubredoxins perform one-electron transfer processes. The oxidation state of the iron atom changes between the +2 and +3 states. In both oxidation states the metal is high spin, which helps to minimize structural changes.
Plastocyanin
Plastocyanin is one of the family of blue copper proteins that are involved in electron transfer reactions. The copper-binding site is described as distorted trigonal pyramidal. The trigonal plane of the pyramidal base is composed of two nitrogen atoms (N1 and N2) from separate histidines and a sulfur (S1) from a cysteine. Sulfur (S2) from an axial methionine forms the apex. The distortion occurs in the bond lengths between the copper and sulfur ligands. The Cu−S1 contact is shorter (207 pm) than Cu−S2 (282 pm).
The elongated Cu−S2 bonding destabilizes the Cu(II) form and increases the redox potential of the protein. The blue color (597 nm peak absorption) is due to the Cu−S1 bond where S(pπ) to Cu(dx2−y2) charge transfer occurs.
In the reduced form of plastocyanin, His-87 will become protonated with a pKa of 4.4. Protonation prevents it acting as a ligand and the copper site geometry becomes trigonal planar.
Metal-ion storage and transfer
Iron
Iron is stored as iron(III) in ferritin. The exact nature of the binding site has not yet been determined. The iron appears to be present as a hydrolysis product such as FeO(OH). Iron is transported by transferrin whose binding site consists of two tyrosines, one aspartic acid and one histidine. The human body has no controlled mechanism for excretion of iron. This can lead to iron overload problems in patients treated with blood transfusions, as, for instance, with β-thalassemia. Iron is actually excreted in urine and is also concentrated in bile which is excreted in feces.
Copper
Ceruloplasmin is the major copper-carrying protein in the blood. Ceruloplasmin exhibits oxidase activity, which is associated with possible oxidation of Fe(II) into Fe(III), therefore assisting in its transport in the blood plasma in association with transferrin, which can carry iron only in the Fe(III) state.
Calcium
Osteopontin is involved in mineralization in the extracellular matrices of bones and teeth.
Metalloenzymes
Metalloenzymes all have one feature in common, namely that the metal ion is bound to the protein with one labile coordination site. As with all enzymes, the shape of the active site is crucial. The metal ion is usually located in a pocket whose shape fits the substrate. The metal ion catalyzes reactions that are difficult to achieve in organic chemistry.
Carbonic anhydrase
In aqueous solution, carbon dioxide forms carbonic acid
CO2 + H2O ⇌ H2CO3
This reaction is very slow in the absence of a catalyst, but quite fast in the presence of the hydroxide ion
CO2 + OH− ⇌ HCO3−
A reaction similar to this is almost instantaneous with carbonic anhydrase. The structure of the active site in carbonic anhydrases is well known from a number of crystal structures. It consists of a zinc ion coordinated by three imidazole nitrogen atoms from three histidine units. The fourth coordination site is occupied by a water molecule. The coordination sphere of the zinc ion is approximately tetrahedral. The positively-charged zinc ion polarizes the coordinated water molecule, and nucleophilic attack by the negatively-charged hydroxide portion on carbon dioxide proceeds rapidly. The catalytic cycle produces the bicarbonate ion and the hydrogen ion as the equilibrium:
H2CO3 ⇌ HCO3− + H+
favours dissociation of carbonic acid at biological pH values.
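This preference can be made quantitative with the Henderson–Hasselbalch relation, [base]/[acid] = 10^(pH − pKa). The snippet below (Python) uses the commonly quoted apparent pKa of about 6.1 for the CO2/bicarbonate couple and a physiological pH of 7.4; both values are illustrative assumptions rather than figures taken from the text above.

```python
def base_to_acid_ratio(pH, pKa):
    """[conjugate base] / [acid] from the Henderson-Hasselbalch equation."""
    return 10 ** (pH - pKa)

pKa_apparent = 6.1   # apparent pKa of the CO2(aq)/HCO3- couple (assumed)
for pH in (6.1, 7.0, 7.4):
    ratio = base_to_acid_ratio(pH, pKa_apparent)
    print(f"pH {pH}: [HCO3-] / [dissolved CO2 + H2CO3] = {ratio:.1f}")
```

At pH 7.4 the ratio is roughly 20:1 in favour of bicarbonate, consistent with the statement that dissociation is favoured at biological pH.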
Vitamin B12-dependent enzymes
The cobalt-containing Vitamin B12 (also known as cobalamin) catalyzes the transfer of methyl (−CH3) groups between two molecules, which involves the breaking of C−C bonds, a process that is energetically expensive in organic reactions. The metal ion lowers the activation energy for the process by forming a transient Co−CH3 bond. The structure of the coenzyme was famously determined by Dorothy Hodgkin and co-workers, for which she received a Nobel Prize in Chemistry. It consists of a cobalt(II) ion coordinated to four nitrogen atoms of a corrin ring and a fifth nitrogen atom from an imidazole group. In the resting state there is a Co−C sigma bond with the 5′ carbon atom of adenosine. This is a naturally occurring organometallic compound, which explains its function in trans-methylation reactions, such as the reaction carried out by methionine synthase.
Nitrogenase (nitrogen fixation)
The fixation of atmospheric nitrogen is an energy-intensive process, as it involves breaking the very stable triple bond between the nitrogen atoms. The nitrogenases catalyze the process. One such enzyme occurs in Rhizobium bacteria. There are three components to its action: a molybdenum atom at the active site, iron–sulfur clusters that are involved in transporting the electrons needed to reduce the nitrogen, and an abundant energy source in the form of magnesium ATP. This last is provided by a mutualistic symbiosis between the bacteria and a host plant, often a legume. The reaction may be written symbolically as
N2 + 8 H+ + 8 e− + 16 MgATP → 2 NH3 + 16 MgADP + 16 Pi + H2
where Pi stands for inorganic phosphate. The precise structure of the active site has been difficult to determine. It appears to contain a MoFe7S8 cluster that is able to bind the dinitrogen molecule and, presumably, enable the reduction process to begin. The electrons are transported by the associated "P" cluster, which contains two cubical Fe4S4 clusters joined by sulfur bridges.
Superoxide dismutase
The superoxide ion, O2−, is generated in biological systems by reduction of molecular oxygen. It has an unpaired electron, so it behaves as a free radical. It is a powerful oxidizing agent. These properties render the superoxide ion very toxic and are deployed to advantage by phagocytes to kill invading microorganisms. Otherwise, the superoxide ion must be destroyed before it does unwanted damage in a cell. The superoxide dismutase enzymes perform this function very efficiently.
The formal oxidation state of the oxygen atoms is −1/2. In solutions at neutral pH, the superoxide ion disproportionates to molecular oxygen and hydrogen peroxide.
2 O2− + 2 H+ → O2 + H2O2
In biology this type of reaction is called a dismutation reaction. It involves both oxidation and reduction of superoxide ions. The superoxide dismutase (SOD) group of enzymes increase the rate of reaction to near the diffusion-limited rate. The key to the action of these enzymes is a metal ion with variable oxidation state that can act either as an oxidizing agent or as a reducing agent.
Oxidation: M(n+1)+ + O2− → Mn+ + O2
Reduction: Mn+ + O2− + 2 H+ → M(n+1)+ + H2O2.
In human SOD, the active metal is copper, as Cu(II) or Cu(I), coordinated tetrahedrally by four histidine residues. This enzyme also contains zinc ions for stabilization and is activated by copper chaperone for superoxide dismutase (CCS). Other isozymes may contain iron, manganese or nickel. The activity of Ni-SOD involves nickel(III), an unusual oxidation state for this element. The active site nickel geometry cycles from square planar Ni(II), with thiolate (Cys2 and Cys6) and backbone nitrogen (His1 and Cys2) ligands, to square pyramidal Ni(III) with an added axial His1 side chain ligand.
Chlorophyll-containing proteins
Chlorophyll plays a crucial role in photosynthesis. It contains a magnesium enclosed in a chlorin ring. However, the magnesium ion is not directly involved in the photosynthetic function and can be replaced by other divalent ions with little loss of activity. Rather, the photon is absorbed by the chlorin ring, whose electronic structure is well-adapted for this purpose.
Initially, the absorption of a photon causes an electron to be excited into a singlet state of the Q band. The excited state undergoes an intersystem crossing from the singlet state to a triplet state in which there are two electrons with parallel spin. This species is, in effect, a free radical, and is very reactive and allows an electron to be transferred to acceptors that are adjacent to the chlorophyll in the chloroplast. In the process chlorophyll is oxidized. Later in the photosynthetic cycle, chlorophyll is reduced back again. This reduction ultimately draws electrons from water, yielding molecular oxygen as a final oxidation product.
Hydrogenase
Hydrogenases are subclassified into three different types based on the active site metal content: iron–iron hydrogenase, nickel–iron hydrogenase, and iron hydrogenase.
All hydrogenases catalyze reversible H2 uptake, but while the [FeFe] and [NiFe] hydrogenases are true redox catalysts, driving H2 oxidation and H+ reduction
H2 ⇌ 2 H+ + 2 e−
the [Fe] hydrogenases catalyze the reversible heterolytic cleavage of H2.
H2 ⇌ H+ + H−
Ribozyme and deoxyribozyme
Since the discovery of ribozymes by Thomas Cech and Sidney Altman in the early 1980s, ribozymes have been shown to be a distinct class of metalloenzymes. Many ribozymes require metal ions in their active sites for chemical catalysis; hence they are called metalloenzymes. Additionally, metal ions are essential for the structural stabilization of ribozymes. The group I intron is the most-studied ribozyme, with three metals participating in catalysis. Other known ribozymes include the group II intron, RNase P, several small viral ribozymes (such as hammerhead, hairpin, HDV, and VS) and the large subunit of the ribosome. Several classes of ribozymes have been described.
Deoxyribozymes, also called DNAzymes or catalytic DNA, are artificial DNA-based catalysts that were first produced in 1994. Almost all DNAzymes require metal ions. Although ribozymes mostly catalyze cleavage of RNA substrates, a variety of reactions can be catalyzed by DNAzymes including RNA/DNA cleavage, RNA/DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon–carbon bond formation. Yet, DNAzymes that catalyze RNA cleavage reaction are the most extensively explored ones. 10-23 DNAzyme, discovered in 1997, is one of the most studied catalytic DNAs with clinical applications as a therapeutic agent. Several metal-specific DNAzymes have been reported including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific).
Signal-transduction metalloproteins
Calmodulin
Calmodulin is an example of a signal-transduction protein. It is a small protein that contains four EF-hand motifs, each of which is able to bind a Ca2+ ion.
In an EF-hand loop protein domain, the calcium ion is coordinated in a pentagonal bipyramidal configuration. The glutamic acid and aspartic acid residues involved in the binding are in positions 1, 3, 5, 7 and 9 of the polypeptide chain. At position 12, there is a glutamate or aspartate ligand that behaves as a bidentate ligand, providing two oxygen atoms. The ninth residue in the loop is necessarily glycine due to the conformational requirements of the backbone. The coordination sphere of the calcium ion contains only carboxylate oxygen atoms and no nitrogen atoms. This is consistent with the hard nature of the calcium ion.
The protein has two approximately symmetrical domains, separated by a flexible "hinge" region. Binding of calcium causes a conformational change to occur in the protein. Calmodulin participates in an intracellular signaling system by acting as a diffusible second messenger to the initial stimuli.
Troponin
In both cardiac and skeletal muscles, muscular force production is controlled primarily by changes in the intracellular calcium concentration. In general, when calcium rises, the muscles contract and, when calcium falls, the muscles relax. Troponin, along with actin and tropomyosin, is the protein complex to which calcium binds to trigger the production of muscular force.
Transcription factors
Many transcription factors contain a structure known as a zinc finger, a structural module in which a region of protein folds around a zinc ion. The zinc does not directly contact the DNA that these proteins bind to. Instead, the cofactor is essential for the stability of the tightly folded protein chain. In these proteins, the zinc ion is usually coordinated by pairs of cysteine and histidine side-chains.
Other metalloenzymes
There are two types of carbon monoxide dehydrogenase: one contains iron and molybdenum, the other contains iron and nickel. Parallels and differences in catalytic strategies have been reviewed.
Pb2+ (lead) can replace Ca2+ (calcium) as, for example, with calmodulin or Zn2+ (zinc) as with metallocarboxypeptidases.
Some other metalloenzymes are given in the following table, according to the metal involved.
See also
References
External links
Catherine Drennan's Seminar: Snapshots of Metalloproteins
Medicinal inorganic chemistry
Bioinorganic chemistry | Metalloprotein | [
"Chemistry",
"Biology"
] | 4,306 | [
"Medicinal inorganic chemistry",
"Medicinal chemistry",
"Biochemistry",
"Metalloproteins",
"Bioinorganic chemistry"
] |
186,262 | https://en.wikipedia.org/wiki/Belousov%E2%80%93Zhabotinsky%20reaction | A Belousov–Zhabotinsky reaction, or BZ reaction, is one of a class of reactions that serve as a classical example of non-equilibrium thermodynamics, resulting in the establishment of a nonlinear chemical oscillator. The only common element in these oscillators is the inclusion of bromine and an acid. The reactions are important to theoretical chemistry in that they show that chemical reactions do not have to be dominated by equilibrium thermodynamic behavior. These reactions are far from equilibrium and remain so for a significant length of time and evolve chaotically. In this sense, they provide an interesting chemical model of nonequilibrium biological phenomena; as such, mathematical models and simulations of the BZ reactions themselves are of theoretical interest, showing phenomena such as noise-induced order.
An essential aspect of the BZ reaction is its so called "excitability"; under the influence of stimuli, patterns develop in what would otherwise be a perfectly quiescent medium. Some clock reactions such as Briggs–Rauscher and BZ using tris(bipyridine)ruthenium(II) chloride as catalyst can be excited into self-organising activity through the influence of light.
History
The discovery of the phenomenon is credited to Boris Belousov. In 1951, while trying to find the non-organic analog to the Krebs cycle, he noted that in a mix of potassium bromate, cerium(IV) sulfate, malonic acid, and citric acid in dilute sulfuric acid, the ratio of concentration of the cerium(IV) and cerium(III) ions oscillated, causing the colour of the solution to oscillate between a yellow solution and a colorless solution. This is due to the cerium(IV) ions being reduced by malonic acid to cerium(III) ions, which are then oxidized back to cerium(IV) ions by bromate(V) ions.
Belousov made two attempts to publish his finding, but was rejected on the grounds that he could not explain his results to the satisfaction of the editors of the journals to which he submitted his results. Soviet biochemist Simon El'evich Shnoll encouraged Belousov to continue his efforts to publish his results. In 1959 his work was finally published in a less respectable, nonreviewed journal.
After Belousov's publication, Shnoll gave the project in 1961 to a graduate student, Anatol Zhabotinsky, who investigated the reaction sequence in detail; however, the results of these men's work were still not widely disseminated, and were not known in the West until a conference in Prague in 1968.
A number of BZ cocktails are available in the chemical literature and on the web. Ferroin, a complex of phenanthroline and iron, is a common indicator. These reactions, if carried out in petri dishes, result in the formation first of colored spots. These spots grow into a series of expanding concentric rings or perhaps expanding spirals similar to the patterns generated by a cyclic cellular automaton. The colors disappear if the dishes are shaken, and then reappear. The waves continue until the reagents are consumed. The reaction can also be performed in a beaker using a magnetic stirrer.
Andrew Adamatzky, a computer scientist at the University of the West of England, reported on liquid logic gates using the BZ reaction. The BZ reaction has also been used by Juan Pérez-Mercader and his group at Harvard University to create an entirely chemical Turing machine, capable of recognizing a Chomsky type-1 language.
Strikingly similar oscillatory spiral patterns appear elsewhere in nature, at very different spatial and temporal scales, for example the growth pattern of Dictyostelium discoideum, a soil-dwelling amoeba colony. In the BZ reaction, the size of the interacting elements is molecular and the time scale of the reaction is minutes. In the case of the soil amoeba, the size of the elements is typical of single-celled organisms and the times involved are on the order of days to years.
Investigators are also exploring the creation of a "wet computer", using self-creating "cells" and other techniques to mimic certain properties of neurons.
Chemical mechanism
The mechanism for this reaction is very complex and is thought to involve around 18 different steps which have been the subject of a number of research papers.
In a way similar to the Briggs–Rauscher reaction, two key processes (both of which are auto-catalytic) occur; process A generates molecular bromine, giving the red colour, and process B consumes the bromine to give bromide ions. Theoretically, the reaction resembles the ideal Turing pattern, a system that emerges qualitatively from solving the reaction–diffusion equations for a reaction that generates both a reaction inhibitor and a reaction promoter, which diffuse across the medium at different rates.
One of the most common variations on this reaction uses malonic acid (CH2(CO2H)2) as the acid and potassium bromate (KBrO3) as the source of bromine. The overall equation is:
3 CH2(CO2H)2 + 4 BrO3− → 4 Br− + 9 CO2 + 6 H2O
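Reduced kinetic models reproduce the oscillatory behaviour of such mixtures. The sketch below (Python) integrates the two-variable Oregonator, a standard simplified model of the BZ mechanism; the parameter values (ε, q, f), initial conditions, and the crude forward-Euler integration are illustrative assumptions chosen to lie in the oscillatory regime, not a fit to any particular recipe.

```python
def oregonator_spikes(eps=0.04, q=0.0008, f=1.0, dt=5e-5, t_end=30.0):
    """Integrate the two-variable Oregonator with forward Euler and count
    the relaxation-oscillation spikes of the activator.

    x ~ dimensionless HBrO2 concentration (the autocatalytic activator),
    z ~ the oxidized form of the metal-ion catalyst.
    A very small time step is used because the model is stiff.
    """
    x, z, t, spikes = 0.01, 0.01, 0.0, 0
    prev_x = x
    while t < t_end:
        dx = (x * (1.0 - x) + f * z * (q - x) / (q + x)) / eps
        dz = x - z
        x, z, t = x + dt * dx, z + dt * dz, t + dt
        if prev_x < 0.5 <= x:      # upward crossing marks one oscillation
            spikes += 1
        prev_x = x
    return spikes

print("oscillations in 30 dimensionless time units:", oregonator_spikes())
```

Each spike corresponds to one burst of autocatalytic HBrO2 production followed by a slow recovery of the catalyst, the same alternation of fast and slow processes that underlies the visible colour oscillations.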
Variants
Many variants of the reaction exist. The only key chemical is the bromate oxidizer. The catalyst ion is most often cerium, but it can also be manganese, or complexes of iron, ruthenium, cobalt, copper, chromium, silver, nickel and osmium. Many different reductants can be used. (Zhabotinsky, 1964b; Field and Burger, 1985)
Many different patterns can be observed when the reaction is run in a microemulsion.
See also
Autowave
Autowave reverberator
Briggs–Rauscher reaction
Dissipation
Excitable medium
Noise-induced order
Patterns in nature
Reaction–diffusion
Self-oscillation
Self-organization
Stochastic Resonance
Alan Turing who mathematically predicted oscillating chemical reactions in the early 1950s
Brusselator
Oregonator
References
Further reading
External links
Interactive Science Experiment Showcasing the BZ Reaction (A-Level)
A Survey Article on the Mathematics of the BZ Reaction
The Scholarpedia article on the Belousov-Zhabotinsky reaction
The Belousov–Zhabotinski Reaction
The Belousov–Zhabotinsky Reaction
The Phenomenology of the Belousov–Zhabotinsky Reaction, with pictures
BZ reaction and explanation at The Periodic Table of Videos
The Belousov–Zhabotinski Reaction (PDF file)
"Paper cargo surfs chemical waves"—Oscillating chemical waves induced by BZ reactions can propel small objects, New Scientist, 18 February 2008
The home page of Anatol M. Zhabotinsky
Simulating Belousov-Zhabotinsky Reactions in Pixel Bender A simulation of the Belousov–Zhabotinsky reaction running inside Flash Player
Name reactions
Non-equilibrium thermodynamics
Pattern formation
Clock reactions | Belousov–Zhabotinsky reaction | [
"Chemistry",
"Mathematics"
] | 1,469 | [
"Clock reactions",
"Non-equilibrium thermodynamics",
"Name reactions",
"Chemical kinetics",
"Dynamical systems"
] |
186,351 | https://en.wikipedia.org/wiki/In-situ%20conservation | In-situ conservation is the on-site conservation or the conservation of genetic resources in natural populations of plant or animal species, such as forest genetic resources in natural populations of tree species. This process protects the inhabitants and ensures the sustainability of the environment and ecosystem.
Its converse is ex situ conservation, where threatened species are moved to another location. Such locations can include seed libraries, gene banks and similar facilities, where the species are protected through human intervention.
Methods
Nature reserves
Nature reserves (or biosphere reserves) cover very large areas, often more than 5000 km2. They are used to protect species for a long time. There are 3 different classifications for these reserves:
Strict Natural Areas
Managed Natural Areas
Wilderness Areas
Strict natural areas are created to protect the state of nature in a given region, rather than to protect any particular species within their limits. Managed natural areas, by contrast, are established specifically to protect a certain species or community that might be at risk in a strict natural area; they provide a more controlled environment designed to be an optimal habitat in which the species concerned can thrive. Finally, a wilderness area serves the dual purpose of protecting the natural region while also providing recreational opportunities for visitors (excluding motorized transport).
National parks
A national park is an area dedicated to the conservation of wildlife together with its environment, and is also used to conserve scenery and natural and historical objects. It is usually a small reserve covering an area of about 100 to 500 square kilometers. Within biosphere reserves, one or more national parks may also exist.
Wildlife sanctuaries
Wildlife sanctuaries can provide a higher quality of life for animals who are moved there. These animals are placed in specialized habitats that allows for more species-specific behaviors to take place. Wildlife sanctuaries are often used for animals that have been in zoos, circuses, laboratories and more for a long time, and then live the rest of their lives with greater autonomy in these habitats.
Biodiversity hotspots
Several international organizations focus their conservation work on areas designated as biodiversity hotspots.
According to Conservation International, to qualify as a biodiversity hotspot a region must meet two strict criteria:
it must contain at least 1,500 species of vascular plants (more than 0.5% of the world's total) as endemics,
it has to have lost at least 70% of its original habitat.
Biodiversity hotspots make up 1.4% of the Earth's land area, yet they contain more than half of our planet's species.
Gene sanctuary
A gene sanctuary is an area where plants are conserved. It includes both biosphere reserves as well as national parks. Biosphere reserves are developed to be both a place for biodiversity conservation as well as sustainable development. The concept was first developed in the 1970s and include a core, buffer and transition zones. These zones act together to harmonize the conservation and development aspects of the biosphere.
By 2004, some 30 years after the introduction of the biosphere reserve concept, about 459 such conservation areas had been established in 97 countries.
Benefits
One benefit of in situ conservation is that it maintains recovering populations in the environment where they have developed their distinctive properties. Another benefit is that this strategy helps ensure the ongoing processes of evolution and adaptation within their environments. As a last resort, ex situ conservation may be used on some or all of the population, when in situ conservation is too difficult, or impossible. Species also remain adjusted to natural events such as droughts, floods and forest fires, and this method is relatively cheap and convenient.
Reserves
Wildlife and livestock conservation involves the protection of wildlife habitats. Sufficiently large reserves must be maintained to enable the target species to exist in large numbers. The population size must be sufficient to enable the necessary genetic diversity to survive, so that it has a good chance of continuing to adapt and evolve over time. This reserve size can be calculated for target species by examining the population density in naturally occurring situations. The reserves must then be protected from intrusion or destruction by man, and against other catastrophes.
Agriculture
In agriculture, in situ conservation techniques are an effective way to improve, maintain, and use traditional or native varieties of agricultural crops. Such methodologies link the positive output of scientific research with farmers' experience and field work.
First, the accessions of a variety stored at a germplasm bank and those of the same variety multiplied by farmers are jointly tested in the producers' fields and in the laboratory, under different situations and stresses. Thus, the scientific knowledge about the production characteristics of the native varieties is enhanced. Later, the best tested accessions are crossed, mixed, and multiplied under replicable situations. Finally, these improved accessions are supplied to the producers. Thus, farmers are enabled to crop improved selections of their own varieties, instead of being lured to substitute their own varieties with commercial ones or to abandon their crop. This technique of conservation of agricultural biodiversity is more successful in marginal areas, where commercial varieties are not expedient, due to climate and soil fertility constraints, or where the taste and cooking characteristics of traditional varieties compensate for their lower yields.
In India
About 4% of the total geographical area of India is used for in situ conservation.
There are 18 biosphere reserves in India, including Nanda Devi in Uttarakhand, Nokrek in Meghalaya, Manas National Park in Assam and Sundarban in West Bengal.
There are 106 national parks in India, including Kaziranga National Park, which conserves the one-horned rhinoceros, Periyar National Park, conserving the tiger and elephant, and Ranthambore National Park, conserving the tiger.
There are 551 wildlife sanctuaries in India.
Biodiversity hotspots include the Himalayas, the Western Ghats, the Indo-Burma region and the Sundaland.
India has set up its first gene sanctuary in the Garo Hills of Meghalaya for wild relatives of citrus. Efforts are also being made to set up gene sanctuaries for banana, sugarcane, rice and mango.
Community reserves were established as a type of protected area in India in the Wildlife Protection Amendment Act 2002, to provide legal support to community or privately owned reserves which cannot be designated as national park or wildlife sanctuary.
Sacred groves are tracts of forest set aside where all the trees and wildlife within are venerated and given total protection.
In China
China has up to 2538 nature reserves covering 15% of the country.
The majority of in situ conservation areas are concentrated in the regions of Tibet, Qinghai, and Xinjiang. These provinces, all in western China, account for about 56% of the country's nature reserves.
Eastern and southern China contain 90% of the country's population, and there are few nature reserves in these areas. In these regions, nature reserves actively compete with human development projects to support a growing demand for infrastructure. One consequence of this competing development has been the movement of the South China tiger out of its natural habitat.
In eastern and southern China, many undeveloped natural landscapes are fragmented; however, nature reserves may provide crucial refuge for key species and ecosystem services.
See also
Arid Forest Research Institute
Biodiversity
Food plot – the practice of planting crops specifically to support wildlife
Genetic erosion
Habitat corridor
Habitat fragmentation
Refuge (ecology)
Reintroduction
Regional Red List
Restoration ecology
Wildlife corridor
References
Further reading
External links
In-Situ Conservation, The Convention on Biological Diversity
Ex-Situ Conservation, The Convention on Biological Diversity
IUCN/SSC Re-introduction Specialist Group
IUCN Red List of Threatened Species
The Convention on Biological Diversity
In situ conservation
Guidelines: In vivo conservation of animal genetic resources, Food and Agriculture Organization of the UN
Conservation biology
Ecological restoration
Environmental design
Environmental conservation | In-situ conservation | [
"Chemistry",
"Engineering",
"Biology"
] | 1,560 | [
"Environmental design",
"Ecological restoration",
"Environmental engineering",
"Design",
"Conservation biology"
] |
186,497 | https://en.wikipedia.org/wiki/Mefloquine | Mefloquine, sold under the brand name Lariam among others, is a medication used to prevent or treat malaria. When used for prevention it is typically started before potential exposure and continued for several weeks after potential exposure. It can be used to treat mild or moderate malaria but is not recommended for severe malaria. It is taken by mouth.
Common side effects include vomiting, diarrhea, headaches, sleep disorders, and a rash. Serious side effects include potentially long-term mental health problems such as depression, hallucinations, and anxiety and neurological side effects such as poor balance, seizures, and ringing in the ears. It is therefore not recommended in people with a history of mental health problems or epilepsy. It appears to be safe during pregnancy and breastfeeding.
Mefloquine was developed by the United States Army in the 1970s and came into use in the mid-1980s. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Mefloquine is used to both prevent and treat certain forms of malaria.
Malaria prevention
Mefloquine is useful for the prevention of malaria in all areas except for those where parasites may have resistance to multiple medications, and is one of several anti-malarial medications recommended by the United States Centers for Disease Control and Prevention for this purpose. It is also recommended by the Infectious Disease Society of America for malaria prophylaxis as a first or second-line agent, depending on resistance patterns in the malaria found in the geographic region visited. It is typically taken for one to two weeks before entering an area with malaria. Doxycycline and atovaquone/proguanil provide protection within one to two days and may be better tolerated. If a person becomes ill with malaria despite prophylaxis with mefloquine, the use of halofantrine and quinine for treatment may be ineffective.
Malaria treatment
Mefloquine is used as a treatment for chloroquine-sensitive or resistant Plasmodium falciparum malaria, and is deemed a reasonable alternative for uncomplicated chloroquine-resistant Plasmodium vivax malaria. It is one of several drugs recommended by the United States' Centers for Disease Control and Prevention.
It is not recommended for severe malaria infections, particularly infections from P. falciparum, which should be treated with intravenous antimalarials. Mefloquine does not eliminate parasites in the liver phase of the disease, and people with P. vivax malaria should be treated with a second drug that is effective for the liver phase, such as primaquine.
Resistance to mefloquine
Resistance to mefloquine is common around the western border of Cambodia and in other parts of Southeast Asia. The mechanism of resistance is an increase in Pfmdr1 copy number.
Adverse effects
Common side effects include vomiting, diarrhea, headaches, and a rash. Severe side effects requiring hospitalization are rare, but include mental health problems such as depression, hallucinations, anxiety and neurological side effects such as poor balance, seizures, and ringing in the ears. Mefloquine is therefore not recommended in people with a history of psychiatric disorders or epilepsy.
Neurologic and psychiatric
In 2013, the U.S. Food and Drug Administration (FDA) added a boxed warning to the prescription label of mefloquine regarding the potential for neuropsychiatric side effects that may persist even after discontinuing administration of the medication. In 2013 the FDA stated "Neurologic side effects can occur at any time during drug use, and can last for months to years after the drug is stopped or can be permanent." Neurologic effects include dizziness, loss of balance, seizures, and tinnitus. Psychiatric effects include nightmares, visual hallucinations, auditory hallucinations, anxiety, depression, unusual behavior, and suicidal ideations.
Central nervous system events requiring hospitalization occur in about one in 10,000 people taking mefloquine for malaria prevention, with milder events (e.g., dizziness, headache, insomnia, and vivid dreams) in up to 25%. When some measure of subjective severity is applied to the rating of adverse events, about 11–17% of travelers are incapacitated to some degree.
Cardiac
Mefloquine may cause abnormalities with heart rhythms that are visible on electrocardiograms. Combining mefloquine with other drugs that cause similar effects, such as quinine or quinidine, can increase these effects. Combining mefloquine with halofantrine can cause significant increases in QTc intervals.
Contraindications
Mefloquine is contraindicated in those with a previous history of seizures or a recent history of psychiatric disorders.
Pregnancy and breastfeeding
Available data suggests that mefloquine is safe and effective for use by pregnant women during all trimesters of pregnancy, and it is widely used for this indication. In pregnant women, mefloquine appears to pose minimal risk to the fetus, and is not associated with increased risk of birth defects or miscarriages. Compared to other malaria chemoprophylaxis regimens, however, mefloquine may produce more side effects in non-pregnant travelers.
Mefloquine is also safe and effective for use during breastfeeding, though it appears in breast milk in low concentrations. The World Health Organization (WHO) gives approval for the use of mefloquine in the second and third trimesters of pregnancy and use in the first trimester does not mandate termination of pregnancy.
Pharmacology
Elimination
Mefloquine is metabolized primarily through the liver. Its elimination in persons with impaired liver function may be prolonged, resulting in higher plasma levels and an increased risk of adverse reactions. The mean elimination plasma half-life of mefloquine is between two and four weeks. Total clearance is through the liver, and the primary means of excretion is through the bile and feces, as opposed to only 4% to 9% excreted through the urine. During long-term use, the plasma half-life remains unchanged.
Liver function tests should be performed during long-term administration of mefloquine. Alcohol use should be avoided during treatment with mefloquine.
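The clinical significance of such a long half-life for dosing can be sketched with a standard one-compartment repeated-dose model. In the snippet below (Python), the assumed 3-week half-life (within the reported 2–4 week range), the dosing intervals, and the use of the textbook accumulation factor R = 1/(1 − e^(−k·τ)) are illustrative assumptions, not figures from the prescribing information.

```python
import math

def accumulation_factor(half_life, dosing_interval):
    """Steady-state accumulation factor for repeated dosing with
    first-order elimination: R = 1 / (1 - exp(-k * tau))."""
    k = math.log(2) / half_life           # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * dosing_interval))

half_life_weeks = 3.0                     # assumed; reported range is 2-4 weeks
for interval_weeks in (1.0, 2.0):         # weekly vs fortnightly dosing
    r = accumulation_factor(half_life_weeks, interval_weeks)
    print(f"dosing every {interval_weeks:.0f} week(s): accumulation factor = {r:.1f}")
```

With these assumed numbers, weekly dosing accumulates to roughly 1.8 times the steady-state level of fortnightly dosing, which is consistent with the historical switch from a two-weekly to a weekly prophylactic regimen described in the History section below.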
Chemistry
Specifically it is used as mefloquine hydrochloride.
Mefloquine is a chiral molecule with two asymmetric carbon centres, which means it has four different stereoisomers. The drug is currently manufactured and sold as a racemate of the (R,S)- and (S,R)-enantiomers by Hoffmann-La Roche, a Swiss pharmaceutical company. Essentially, it is two drugs in one. Plasma concentrations of the (–)-enantiomer are significantly higher than those for the (+)-enantiomer, and the pharmacokinetics between the two enantiomers are significantly different. The (+)-enantiomer has a shorter half-life than the (–)-enantiomer.
History
Mefloquine was formulated at Walter Reed Army Institute of Research (WRAIR) in the 1970s shortly after the end of the Vietnam war. Mefloquine was number 142,490 of a total of 250,000 antimalarial compounds screened during the study.
Mefloquine was the first Public-Private Venture (PPV) between the US Department of Defense and a pharmaceutical company. WRAIR transferred all its phase I and phase II clinical trial data to Hoffman-LaRoche and Smith Kline. FDA approval as a treatment for malaria was swift. Most notably, phase III safety and tolerability trials were skipped.
The drug was first approved in Switzerland in 1984 by Hoffmann-LaRoche, who brought it to market with the name Lariam.
However, mefloquine was not approved by the FDA for prophylactic use until 1989. This approval was based primarily on compliance, while safety and tolerability were overlooked. Because of the drug's very long half-life, the Centers for Disease Control originally recommended a mefloquine dosage of 250 mg every two weeks; however, this caused an unacceptably high malaria rate in the Peace Corps volunteers who participated in the approval study, so the drug regimen was switched to once a week.
By 1991, Hoffman was marketing the drug on a worldwide basis.
By the 1992 UNITAF, Canadian soldiers were being prescribed the drug en masse.
By 1994, medical professionals were noting "severe psychiatric side effects observed during prophylaxis and treatment with mefloquine", and recommending that "the absence of contraindications and minor side effects during an initial course of mefloquine should be confirmed before another course is prescribed." Other doctors at the University Hospital of Zurich noted in a case of "a 47-year-old, previously healthy Japanese tourist" who had severe neuropsychiatric side-effects from the drug that
The first randomized, controlled trial on a mixed population was performed in 2001. Prophylaxis with mefloquine was compared to prophylaxis with atovaquone-proguanil. Roughly 67% of participants in the mefloquine arm reported greater than or equal to one adverse event, versus 71% in the atovaquone-proguanil arm. In the mefloquine arm, 5% of the users reported severe events requiring medical attention, versus 1.2% in the atovaquone-proguanil arm.
In August 2009, Roche stopped marketing Lariam in the United States.
Retired soldier Johnny Mercer, who was later appointed Minister for Veterans Affairs by Boris Johnson, said in 2015 that he had received "a letter about once or twice a week" about ill-effects from the drug. In July 2016, Roche took this brand off the market in Ireland.
Military
In 2006, the Australian military deemed mefloquine "a third-line drug" alternative, and over the five years from 2011 only 25 soldiers had been prescribed the drug, and only in cases of their intolerance for other alternatives. Between 2001 and 2012, 16,000 Canadian soldiers sent to Afghanistan were given the drug as a preventative measure. In 2013, the US Army banned mefloquine from use by its special forces such as the Green Berets. In autumn 2016, the UK military followed suit with their Australian peers after a parliamentary inquiry into the matter revealed that it can cause permanent side effects and brain damage.
In early December 2016, the German defence ministry removed mefloquine from the list of medications it would provide to its soldiers.
In autumn 2016, Canadian Surgeon General Brigadier General Hugh Colin MacKay told a parliamentary committee that faulty science supported the assertion that the drug has indelible noxious side effects. An expert from Health Canada named Barbara Raymond told the same committee that the evidence she had read failed to support the conclusion of indelible side effects. Canadian soldiers who took mefloquine when deployed overseas have claimed they have been left with ongoing mental health problems.
In 2020 the UK Ministry of Defence (MoD) admitted to a breach of duty regarding the use of mefloquine, acknowledging numerous instances of failure to assess the risks and warn of potential side effects of the drug.
Research
In June 2010, the first case report appeared of a progressive multifocal leukoencephalopathy being successfully treated with mefloquine. Mefloquine can also act against the JC virus. Administration of mefloquine seemed to eliminate the virus from the patient's body and prevented further neurological deterioration.
Mefloquine alters cholinergic synaptic transmission through both postsynaptic and presynaptic actions. The postsynaptic action to inhibit acetylcholinesterase changes transmission across synapses in the brain.
References
Further reading
External links
American inventions
Antimalarial agents
Chirality
Drug safety
Drugs developed by Hoffmann-La Roche
Medical controversies
Piperidines
Quinolines
Racemic mixtures
Trifluoromethyl compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Mefloquine | [
"Physics",
"Chemistry",
"Biology"
] | 2,562 | [
"Pharmacology",
"Racemic mixtures",
"Origin of life",
"Stereochemistry",
"Drug safety",
"Chirality",
"Chemical mixtures",
"Asymmetry",
"Biochemistry",
"Symmetry",
"Biological hypotheses"
] |
187,185 | https://en.wikipedia.org/wiki/Calmodulin | Calmodulin (CaM) (an abbreviation for calcium-modulated protein) is a multifunctional intermediate calcium-binding messenger protein expressed in all eukaryotic cells. It is an intracellular target of the secondary messenger Ca2+, and the binding of Ca2+ is required for the activation of calmodulin. Once bound to Ca2+, calmodulin acts as part of a calcium signal transduction pathway by modifying its interactions with various target proteins such as kinases or phosphatases.
Structure
Calmodulin is a small, highly conserved protein that is 148 amino acids long (16.7 kDa). The protein has two approximately symmetrical globular domains (the N- and C- domains) each containing a pair of EF hand motifs separated by a flexible linker region for a total of four Ca2+ binding sites, two in each globular domain. In the Ca2+-free state, the helices that form the four EF-hands are collapsed in a compact orientation, and the central linker is disordered; in the Ca2+-saturated state, the EF-hand helices adopt an open orientation roughly perpendicular to one another, and the central linker forms an extended alpha-helix in the crystal structure, but remains largely disordered in solution. The C-domain has a higher binding affinity for Ca2+ than the N-domain.
Calmodulin is structurally quite similar to troponin C, another Ca2+-binding protein containing four EF-hand motifs. However, troponin C contains an additional alpha-helix at its N-terminus, and is constitutively bound to its target, troponin I. It therefore does not exhibit the same diversity of target recognition as does calmodulin.
Importance of flexibility in calmodulin
Calmodulin's ability to recognize a tremendous range of target proteins is due in large part to its structural flexibility. In addition to the flexibility of the central linker domain, the N- and C-domains undergo open-closed conformational cycling in the Ca2+-bound state. Calmodulin also exhibits great structural variability, and undergoes considerable conformational fluctuations, when bound to targets. Moreover, the predominantly hydrophobic nature of binding between calmodulin and most of its targets allows for recognition of a broad range of target protein sequences. Together, these features allow calmodulin to recognize some 300 target proteins exhibiting a variety of CaM-binding sequence motifs.
Mechanism
Binding of Ca2+ by the EF-hands causes an opening of the N- and C-domains, which exposes hydrophobic target-binding surfaces. These surfaces interact with complementary nonpolar segments on target proteins, typically consisting of groups of bulky hydrophobic amino acids separated by 10–16 polar and/or basic amino acids. The flexible central domain of calmodulin allows the protein to wrap around its target, although alternate modes of binding are known. "Canonical" targets of calmodulin, such as myosin light-chain kinases and CaMKII, bind only to the Ca2+-bound protein, whereas some proteins, such as NaV channels and IQ-motif proteins, also bind to calmodulin in the absence of Ca2+. Binding of calmodulin induces conformational rearrangements in the target protein via "mutually induced fit", leading to changes in the target protein's function.
Calcium binding by calmodulin exhibits considerable cooperativity, making calmodulin an unusual example of a monomeric (single-chain) cooperative binding protein. Furthermore, target binding alters the binding affinity of calmodulin toward Ca2+ ions, which allows for complex allosteric interplay between Ca2+ and target binding interactions. This influence of target binding on Ca2+ affinity is believed to allow for Ca2+ activation of proteins that are constitutively bound to calmodulin, such as small-conductance Ca2+-activated potassium (SK) channels.
Although calmodulin principally operates as a Ca2+ binding protein, it also coordinates other metal ions. For example, in the presence of typical intracellular concentrations of Mg2+ (0.5–1.0 mM) and resting concentrations of Ca2+ (100 nM), calmodulin's Ca2+ binding sites are at least partially saturated by Mg2+. This Mg2+ is displaced by the higher concentrations of Ca2+ generated by signaling events. Similarly, Ca2+ may itself be displaced by other metal ions, such as the trivalent lanthanides, that associate with calmodulin's binding pockets even more strongly than Ca2+. Though such ions distort calmodulin's structure and are generally not physiologically relevant due to their scarcity in vivo, they have nonetheless seen wide scientific use as reporters of calmodulin structure and function.
Role in animals
Calmodulin mediates many crucial processes such as inflammation, metabolism, apoptosis, smooth muscle contraction, intracellular movement, short-term and long-term memory, and the immune response. Calcium participates in an intracellular signaling system by acting as a diffusible second messenger to the initial stimuli. It does this by binding various targets in the cell including a large number of enzymes, ion channels, aquaporins and other proteins. Calmodulin is expressed in many cell types and can have different subcellular locations, including the cytoplasm, within organelles, or associated with the plasma or organelle membranes, but it is always found intracellularly. Many of the proteins that calmodulin binds are unable to bind calcium themselves, and use calmodulin as a calcium sensor and signal transducer. Calmodulin can also make use of the calcium stores in the endoplasmic reticulum, and the sarcoplasmic reticulum. Calmodulin can undergo post-translational modifications, such as phosphorylation, acetylation, methylation and proteolytic cleavage, each of which has potential to modulate its actions.
Specific examples
Role in smooth muscle contraction
Calmodulin plays an important role in excitation contraction (EC) coupling and the initiation of the cross-bridge cycling in smooth muscle, ultimately causing smooth muscle contraction. In order to activate contraction of smooth muscle, the head of the myosin light chain must be phosphorylated. This phosphorylation is done by myosin light chain (MLC) kinase. This MLC kinase is activated by a calmodulin when it is bound by calcium, thus making smooth muscle contraction dependent on the presence of calcium, through the binding of calmodulin and activation of MLC kinase.
Another way that calmodulin affects muscle contraction is by controlling the movement of Ca2+ across both the cell and sarcoplasmic reticulum membranes. The Ca2+ channels, such as the ryanodine receptor of the sarcoplasmic reticulum, can be inhibited by calmodulin bound to calcium, thus affecting the overall levels of calcium in the cell. Calcium pumps take calcium out of the cytoplasm or store it in the endoplasmic reticulum and this control helps regulate many downstream processes.
This is a very important function of calmodulin because it indirectly plays a role in every physiological process that is affected by smooth muscle contraction such as digestion and contraction of arteries (which helps distribute blood and regulate blood pressure).
Role in metabolism
Calmodulin plays an important role in the activation of phosphorylase kinase, which ultimately leads to glucose being cleaved from glycogen by glycogen phosphorylase.
Calmodulin also plays an important role in lipid metabolism by affecting calcitonin. Calcitonin is a polypeptide hormone that lowers blood Ca2+ levels and activates Gs protein cascades that lead to the generation of cAMP. The actions of calcitonin can be blocked by inhibiting the actions of calmodulin, suggesting that calmodulin plays a crucial role in the activation of calcitonin.
Role in short-term and long-term memory
Ca2+/calmodulin-dependent protein kinase II (CaMKII) plays a crucial role in a type of synaptic plasticity known as long-term potentiation (LTP) which requires the presence of calcium/calmodulin. CaMKII contributes to the phosphorylation of an AMPA receptor which increases the sensitivity of AMPA receptors. Furthermore, research shows that inhibiting CaMKII interferes with LTP.
Role in plants
While yeasts have only a single CaM gene, plants and vertebrates contain an evolutionarily conserved form of CaM genes. The difference between plants and animals in Ca2+ signaling is that the plants contain an extended family of the CaM in addition to the evolutionarily conserved form. Calmodulins play an essential role in plant development and adaptation to environmental stimuli.
Calcium plays a key role in the structural integrity of the cell wall and the membrane system of the cell. However, high calcium levels can be toxic to a plant's cellular energy metabolism and, hence, the Ca2+ concentration in the cytosol is maintained at a submicromolar level by removing the cytosolic Ca2+ to either the apoplast or the lumen of the intracellular organelles. Ca2+ pulses created due to increased influx and efflux act as cellular signals in response to external stimuli such as hormones, light, gravity, abiotic stress factors and also interactions with pathogens.
CMLs (CaM-related proteins)
Plants contain CaM-related proteins (CMLs) apart from the typical CaM proteins. The CMLs have about 15% amino acid similarity with the typical CaMs. Arabidopsis thaliana contains about 50 different CML genes, which raises the question of what purpose this diverse range of proteins serves in cellular function. All plant species exhibit this diversity in the CML genes. The different CaMs and CMLs differ in their affinity to bind and activate the CaM-regulated enzymes in vivo. The CaMs and CMLs are also found in different organelle compartments.
Plant growth and development
In Arabidopsis, the protein DWF1 plays an enzymatic role in the biosynthesis of brassinosteroids, steroid hormones in plants that are required for growth. CaM interacts with DWF1, and DWF1 that is unable to bind CaM cannot produce a regular growth phenotype in plants. Hence, CaM is essential for DWF1 function in plant growth.
CaM binding proteins are also known to regulate reproductive development in plants. For instance, the CaM-binding protein kinase in tobacco acts as a negative regulator of flowering. However, these CaM-binding protein kinases are also present in the shoot apical meristem of tobacco, and a high concentration of these kinases in the meristem causes a delayed transition to flowering in the plant.
S-locus receptor kinase (SRK) is another protein kinase that interacts with CaM. SRK is involved in the self-incompatibility responses involved in pollen-pistil interactions in Brassica.
CaM targets in Arabidopsis are also involved in pollen development and fertilization. Ca2+ transporters are essential for pollen tube growth. Hence, a constant Ca2+ gradient is maintained at the apex of the pollen tube for elongation during the process of fertilization. Similarly, CaM is also essential at the pollen tube apex, where its primary role involves guiding pollen tube growth.
Interaction with microbes
Nodule formation
Ca2+ plays an important role in nodule formation in legumes. Nitrogen is an essential element required in plants, and many legumes, unable to fix nitrogen independently, pair symbiotically with nitrogen-fixing bacteria that reduce nitrogen to ammonia. Establishment of this legume-Rhizobium interaction requires the Nod factor produced by the Rhizobium bacteria. The Nod factor is recognized by the root hair cells that are involved in nodule formation in legumes. Ca2+ responses of varied nature have been characterized as being involved in Nod factor recognition. Initially there is a Ca2+ flux at the tip of the root hair, followed by repetitive oscillations of Ca2+ in the cytosol; a Ca2+ spike also occurs around the nucleus. DMI3, an essential gene for Nod factor signaling that functions downstream of the Ca2+ spiking signature, might be recognizing the Ca2+ signature. Further, several CaM and CML genes in Medicago and Lotus are expressed in nodules.
Pathogen defense
Among the diverse range of defense strategies plants utilize against pathogens, Ca2+ signaling is very common. Free Ca2+ levels in the cytoplasm increase in response to a pathogenic infection. Ca2+ signatures of this nature usually activate the plant defense system by inducing defense-related genes and hypersensitive cell death. CaMs, CMLs and CaM-binding proteins are some of the recently identified elements of the plant defense signaling pathways. Several CML genes in tobacco, bean and tomato are responsive to pathogens. CML43 is a CaM-related protein that was isolated from the APR134 gene in the disease-resistant leaves of Arabidopsis for gene expression analysis and is rapidly induced when the leaves are inoculated with Pseudomonas syringae. These genes are also found in tomatoes (Solanum lycopersicum). CML43 from APR134 also binds to Ca2+ ions in vitro, which shows that CML43 and APR134 are involved in Ca2+-dependent signaling during the plant immune response to bacterial pathogens. CML9 expression in Arabidopsis thaliana is rapidly induced by phytopathogenic bacteria, flagellin and salicylic acid. Expression of soybean SCaM4 and SCaM5 in transgenic tobacco and Arabidopsis activates genes related to pathogen resistance and also results in enhanced resistance to a wide spectrum of pathogen infection. The same is not true for soybean SCaM1 and SCaM2, which are highly conserved CaM isoforms. The AtBAG6 protein is a CaM-binding protein that binds to CaM only in the absence of Ca2+ and not in its presence. AtBAG6 is responsible for the hypersensitive response of programmed cell death, which prevents the spread of pathogen infection or restricts pathogen growth. Mutations in CaM-binding proteins can lead to severe effects on the defense response of plants towards pathogen infections. Cyclic nucleotide-gated channels (CNGCs) are functional protein channels in the plasma membrane that have overlapping CaM binding sites and transport divalent cations such as Ca2+. However, the exact role of the positioning of the CNGCs in this pathway for plant defense is still unclear.
Abiotic stress response in plants
Changes in intracellular Ca2+ levels are used as signatures for diverse responses to mechanical stimuli, osmotic and salt treatments, and cold and heat shocks. Different root cell types show different Ca2+ responses to osmotic and salt stresses, and this implies cellular specificities of Ca2+ patterns. In response to external stress, CaM activates glutamate decarboxylase (GAD), which catalyzes the conversion of L-glutamate to GABA. Tight control of GABA synthesis is important for plant development, and increased GABA levels can substantially affect plant development. Therefore, external stress can affect plant growth and development, and CaM is involved in the pathway controlling this effect.
Plant examples
Sorghum
The plant sorghum is a well-established model organism that can adapt to hot and dry environments. For this reason, it is used as a model to study calmodulin's role in plants. Sorghum seedlings express a glycine-rich RNA-binding protein, SbGRBP. This particular protein can be modulated by using heat as a stressor. Its location in the cell nucleus and cytosol demonstrates an interaction with calmodulin that requires Ca2+. Exposing the plant to diverse stress conditions can cause different proteins that enable the plant cells to tolerate environmental changes to become repressed. These modulated stress proteins are shown to interact with CaM. Because of the CaMBP genes expressed in it, sorghum is depicted as a “model crop” for researching tolerance to heat and drought stress.
Arabidopsis
In an Arabidopsis thaliana study, hundreds of different proteins were shown to be capable of binding CaM in plants.
Family members
Calmodulin 1 ()
Calmodulin 2 ()
Calmodulin 3 ()
calmodulin 1 pseudogene 1 ()
Calmodulin-like 3 ()
Calmodulin-like 4 ()
Calmodulin-like 5 ()
Calmodulin-like 6 ()
Other calcium-binding proteins
Calmodulin belongs to one of the two main groups of calcium-binding proteins, called EF hand proteins. The other group, the annexins (such as lipocortin), bind calcium and phospholipids. Many other proteins bind calcium, although binding calcium may not be considered their principal function in the cell.
See also
Protein kinase
Ca2+/calmodulin-dependent protein kinase
References
External links
Proteopedia page for Calmodulin and its conformational change
EF-hand-containing proteins
Cell signaling
Signal transduction
Calcium signaling | Calmodulin | [
"Chemistry",
"Biology"
] | 3,655 | [
"Biochemistry",
"Neurochemistry",
"Calcium signaling",
"Signal transduction"
] |
7,478,096 | https://en.wikipedia.org/wiki/Nuclear%20pharmacy | Nuclear pharmacy, also known as radiopharmacy, involves preparation of radioactive materials for patient administration that will be used to diagnose and treat specific diseases in nuclear medicine. It generally involves the practice of combining a radionuclide tracer with a pharmaceutical component that determines the biological localization in the patient. Radiopharmaceuticals are generally not designed to have a therapeutic effect themselves, but there is a risk to staff from radiation exposure and to patients from possible contamination in production. Due to these intersecting risks, nuclear pharmacy is a heavily regulated field. The majority of diagnostic nuclear medicine investigations are performed using technetium-99m.
History
The concept of nuclear pharmacy was first described in 1960 by Captain William H. Briner while at the National Institutes of Health (NIH) in Bethesda, Maryland. Along with Mr. Briner, John E. Christian, who was a professor in the School of Pharmacy at Purdue University, had written articles and contributed in other ways to set the stage for nuclear pharmacy. William Briner started the NIH Radiopharmacy in 1958. John Christian and William Briner were both active on key national committees responsible for the development, regulation and utilization of radiopharmaceuticals. A technetium-99m generator became commercially available, followed by a number of Tc-99m based radiopharmaceuticals.
In the United States, nuclear pharmacy was the first pharmacy specialty; it was established in 1978 by the Board of Pharmacy Specialties.
Various models of production exist internationally. Institutional nuclear pharmacy is typically operated through large medical centers or hospitals while commercial centralized nuclear pharmacies provide their services to subscriber hospitals. They prepare and dispense radiopharmaceuticals as unit doses that are then delivered to the subscriber hospital by nuclear pharmacy personnel.
Operation
A few basic steps are typically involved in technetium based preparations. First the active technetium is obtained from a radionuclide generator on site, which is then added to a non-radioactive kit containing the pharmaceutical component. Further steps may be required depending on the materials in question to ensure full binding of the two components. These procedures are usually carried out in a clean room or isolator to provide radiation shielding and sterile conditions.
For Positron Emission Tomography (PET), Fludeoxyglucose (18F) is the most common radiopharmaceutical, with the radioactive component usually obtained from a cyclotron. The short half life of Fluorine-18 and many other PET isotopes necessitates rapid production. PET radiopharmaceuticals are now often produced by automated computer controlled systems to reduce complexity and radiation doses to staff.
Training and regulation
Radiopharmacy is a heavily regulated field, as it combines several practices and fields which may come under the purview of multiple regulators and legislation. These include occupational exposure of staff to ionising radiation, preparation of medicines, patient exposure to ionising radiation, transport of radioactive materials, and environmental exposure to ionising radiation. Different regulations may cover the various stages involved in radiopharmacies, ranging from production of "cold" (non-radioactive) kits, to the marketing and distribution of final products.
Staff working in nuclear pharmacies require extensive training on aspects of good manufacturing practice, radiation safety concerns and aseptic dispensing. In the United States an authorised nuclear pharmacist must be a fully qualified pharmacist with evidence of additional training and qualification in nuclear pharmacy practice. Several European Union directives cover radiopharmaceuticals as a special group of medicines, reflecting the wide range of types of producers and staff groups that may be involved. In the UK qualified pharmacists may be involved along with clinical scientists or technologists, with relevant training.
See also
Nuclear medicine
Pharmacy
Radiopharmacology
References
Pharmacy
Nuclear medicine
Medical physics | Nuclear pharmacy | [
"Physics",
"Chemistry"
] | 801 | [
"Pharmacology",
"Applied and interdisciplinary physics",
"Medical physics",
"Pharmacy"
] |
7,479,239 | https://en.wikipedia.org/wiki/Implicit%20solvation | Implicit solvation (sometimes termed continuum solvation) is a method to represent solvent as a continuous medium instead of individual “explicit” solvent molecules, most often used in molecular dynamics simulations and in other applications of molecular mechanics. The method is often applied to estimate free energy of solute-solvent interactions in structural and chemical processes, such as folding or conformational transitions of proteins, DNA, RNA, and polysaccharides, association of biological macromolecules with ligands, or transport of drugs across biological membranes.
The implicit solvation model is justified in liquids, where the potential of mean force can be applied to approximate the averaged behavior of many highly dynamic solvent molecules. However, the interfaces and the interiors of biological membranes or proteins can also be considered as media with specific solvation or dielectric properties. These media are not necessarily uniform, since their properties can be described by different analytical functions, such as “polarity profiles” of lipid bilayers.
There are two basic types of implicit solvent methods: models based on accessible surface areas (ASA) that were historically the first, and more recent continuum electrostatics models, although various modifications and combinations of the different methods are possible.
The accessible surface area (ASA) method is based on experimental linear relations between Gibbs free energy of transfer and the surface area of a solute molecule. This method operates directly with free energy of solvation, unlike molecular mechanics or electrostatic methods that include only the enthalpic component of free energy. The continuum representation of solvent also significantly improves the computational speed and reduces errors in statistical averaging that arise from incomplete sampling of solvent conformations, so that the energy landscapes obtained with implicit and explicit solvent are different. Although the implicit solvent model is useful for simulations of biomolecules, this is an approximate method with certain limitations and problems related to parameterization and treatment of ionization effects.
Accessible surface area-based method
The free energy of solvation of a solute molecule in the simplest ASA-based method is given by:

$\Delta G_{solv} = \sum_{i} \sigma_{i} \, ASA_{i}$

where $ASA_{i}$ is the accessible surface area of atom i, and $\sigma_{i}$ is the solvation parameter of atom i, i.e., a contribution to the free energy of solvation of the particular atom i per surface unit area. The needed solvation parameters for different types of atoms (carbon (C), nitrogen (N), oxygen (O), sulfur (S), etc.) are usually determined by a least squares fit of the calculated and experimental transfer free energies for a series of organic compounds. The experimental energies are determined from partition coefficients of these compounds between different solutions or media using standard mole concentrations of the solutes.
Notably, solvation energy is the free energy needed to transfer a solute molecule from a solvent to vacuum (gas phase). This energy can supplement the intramolecular energy in vacuum calculated in molecular mechanics. Thus, the needed atomic solvation parameters were initially derived from water-gas partition data. However, the dielectric properties of proteins and lipid bilayers are much more similar to those of nonpolar solvents than to vacuum. Newer parameters have thus been derived from octanol-water partition coefficients or other similar data. Such parameters actually describe transfer energy between two condensed media or the difference of two solvation energies.
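Because the ASA model is a simple per-atom sum, it can be evaluated in a few lines of code. The sketch below is a minimal illustration, assuming precomputed accessible surface areas and hypothetical solvation parameters (the σ values shown are placeholders, not a fitted parameter set).

```python
# Minimal sketch of an ASA-based solvation free energy:
#   delta_G_solv = sum_i sigma_i * ASA_i
# The sigma values are illustrative placeholders; real parameter sets are
# fitted to experimental transfer free energies.

SIGMA = {"C": 0.012, "N": -0.060, "O": -0.045, "S": 0.010}  # kcal/(mol*A^2), hypothetical

def asa_solvation_energy(atoms):
    """atoms: iterable of (atom_type, accessible_surface_area_in_A2) pairs."""
    return sum(SIGMA[atom_type] * asa for atom_type, asa in atoms)

# toy "molecule" with three exposed atoms
example = [("C", 30.0), ("O", 12.5), ("N", 8.0)]
print(asa_solvation_energy(example))  # kcal/mol, additive over atoms
```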
Poisson-Boltzmann
The Poisson-Boltzmann equation (PB) describes the electrostatic environment of a solute in a solvent containing ions. It can be written in cgs units as:

$\vec{\nabla}\cdot\left[\epsilon(\vec{r})\,\vec{\nabla}\Psi(\vec{r})\right] = -4\pi\rho^{f}(\vec{r}) - 4\pi\sum_{i}c_{i}^{\infty}z_{i}q\,\lambda(\vec{r})\exp\left(\frac{-z_{i}q\Psi(\vec{r})}{kT}\right)$

or (in mks):

$\vec{\nabla}\cdot\left[\epsilon(\vec{r})\,\vec{\nabla}\Psi(\vec{r})\right] = -\rho^{f}(\vec{r}) - \sum_{i}c_{i}^{\infty}z_{i}q\,\lambda(\vec{r})\exp\left(\frac{-z_{i}q\Psi(\vec{r})}{kT}\right)$

where $\epsilon(\vec{r})$ represents the position-dependent dielectric, $\Psi(\vec{r})$ represents the electrostatic potential, $\rho^{f}(\vec{r})$ represents the charge density of the solute, $c_{i}^{\infty}$ represents the concentration of the ion i at a distance of infinity from the solute, $z_{i}$ is the valence of the ion, q is the charge of a proton, k is the Boltzmann constant, T is the temperature, and $\lambda(\vec{r})$ is a factor for the position-dependent accessibility of position r to the ions in solution (often set to uniformly 1). If the potential is not large, the equation can be linearized to be solved more efficiently.
Although this equation has solid theoretical justification, it is computationally expensive to calculate without approximations. A number of numerical Poisson-Boltzmann equation solvers of varying generality and efficiency have been developed, including one application with a specialized computer hardware platform. However, performance from PB solvers does not yet equal that from the more commonly used generalized Born approximation.
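As a small illustration of the linearized regime: for a point charge in a uniform dielectric with a symmetric 1:1 salt, linearizing the equation gives Debye–Hückel screening, characterized by the Debye length. The sketch below computes it for roughly physiological conditions; the salt concentration, dielectric constant, and temperature are assumed example values, not parameters taken from the text.

```python
import math

# Debye screening length in the linearized Poisson-Boltzmann (Debye-Hueckel) limit,
#   lambda_D = sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I)),
# for a 1:1 electrolyte with ionic strength I in mol/m^3 (SI units throughout).

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23       # Boltzmann constant, J/K
E    = 1.602176634e-19    # elementary charge, C
NA   = 6.02214076e23      # Avogadro constant, 1/mol

def debye_length(conc_molar, eps_r=78.5, temperature=298.15):
    """Debye length in meters for a 1:1 salt at molar concentration conc_molar."""
    ionic_strength = conc_molar * 1000.0  # mol/L -> mol/m^3; I = c for a 1:1 salt
    kappa_sq = 2.0 * NA * E**2 * ionic_strength / (eps_r * EPS0 * KB * temperature)
    return 1.0 / math.sqrt(kappa_sq)

print(debye_length(0.15) * 1e9, "nm")  # roughly 0.8 nm at ~0.15 M salt
```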
Generalized Born model
The Generalized Born (GB) model is an approximation to the exact (linearized) Poisson-Boltzmann equation. It is based on modeling the solute as a set of spheres whose internal dielectric constant differs from the external solvent. The model has the following functional form:

$G_{s} = -\frac{1}{8\pi\epsilon_{0}}\left(1-\frac{1}{\epsilon}\right)\sum_{i,j}\frac{q_{i}q_{j}}{f_{GB}}$

where

$f_{GB} = \sqrt{r_{ij}^{2} + a_{ij}^{2}\,e^{-D}}$

and

$D = \left(\frac{r_{ij}}{2a_{ij}}\right)^{2}, \qquad a_{ij} = \sqrt{a_{i}a_{j}}$

where $\epsilon_{0}$ is the permittivity of free space, $\epsilon$ is the dielectric constant of the solvent being modeled, $q_{i}$ is the electrostatic charge on particle i, $r_{ij}$ is the distance between particles i and j, and $a_{i}$ is a quantity (with the dimension of length) termed the effective Born radius. The effective Born radius of an atom characterizes its degree of burial inside the solute; qualitatively it can be thought of as the distance from the atom to the molecular surface. Accurate estimation of the effective Born radii is critical for the GB model.
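A minimal numerical sketch of the pairwise GB sum is shown below. The charges, coordinates, and effective Born radii are made-up illustrative inputs (in practice the radii come from one of the published estimation schemes), and SI units are used throughout.

```python
import math

# Sketch of the Generalized Born solvation energy,
#   G_s = -(1/(8*pi*eps0)) * (1 - 1/eps) * sum_{i,j} q_i*q_j / f_GB(i, j),
# with f_GB = sqrt(r_ij**2 + a_i*a_j*exp(-r_ij**2 / (4*a_i*a_j))).
# The i == j terms reduce to Born self-energies because f_GB(i, i) = a_i.

EPS0 = 8.8541878128e-12  # F/m

def f_gb(r_ij, a_i, a_j):
    return math.sqrt(r_ij**2 + a_i * a_j * math.exp(-r_ij**2 / (4.0 * a_i * a_j)))

def gb_energy(charges, positions, born_radii, eps_solvent=78.5):
    """Charges in coulombs, positions in meters, effective Born radii in meters."""
    prefactor = -(1.0 - 1.0 / eps_solvent) / (8.0 * math.pi * EPS0)
    total = 0.0
    for q_i, p_i, a_i in zip(charges, positions, born_radii):
        for q_j, p_j, a_j in zip(charges, positions, born_radii):
            total += q_i * q_j / f_gb(math.dist(p_i, p_j), a_i, a_j)
    return prefactor * total  # joules for this toy system

e = 1.602176634e-19
print(gb_energy([e, -e], [(0.0, 0.0, 0.0), (3.0e-10, 0.0, 0.0)],
                [1.5e-10, 1.5e-10]))
```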
With accessible surface area
The Generalized Born (GB) model augmented with the hydrophobic solvent accessible surface area (SA) term is GBSA. It is among the most commonly used implicit solvent model combinations. The use of this model in the context of molecular mechanics is termed MM/GBSA. Although this formulation has been shown to successfully identify the native states of short peptides with well-defined tertiary structure, the conformational ensembles produced by GBSA models in other studies differ significantly from those produced by explicit solvent and do not identify the protein's native state. In particular, salt bridges are overstabilized, possibly due to insufficient electrostatic screening, and a higher-than-native alpha helix population was observed. Variants of the GB model have also been developed to approximate the electrostatic environment of membranes, which have had some success in folding the transmembrane helixes of integral membrane proteins.
Ad hoc fast solvation models
Another possibility is to use ad hoc quick strategies to estimate solvation free energy. A first generation of fast implicit solvents is based on the calculation of a per-atom solvent accessible surface area. For each of group of atom types, a different parameter scales its contribution to solvation ("ASA-based model" described above).
Another strategy is implemented for the CHARMM19 force-field and is called EEF1. EEF1 is based on a Gaussian-shaped solvent exclusion. The solvation free energy is

$\Delta G_{i}^{solv} = \Delta G_{i}^{ref} - \sum_{j} \int_{V_{j}} f_{i}(r)\, dr$
The reference solvation free energy of i corresponds to a suitably chosen small molecule in which group i is essentially fully solvent-exposed. The integral is over the volume Vj of group j and the summation is over all groups j around i. EEF1 additionally uses a distance-dependent (non-constant) dielectric, and ionic side-chains of proteins are simply neutralized. It is only 50% slower than a vacuum simulation. This model was later augmented with the hydrophobic effect and called Charmm19/SASA.
Hybrid implicit-explicit solvation models
It is possible to include a layer or sphere of water molecules around the solute, and model the bulk with an implicit solvent. Such an approach is proposed by M. J. Frisch and coworkers and by other authors. For instance in Ref. the bulk solvent is modeled with a Generalized Born approach and the multi-grid method used for Coulombic pairwise particle interactions. It is reported to be faster than a full explicit solvent simulation with the particle mesh Ewald summation (PME) method of electrostatic calculation. There are a range of hybrid methods available capable of accessing and acquiring information on solvation.
Effects unaccounted for
The hydrophobic effect
Models like PB and GB allow estimation of the mean electrostatic free energy but do not account for the (mostly) entropic effects arising from solute-imposed constraints on the organization of the water or solvent molecules. This is termed the hydrophobic effect and is a major factor in the folding process of globular proteins with hydrophobic cores. Implicit solvation models may be augmented with a term that accounts for the hydrophobic effect. The most popular way to do this is by taking the solvent accessible surface area (SASA) as a proxy of the extent of the hydrophobic effect. Most authors place the extent of this effect between 5 and 45 cal/(Å2 mol). Note that this surface area pertains to the solute, while the hydrophobic effect is mostly entropic in nature at physiological temperatures and occurs on the side of the solvent.
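As an order-of-magnitude illustration (using an assumed mid-range coefficient rather than a value from any particular parameter set), burying 500 Å2 of nonpolar surface at 25 cal/(Å2 mol) contributes roughly

$\Delta G_{hydrophobic} \approx \gamma \cdot \Delta SASA = 25\ \mathrm{cal/(\AA^{2}\,mol)} \times 500\ \mathrm{\AA^{2}} = 12.5\ \mathrm{kcal/mol}$

to the folding free energy in such an augmented model.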
Viscosity
Implicit solvent models such as PB, GB, and SASA lack the viscosity that water molecules impart by randomly colliding and impeding the motion of solutes through their van der Waals repulsion. In many cases, this is desirable because it makes sampling of configurations and phase space much faster. This acceleration means that more configurations are visited per simulated time unit, on top of whatever CPU acceleration is achieved in comparison to explicit solvent. It can, however, lead to misleading results when kinetics are of interest.
Viscosity may be added back by using Langevin dynamics instead of Hamiltonian mechanics and choosing an appropriate damping constant for the particular solvent. In practical biomolecular simulations one can often speed up conformational search significantly (up to 100 times in some cases) by using a much lower collision frequency γ. Recent work has also been done developing thermostats based on fluctuating hydrodynamics to account for momentum transfer through the solvent and related thermal fluctuations. One should keep in mind, though, that the folding rate of proteins does not depend linearly on viscosity for all regimes.
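The sketch below shows where the damping constant enters a simple one-dimensional Langevin integrator; the mass, friction, spring constant, and step size are arbitrary illustrative choices, not recommendations for any particular system.

```python
import math, random

# One-dimensional Langevin step (Euler-Maruyama form):
#   dv = (F/m - gamma*v) dt + sqrt(2*gamma*kB*T/m) dW
# gamma is the collision frequency / damping constant that reintroduces
# solvent viscosity into an implicit-solvent simulation.

KB = 1.380649e-23  # J/K

def langevin_step(x, v, force, mass, gamma, dt, temperature):
    noise = math.sqrt(2.0 * gamma * KB * temperature * dt / mass) * random.gauss(0.0, 1.0)
    v += (force(x) / mass - gamma * v) * dt + noise
    x += v * dt
    return x, v

# damped harmonic test particle; larger gamma means stronger "solvent" friction
force = lambda x: -1.0e-2 * x          # N, toy spring
x, v = 1.0e-10, 0.0                    # m, m/s
for _ in range(1000):
    x, v = langevin_step(x, v, force, mass=1.0e-25, gamma=1.0e12,
                         dt=1.0e-15, temperature=300.0)
print(x, v)
```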
Hydrogen bonds with solvent
Solute-solvent hydrogen bonds in the first solvation shell are important for solubility of organic molecules and especially ions. Their average energetic contribution can be reproduced with an implicit solvent model.
Problems and limitations
All implicit solvation models rest on the simple idea that nonpolar atoms of a solute tend to cluster together or occupy nonpolar media, whereas polar and charged groups of the solute tend to remain in water. However, it is important to properly balance the opposite energy contributions from different types of atoms. Several important points have been discussed and investigated over the years.
Choice of model solvent
It has been noted that wet 1-octanol solution is a poor approximation of proteins or biological membranes because it contains ~2 M water, and that cyclohexane would be a much better approximation. Investigation of passive permeability barriers for different compounds across lipid bilayers led to the conclusion that 1,9-decadiene can serve as a good approximation of the bilayer interior, whereas 1-octanol was a very poor approximation. A set of solvation parameters derived for the protein interior from protein engineering data was also different from the octanol scale: it was close to the cyclohexane scale for nonpolar atoms but intermediate between the cyclohexane and octanol scales for polar atoms. Thus, different atomic solvation parameters should be applied for modeling of protein folding and protein-membrane binding. This issue remains controversial. The original idea of the method was to derive all solvation parameters directly from experimental partition coefficients of organic molecules, which allows calculation of solvation free energy. However, some of the recently developed electrostatic models use ad hoc values of 20 or 40 cal/(Å2 mol) for all types of atoms. The non-existent “hydrophobic” interactions of polar atoms are overridden by large electrostatic energy penalties in such models.
Solid-state applications
Strictly speaking, ASA-based models should only be applied to describe solvation, i.e., energetics of transfer between liquid or uniform media. It is possible to express van der Waals interaction energies in the solid state in surface energy units. This was sometimes done for interpreting protein engineering and ligand binding energetics, which leads to a “solvation” parameter for aliphatic carbon of ~40 cal/(Å2 mol), 2 times bigger than the ~20 cal/(Å2 mol) obtained for transfer from water to liquid hydrocarbons, because the parameters derived by such fitting represent the sum of the hydrophobic energy (i.e., 20 cal/(Å2 mol)) and the energy of van der Waals attractions of aliphatic groups in the solid state, which corresponds to the fusion enthalpy of alkanes. Unfortunately, the simplified ASA-based model cannot capture the "specific" distance-dependent interactions between different types of atoms in the solid state which are responsible for clustering of atoms with similar polarities in protein structures and molecular crystals. Parameters of such interatomic interactions, together with atomic solvation parameters for the protein interior, have been approximately derived from protein engineering data. The implicit solvation model breaks down when solvent molecules associate strongly with binding cavities in a protein, so that the protein and the solvent molecules form a continuous solid body. On the other hand, this model can be successfully applied for describing transfer from water to the fluid lipid bilayer.
Importance of extensive testing
More testing is needed to evaluate the performance of different implicit solvation models and parameter sets. They are often tested only for a small set of molecules with very simple structure, such as hydrophobic and amphiphilic α-helices. The method has rarely been tested for hundreds of protein structures.
Treatment of ionization effects
Ionization of charged groups has been neglected in continuum electrostatic models of implicit solvation, as well as in standard molecular mechanics and molecular dynamics. The transfer of an ion from water to a nonpolar medium with dielectric constant of ~3 (lipid bilayer) or 4 to 10 (interior of proteins) costs significant energy, as follows from the Born equation and from experiments. However, since the charged protein residues are ionizable, they simply lose their charges in the nonpolar environment, which costs relatively little at neutral pH: ~4 to 7 kcal/mol for Asp, Glu, Lys, and Arg amino acid residues, according to the Henderson-Hasselbalch equation, ΔG = 2.3RT (pH - pK). The low energetic costs of such ionization effects have indeed been observed for protein mutants with buried ionizable residues and hydrophobic α-helical peptides in membranes with a single ionizable residue in the middle. However, all electrostatic methods, such as PB, GB, or GBSA, assume that ionizable groups remain charged in the nonpolar environments, which leads to grossly overestimated electrostatic energy. In the simplest accessible surface area-based models, this problem was treated using different solvation parameters for charged atoms or the Henderson-Hasselbalch equation with some modifications. However, even the latter approach does not solve the problem. Charged residues can remain charged even in the nonpolar environment if they are involved in intramolecular ion pairs and H-bonds. Thus, the energetic penalties can be overestimated even using the Henderson-Hasselbalch equation. More rigorous theoretical methods describing such ionization effects have been developed, and there are ongoing efforts to incorporate such methods into the implicit solvation models.
See also
References
Molecular modelling
Computational chemistry
Molecular dynamics
Protein structure | Implicit solvation | [
"Physics",
"Chemistry"
] | 3,199 | [
"Molecular physics",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"Molecular modelling",
"Structural biology",
"Protein structure"
] |
7,480,035 | https://en.wikipedia.org/wiki/Machine%20Age | The Machine Age is an era that includes the early-to-mid 20th century, sometimes also including the late 19th century. An approximate dating would be about 1880 to 1945. Considered to be at its peak in the time between the first and second world wars, the Machine Age overlaps with the late part of the Second Industrial Revolution (which ended around 1914 at the start of World War I) and continues beyond it until 1945 at the end of World War II. The 1940s saw the beginning of the Atomic Age, where modern physics saw new applications such as the atomic bomb, the first computers, and the transistor. The Digital Revolution ended the intellectual model of the machine age founded in the mechanical and heralding a new more complex model of high technology. The digital era has been called the Second Machine Age, with its increased focus on machines that do mental tasks.
Universal chronology
Developments
Artifacts of the Machine Age include:
Reciprocating steam engine replaced by gas turbines, internal combustion engines and electric motors
Electrification based on large hydroelectric and thermal electric power production plants and distribution systems
Mass production of high-volume goods on moving assembly lines, particularly of the automobile
Gigantic production machinery, especially for producing and working metal, such as steel rolling mills, bridge component fabrication, and car body presses
Powerful earthmoving equipment
Steel-framed buildings of great height (skyscrapers)
Radio and phonograph technology
High-speed printing presses, enabling the production of low-cost newspapers and mass-market magazines
Low cost appliances for the mass market that employ fractional power electric motors, such as vacuum cleaners and washing machines
Fast and comfortable long-distance travel by railways, cars, and aircraft
Development and employment of modern war machines such as tanks, aircraft, submarines and the modern battleship
Streamline designs in cars and trains, influenced by aircraft design
Social influence
The rise of mass market advertising and consumerism
Nationwide branding and distribution of goods, replacing local arts and crafts
Nationwide cultural leveling due to exposure to films and network broadcasting
Mass-produced government propaganda through print, audio, and motion pictures
Replacement of skilled crafts with low skilled labor
Growth of strong corporations through their abilities to exploit economies of scale in materials and equipment acquisition, manufacturing, and distribution
Corporate exploitation of labor leading to the creation of strong trade unions as a countervailing force
Aristocracy with weighted suffrage or male-only suffrage replaced by democracy with universal suffrage, parallel to one-party states
First-wave feminism
Increased economic planning, including five-year plans, public works and occasional war economy, including nationwide conscription and rationing
Environmental influence
Exploitation of natural resources with little concern for the ecological consequences; a continuation of 19th century practices but at a larger scale.
Release of synthetic dyes, artificial flavorings, and toxic materials into the consumption stream without testing for adverse health effects.
Rise of petroleum as a strategic resource
International relations
Conflicts between nations regarding access to energy sources (particularly oil) and material resources (particularly iron and various metals with which it is alloyed) required to ensure national self-sufficiency. Such conflicts were contributory to two devastating world wars.
Climax of New Imperialism and beginning of decolonization
Arts and architecture
The Machine Age is considered to have influenced:
Dystopian films including Charlie Chaplin's Modern Times and Fritz Lang's Metropolis
Streamline Moderne appliance design and architecture
Bauhaus style
Modern art
Cubism
Art Deco decorative style
Futurism
Music
See also
Second Industrial Revolution
References
Historical eras
History of technology
Second Industrial Revolution
19th century in technology
20th century in technology
Machines | Machine Age | [
"Physics",
"Technology",
"Engineering"
] | 708 | [
"Machines",
"Science and technology studies",
"Physical systems",
"Mechanical engineering",
"History of technology",
"History of science and technology"
] |
7,482,029 | https://en.wikipedia.org/wiki/Completely%20distributive%20lattice | In the mathematical area of order theory, a completely distributive lattice is a complete lattice in which arbitrary joins distribute over arbitrary meets.
Formally, a complete lattice L is said to be completely distributive if, for any doubly indexed family
{xj,k | j in J, k in Kj} of L, we have

$\bigwedge_{j\in J}\,\bigvee_{k\in K_j} x_{j,k} \;=\; \bigvee_{f\in F}\,\bigwedge_{j\in J} x_{j,f(j)}$

where F is the set of choice functions f choosing for each index j of J some index f(j) in Kj.
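For instance, taking J = {1, 2}, K1 = {1} and K2 = {1, 2} (so that there are exactly two choice functions), the law specializes to the familiar binary distributive law:

$x_{1,1}\wedge(x_{2,1}\vee x_{2,2}) \;=\; (x_{1,1}\wedge x_{2,1})\vee(x_{1,1}\wedge x_{2,2}).$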
Complete distributivity is a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices.
Alternative characterizations
Various different characterizations exist. For example, the following is an equivalent law that avoids the use of choice functions. For any set S of sets, we define the set S# to be the set of all subsets X of the complete lattice that have non-empty intersection with all members of S. We then can define complete distributivity via the statement

$\bigwedge \left\{ \bigvee Y \mid Y\in S\right\} \;=\; \bigvee\left\{\bigwedge Z \mid Z\in S^{\#}\right\}$
The operator ( )# might be called the crosscut operator. This version of complete distributivity only implies the original notion when admitting the Axiom of Choice.
Properties
In addition, it is known that the following statements are equivalent for any complete lattice L:
L is completely distributive.
L can be embedded into a direct product of chains [0,1] by an order embedding that preserves arbitrary meets and joins.
Both L and its dual order Lop are continuous posets.
Direct products of [0,1], i.e. sets of all functions from some set X to [0,1] ordered pointwise, are also called cubes.
Free completely distributive lattices
Every poset C can be completed in a completely distributive lattice.
A completely distributive lattice L is called the free completely distributive lattice over a poset C if and only if there is an order embedding such that for every completely distributive lattice M and monotonic function , there is a unique complete homomorphism satisfying . For every poset C, the free completely distributive lattice over a poset C exists and is unique up to isomorphism.
This is an instance of the concept of free object. Since a set X can be considered as a poset with the discrete order, the above result guarantees the existence of the free completely distributive lattice over the set X.
Examples
The unit interval [0,1], ordered in the natural way, is a completely distributive lattice.
More generally, any complete chain is a completely distributive lattice.
The power set lattice for any set X is a completely distributive lattice.
For every poset C, there is a free completely distributive lattice over C. See the section on Free completely distributive lattices above.
See also
Glossary of order theory
Distributive lattice
References
Order theory | Completely distributive lattice | [
"Mathematics"
] | 630 | [
"Order theory"
] |
7,482,797 | https://en.wikipedia.org/wiki/Morphism%20of%20schemes | In algebraic geometry, a morphism of schemes generalizes a morphism of algebraic varieties just as a scheme generalizes an algebraic variety. It is, by definition, a morphism in the category of schemes.
A morphism of algebraic stacks generalizes a morphism of schemes.
Definition
By definition, a morphism of schemes is just a morphism of locally ringed spaces. Isomorphisms are defined accordingly.
A scheme, by definition, has open affine charts and thus a morphism of schemes can also be described in terms of such charts (compare the definition of morphism of varieties). Let ƒ:X→Y be a morphism of schemes. If x is a point of X, since ƒ is continuous, there are open affine subsets U = Spec A of X containing x and V = Spec B of Y such that ƒ(U) ⊆ V. Then ƒ: U → V is a morphism of affine schemes and thus is induced by some ring homomorphism B → A (cf. #Affine case.) In fact, one can use this description to "define" a morphism of schemes; one says that ƒ:X→Y is a morphism of schemes if it is locally induced by ring homomorphisms between coordinate rings of affine charts.
Note: It would not be desirable to define a morphism of schemes as a morphism of ringed spaces. One trivial reason is that there is an example of a ringed-space morphism between affine schemes that is not induced by a ring homomorphism (for example, a morphism of ringed spaces:
that sends the unique point to s and that comes with .) More conceptually, the definition of a morphism of schemes needs to capture "Zariski-local nature" or localization of rings; this point of view (i.e., a local-ringed space) is essential for a generalization (topos).
Let be a morphism of schemes with . Then, for each point x of X, the homomorphism on the stalks:
is a local ring homomorphism: i.e., and so induces an injective homomorphism of residue fields
.
(In fact, φ maps the n-th power of a maximal ideal to the n-th power of the maximal ideal and thus induces the map between the (Zariski) cotangent spaces.)
For each scheme X, there is a natural morphism

θ : X → Spec Γ(X, O_X)

which is an isomorphism if and only if X is affine; θ is obtained by gluing the morphisms U → Spec Γ(X, O_X) which come from restrictions to open affine subsets U of X. This fact can also be stated as follows: for any scheme X and a ring A, there is a natural bijection:

Mor(X, Spec A) ≅ Hom(A, Γ(X, O_X)).

(Proof: The map from the right to the left is the required bijection. In short, θ is an adjunction.)
Moreover, this fact (adjoint relation) can be used to characterize an affine scheme: a scheme X is affine if and only if for each scheme S, the natural map
is bijective. (Proof: if the maps are bijective, then and X is isomorphic to by Yoneda's lemma; the converse is clear.)
A morphism as a relative scheme
Fix a scheme S, called a base scheme. Then a morphism is called a scheme over S or an S-scheme; the idea of the terminology is that it is a scheme X together with a map to the base scheme S. For example, a vector bundle E → S over a scheme S is an S-scheme.
An S-morphism from p:X →S to q:Y →S is a morphism ƒ:X →Y of schemes such that p = q ∘ ƒ. Given an S-scheme , viewing S as an S-scheme over itself via the identity map, an S-morphism is called a S-section or just a section.
All the S-schemes form a category: an object in the category is an S-scheme and a morphism in the category an S-morphism. (This category is the slice category of the category of schemes with the base object S.)
Affine case
Let be a ring homomorphism and let
be the induced map. Then
is continuous.
If is surjective, then is a homeomorphism onto its image.
For every ideal I of A,
has dense image if and only if the kernel of consists of nilpotent elements. (Proof: the preceding formula with I = 0.) In particular, when B is reduced, has dense image if and only if is injective.
Let f: Spec A → Spec B be a morphism of schemes between affine schemes with the pullback map : B → A. That it is a morphism of locally ringed spaces translates to the following statement: if is a point of Spec A,
.
(Proof: In general, consists of g in A that has zero image in the residue field k(x); that is, it has the image in the maximal ideal . Thus, working in the local rings, . If , then is a unit element and so is a unit element.)
Hence, each ring homomorphism B → A defines a morphism of schemes Spec A → Spec B and, conversely, all morphisms between them arise in this fashion.
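As a small worked illustration (a standard textbook example, not drawn from the text above), consider the k-algebra homomorphism φ : k[y] → k[x] with φ(y) = x². The induced morphism Spec k[x] → Spec k[y] sends a prime ideal p to φ⁻¹(p); on closed points,

$(x - a) \mapsto \varphi^{-1}\big((x - a)\big) = (y - a^{2}),$

so on closed points the morphism is the squaring map a ↦ a² of the affine line.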
Examples
Basic ones
Let R be a field or For each R-algebra A, to specify an element of A, say f in A, is to give a R-algebra homomorphism such that . Thus, . If X is a scheme over S = Spec R, then taking and using the fact Spec is a right adjoint to the global section functor, we get where . Note the equality is that of rings.
Similarly, for any S-scheme X, there is the identification of the multiplicative groups: where is the multiplicative group scheme.
Many examples of morphisms come from families parameterized by some base space. For example, is a projective morphism of projective varieties where the base space parameterizes quadrics in .
Graph morphism
Given a morphism of schemes over a scheme S, the morphism induced by the identity and f is called the graph morphism of f. The graph morphism of the identity is called the diagonal morphism.
Types of morphisms
Finite type
Morphisms of finite type are one of the basic tools for constructing families of varieties. A morphism is of finite type if there exists a cover such that the fibers can be covered by finitely many affine schemes making the induced ring morphisms into finite-type morphisms. A typical example of a finite-type morphism is a family of schemes. For example,
is a morphism of finite type. A simple non-example of a morphism of finite-type is where is a field. Another is an infinite disjoint union
Closed immersion
A morphism of schemes is a closed immersion if the following conditions hold:
defines a homeomorphism of onto its image
is surjective
This condition is equivalent to the following: given an affine open there exists an ideal such that
Examples
Of course, any (graded) quotient defines a subscheme of (). Consider the quasi-affine scheme and the subset of the -axis contained in . Then if we take the open subset the ideal sheaf is while on the affine open there is no ideal since the subset does not intersect this chart.
Separated
Separated morphisms define families of schemes which are "Hausdorff". For example, given a separated morphism in the associated analytic spaces are both Hausdorff. We say a morphism of scheme is separated if the diagonal morphism is a closed immersion. In topology, an analogous condition for a space to be Hausdorff is if the diagonal set
is a closed subset of . Nevertheless, most schemes are not Hausdorff as topological spaces, as the Zariski topology is in general highly non-Hausdorff.
Examples
Most morphisms encountered in scheme theory will be separated. For example, consider the affine scheme
over Since the product scheme is
the ideal defining the diagonal is generated by
showing the diagonal scheme is affine and closed. This same computation can be used to show that projective schemes are separated as well.
Non-examples
The only time care must be taken is when you are gluing together a family of schemes. For example, if we take the diagram of inclusions
then we get the scheme-theoretic analogue of the classical line with two-origins.
Proper
A morphism is called proper if
it is separated
of finite-type
universally closed
The last condition means that given a morphism the base change morphism is a closed immersion. Most known examples of proper morphisms are in fact projective; but, examples of proper varieties which are not projective can be found using toric geometry.
Projective
Projective morphisms define families of projective varieties over a fixed base scheme. Note that there are two definitions: Hartshorne's, which states that a morphism is called projective if there exists a closed immersion and the EGA definition, which states that a scheme is projective if there is a quasi-coherent -module of finite type such that there is a closed immersion . The second definition is useful because an exact sequence of modules can be used to define projective morphisms.
Projective morphism over a point
A projective morphism defines a projective scheme. For example,
defines a projective curve of genus over .
Family of projective hypersurfaces
If we let then the projective morphism
defines a family of Calabi-Yau manifolds which degenerate.
Lefschetz pencil
Another useful class of examples of projective morphisms are Lefschetz Pencils: they are projective morphisms over some field . For example, given smooth hypersurfaces defined by the homogeneous polynomials there is a projective morphism
giving the pencil.
EGA projective
A nice classical example of a projective scheme is by constructing projective morphisms which factor through rational scrolls. For example, take and the vector bundle . This can be used to construct a -bundle over . If we want to construct a projective morphism using this sheaf we can take an exact sequence, such as
which defines the structure sheaf of the projective scheme in
Flat
Intuition
Flat morphisms have an algebraic definition but have a very concrete geometric interpretation: flat families correspond to families of varieties which vary "continuously". For example,
is a family of smooth affine quadric curves which degenerate to the normal crossing divisor
at the origin.
Properties
One important property that a flat morphism must satisfy is that the dimensions of the fibers should be the same. A simple non-example of a flat morphism then is a blowup since the fibers are either points or copies of some .
Definition
Let be a morphism of schemes. We say that is flat at a point if the induced morphism yields an exact functor Then, is flat if it is flat at every point of . It is also faithfully flat if it is a surjective morphism.
Non-example
Using our geometric intuition, it is obvious that
is not flat since the fiber over is with the rest of the fibers are just a point. But, we can also check this using the definition with local algebra: Consider the ideal Since we get a local algebra morphism
If we tensor
with , the map
has a non-zero kernel due to the vanishing of . This shows that the morphism is not flat.
Unramified
A morphism of affine schemes is unramified if . We can use this for the general case of a morphism of schemes . We say that is unramified at if there is an affine open neighborhood and an affine open such that and Then, the morphism is unramified if it is unramified at every point in .
Geometric example
One example of a morphism which is flat and generically unramified, except for at a point, is
We can compute the relative differentials using the sequence
showing
if we take the fiber , then the morphism is ramified since
otherwise we have
showing that it is unramified everywhere else.
Etale
A morphism of schemes is called étale if it is flat and unramified. These are the algebro-geometric analogue of covering spaces. The two main examples to think of are covering spaces and finite separable field extensions. Examples in the first case can be constructed by looking at branched coverings and restricting to the unramified locus.
Morphisms as points
By definition, if X, S are schemes (over some base scheme or ring B), then a morphism from S to X (over B) is an S-point of X and one writes:
X(S) for the set of all S-points of X. This notion generalizes the notion of solutions to a system of polynomial equations in classical algebraic geometry. Indeed, let X = Spec(A) with A = B[t1, ..., tn]/(f1, ..., fm). For a B-algebra R, to give an R-point of X is to give an algebra homomorphism A → R, which in turn amounts to giving a homomorphism B[t1, ..., tn] → R that kills the fi's. Thus, there is a natural identification:

X(R) = { (r1, ..., rn) ∈ R^n | fj(r1, ..., rn) = 0 for all j }.
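For a concrete instance (a standard example, not taken from the surrounding text), take B = Z and A = Z[x, y]/(x² + y² − 1). Then for any ring R,

$X(R) = \operatorname{Hom}_{\mathbf{Z}\text{-alg}}\!\big(\mathbf{Z}[x, y]/(x^{2} + y^{2} - 1),\, R\big) \cong \{ (a, b) \in R^{2} : a^{2} + b^{2} = 1 \},$

so the R-points of X are exactly the solutions of the defining equation with coordinates in R.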
Example: If X is an S-scheme with structure map π: X → S, then an S-point of X (over S) is the same thing as a section of π.
In category theory, Yoneda's lemma says that, given a category C, the contravariant functor
is fully faithful (where means the category of presheaves on C). Applying the lemma to C = the category of schemes over B, this says that a scheme over B is determined by its various points.
It turns out that in fact it is enough to consider S-points with only affine schemes S, precisely because schemes and morphisms between them are obtained by gluing affine schemes and morphisms between them. Because of this, one usually writes X(R) = X(Spec R) and view X as a functor from the category of commutative B-algebras to Sets.
Example: Given S-schemes X, Y with structure maps p, q,
.
Example: With B still denoting a ring or scheme, for each B-scheme X, there is a natural bijection
{ the isomorphism classes of line bundles L on X together with n + 1 global sections generating L. };
in fact, the sections si of L define a morphism . (See also Proj construction#Global Proj.)
Remark: The above point of view (which goes under the name functor of points and is due to Grothendieck) has had a significant impact on the foundations of algebraic geometry. For example, working with a category-valued (pseudo-)functor instead of a set-valued functor leads to the notion of a stack, which allows one to keep track of morphisms between points (i.e., morphisms between morphisms).
Rational map
A rational map of schemes is defined in the same way for varieties. Thus, a rational map from a reduced scheme X to a separated scheme Y is an equivalence class of a pair consisting of an open dense subset U of X and a morphism . If X is irreducible, a rational function on X is, by definition, a rational map from X to the affine line or the projective line
A rational map is dominant if and only if it sends the generic point to the generic point.
A ring homomorphism between function fields need not induce a dominant rational map (even just a rational map). For example, Spec k[x] and Spec k(x) and have the same function field (namely, k(x)) but there is no rational map from the former to the latter. However, it is true that any inclusion of function fields of algebraic varieties induces a dominant rational map (see morphism of algebraic varieties#Properties.)
See also
Regular embedding
Constructible set (topology)
Universal homeomorphism
Notes
References
Milne, Review of Algebraic Geometry at Algebraic Groups: The theory of group schemes of finite type over a field.
Algebraic geometry | Morphism of schemes | [
"Mathematics"
] | 3,414 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
7,483,985 | https://en.wikipedia.org/wiki/Atkin%E2%80%93Lehner%20theory | In mathematics, Atkin–Lehner theory is part of the theory of modular forms describing when they arise at a given integer level N in such a way that the theory of Hecke operators can be extended to higher levels.
Atkin–Lehner theory is based on the concept of a newform, which is a cusp form 'new' at a given level N, where the levels are the nested congruence subgroups:
of the modular group, with N ordered by divisibility. That is, if M divides N, Γ0(N) is a subgroup of Γ0(M). The oldforms for Γ0(N) are those modular forms f(τ) of level N of the form g(d τ) for modular forms g of level M with M a proper divisor of N, where d divides N/M. The newforms are defined as a vector subspace of the modular forms of level N, complementary to the space spanned by the oldforms, i.e. the orthogonal space with respect to the Petersson inner product.
The Hecke operators, which act on the space of all cusp forms, preserve the subspace of newforms and are self-adjoint and commuting operators (with respect to the Petersson inner product) when restricted to this subspace. Therefore, the algebra of operators on newforms they generate is a finite-dimensional C*-algebra that is commutative; and by the spectral theory of such operators, there exists a basis for the space of newforms consisting of eigenforms for the full Hecke algebra.
Atkin–Lehner involutions
Consider a Hall divisor e of N, which means that not only does e divide N, but also e and N/e are relatively prime (often denoted e||N). If N has s distinct prime divisors, there are 2^s Hall divisors of N; for example, if N = 360 = 2^3⋅3^2⋅5^1, the 8 Hall divisors of N are 1, 2^3, 3^2, 5^1, 2^3⋅3^2, 2^3⋅5^1, 3^2⋅5^1, and 2^3⋅3^2⋅5^1.
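Hall divisors are exactly the products of subsets of the full prime-power factors of N, which makes them easy to enumerate; the sketch below (with hypothetical helper names, not from any particular library) lists them for N = 360 and confirms the count of 2^s.

```python
from itertools import combinations
from math import prod

def prime_power_factors(n):
    """Full prime-power factors of n, e.g. 360 -> [8, 9, 5]."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            pk = 1
            while n % p == 0:
                n //= p
                pk *= p
            factors.append(pk)
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def hall_divisors(n):
    """Divisors e of n with gcd(e, n // e) = 1: products of subsets of prime-power factors."""
    pps = prime_power_factors(n)
    return sorted(prod(subset) for r in range(len(pps) + 1)
                  for subset in combinations(pps, r))

print(hall_divisors(360))       # [1, 5, 8, 9, 40, 45, 72, 360]
print(len(hall_divisors(360)))  # 8 = 2**3, one per subset of {2^3, 3^2, 5}
```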
For each Hall divisor e of N, choose an integral matrix We of the form
with det We = e. These matrices have the following properties:
The elements We normalize Γ0(N): that is, if A is in Γ0(N), then WeAWe^(-1) is in Γ0(N).
The matrix We^2, which has determinant e^2, can be written as eA where A is in Γ0(N). We will be interested in operators on cusp forms coming from the action of We on Γ0(N) by conjugation, under which both the scalar e and the matrix A act trivially. Therefore, the equality We^2 = eA implies that the action of We squares to the identity; for this reason, the resulting operator is called an Atkin–Lehner involution.
If e and f are both Hall divisors of N, then We and Wf commute modulo Γ0(N). Moreover, if we define g to be the Hall divisor g = ef/(e,f)^2, their product is equal to Wg modulo Γ0(N).
If we had chosen a different matrix W ′e instead of We, it turns out that We ≡ W ′e modulo Γ0(N), so We and W ′e would determine the same Atkin–Lehner involution.
We can summarize these properties as follows. Consider the subgroup of GL(2,Q) generated by Γ0(N) together with the matrices We; let Γ0(N)+ denote its quotient by positive scalar matrices. Then Γ0(N) is a normal subgroup of Γ0(N)+ of index 2^s (where s is the number of distinct prime factors of N); the quotient group is isomorphic to (Z/2Z)^s and acts on the cusp forms via the Atkin–Lehner involutions.
References
Mocanu, Andreea. (2019). "Atkin-Lehner Theory of Γ1(m)-Modular Forms"
Koichiro Harada (2010) "Moonshine" of Finite Groups, page 13, European Mathematical Society
Modular forms | Atkin–Lehner theory | [
"Mathematics"
] | 918 | [
"Modular forms",
"Number theory"
] |
7,484,936 | https://en.wikipedia.org/wiki/Odyssey%20Space%20Research | Odyssey Space Research, LLC is a small business based in Houston, Texas near NASA Lyndon B. Johnson Space Center providing engineering research and analysis services. This start-up in the space industry founded in November 2003 has already won major contracts and is the only private company working on the 5 next human-rated spacecraft (ATV, HTV, Orion, and both COTS spacecraft with SpaceX and Orbital Sciences Corporation).
Projects
June 9, 2011 Odyssey Space Research, L.L.C., announced a space-based, experimental app, dubbed SpaceLab for iOS, which will be used for space research aboard the International Space Station (ISS). The SpaceLab for iOS app will make its way to the ISS on an iPhone 4 aboard the orbiter Atlantis on the space shuttle fleet's historic final mission, STS-135, and will remain there for several months for the ISS crew to conduct a series of experiments. Odyssey also announced it is bringing the astronauts' on-orbit experimental tasks down to earth for "terrestrial" consumers to enjoy via the SpaceLab for iOS app available today from the App Store.
August 31, 2006 NASA announced the results of the Orion crew exploration vehicle (CEV) development contract competition. Odyssey Space Research is part of the winning Lockheed Martin team supporting NASA's Orion project. The Odyssey role will include support of the vehicle guidance, navigation and control (GN&C), simulation development, and related analysis.
August 18, 2006 NASA announced the results of the Commercial Orbital Transportation Services (COTS) demonstration competition. Odyssey Space Research is part of one of the two COTS winning teams: SpaceX. Odyssey's role will include support of the Dragon vehicle guidance, navigation and control (GN&C) development, selected simulation and test-bed development, related analyses, systems engineering and operations.
References
Aerospace engineering organizations
Aerospace companies of the United States
Companies based in Houston
Technology companies established in 2003
Private spaceflight companies
2003 establishments in Texas
American companies established in 2003 | Odyssey Space Research | [
"Astronomy",
"Engineering"
] | 410 | [
"Outer space",
"Aerospace engineering organizations",
"Aeronautics organizations",
"Astronomy stubs",
"Aerospace engineering",
"Outer space stubs"
] |
7,485,092 | https://en.wikipedia.org/wiki/Montipora | Montipora is a genus of scleractinian corals in the phylum Cnidaria. Members of the genus Montipora may exhibit many different growth morphologies. With eighty-five known species, Montipora is the second most species-rich coral genus after Acropora.
Description
Growth morphologies for the genus Montipora include submassive, laminar, foliaceous, encrusting, and branching. It is not uncommon for a single Montipora colony to display more than one growth morphology. Healthy Montipora corals can be a variety of colors, including orange, brown, pink, green, blue, purple, yellow, grey, or tan. Although they are typically uniform in color, some species, such as Montipora spumosa or Montipora verrucosa, may display a mottled appearance.
Montipora corals have the smallest corallites of any coral family. Columellae are not present. Coenosteum and corallite walls are porous, which can result in elaborate structures. The coenosteum of each Montipora species is different, making it useful for identification. Polyps are typically only extended at night.
Montipora corals are commonly mistaken for members of the genus Porites based on their visual similarities; however, Porites can be distinguished from Montipora by examining the structure of the corallites.
Distribution
Montipora corals are common on reefs and lagoons of the Red Sea, the western Indian Ocean and the southern Pacific Ocean, but are entirely absent in the Atlantic Ocean.
Ecology
Montipora corals are hermaphroditic broadcast spawners. Spawning typically happens in spring. The eggs of Montipora corals already contain zooxanthellae, so none is obtained from the environment. This process is known as direct or vertical transmission.
Montipora corals are preyed upon by corallivorous fish, such as butterflyfish. Montipora corals are known to host endo- and ectoparasites such as Allopodion mirum and Xarifia extensa. A currently undescribed species of nudibranch in the genus Phestilla has also been reported in the scientific and aquarium hobbyist literature to feed on the genus.
Montipora corals are susceptible to the same stresses as other Scleractinian corals, such as anthropogenic pollution, sediment, algal growth, and other competitive organisms.
Evolutionary history
A 2007 study found that the genus Montipora formed a strongly supported clade with Anacropora, making Anacropora the genus with the closest genetic relationship to Montipora. It is thought that Anacropora evolved from Montipora relatively recently.
Gallery
Species
Montipora aequituberculata Bernard, 1897
Montipora altasepta Nemenzo, 1967
Montipora angulata Lamarck, 1816
Montipora aspergillus Veron, DeVantier & Turak, 2000
Montipora australiensis Bernard, 1897
Montipora biformis Nemenzo, 1988
Montipora cactus Bernard, 1897
Montipora calcarea Bernard, 1897
Montipora calculata Dana, 1846
Montipora capitata Dana, 1846
Montipora capricornis Veron, 1985
Montipora cebuensis Nemenzo, 1976
Montipora circumvallata Ehrenberg, 1834
Montipora cocosensis Vaughan, 1918
Montipora confusa Nemenzo, 1967
Montipora conspicua Nemenzo, 1979
Montipora contorta Nemenzo & Montecillo, 1981
Montipora corbettensis Veron & Wallace, 1984
Montipora crassituberculata Bernard, 1897
Montipora cryptus Veron, 2000
Montipora danae Milne Edwards & Haime, 1851
Montipora delicatula Veron, 2000
Montipora digitata Dana, 1846
Montipora dilatata Studer, 1901
Montipora echinata Veron, DeVantier & Turak, 2000
Montipora edwardsi Bernard, 1897
Montipora efflorescens Bernard, 1897
Montipora effusa Dana, 1846
Montipora ehrenbergi Verrill, 1872
Montipora explanata Brüggemann, 1879
Montipora flabellata Studer, 1901
Montipora florida Nemenzo, 1967
Montipora floweri Wells, 1954
Montipora foliosa Pallas, 1766
Montipora foveolata Dana, 1846
Montipora friabilis Bernard, 1897
Montipora gaimardi Bernard, 1897
Montipora gracilis Klunzinger, 1879
Montipora grisea Bernard, 1897
Montipora hemispherica Veron, 2000
Montipora hirsuta Nemenzo, 1967
Montipora hispida Dana, 1846
Montipora hodgsoni Veron, 2000
Montipora hoffmeisteri Wells, 1954
Montipora incrassata Dana, 1846
Montipora informis Bernard, 1897
Montipora kellyi Veron, 2000
Montipora lobulata Bernard, 1897
Montipora mactanensis Nemenzo, 1979
Montipora malampaya Nemenzo, 1967
Montipora maldivensis Pillai & Scheer, 1976
Montipora manauliensis Pillai, 1967
Montipora meandrina Ehrenberg, 1834
Montipora millepora Crossland, 1952
Montipora mollis Bernard, 1897
Montipora monasteriata Forskål, 1775
Montipora niugini Veron, 2000
Montipora nodosa Dana, 1846
Montipora orientalis Nemenzo, 1967
Montipora pachytuberculata Veron, DeVantier & Turak
Montipora palawanensis Veron, 2000
Montipora patula Verrill, 1870
Montipora peltiformis Bernard, 1897
Montipora porites Veron, 2000
Montipora samarensis Nemenzo, 1967
Montipora saudii Veron, DeVantier & Turak
Montipora setosa Nemenzo, 1976
Montipora sinuosa Pillai & Scheer, 1976
Montipora spongiosa Ehrenberg, 1834
Montipora spongodes Bernard, 1897
Montipora spumosa Lamarck, 1816
Montipora stellata Bernard, 1897
Montipora stilosa
Montipora suvadivae Pillai & Scheer, 1976
Montipora taiwanensis Veron, 2000
Montipora tortuosa Dana, 1846
Montipora tuberculosa Lamarck, 1816
Montipora turgescens Bernard, 1897
Montipora turtlensis Veron & Wallace, 1984
Montipora undata Bernard, 1897
Montipora venosa Ehrenberg, 1834
Montipora verrilli Vaughan, 1907
Montipora verrucosa Lamarck, 1816
Montipora verruculosa Veron, 2000
Montipora vietnamensis Veron, 2000
References
Acroporidae
Coral reefs
Scleractinia genera | Montipora | [
"Biology"
] | 1,480 | [
"Biogeomorphology",
"Coral reefs"
] |
64,972 | https://en.wikipedia.org/wiki/Angiogenesis | Angiogenesis is the physiological process through which new blood vessels form from pre-existing vessels, formed in the earlier stage of vasculogenesis. Angiogenesis continues the growth of the vasculature mainly by processes of sprouting and splitting, but processes such as coalescent angiogenesis, vessel elongation and vessel cooption also play a role. Angiogenesis is distinct from vasculogenesis, the embryonic formation of endothelial cells from mesoderm cell precursors, and from neovascularization, although the terms are not always used precisely (especially in older texts). The first vessels in the developing embryo form through vasculogenesis, after which angiogenesis is responsible for most, if not all, blood vessel growth during development and in disease.
Angiogenesis is a normal and vital process in growth and development, as well as in wound healing and in the formation of granulation tissue. However, it is also a fundamental step in the transition of tumors from a benign state to a malignant one, leading to the use of angiogenesis inhibitors in the treatment of cancer. The essential role of angiogenesis in tumor growth was first proposed in 1971 by Judah Folkman, who described tumors as "hot and bloody," illustrating that, at least for many tumor types, flush perfusion and even hyperemia are characteristic.
Types
Sprouting angiogenesis
Sprouting angiogenesis was the first identified form of angiogenesis and because of this, it is much more understood than intussusceptive angiogenesis. It occurs in several well-characterized stages. The initial signal comes from tissue areas that are devoid of vasculature. The hypoxia that is noted in these areas causes the tissues to demand the presence of nutrients and oxygen that will allow the tissue to carry out metabolic activities. Because of this, parenchymal cells will secrete vascular endothelial growth factor (VEGF-A) which is a proangiogenic growth factor. These biological signals activate receptors on endothelial cells present in pre-existing blood vessels. Second, the activated endothelial cells, also known as tip cells, begin to release enzymes called proteases that degrade the basement membrane to allow endothelial cells to escape from the original (parent) vessel walls. The endothelial cells then proliferate into the surrounding matrix and form solid sprouts connecting neighboring vessels. The cells that are proliferating are located behind the tip cells and are known as stalk cells. The proliferation of these cells allows the capillary sprout to grow in length simultaneously.
As sprouts extend toward the source of the angiogenic stimulus, endothelial cells migrate in tandem, using adhesion molecules called integrins. These sprouts then form loops to become a full-fledged vessel lumen as cells migrate to the site of angiogenesis. Sprouting occurs at a rate of several millimeters per day, and enables new vessels to grow across gaps in the vasculature. It is markedly different from splitting angiogenesis because it forms entirely new vessels as opposed to splitting existing vessels.
Intussusceptive angiogenesis
Intussusceptive angiogenesis, also known as splitting angiogenesis, is the formation of a new blood vessel by splitting an existing blood vessel into two.
Intussusception was first observed in neonatal rats. In this type of vessel formation, the capillary wall extends into the lumen to split a single vessel in two. There are four phases of intussusceptive angiogenesis. First, the two opposing capillary walls establish a zone of contact. Second, the endothelial cell junctions are reorganized and the vessel bilayer is perforated to allow growth factors and cells to penetrate into the lumen. Third, a core is formed between the 2 new vessels at the zone of contact that is filled with pericytes and myofibroblasts. These cells begin laying collagen fibers into the core to provide an extracellular matrix for growth of the vessel lumen. Finally, the core is fleshed out with no alterations to the basic structure. Intussusception is important because it is a reorganization of existing cells. It allows a vast increase in the number of capillaries without a corresponding increase in the number of endothelial cells. This is especially important in embryonic development as there are not enough resources to create a rich microvasculature with new cells every time a new vessel develops.
Coalescent angiogenesis
Coalescent angiogenesis is a mode of angiogenesis, considered to be the opposite of intussusceptive angiogenesis, in which capillaries fuse, or coalesce, to make a larger blood vessel, thereby increasing blood flow and circulation. Coalescent angiogenesis has also been described outside embryonic development and is thought to play a role in the formation of neovasculature, such as in a tumor.
Physiology
Mechanical stimulation
Mechanical stimulation of angiogenesis is not well characterized. There is a significant amount of controversy with regard to shear stress acting on capillaries to cause angiogenesis, although current knowledge suggests that increased muscle contractions may increase angiogenesis. This may be due to an increase in the production of nitric oxide during exercise. Nitric oxide results in vasodilation of blood vessels.
Chemical stimulation
Chemical stimulation of angiogenesis is performed by various angiogenic proteins, e.g. integrins and prostaglandins, as well as several growth factors, e.g. VEGF and FGF.
Overview
FGF
The fibroblast growth factor (FGF) family with its prototype members FGF-1 (acidic FGF) and FGF-2 (basic FGF) consists to date of at least 22 known members. Most are single-chain peptides of 16-18 kDa and display high affinity to heparin and heparan sulfate. In general, FGFs stimulate a variety of cellular functions by binding to cell surface FGF-receptors in the presence of heparin proteoglycans. The FGF-receptor family is composed of seven members, and all the receptor proteins are single-chain receptor tyrosine kinases that become activated through autophosphorylation induced by a mechanism of FGF-mediated receptor dimerization. Receptor activation gives rise to a signal transduction cascade that leads to gene activation and diverse biological responses, including cell differentiation, proliferation, and matrix dissolution, thus initiating a process of mitogenic activity critical for the growth of endothelial cells, fibroblasts, and smooth muscle cells.
FGF-1, unique among all 22 members of the FGF family, can bind to all seven FGF-receptor subtypes, making it the broadest-acting member of the FGF family, and a potent mitogen for the diverse cell types needed to mount an angiogenic response in damaged (hypoxic) tissues, where upregulation of FGF-receptors occurs. FGF-1 stimulates the proliferation and differentiation of all cell types necessary for building an arterial vessel, including endothelial cells and smooth muscle cells; this fact distinguishes FGF-1 from other pro-angiogenic growth factors, such as vascular endothelial growth factor (VEGF), which primarily drives the formation of new capillaries.
Besides FGF-1, one of the most important functions of fibroblast growth factor-2 (FGF-2 or bFGF) is the promotion of endothelial cell proliferation and the physical organization of endothelial cells into tube-like structures, thus promoting angiogenesis. FGF-2 is a more potent angiogenic factor than VEGF or PDGF (platelet-derived growth factor); however, it is less potent than FGF-1. As well as stimulating blood vessel growth, aFGF (FGF-1) and bFGF (FGF-2) are important players in wound healing. They stimulate the proliferation of fibroblasts and endothelial cells that give rise to angiogenesis and developing granulation tissue; both increase blood supply and fill up a wound space/cavity early in the wound-healing process.
VEGF
Vascular endothelial growth factor (VEGF) has been demonstrated to be a major contributor to angiogenesis, increasing the number of capillaries in a given network. Initial in vitro studies demonstrated bovine capillary endothelial cells will proliferate and show signs of tube structures upon stimulation by VEGF and bFGF, although the results were more pronounced with VEGF. Upregulation of VEGF is a major component of the physiological response to exercise and its role in angiogenesis is suspected to be a possible treatment in vascular injuries. In vitro studies clearly demonstrate that VEGF is a potent stimulator of angiogenesis because, in the presence of this growth factor, plated endothelial cells will proliferate and migrate, eventually forming tube structures resembling capillaries.
VEGF causes a massive signaling cascade in endothelial cells. Binding to VEGF receptor-2 (VEGFR-2) starts a tyrosine kinase signaling cascade that stimulates the production of factors that variously stimulate vessel permeability (eNOS, producing NO), proliferation/survival (bFGF), migration (ICAMs/VCAMs/MMPs) and finally differentiation into mature blood vessels. Mechanically, VEGF is upregulated with muscle contractions as a result of increased blood flow to affected areas. The increased flow also causes a large increase in the mRNA production of VEGF receptors 1 and 2. The increase in receptor production means muscle contractions could cause upregulation of the signaling cascade relating to angiogenesis. As part of the angiogenic signaling cascade, NO is widely considered to be a major contributor to the angiogenic response because inhibition of NO significantly reduces the effects of angiogenic growth factors. However, inhibition of NO during exercise does not inhibit angiogenesis, indicating there are other factors involved in the angiogenic response.
Angiopoietins
The angiopoietins, Ang1 and Ang2, are required for the formation of mature blood vessels, as demonstrated by mouse knock out studies. Ang1 and Ang2 are protein growth factors which act by binding their receptors, Tie-1 and Tie-2; while this is somewhat controversial, it seems that cell signals are transmitted mostly by Tie-2; though some papers show physiologic signaling via Tie-1 as well. These receptors are tyrosine kinases. Thus, they can initiate cell signaling when ligand binding causes a dimerization that initiates phosphorylation on key tyrosines.
MMP
Another major contributor to angiogenesis is matrix metalloproteinase (MMP). MMPs help degrade the proteins that keep the vessel walls solid. This proteolysis allows the endothelial cells to escape into the interstitial matrix as seen in sprouting angiogenesis. Inhibition of MMPs prevents the formation of new capillaries. These enzymes are highly regulated during the vessel formation process because destruction of the extracellular matrix would decrease the integrity of the microvasculature.
Dll4
Delta-like ligand 4 (Dll4) is a protein with a negative regulatory effect on angiogenesis. Dll4 is a transmembrane ligand for the Notch family of receptors. Many studies have been conducted to determine the consequences of Delta-like ligand 4. One study in particular evaluated the effects of Dll4 on tumor vascularity and growth. In order for a tumor to grow and develop, it must have the proper vasculature. The VEGF pathway is vital to the development of vasculature that, in turn, helps tumors to grow. The combined blockade of VEGF and Dll4 results in the inhibition of tumor progression and angiogenesis throughout the tumor. This is due to the hindrance of endothelial cell signaling, which cuts off the proliferation and sprouting of these endothelial cells. With this inhibition, the cells do not grow uncontrollably and tumor growth is therefore halted at this point. If the blockade were lifted, however, the cells would begin their proliferation once again.
Class 3 semaphorins
Class 3 semaphorins (SEMA3s) regulate angiogenesis by modulating endothelial cell adhesion, migration, proliferation, survival and the recruitment of pericytes. Furthermore, semaphorins can interfere with VEGF-mediated angiogenesis since both SEMA3s and VEGF-A compete for neuropilin receptor binding at endothelial cells. The relative expression levels of SEMA3s and VEGF-A may therefore be important for angiogenesis.
Chemical inhibition
An angiogenesis inhibitor can be endogenous or come from outside as a drug or a dietary component.
Application in medicine
Angiogenesis as a therapeutic target
Angiogenesis may be a target for combating diseases such as heart disease characterized by either poor vascularisation or abnormal vasculature. Application of specific compounds that may inhibit or induce the creation of new blood vessels in the body may help combat such diseases. The presence of blood vessels where there should be none may affect the mechanical properties of a tissue, increasing the likelihood of failure. The absence of blood vessels in a repairing or otherwise metabolically active tissue may inhibit repair or other essential functions. Several diseases, such as ischemic chronic wounds, are the result of failure or insufficient blood vessel formation and may be treated by a local expansion of blood vessels, thus bringing new nutrients to the site, facilitating repair. Other diseases, such as age-related macular degeneration, may be created by a local expansion of blood vessels, interfering with normal physiological processes.
The modern clinical application of the principle of angiogenesis can be divided into two main areas: anti-angiogenic therapies, which angiogenic research began with, and pro-angiogenic therapies. Whereas anti-angiogenic therapies are being employed to fight cancer and malignancies, which require an abundance of oxygen and nutrients to proliferate, pro-angiogenic therapies are being explored as options to treat cardiovascular diseases, the number one cause of death in the Western world. One of the first applications of pro-angiogenic methods in humans was a German trial using fibroblast growth factor 1 (FGF-1) for the treatment of coronary artery disease.
Regarding the mechanism of action, pro-angiogenic methods can be differentiated into three main categories: gene therapy, targeting genes of interest for amplification or inhibition; protein replacement therapy, which primarily manipulates angiogenic growth factors like FGF-1 or vascular endothelial growth factor, VEGF; and cell-based therapies, which involve the implantation of specific cell types.
There are still serious, unsolved problems related to gene therapy. Difficulties include effective integration of the therapeutic genes into the genome of target cells, reducing the risk of an undesired immune response, potential toxicity, immunogenicity, inflammatory responses, and oncogenesis related to the viral vectors used in implanting genes and the sheer complexity of the genetic basis of angiogenesis. The most commonly occurring disorders in humans, such as heart disease, high blood pressure, diabetes and Alzheimer's disease, are most likely caused by the combined effects of variations in many genes, and, thus, injecting a single gene may not be significantly beneficial in such diseases.
By contrast, pro-angiogenic protein therapy uses well-defined, precisely structured proteins, with previously defined optimal doses of the individual protein for disease states, and with well-known biological effects. On the other hand, an obstacle of protein therapy is the mode of delivery. Oral, intravenous, intra-arterial, or intramuscular routes of protein administration are not always effective, as the therapeutic protein may be metabolized or cleared before it can enter the target tissue. Cell-based pro-angiogenic therapies are still in early stages of research, with many open questions regarding the best cell types and dosages to use.
Tumor angiogenesis
Cancer cells are cells that have lost their ability to divide in a controlled fashion. A malignant tumor consists of a population of rapidly dividing and growing cancer cells that progressively accrues mutations. However, tumors need a dedicated blood supply to provide the oxygen and other essential nutrients they require in order to grow beyond a certain size (generally 1–2 mm³).
Tumors induce blood vessel growth (angiogenesis) by secreting various growth factors (e.g. VEGF) and proteins. Growth factors such as bFGF and VEGF can induce capillary growth into the tumor, which some researchers suspect supply required nutrients, allowing for tumor expansion. Unlike normal blood vessels, tumor blood vessels are dilated with an irregular shape. Other clinicians believe angiogenesis really serves as a waste pathway, taking away the biological end products secreted by rapidly dividing cancer cells. In either case, angiogenesis is a necessary and required step for transition from a small harmless cluster of cells, often said to be about the size of the metal ball at the end of a ball-point pen, to a large tumor. Angiogenesis is also required for the spread of a tumor, or metastasis. Single cancer cells can break away from an established solid tumor, enter the blood vessel, and be carried to a distant site, where they can implant and begin the growth of a secondary tumor. Evidence now suggests the blood vessel in a given solid tumor may, in fact, be mosaic vessels, composed of endothelial cells and tumor cells. This mosaicity allows for substantial shedding of tumor cells into the vasculature, possibly contributing to the appearance of circulating tumor cells in the peripheral blood of patients with malignancies. The subsequent growth of such metastases will also require a supply of nutrients and oxygen and a waste disposal pathway.
Endothelial cells have long been considered genetically more stable than cancer cells. This genomic stability confers an advantage to targeting endothelial cells using antiangiogenic therapy, compared to chemotherapy directed at cancer cells, which rapidly mutate and acquire drug resistance to treatment. For this reason, endothelial cells are thought to be an ideal target for therapies directed against them.
Formation of tumor blood vessels
The mechanism of blood vessel formation by angiogenesis is initiated by the spontaneous division of tumor cells due to a mutation. Angiogenic stimulators are then released by the tumor cells. These then travel to already established, nearby blood vessels and activate their endothelial cell receptors. This induces a release of proteolytic enzymes from the vasculature. These enzymes target a particular point on the blood vessel and cause a pore to form. This is the point from which the new blood vessel will grow. Tumor cells need a blood supply because they cannot grow more than about 2–3 millimeters in diameter, which is equivalent to about 50–100 cells, without an established blood supply. Certain studies have indicated that vessels formed inside the tumor tissue are more irregular and larger in size, which is also associated with poorer prognosis.
Angiogenesis for cardiovascular disease
Angiogenesis represents an excellent therapeutic target for the treatment of cardiovascular disease. It is a potent, physiological process that underlies the natural manner in which our bodies respond to a diminution of blood supply to vital organs, namely neoangiogenesis: the production of new collateral vessels to overcome the ischemic insult. A large number of preclinical studies have been performed with protein-, gene- and cell-based therapies in animal models of cardiac ischemia, as well as models of peripheral artery disease. Reproducible and credible successes in these early animal studies led to high enthusiasm that this new therapeutic approach could be rapidly translated to a clinical benefit for millions of patients in the Western world with these disorders. A decade of clinical testing of both gene- and protein-based therapies designed to stimulate angiogenesis in underperfused tissues and organs, however, has led from one disappointment to another. Although all of these preclinical readouts, which offered great promise for the transition of angiogenesis therapy from animals to humans, were in one fashion or another incorporated into early-stage clinical trials, the FDA has, to date (2007), insisted that the primary endpoint for approval of an angiogenic agent must be an improvement in exercise performance of treated patients.
These failures suggested that either these are the wrong molecular targets to induce neovascularization, that they can only be effectively used if formulated and administered correctly, or that their presentation in the context of the overall cellular microenvironment may play a vital role in their utility. It may be necessary to present these proteins in a way that mimics natural signaling events, including the concentration, spatial and temporal profiles, and their simultaneous or sequential presentation with other appropriate factors.
Exercise
Angiogenesis is generally associated with aerobic exercise and endurance exercise. While arteriogenesis produces network changes that allow for a large increase in the amount of total flow in a network, angiogenesis causes changes that allow for greater nutrient delivery over a long period of time. Capillaries are designed to provide maximum nutrient delivery efficiency, so an increase in the number of capillaries allows the network to deliver more nutrients in the same amount of time. A greater number of capillaries also allows for greater oxygen exchange in the network. This is vitally important to endurance training, because it allows a person to continue training for an extended period of time. However, no experimental evidence suggests that increased capillarity is required in endurance exercise to increase the maximum oxygen delivery.
Macular degeneration
Overexpression of VEGF causes increased permeability in blood vessels in addition to stimulating angiogenesis. In wet macular degeneration, VEGF causes proliferation of capillaries into the retina. Since the increase in angiogenesis also causes edema, blood and other retinal fluids leak into the retina, causing loss of vision. Anti-angiogenic drugs targeting the VEGF pathways are now used successfully to treat this type of macular degeneration.
Tissue engineered constructs
Angiogenesis of vessels from the host body into implanted tissue-engineered constructs is essential. Successful integration is often dependent on thorough vascularisation of the construct, as it provides oxygen and nutrients and prevents necrosis in the central areas of the implant. PDGF has been shown to stabilize vascularisation in collagen-glycosaminoglycan scaffolds.
History
The first report of angiogenesis can be traced back to the book A treatise on the blood, inflammation, and gun-shot wounds published in 1794, where Scottish anatomist John Hunter's research findings were compiled. In his study, Hunter observed the growth process of new blood vessels in rabbits. However, he did not coin the term "angiogenesis," which is now widely used by scholars. Hunter also erroneously attributed the growth process of new blood vessels to the effect of an innate vital principle within the blood. The term "angiogenesis" is believed not to have emerged until the 1900s. The inception of modern angiogenesis research is marked by Judah Folkman's report on the pivotal role of angiogenesis in tumor growth.
Quantification
Quantifying vasculature parameters such as microvascular density has various complications due to preferential staining or limited representation of tissues by histological sections. Recent research has shown complete 3D reconstruction of tumor vascular structure and quantification of vessel structures in whole tumors in animal models.
See also
Aerobic exercise
Angiogenin
The Angiogenesis Foundation
Arteriogenesis
COL41
Neuroangiogenesis
Proteases in angiogenesis
Vasculogenic mimicry
References
External links
Angiogenesis for Heart Disease from Angioplasty.Org
Angiogenesis - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Visualizing Angiogenesis with GFP
NCI Understanding Cancer series on Angiogenesis
Angiogenesis | Angiogenesis | [
"Biology"
] | 5,021 | [
"Angiogenesis"
] |
65,041 | https://en.wikipedia.org/wiki/Singleton%20pattern | In object-oriented programming, the singleton pattern is a software design pattern that restricts the instantiation of a class to a singular instance. It is one of the well-known "Gang of Four" design patterns, which describe how to solve recurring problems in object-oriented software. The pattern is useful when exactly one object is needed to coordinate actions across a system.
More specifically, the singleton pattern allows classes to:
Ensure they only have one instance
Provide easy access to that instance
Control their instantiation (for example, hiding the constructors of a class)
The term comes from the mathematical concept of a singleton.
Common uses
Singletons are often preferred to global variables because they do not pollute the global namespace (or their containing namespace). Additionally, they permit lazy allocation and initialization, whereas global variables in many languages will always consume resources.
The singleton pattern can also be used as a basis for other design patterns, such as the abstract factory, factory method, builder and prototype patterns. Facade objects are also often singletons because only one facade object is required.
Logging is a common real-world use case for singletons, because all objects that wish to log messages require a uniform point of access and conceptually write to a single source.
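For illustration, a minimal Java sketch of such a logging singleton (not taken from any particular library; the class and method names here are only examples) might look like this:
public final class Logger {
    // The single, eagerly created instance shared by the whole application.
    private static final Logger INSTANCE = new Logger();

    // A private constructor prevents other code from creating more instances.
    private Logger() {}

    // The uniform point of access mentioned above.
    public static Logger getInstance() {
        return INSTANCE;
    }

    public void log(String message) {
        System.out.println("[LOG] " + message);
    }
}
Every component that calls Logger.getInstance().log(...) then writes through the same shared object.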
Implementations
Implementations of the singleton pattern ensure that only one instance of the singleton class ever exists and typically provide global access to that instance.
Typically, this is accomplished by:
Declaring all constructors of the class to be private, which prevents it from being instantiated by other objects
Providing a static method that returns a reference to the instance
The instance is usually stored as a private static variable; the instance is created when the variable is initialized, at some point before the static method is first called.
This C++23 implementation is based on the pre-C++98 implementation in the book .
import std;
class Singleton {
public:
// defines a class operation that lets clients access its unique instance.
static Singleton& get() {
// may be responsible for creating its own unique instance.
if (nullptr == instance) instance = new Singleton;
return *instance;
}
Singleton(const Singleton&) = delete; // rule of three
Singleton& operator=(const Singleton&) = delete;
static void destruct() {
delete instance;
instance = nullptr;
}
// existing interface goes here
int getValue() {
return value;
}
void setValue(int value_) {
value = value_;
}
private:
Singleton() = default; // no public constructor
~Singleton() = default; // no public destructor
static Singleton* instance; // declaration class variable
int value;
};
Singleton* Singleton::instance = nullptr; // definition class variable
int main() {
Singleton::get().setValue(42);
std::println("value={}", Singleton::get().getValue());
Singleton::destruct();
}
The program output is
value=42
This is an implementation of the Meyers singleton, which relies on the thread-safe initialization of function-local static variables guaranteed since C++11; the example below uses the same C++23 module import and I/O as above. The Meyers singleton has no destruct method. The program output is the same as above.
import std;
class Singleton {
public:
static Singleton& get() {
static Singleton instance;
return instance;
}
int getValue() {
return value;
}
void setValue(int value_) {
value = value_;
}
private:
Singleton() = default;
~Singleton() = default;
int value;
};
int main() {
Singleton::get().setValue(42);
std::println("value={}", Singleton::get().getValue());
}
Lazy initialization
A singleton implementation may use lazy initialization in which the instance is created when the static method is first invoked. In multithreaded programs, this can cause race conditions that result in the creation of multiple instances. The following Java 5+ example is a thread-safe implementation, using lazy initialization with double-checked locking.
public class Singleton {
    // volatile ensures that a fully constructed instance is visible to all threads.
    private static volatile Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        // First check avoids the cost of synchronization once the instance exists.
        if (instance == null) {
            synchronized (Singleton.class) {
                // Second check guards against another thread having created the
                // instance while this thread was waiting for the lock.
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
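A common alternative in Java, listed under See also below, is the initialization-on-demand holder idiom, which achieves the same lazy, thread-safe behavior without explicit locking by relying on the Java Virtual Machine's guarantee that a class is initialized at most once and only on first use. A minimal sketch:
public class Singleton {
    private Singleton() {}

    // The nested holder class is not loaded or initialized until getInstance()
    // is first called; the JVM serializes class initialization, so this is thread-safe.
    private static class Holder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}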
Criticism
Some consider the singleton to be an anti-pattern that introduces global state into an application, often unnecessarily. This introduces a potential dependency on the singleton by other objects, requiring analysis of implementation details to determine whether a dependency actually exists. This increased coupling can introduce difficulties with unit testing. In turn, this places restrictions on any abstraction that uses the singleton, such as preventing concurrent use of multiple instances.
Singletons also violate the single-responsibility principle because they are responsible for enforcing their own uniqueness along with performing their normal functions.
See also
Initialization-on-demand holder idiom
Multiton pattern
Software design pattern
References
External links
Complete article "Java Singleton Pattern Explained"
Four different ways to implement singleton in Java "Ways to implement singleton in Java"
Book extract: Implementing the Singleton Pattern in C# by Jon Skeet
Singleton at Microsoft patterns & practices Developer Center
IBM article "Double-checked locking and the Singleton pattern" by Peter Haggar
Google Singleton Detector (analyzes Java bytecode to detect singletons)
Software design patterns
Anti-patterns
Articles with example Java code | Singleton pattern | [
"Technology"
] | 1,143 | [
"Anti-patterns"
] |
65,132 | https://en.wikipedia.org/wiki/Inverted%20repeat | An inverted repeat (or IR) is a single-stranded sequence of nucleotides followed downstream by its reverse complement. The intervening sequence of nucleotides between the initial sequence and the reverse complement can be any length including zero. For example, 5'-TTACGnnnnnnCGTAA-3' is an inverted repeat sequence, since CGTAA is the reverse complement of TTACG. When the intervening length is zero, the composite sequence is a palindromic sequence.
Both inverted repeats and direct repeats constitute types of nucleotide sequences that occur repetitively. These repeated DNA sequences often range from a pair of nucleotides to a whole gene, while the proximity of the repeat sequences varies between widely dispersed and simple tandem arrays. The short tandem repeat sequences may exist as just a few copies in a small region to thousands of copies dispersed all over the genome of most eukaryotes. Repeat sequences with about 10–100 base pairs are known as minisatellites, while shorter repeat sequences having mostly 2–4 base pairs are known as microsatellites. The most common repeats include the dinucleotide repeats, which have the bases AC on one DNA strand, and GT on the complementary strand. Some elements of the genome with unique sequences function as exons, introns and regulatory DNA. Though the most familiar loci of the repetitive sequences are the centromere and the telomere, a large portion of the repeated sequences in the genome are found among the noncoding DNA.
Inverted repeats have a number of important biological functions. They define the boundaries in transposons and indicate regions capable of self-complementary base pairing (regions within a single sequence which can base pair with each other). These properties play an important role in genome instability and contribute not only to cellular evolution and genetic diversity but also to mutation and disease. In order to study these effects in detail, a number of programs and databases have been developed to assist in discovery and annotation of inverted repeats in various genomes.
Understanding inverted repeats
Example of an inverted repeat
Beginning with this initial sequence:
5'-TTACG-3'
The complement created by base pairing is:
3'-AATGC-5'
The reverse complement is:
5'-CGTAA-3'
And, the inverted repeat sequence is:
5'-TTACGnnnnnnCGTAA-3'
"nnnnnn" represents any number of intervening nucleotides.
Vs. direct repeat
A direct repeat occurs when a sequence is repeated with the same pattern downstream. There is no inversion and no reverse complement associated with a direct repeat. For example, 5'-TTACGnnnnnnTTACG-3' is a direct repeat; the repeated sequence may or may not have intervening nucleotides.
Linguistically, a typical direct repeat is comparable to rhyming, as in "time on a dime".
Vs. tandem repeat
A direct repeat with no intervening nucleotides between the initial sequence and its downstream copy is a tandem repeat, for example 5'-TTACGTTACG-3'.
Linguistically, a typical tandem repeat is comparable to stuttering, or deliberately repeated words, as in "bye-bye".
Vs. palindrome
An inverted repeat sequence with no intervening nucleotides between the initial sequence and its downstream reverse complement is a palindrome. EXAMPLE:
Step 1: start with an inverted repeat: 5'-TTACGnnnnnnCGTAA-3'
Step 2: remove intervening nucleotides: 5'-TTACGCGTAA-3'
This resulting sequence is palindromic because it is the reverse complement of itself.
TTACGCGTAA (test sequence, from Step 2 with intervening nucleotides removed)
AATGCGCATT (complement of test sequence)
TTACGCGTAA (reverse complement). This is the same as the test sequence above, and thus, it is a palindrome.
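The complement and reverse-complement operations used in these examples are simple enough to verify programmatically. The following Java sketch (purely illustrative; the class and method names are arbitrary) computes a reverse complement and checks the palindrome property:
public class ReverseComplement {
    // Watson-Crick complement of a single base.
    static char complement(char base) {
        switch (base) {
            case 'A': return 'T';
            case 'T': return 'A';
            case 'C': return 'G';
            case 'G': return 'C';
            default: throw new IllegalArgumentException("Not a DNA base: " + base);
        }
    }

    // Reverse complement of a sequence written 5' to 3'.
    static String reverseComplement(String seq) {
        StringBuilder sb = new StringBuilder(seq.length());
        for (int i = seq.length() - 1; i >= 0; i--) {
            sb.append(complement(seq.charAt(i)));
        }
        return sb.toString();
    }

    // A sequence is palindromic in the molecular-biology sense if it equals its own reverse complement.
    static boolean isPalindromic(String seq) {
        return seq.equals(reverseComplement(seq));
    }

    public static void main(String[] args) {
        System.out.println(reverseComplement("TTACG"));   // prints CGTAA
        System.out.println(isPalindromic("TTACGCGTAA"));  // prints true
    }
}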
Biological features and functionality
Conditions that favor synthesis
The diverse genome-wide repeats are derived from transposable elements, which are now understood to "jump" about different genomic locations, without transferring their original copies. Subsequent shuttling of the same sequences over numerous generations ensures their multiplicity throughout the genome. The limited recombination of the sequences between two distinct sequence elements known as conservative site-specific recombination (CSSR) results in inversions of the DNA segment, based on the arrangement of the recombination recognition sequences on the donor DNA and recipient DNA. Again, the orientation of two of the recombining sites within the donor DNA molecule relative to the asymmetry of the intervening DNA cleavage sequences, known as the crossover region, is pivotal to the formation of either inverted repeats or direct repeats. Thus, recombination occurring at a pair of inverted sites will invert the DNA sequence between the two sites. Very stable chromosomes have been observed with comparatively fewer numbers of inverted repeats than direct repeats, suggesting a relationship between chromosome stability and the number of repeats.
Regions where presence is obligatory
Terminal inverted repeats have been observed in the DNA of various eukaryotic transposons, even though their source remains unknown. Inverted repeats are principally found at the origins of replication of cell organism and organelles that range from phage plasmids, mitochondria, and eukaryotic viruses to mammalian cells. The replication origins of the phage G4 and other related phages comprise a segment of nearly 139 nucleotide bases that include three inverted repeats that are essential for replication priming.
In the genome
Portions of nucleotide repeats are quite often observed as part of rare DNA combinations. The three main types of repeats found in particular DNA constructs include the nearly exact homopurine-homopyrimidine inverted repeats, otherwise referred to as H palindromes, a common occurrence in triple-helical H conformations that may comprise either the TAT or CGC nucleotide triads. The others can be described as long inverted repeats, which have a tendency to produce hairpins and cruciforms, and direct tandem repeats, which commonly exist in structures described as slipped-loop, cruciform and left-handed Z-DNA.
Common in different organisms
Past studies suggest that repeats are a common feature of eukaryotes unlike the prokaryotes and archaea. Other reports suggest that irrespective of the comparative shortage of repeat elements in prokaryotic genomes, they nevertheless contain hundreds or even thousands of large repeats. Current genomic analysis seem to suggest the existence of a large excess of perfect inverted repeats in many prokaryotic genomes as compared to eukaryotic genomes.
For quantification and comparison of inverted repeats between several species, namely on archaea, see
Inverted repeats in pseudoknots
Pseudoknots are common structural motifs found in RNA. They are formed by two nested stem-loops such that the stem of one structure is formed from the loop of the other. There are multiple folding topologies among pseudoknots and great variation in loop lengths, making them a structurally diverse group.
Inverted repeats are a key component of pseudoknots as can be seen in the illustration of a naturally occurring pseudoknot found in the human telomerase RNA component. Four different sets of inverted repeats are involved in this structure. Sets 1 and 2 are the stem of stem-loop A and are part of the loop for stem-loop B. Similarly, sets 3 and 4 are the stem for stem-loop B and are part of the loop for stem-loop A.
Pseudoknots play a number of different roles in biology. The telomerase pseudoknot in the illustration is critical to that enzyme's activity. The ribozyme for the hepatitis delta virus (HDV) folds into a double-pseudoknot structure and self-cleaves its circular genome to produce a single-genome-length RNA. Pseudoknots also play a role in programmed ribosomal frameshifting found in some viruses and required in the replication of retroviruses.
In riboswitches
Inverted repeats play an important role in riboswitches, which are RNA regulatory elements that control the expression of genes that produce the mRNA, of which they are part. A simplified example of the flavin mononucleotide (FMN) riboswitch is shown in the illustration. This riboswitch exists in the mRNA transcript and has several stem-loop structures upstream from the coding region. However, only the key stem-loops are shown in the illustration, which has been greatly simplified to help show the role of the inverted repeats. There are multiple inverted repeats in this riboswitch as indicated in green (yellow background) and blue (orange background).
In the absence of FMN, the Anti-termination structure is the preferred conformation for the mRNA transcript. It is created by base-pairing of the inverted repeat region circled in red. When FMN is present, it may bind to the loop and prevent formation of the Anti-termination structure. This allows two different sets of inverted repeats to base-pair and form the Termination structure. The stem-loop on the 3' end is a transcriptional terminator because the sequence immediately following it is a string of uracils (U). If this stem-loop forms (due to the presence of FMN) as the growing RNA strand emerges from the RNA polymerase complex, it will create enough structural tension to cause the RNA strand to dissociate and thus terminate transcription. The dissociation occurs easily because the base-pairing between the U's in the RNA and the A's in the template strand are the weakest of all base-pairings. Thus, at higher concentration levels, FMN down-regulates its own transcription by increasing the formation of the termination structure.
Mutations and disease
Inverted repeats are often described as "hotspots" of eukaryotic and prokaryotic genomic instability. Long inverted repeats are deemed to greatly influence the stability of the genome of various organisms. This is exemplified in E. coli, where genomic sequences with long inverted repeats are seldom replicated, but rather deleted with rapidity. Again, the long inverted repeats observed in yeast greatly favor recombination within the same and adjacent chromosomes, resulting in a similarly high rate of deletion. Finally, very high rates of deletion and recombination were also observed in mammalian chromosome regions with inverted repeats. Reported differences in the stability of genomes of interrelated organisms are always an indication of a disparity in inverted repeats. The instability results from the tendency of inverted repeats to fold into hairpin- or cruciform-like DNA structures. These special structures can hinder or confuse DNA replication and other genomic activities. Thus, inverted repeats lead to special configurations in both RNA and DNA that can ultimately cause mutations and disease.
The illustration shows an inverted repeat undergoing cruciform extrusion. DNA in the region of the inverted repeat unwinds and then recombines, forming a four-way junction with two stem-loop structures. The cruciform structure occurs because the inverted repeat sequences self-pair to each other on their own strand.
Extruded cruciforms can lead to frameshift mutations when a DNA sequence has inverted repeats in the form of a palindrome combined with regions of direct repeats on either side. During transcription, slippage and partial dissociation of the polymerase from the template strand can lead to both deletion and insertion mutations. Deletion occurs when a portion of the unwound template strand forms a stem-loop that gets "skipped" by the transcription machinery. Insertion occurs when a stem-loop forms in a dissociated portion of the nascent (newly synthesized) strand causing a portion of the template strand to be transcribed twice.
Antithrombin deficiency from a point mutation
Imperfect inverted repeats can lead to mutations through intrastrand and interstrand switching. The antithrombin III gene's coding region is an example of an imperfect inverted repeat as shown in the figure on the right.
The stem-loop structure forms with a bump at the bottom because the G and T do not pair up. A strand switch event could result in the G (in the bump) being replaced by an A which removes the "imperfection" in the inverted repeat and provides a stronger stem-loop structure. However, the replacement also creates a point mutation converting the GCA codon to ACA. If the strand switch event is followed by a second round of DNA replication, the mutation may become fixed in the genome and lead to disease. Specifically, the missense mutation would lead to a defective gene and a deficiency in antithrombin which could result in the development of venous thromboembolism (blood clots within a vein).
Osteogenesis imperfecta from a frameshift mutation
Mutations in the collagen gene can lead to the disease Osteogenesis Imperfecta, which is characterized by brittle bones. In the illustration, a stem-loop formed from an imperfect inverted repeat is mutated with a thymine (T) nucleotide insertion as a result of an inter- or intrastrand switch. The addition of the T creates a base-pairing "match up" with the adenine (A) that was previously a "bump" on the left side of the stem. While this addition makes the stem stronger and perfects the inverted repeat, it also creates a frameshift mutation in the nucleotide sequence which alters the reading frame and will result in an incorrect expression of the gene.
Programs and databases
The following list provides information and external links to various programs and databases for inverted repeats:
non-B DB A Database for Integrated Annotations and Analysis of non-B DNA Forming Motifs. This database is provided by The Advanced Biomedical Computing Center (ABCC) at the Frederick National Laboratory for Cancer Research (FNLCR). It covers the A-DNA and Z-DNA conformations otherwise known as "non-B DNAs" because they are not the more common B-DNA form of a right-handed Watson-Crick double-helix. These "non-B DNAs" include left-handed Z-DNA, cruciform, triplex, tetraplex and hairpin structures. Searches can be performed on a variety of "repeat types" (including inverted repeats) and on several species.
Inverted Repeats Database Boston University. This database is a web application that allows query and analysis of repeats held in the PUBLIC DATABASE project. Scientists can also analyze their own sequences with the Inverted Repeats Finder algorithm.
P-MITE: a Plant MITE database — this database for Miniature Inverted-repeat Transposable Elements (MITEs) contains sequences from plant genomes. Sequences may be searched or downloaded from the database.
EMBOSS is the "European Molecular Biology Open Software Suite" which runs on UNIX and UNIX-like operating systems. Documentation and program source files are available on the EMBOSS website. Applications specifically related to inverted repeats are listed below:
EMBOSS einverted: Finds inverted repeats in nucleotide sequences. Threshold values can be set to limit the scope of the search.
EMBOSS palindrome: Finds palindromes such as stem loop regions in nucleotide sequences. The program will find sequences that include sections of mismatches and gaps that may correspond to bulges in a stem loop.
References
External links
Repetitive DNA sequences
Molecular biology | Inverted repeat | [
"Chemistry",
"Biology"
] | 3,094 | [
"Biochemistry",
"Molecular genetics",
"Repetitive DNA sequences",
"Molecular biology"
] |
65,184 | https://en.wikipedia.org/wiki/QNX | QNX ( or ) is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market.
The product was originally developed in the early 1980s by Canadian company Quantum Software Systems, founded March 30, 1980, and later renamed QNX Software Systems.
It is used in a variety of devices including automobiles, medical devices, programmable logic controllers, automated manufacturing systems, trains, and more.
History
Gordon Bell and Dan Dodge, both students at the University of Waterloo in 1980, took a course in real-time operating systems, in which the students constructed a basic real-time microkernel and user programs. Both were convinced there was a commercial need for such a system, and moved to the high-tech planned community Kanata, Ontario, to start Quantum Software Systems that year. In 1982, the first version of QUNIX was released for the Intel 8088 CPU. In 1984, Quantum Software Systems renamed QUNIX to QNX in an effort to avoid any trademark infringement challenges.
One of the first widespread uses of the QNX real-time OS (RTOS) was in the nonembedded world when it was selected as the operating system for the Ontario education system's own computer design, the Unisys ICON. Over the years QNX was used mostly for larger projects, as its 44k kernel was too large to fit inside the one-chip computers of the era. The system garnered a reputation for reliability and became used in running machinery in many industrial applications.
In the late-1980s, Quantum realized that the market was rapidly moving towards the Portable Operating System Interface (POSIX) model and decided to rewrite the kernel to be much more compatible at a low level. The result was QNX 4. During this time Patrick Hayden, while working as an intern, along with Robin Burgener (a full-time employee at the time), developed a new windowing system. This patented concept was developed into the embeddable graphical user interface (GUI) named the QNX Photon microGUI. QNX also provided a version of the X Window System.
To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk for the 386 PC.
Toward the end of the 1990s, the company, then named QNX Software Systems, began work on a new version of QNX, designed from the ground up to be symmetric multiprocessing (SMP) capable, and to support all current POSIX application programming interfaces (APIs) and any new POSIX APIs that could be anticipated while still retaining the microkernel architecture. This resulted in QNX Neutrino, released in 2001.
Along with the Neutrino kernel, QNX Software Systems became a founding member of the Eclipse (integrated development environment) consortium. The company released a suite of Eclipse plug-ins packaged with the Eclipse workbench in 2002, and named QNX Momentics Tool Suite.
In 2004, the company announced it had been sold to Harman International Industries. Before this acquisition, QNX software was already widely used in the automotive industry for telematics systems. Since the purchase by Harman, QNX software has been designed into over 200 different automobile makes and models, in telematics systems, and in infotainment and navigation units. The QNX CAR Application Platform was running in over 20 million vehicles as of mid-2011. The company has since released several middleware products including the QNX Aviage Multimedia Suite, the QNX Aviage Acoustic Processing Suite and the QNX HMI Suite.
The microkernels of Cisco Systems' IOS-XR (ultra high availability IOS, introduced 2004) and IOS Software Modularity (introduced 2006) were based on QNX. IOS Software Modularity never gained traction and was limited to a small run for the Catalyst 6500, while IOS XR moved to Linux as of release 6.x.
In September 2007, QNX Software Systems announced the availability of some of its source code.
On April 9, 2010, Research In Motion (later renamed to BlackBerry Limited) announced they would acquire QNX Software Systems from Harman International Industries. On the same day, QNX source code access was restricted from the public and hobbyists.
In September 2010, the company announced a tablet computer, the BlackBerry PlayBook, and a new operating system BlackBerry Tablet OS based on QNX to run on the tablet.
On October 18, 2011, Research In Motion announced "BBX", which was later renamed BlackBerry 10, in December 2011. Blackberry 10 devices build upon the BlackBerry PlayBook QNX based operating system for touch devices, but adapt the user interface for smartphones using the Qt based Cascades Native User-Interface framework.
At the 2014 Geneva Motor Show, Apple demonstrated CarPlay, which provides an iOS-like user interface to head units in compatible vehicles. Once configured by the automaker, QNX can be programmed to hand off its display and some functions to an Apple CarPlay device.
On December 11, 2014, Ford Motor Company stated that it would replace Microsoft Auto with QNX.
In January 2017, QNX announced the upcoming release of its Software Development Platform (SDP) 7.0, with support for 32- and 64-bit Intel and ARM platforms and for C++14. It was released in March 2017.
In December 2023, QNX released QNX SDP 8.0, powered by a next-generation microkernel, with support for the latest Intel and ARM (v8 and v9) 64-bit platforms, a GCC 12-based toolchain, and a QNX toolkit for Visual Studio Code.
On July 17, 2024, QNX launched QNX Containers, providing a standards-based environment for the deployment, execution, and management of container technology on QNX-based devices.
On September 14, 2024, the QNX Filesystem for Safety (QFS) was announced. QFS is a POSIX-compliant, ISO 26262-certified, integrity-checking filesystem that gives OEMs and other embedded-software suppliers an additional layer of validation when building safety-critical systems.
On January 2, 2025, BlackBerry unveiled a strategic relaunch of the QNX brand. The division, previously named ‘BlackBerry IoT’, was renamed ‘QNX’ as part of a broader strategy to increase visibility and strengthen its position in the automotive and embedded industries.
On January 6, 2025, QNX, Vector, and TTTech Auto announced a multi-year global collaboration to develop and market a foundational vehicle software platform for software integration. The platform is pre-integrated, lightweight, and certified to the automotive industry’s highest functional-safety (ISO 26262 ASIL D) and cybersecurity (ISO 21434) standards.
At CES 2025, QNX announced it is collaborating with Microsoft to make it easier for automakers to build, test, and refine software within the cloud, accelerating the development of Software-Defined Vehicles (SDVs). QNX confirmed that its Software Development Platform (SDP) 8.0 would be coming to Microsoft Azure as part of the collaboration.
At CES 2025, QNX launched QNX Cabin, billed as an industry-first automotive software solution designed to accelerate digital cockpit development. QNX Cabin aims to address development in mixed-criticality environments, blending safety-critical features (e.g. advanced driver-assistance systems) running on the safety-certified QNX operating system (OS) with consumer applications delivered via guest operating systems such as Android Automotive and Linux.
QNX also revealed more details of its QNX Everywhere initiative at CES 2025. Intended to nurture and grow QNX’s worldwide developer community by giving free access to QNX Software Development Platform (SDP) 8.0 to students, schools, research organizations, and hobbyists, QNX Everywhere also includes complimentary resources and on-demand training.
Technology
As a microkernel-based OS, QNX is built on the idea of running most operating-system services as a number of small server processes, known as resource managers, rather than inside the kernel itself. This differs from the more traditional monolithic kernel, in which the operating system is a single very large program composed of a huge number of parts with special privileges. In the case of QNX, the use of a microkernel allows users (developers) to turn off any functionality they do not need without having to change the OS itself; such services simply are not run.
The QNX kernel, procnto (the binary's name combines 'proc', for the process manager, with 'nto', for Neutrino), contains only CPU scheduling, interprocess communication, interrupt redirection, and timers. Everything else runs as a user process, including a special process known as proc, which performs process creation and memory management in conjunction with the microkernel. This is made possible by two key mechanisms: subroutine-call-style interprocess communication, and a boot loader which can load an image containing the kernel and any desired set of user programs and shared libraries. There are no device drivers in the kernel. The network stack is based on NetBSD code. Along with its own native device drivers, QNX supports its legacy io-net manager server and network drivers ported from NetBSD.
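The resource-manager model can be illustrated with a minimal sketch built on QNX's dispatch and iofunc libraries; the pathname /dev/sample is a placeholder and error handling is mostly omitted. A process that registers a path this way looks, from a client's point of view, just like a driver:

```c
/* Minimal QNX resource-manager sketch: registers the illustrative path
 * /dev/sample and serves it with the default POSIX handlers. */
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/dispatch.h>
#include <sys/iofunc.h>

int main(void) {
    dispatch_t            *dpp = dispatch_create();   /* message-dispatch handle */
    resmgr_connect_funcs_t connect_funcs;
    resmgr_io_funcs_t      io_funcs;
    iofunc_attr_t          attr;

    /* Start from the default open/read/write/stat handlers. */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);
    iofunc_attr_init(&attr, S_IFCHR | 0666, NULL, NULL);

    /* Register the pathname; clients then simply open("/dev/sample", ...). */
    if (resmgr_attach(dpp, NULL, "/dev/sample", _FTYPE_ANY, 0,
                      &connect_funcs, &io_funcs, &attr) == -1)
        return EXIT_FAILURE;

    /* Receive and dispatch client messages forever. */
    dispatch_context_t *ctp = dispatch_context_alloc(dpp);
    for (;;) {
        ctp = dispatch_block(ctp);
        dispatch_handler(ctp);
    }
}
```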
QNX interprocess communication consists of sending a message from one process to another and waiting for a reply. This is a single operation, called MsgSend. The message is copied, by the kernel, from the address space of the sending process to that of the receiving process. If the receiving process is waiting for the message, control of the CPU is transferred at the same time, without a pass through the CPU scheduler. Thus, sending a message to another process and waiting for a reply does not result in "losing one's turn" for the CPU. This tight integration between message passing and CPU scheduling is one of the key mechanisms that makes QNX message passing broadly usable. Most Unix and Linux interprocess communication mechanisms lack this tight integration, although a user space implementation of QNX-type messaging for Linux does exist. Mishandling of this subtle issue is a primary reason for the disappointing performance of some other microkernel systems such as early versions of Mach. The recipient process need not be on the same physical machine.
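A minimal sketch of this send/receive/reply cycle is shown below, using the documented ChannelCreate, ConnectAttach, MsgSend, MsgReceive, and MsgReply calls. Running the client and server as two threads of one process keeps the example self-contained; the message contents are illustrative.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/neutrino.h>
#include <sys/netmgr.h>          /* ND_LOCAL_NODE */

static int chid;                 /* channel the server receives on */

static void *server(void *arg) {
    char msg[16];
    for (;;) {
        /* Block until a client sends; rcvid identifies that client. */
        int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
        if (rcvid == -1)
            break;
        MsgReply(rcvid, 0, "pong", 5);   /* the reply unblocks the sender */
    }
    return NULL;
}

int main(void) {
    char reply[16];
    pthread_t tid;

    chid = ChannelCreate(0);
    pthread_create(&tid, NULL, server, NULL);

    /* Client: attach to the channel and perform one blocking round trip. */
    int coid = ConnectAttach(ND_LOCAL_NODE, 0, chid, _NTO_SIDE_CHANNEL, 0);
    if (MsgSend(coid, "ping", 5, reply, sizeof reply) != -1)
        printf("server replied: %s\n", reply);
    return 0;
}
```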
All I/O, file-system, and network operations are designed to work through this mechanism, with the transferred data copied during message passing. Later versions of QNX reduce the number of separate processes and integrate the network stack and other function blocks into single applications for performance reasons.
Message handling is prioritized by thread priority. Since I/O requests are performed using message passing, high priority threads receive I/O service before low priority threads, an essential feature in a hard real-time system.
The boot loader is the other key component of the minimal microkernel system. Because user programs can be built into the boot image, the set of device drivers and support libraries needed for startup need not be, and are not, in the kernel. Even such functions as program loading are not in the kernel, but instead are in shared user-space libraries loaded as part of the boot image. It is possible to put an entire boot image into ROM, which is used for diskless embedded systems.
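Such a boot image is typically assembled with the mkifs utility from a buildfile. The fragment below is only an illustrative sketch; component names such as startup-x86 and procnto-smp-instr, and the exact attribute syntax, vary by release and target board.

```
# Illustrative mkifs buildfile fragment (details vary by release and board)
[virtual=x86_64,multiboot] .bootstrap = {
    startup-x86
    PATH=/proc/boot procnto-smp-instr
}
[+script] .script = {
    # drivers and services are ordinary user programs started by the boot script
    devc-con -e &
    reopen /dev/con1
    [+session] sh
}
libc.so
devc-con
sh
```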
Neutrino supports symmetric multiprocessing and processor affinity, called bound multiprocessing (BMP) in QNX terminology. BMP is used to improve cache hit rates and to ease the migration of non-SMP-safe applications to multiprocessor computers.
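At the API level, the affinity behind BMP is expressed as a runmask. A minimal sketch of pinning the calling thread to CPU 0 with the documented ThreadCtl() call follows; the function name pin_to_cpu0 is illustrative.

```c
#include <stdint.h>
#include <sys/neutrino.h>

/* Restrict the calling thread to CPU 0; bit N of the mask grants
 * permission to run on processor N. */
int pin_to_cpu0(void) {
    uintptr_t runmask = 0x1;
    return ThreadCtl(_NTO_TCTL_RUNMASK, (void *)runmask);
}
```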
Neutrino supports strict priority-preemptive scheduling and adaptive partition scheduling (APS). APS guarantees minimum CPU percentages to selected groups of threads, even when other threads have higher priority. The adaptive partition scheduler is still strictly priority-preemptive when the system is underloaded. It can also be configured to run a selected set of critical threads in hard real time even when the system is overloaded.
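In practice a partition is typically created and populated from the command line. The sequence below is an illustrative sketch using the aps and on utilities; the partition name, budget, and launched program (my_server) are made up, and option spellings should be checked against the installed release.

```sh
# Create a partition named "Drivers" with a guaranteed 20% CPU budget
aps create -b 20 Drivers

# Launch a program so that its threads are scheduled within that partition
on -Xaps=Drivers my_server
```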
The QNX operating system also contained a web browser known as 'Voyager'.
Due to its microkernel architecture, QNX is also a distributed operating system. Dan Dodge and Peter van der Veen hold a patent based on the QNX operating system's distributed processing features, known commercially as Transparent Distributed Processing. This allows QNX kernels on separate devices to access each other's system services using effectively the same communication mechanism that is used to access local services.
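Under Qnet, the transport behind Transparent Distributed Processing, remote resource managers appear in the local pathname space under /net, so ordinary POSIX calls reach them unchanged. A small hedged sketch, where the node name "othernode" and the device path are placeholders:

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Read from a serial port served by a resource manager on another QNX node.
 * The open() is ordinary POSIX code; Qnet routes the underlying messages. */
ssize_t read_remote_serial(char *buf, size_t len) {
    int fd = open("/net/othernode/dev/ser1", O_RDONLY);
    if (fd == -1)
        return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return n;
}
```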
Releases
Uses
The BlackBerry PlayBook tablet computer designed by BlackBerry uses a version of QNX as the primary operating system. The BlackBerry 10 operating system is also based on QNX.
QNX is also used in car infotainment systems with many major car makers offering variants that include an embedded QNX architecture. It is supported by popular SSL/TLS libraries such as wolfSSL.
Since the introduction of its Safe Kernel 1.0 in 2010, QNX has been designed into and subsequently used in automated-driving and advanced driver-assistance (ADAS) systems for automotive projects that require a functional-safety-certified RTOS. QNX provides this with its QNX OS for Safety products.
QNX Neutrino (2001) has been ported to a number of platforms and now runs on practically any modern central processing unit (CPU) family that is used in the embedded market. This includes the PowerPC, x86, MIPS, SH-4, and the closely interrelated group of ARM, StrongARM, and XScale.
As of June 26, 2023, QNX software was embedded in over 235 million vehicles worldwide, including those of most leading OEMs and Tier 1 suppliers, such as BMW, Bosch, Continental, Dongfeng Motor, Geely, Ford, Honda, Mercedes-Benz, Toyota, Volkswagen, and Volvo.
Licensing
QNX offers a license for noncommercial and academic users. In January 2024, BlackBerry introduced QNX Everywhere to make QNX more accessible to hobbyists; it was made publicly accessible in early 2024.
Community
OpenQNX is an independently established and run QNX community portal. It offers an IRC channel and web access to newsgroups, and the developers on the site represent diverse industries.
Foundry27 is a web-based QNX community established by the company. It serves as a hub for QNX Neutrino development, where developers can register, choose a license, and obtain the source code and related toolkits for the RTOS.
QNX Board Support Packages
QNX standard support is available for BSPs listed as available in the QNX Software Center. For other BSPs, alternative forms of support (e.g., custom support plans) may be available or required from the BSP supplier or board vendor.
BlackBerry QNX Partners
BlackBerry QNX has worked with a network of partner organizations to provide complementary technologies. These relationships help provide the foundational software, middleware, and services behind many of the world's most critical embedded systems.
See also
Comparison of operating systems
Android Auto
Android Automotive
Automotive Grade Linux
CarPlay
Ford Sync
HarmonyOS NEXT
OpenHarmony
Windows Embedded Automotive
References
Further reading
External links
Development for QNX phones
Foundry27
QNX User Community
Open source applications
GUIdebook > GUIs > QNX
QNX used for Canadian Nuclear Power Plants
QNX demo floppy disk
1980 establishments in Ontario
ARM operating systems
BlackBerry Limited
Computing platforms
Distributed operating systems
Embedded operating systems
Information technology companies of Canada
Lightweight Unix-like systems
Microkernel-based operating systems
Microkernels
Mobile operating systems
Proprietary operating systems
Real-time operating systems
Tablet operating systems
Software companies established in 1980
X86 operating systems
X86-64 operating systems
"Technology"
] | 3,291 | [
"Computing platforms",
"Real-time computing",
"Real-time operating systems"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.