| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
51,654 | https://en.wikipedia.org/wiki/Soliton | In mathematics and physics, a soliton is a nonlinear, self-reinforcing, localized wave packet that is strongly stable, in that it preserves its shape while propagating freely, at constant velocity, and recovers it even after collisions with other such localized wave packets. Its remarkable stability can be traced to a balanced cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons were subsequently found to provide stable solutions of a wide class of weakly nonlinear dispersive partial differential equations describing physical systems.
The soliton phenomenon was first described in 1834 by John Scott Russell who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation". The Korteweg–de Vries equation was later formulated to model such waves, and the term soliton was coined by Zabusky and Kruskal to describe localized, strongly stable propagating solutions to this equation. The name was meant to characterize the solitary nature of the waves, with the 'on' suffix recalling the usage for particles such as electrons, baryons or hadrons, reflecting their observed particle-like behaviour.
Definition
A single, consensus definition of a soliton is difficult to find, but three properties are commonly ascribed to solitons:
They are of permanent form;
They are localized within a region;
They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift.
More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction).
Explanation
Dispersion and nonlinearity can interact to produce permanent and localized wave forms. Consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies. Since glass shows dispersion, these different frequencies travel at different speeds and the shape of the pulse therefore changes over time. However, the nonlinear Kerr effect also occurs: the refractive index of a material at a given frequency depends on the light's amplitude or strength. If the pulse has just the right shape, the Kerr effect exactly cancels the dispersion effect and the pulse's shape does not change over time. Thus, the pulse is a soliton. See soliton (optics) for a more detailed description.
Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform, and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research.
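As a concrete illustration, the following Python sketch evaluates the standard one-soliton solution u(x, t) = (c/2) sech²(√c (x − ct − a)/2) of the KdV equation in the common normalization u_t + 6u u_x + u_xxx = 0, and checks by finite differences that the residual of the equation is small. The grid spacing and parameter names are illustrative choices, and other normalizations of the KdV equation rescale the coefficients.

```python
import numpy as np

def kdv_soliton(x, t, c=1.0, a=0.0):
    """One-soliton solution u = (c/2) sech^2( sqrt(c)/2 * (x - c*t - a) )
    of the KdV equation u_t + 6 u u_x + u_xxx = 0 (illustrative normalization)."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - a)) ** 2

# Finite-difference check that the soliton satisfies the equation.
dx, dt = 1e-3, 1e-4
x = np.arange(-20.0, 20.0, dx)
u0 = kdv_soliton(x, 0.0)
u_t = (kdv_soliton(x, dt) - kdv_soliton(x, -dt)) / (2 * dt)
u_x = np.gradient(u0, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)
residual = u_t + 6 * u0 * u_x + u_xxx
print("max |residual|:", np.abs(residual).max())   # small compared with max(u) = c/2
```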
Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline. Atmospheric solitons also exist, such as the morning glory cloud of the Gulf of Carpentaria, where pressure solitons traveling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons.
A topological soliton, also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution". Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a nontrivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes.
No continuous transformation maps a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess–Zumino–Witten model in quantum field theory, the magnetic skyrmion in condensed matter physics, and cosmic strings and domain walls in cosmology.
History
In 1834, John Scott Russell described his wave of translation.
Scott Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties:
The waves are stable, and can travel over very large distances (normal waves would tend to either flatten out, or steepen and topple over)
The speed depends on the size of the wave, and its width on the depth of water.
Unlike normal waves they will never merge – so a small wave is overtaken by a large one, rather than the two combining.
If a wave is too big for the depth of water, it splits into two, one big and one small.
Scott Russell's experimental work seemed at odds with Isaac Newton's and Daniel Bernoulli's theories of hydrodynamics. George Biddell Airy and George Gabriel Stokes had difficulty accepting Scott Russell's experimental observations because they could not be explained by the existing water wave theories. Additional observations were reported by Henry Bazin in 1862 after experiments carried out in the canal de Bourgogne in France. Their contemporaries spent some time attempting to extend the theory but it would take until the 1870s before Joseph Boussinesq and Lord Rayleigh published a theoretical treatment and solutions. In 1895 Diederik Korteweg and Gustav de Vries provided what is now known as the Korteweg–de Vries equation, including solitary wave and periodic cnoidal wave solutions.
In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta, Ulam, and Tsingou.
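A minimal sketch of a Zabusky–Kruskal-style leapfrog finite-difference scheme for u_t + u u_x + δ² u_xxx = 0 with periodic boundary conditions and a cosine initial profile is shown below. The grid size, time step and run length are illustrative choices rather than the values of the original study.

```python
import numpy as np

# A Zabusky–Kruskal-style leapfrog scheme for u_t + u*u_x + delta^2*u_xxx = 0
# with periodic boundary conditions; grid, time step and run length are illustrative.
delta2 = 0.022 ** 2          # delta = 0.022, as in the classic cosine-wave setup
N, L = 256, 2.0
dx, dt = L / N, 1.0e-5       # dt must satisfy a CFL-like stability condition
x = np.arange(N) * dx

def rhs(u):
    up1, um1 = np.roll(u, -1), np.roll(u, 1)      # u_{i+1}, u_{i-1}
    up2, um2 = np.roll(u, -2), np.roll(u, 2)      # u_{i+2}, u_{i-2}
    u_x = (up1 - um1) / (2 * dx)
    u_xxx = (up2 - 2 * up1 + 2 * um1 - um2) / (2 * dx ** 3)
    return -((up1 + u + um1) / 3.0) * u_x - delta2 * u_xxx

u_prev = np.cos(np.pi * x)                        # initial profile u(x, 0) = cos(pi x)
u_curr = u_prev + dt * rhs(u_prev)                # one Euler step to start the leapfrog
for _ in range(int(0.5 / dt)):                    # integrate to t = 0.5
    u_prev, u_curr = u_curr, u_prev + 2 * dt * rhs(u_curr)
print("max u at t = 0.5:", u_curr.max())          # the profile breaks up into soliton-like pulses
```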
In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems.
Solitons are, by definition, unaltered in shape and speed by a collision with other solitons. So solitary waves on a water surface are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind.
Solitons are also studied in quantum mechanics, thanks to the fact that they could provide a new foundation for it through de Broglie's unfinished program, known as "Double solution theory" or "Nonlinear wave mechanics". This theory, developed by de Broglie in 1927 and revived in the 1950s, is the natural continuation of his ideas developed between 1923 and 1926, which extended the wave–particle duality introduced by Albert Einstein for the light quanta to all the particles of matter. The observation of an accelerating surface gravity water-wave soliton using an external hydrodynamic linear potential was demonstrated in 2019. This experiment also demonstrated the ability to excite and measure the phases of ballistic solitons.
In fiber optics
Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations.
Solitons' inherent stability makes long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well.
In biology
Solitons may occur in proteins and DNA, where they are related to the low-frequency collective motion of these biomolecules.
A recently developed model in neuroscience proposes that signals, in the form of density waves, are conducted within neurons in the form of solitons. Solitons can be described as almost lossless energy transfer in biomolecular chains or lattices, in the form of wave-like propagations of coupled conformational and electronic disturbances.
In material physics
Solitons can occur in materials, such as ferroelectrics, in the form of domain walls. Ferroelectric materials exhibit spontaneous polarization, or electric dipoles, which are coupled to configurations of the material structure. Domains of oppositely poled polarizations can be present within a single material, as the structural configurations corresponding to opposing polarizations are equally favorable in the absence of external forces. The domain boundaries, or “walls”, that separate these local structural configurations are regions of lattice dislocations. The domain walls can propagate, and thus the polarizations and the corresponding local structural configurations within a domain can switch, under applied forces such as an electric bias or mechanical stress. Consequently, the domain walls can be described as solitons: discrete regions of dislocations that are able to slip or propagate while maintaining their shape in width and length.
In recent literature, ferroelectricity has been observed in twisted bilayers of van der Waals materials such as molybdenum disulfide and graphene. The moiré superlattice that arises from the relative twist angle between the van der Waals monolayers generates regions of different stacking orders of the atoms within the layers. These regions exhibit structural configurations with broken inversion symmetry that enable ferroelectricity at the interface of these monolayers. The domain walls that separate these regions are composed of partial dislocations where different types of stresses, and thus strains, are experienced by the lattice. It has been observed that soliton or domain wall propagation across a moderate length of the sample (order of nanometers to micrometers) can be initiated with applied stress from an AFM tip on a fixed region. The soliton propagation carries the mechanical perturbation with little loss in energy across the material, which enables domain switching in a domino-like fashion.
It has also been observed that the type of dislocations found at the walls can affect propagation parameters such as direction. For instance, STM measurements showed four types of strains of varying degrees of shear, compression, and tension at domain walls depending on the type of localized stacking order in twisted bilayer graphene. Different slip directions of the walls are achieved with different types of strains found at the domains, influencing the direction of the soliton network propagation.
Nonidealities such as disruptions to the soliton network and surface impurities can influence soliton propagation as well. Domain walls can meet at nodes and get effectively pinned, forming triangular domains, which have been readily observed in various ferroelectric twisted bilayer systems. In addition, closed loops of domain walls enclosing multiple polarization domains can inhibit soliton propagation and thus the switching of polarizations across them. Also, domain walls can propagate and meet at wrinkles and surface inhomogeneities within the van der Waals layers, which can act as obstacles obstructing the propagation.
In magnets
In magnets, there also exist different types of solitons and other nonlinear waves. These magnetic solitons are exact solutions of classical nonlinear differential equations describing magnetic media, e.g. the Landau–Lifshitz equation, the continuum Heisenberg model, the Ishimori equation, the nonlinear Schrödinger equation and others.
In nuclear physics
Atomic nuclei may exhibit solitonic behavior. Here the whole nuclear wave function is predicted to exist as a soliton under certain conditions of temperature and energy. Such conditions are suggested to exist in the cores of some stars in which the nuclei would not react but pass through each other unchanged, retaining their soliton waves through a collision between nuclei.
The Skyrme Model is a model of nuclei in which each nucleus is considered to be a topologically stable soliton solution of a field theory with conserved baryon number.
Bions
The bound state of two solitons is known as a bion, or in systems where the bound state periodically oscillates, a breather. The interference-type forces between solitons could be used in making bions. However, these forces are very sensitive to their relative phases. Alternatively, the bound state of solitons could be formed by dressing atoms with highly excited Rydberg levels. The resulting self-generated potential profile features an inner attractive soft-core supporting the 3D self-trapped soliton, an intermediate repulsive shell (barrier) preventing solitons’ fusion, and an outer attractive layer (well) used for completing the bound state resulting in giant stable soliton molecules. In this scheme, the distance and size of the individual solitons in the molecule can be controlled dynamically with the laser adjustment.
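For the sine-Gordon equation, one of the integrable models mentioned earlier, the breather is the textbook example of such a periodically oscillating bound state. The sketch below evaluates the standard breather solution of φ_tt − φ_xx + sin φ = 0 in dimensionless units and verifies the field equation numerically; the frequency parameter ω and grid sizes are illustrative choices.

```python
import numpy as np

def breather(x, t, omega=0.6):
    """Textbook sine-Gordon breather (dimensionless units): a bound,
    periodically oscillating soliton–antisoliton state."""
    k = np.sqrt(1.0 - omega ** 2)
    return 4.0 * np.arctan((k / omega) * np.sin(omega * t) / np.cosh(k * x))

# Finite-difference check of the field equation  phi_tt - phi_xx + sin(phi) = 0.
dx, dt, t0 = 1e-3, 1e-3, 0.3
x = np.arange(-15.0, 15.0, dx)
phi = breather(x, t0)
phi_tt = (breather(x, t0 + dt) - 2 * phi + breather(x, t0 - dt)) / dt ** 2
phi_xx = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx ** 2
residual = phi_tt - phi_xx + np.sin(phi)
print("max |residual| (interior points):", np.abs(residual[1:-1]).max())
```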
In field theory bion usually refers to the solution of the Born–Infeld model. The name appears to have been coined by G. W. Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system. The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity in this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons.
On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon, where "E" stands for Einstein.
Alcubierre drive
Erik Lentz, a physicist at the University of Göttingen, has theorized that solitons could allow for the generation of Alcubierre warp bubbles in spacetime without the need for exotic matter, i.e., matter with negative mass.
See also
Compacton, a soliton with compact support
Dissipative soliton
Freak waves may be a phenomenon related to the Peregrine soliton, involving breather waves that exhibit concentrated, localized energy with non-linear properties.
Instantons
Nematicons
Non-topological soliton, in quantum field theory
Nonlinear Schrödinger equation
Oscillons
Pattern formation
Peakon, a soliton with a non-differentiable peak
Q-ball, a non-topological soliton
Sine-Gordon equation
Soliton (optics)
Soliton (topological)
Soliton distribution
Soliton hypothesis for ball lightning, by David Finkelstein
Soliton model of nerve impulse propagation
Topological quantum number
Vector soliton
| Soliton | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,246 | [
"Physical phenomena",
"Integrable systems",
"Chemical engineering",
"Theoretical physics",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Subatomic particles",
"Condensed matter physics",
"Piping",
"Quasiparticles",
"Matter",
"Fluid dynamics"
] |
51,776 | https://en.wikipedia.org/wiki/Hydrostatic%20equilibrium | In fluid mechanics, hydrostatic equilibrium (hydrostatic balance, hydrostasy) is the condition of a fluid or plastic solid at rest, which occurs when external forces, such as gravity, are balanced by a pressure-gradient force. In the planetary physics of Earth, the pressure-gradient force prevents gravity from collapsing the planetary atmosphere into a thin, dense shell, whereas gravity prevents the pressure-gradient force from diffusing the atmosphere into outer space. In general, it is what causes objects in space to be spherical.
Hydrostatic equilibrium is the distinguishing criterion between dwarf planets and small solar system bodies, and features in astrophysics and planetary geology. This qualification of equilibrium indicates that the shape of the object is symmetrically rounded, mostly due to rotation, into an ellipsoid, where any irregular surface features are a consequence of a relatively thin solid crust. In addition to the Sun, there are a dozen or so equilibrium objects confirmed to exist in the Solar System.
Mathematical consideration
For a hydrostatic fluid on Earth, the pressure-gradient force balances the weight of the fluid: dP/dh = −ρ g.
Derivation from force summation
Newton's laws of motion state that a volume of a fluid that is not in motion or that is in a state of constant velocity must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This force balance is called a hydrostatic equilibrium.
The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived.
There are three forces: the force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is, from the definition of pressure,
Ftop = −Ptop A.
Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is
Fbottom = Pbottom A.
Finally, the weight of the volume element causes a force downwards. If the density is ρ, the volume is V and g the standard gravity, then:
Fweight = −ρ V g.
The volume of this cuboid is equal to the area of the top or bottom, times the height: V = A h.
By balancing these forces, the total force on the fluid is
Ftotal = Ftop + Fbottom + Fweight = −Ptop A + Pbottom A − ρ A h g.
This sum equals zero if the fluid's velocity is constant. Dividing by A,
0 = −Ptop + Pbottom − ρ g h.
Or,
Ptop − Pbottom = −ρ g h.
Ptop − Pbottom is a change in pressure, and h is the height of the volume element—a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form:
dP = −ρ g dh.
Density changes with pressure, and gravity changes with height, so the equation would be:
dP/dh = −ρ(h) g(h).
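As a numerical check of this differential relation, the following sketch integrates dP/dh = −ρ g for an isothermal ideal-gas atmosphere, where ρ = P M/(R T), and compares the result with the exponential barometric formula P(h) = P0 exp(−M g h/(R T)). The numerical constants (sea-level pressure, temperature, molar mass of air) are approximate, illustrative values.

```python
import numpy as np

# Illustrative constants (SI units, approximate values).
P0, T = 101325.0, 288.0               # sea-level pressure [Pa] and temperature [K]
M, Rgas, g = 0.02896, 8.314, 9.81     # molar mass of air, gas constant, gravity

def rho(P):
    """Ideal-gas density at pressure P and fixed temperature T."""
    return P * M / (Rgas * T)

# Forward-Euler integration of dP/dh = -rho(P) * g from sea level to 10 km.
dh = 1.0
h = np.arange(0.0, 10000.0 + dh, dh)
P = np.empty_like(h)
P[0] = P0
for i in range(1, len(h)):
    P[i] = P[i - 1] - rho(P[i - 1]) * g * dh

P_exact = P0 * np.exp(-M * g * h / (Rgas * T))    # barometric formula
print("P(10 km): numeric", round(P[-1]), "Pa, analytic", round(P_exact[-1]), "Pa")
```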
Derivation from Navier–Stokes equations
Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where the velocity is zero everywhere, u = v = w = 0.
Then the only non-trivial equation is the z-equation, which now reads
∂P/∂z + ρ g = 0.
Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations.
Derivation from general relativity
By plugging the energy–momentum tensor for a perfect fluid
into the Einstein field equations
and using the conservation condition
one can derive the Tolman–Oppenheimer–Volkoff equation for the structure of a static, spherically symmetric relativistic star in isotropic coordinates:
In practice, Ρ and ρ are related by an equation of state of the form f(Ρ,ρ) = 0, with f specific to makeup of the star. M(r) is a foliation of spheres weighted by the mass density ρ(r), with the largest sphere having radius r:
Per standard procedure in taking the nonrelativistic limit, we let c → ∞, so that the relativistic correction factor tends to unity.
Therefore, in the nonrelativistic limit the Tolman–Oppenheimer–Volkoff equation reduces to Newton's hydrostatic equilibrium:
dP/dr = −G M(r) ρ(r)/r² = −g(r) ρ(r)
(we have made the trivial notation change h = r and have used f(Ρ,ρ) = 0 to express ρ in terms of P). A similar equation can be computed for rotating, axially symmetric stars, which in its gauge independent form reads:
Unlike the TOV equilibrium equation, these are two equations (for instance, if, as usual when treating stars, one chooses spherical coordinates as basis coordinates, the index i runs over the coordinates r and θ).
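In the Newtonian limit just derived, the structure of a static star is fixed by dP/dr = −G M(r) ρ/r² together with dM/dr = 4π r² ρ. The sketch below integrates this inward from the surface of a toy uniform-density sphere and compares the central pressure with the analytic result P_c = (2/3)π G ρ² R²; the radius and density used are rough, Sun-like illustrative values, and real stars are of course not of uniform density.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
R = 7.0e8            # illustrative stellar radius [m], roughly Sun-like
rho = 1.4e3          # illustrative uniform density [kg/m^3]

# Integrate dP/dr = -G*M(r)*rho/r^2 inward from the surface, where P(R) = 0.
r = np.linspace(R, 0.0, 100000)
dr = r[1] - r[0]                                  # negative step (moving inward)
P = 0.0
for ri in r[:-1]:
    M_r = 4.0 / 3.0 * np.pi * ri ** 3 * rho       # mass enclosed within radius ri
    P -= G * M_r * rho / ri ** 2 * dr             # P increases toward the centre

P_analytic = 2.0 * np.pi / 3.0 * G * rho ** 2 * R ** 2
print("central pressure: numeric", P, "Pa, analytic", P_analytic, "Pa")
```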
Applications
Fluids
The hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a particular balance for weighing substances in water. Hydrostatic balance allows the discovery of their specific gravities. This equilibrium is strictly applicable when an ideal fluid is in steady horizontal laminar flow, and when any fluid is at rest or in vertical motion at constant speed. It can also be a satisfactory approximation when flow speeds are low enough that acceleration is negligible.
Astrophysics and planetary science
From the time of Isaac Newton much work has been done on the subject of the equilibrium attained when a fluid rotates in space. This has application to both stars and objects like planets, which may have been fluid in the past or in which the solid material deforms like a fluid when subjected to very high stresses.
In any given layer of a star, there is a hydrostatic equilibrium between the outward-pushing pressure gradient and the weight of the material above pressing inward. One can also study planets under the assumption of hydrostatic equilibrium. A rotating star or planet in hydrostatic equilibrium is usually an oblate spheroid, an ellipsoid in which two of the principal axes are equal and longer than the third.
An example of this phenomenon is the star Vega, which has a rotation period of 12.5 hours. Consequently, Vega is about 20% larger at the equator than from pole to pole.
In his 1687 Philosophiæ Naturalis Principia Mathematica Newton correctly stated that a rotating fluid of uniform density under the influence of gravity would take the form of a spheroid and that the gravity (including the effect of centrifugal force) would be weaker at the equator than at the poles by an amount equal (at least asymptotically) to five fourths the centrifugal force at the equator. In 1742, Colin Maclaurin published his treatise on fluxions in which he showed that the spheroid was an exact solution. If we designate the equatorial radius by r_e, the polar radius by r_p, and the eccentricity by e, with e² = 1 − r_p²/r_e²,
he found that the gravity at the poles is
where is the gravitational constant, is the (uniform) density, and is the total mass. The ratio of this to the gravity if the fluid is not rotating, is asymptotic to
as e goes to zero, where f is the flattening: f = (r_e − r_p)/r_e = 1 − √(1 − e²).
The gravitational attraction on the equator (not including centrifugal force) is
Asymptotically, we have:
Maclaurin showed (still in the case of uniform density) that the component of gravity toward the axis of rotation depended only on the distance from the axis and was proportional to that distance, and the component in the direction toward the plane of the equator depended only on the distance from that plane and was proportional to that distance. Newton had already pointed out that the gravity felt on the equator (including the lightening due to centrifugal force) has to be in order to have the same pressure at the bottom of channels from the pole or from the equator to the centre, so the centrifugal force at the equator must be
Defining the latitude to be the angle between a tangent to the meridian and axis of rotation, the total gravity felt at latitude (including the effect of centrifugal force) is
This spheroid solution is stable up to a certain (critical) angular momentum (normalized by ), but in 1834, Carl Jacobi showed that it becomes unstable once the eccentricity reaches 0.81267 (or reaches 0.3302).
Above the critical value, the solution becomes a Jacobi, or scalene, ellipsoid (one with all three axes different). Henri Poincaré in 1885 found that at still higher angular momentum it will no longer be ellipsoidal but piriform or oviform. The symmetry drops from the 8-fold D point group to the 4-fold C, with its axis perpendicular to the axis of rotation. Other shapes satisfy the equations beyond that, but are not stable, at least not near the point of bifurcation. Poincaré was unsure what would happen at higher angular momentum but concluded that eventually the blob would split into two.
The assumption of uniform density may apply more or less to a molten planet or a rocky planet but does not apply to a star or to a planet like the Earth which has a dense metallic core. In 1737, Alexis Clairaut studied the case of density varying with depth. Clairaut's theorem states that the variation of the gravity (including centrifugal force) is proportional to the square of the sine of the latitude, with the proportionality depending linearly on the flattening and the ratio at the equator of centrifugal force to gravitational attraction. (Compare with the exact relation above for the case of uniform density.) Clairaut's theorem is a special case for an oblate spheroid of a connexion found later by Pierre-Simon Laplace between the shape and the variation of gravity.
If the star has a massive nearby companion object, tidal forces come into play as well, which distort the star into a scalene shape if rotation alone would make it a spheroid. An example of this is Beta Lyrae.
Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid that can be present in the core of a cluster of galaxies.
We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute X-ray luminosity per unit volume takes the form where and are the temperature and density of the baryonic matter, and is some function of temperature and fundamental constants. The baryonic density satisfies the above equation
The integral is a measure of the total mass of the cluster, with being the proper distance to the center of the cluster. Using the ideal gas law ( is the Boltzmann constant and is a characteristic mass of the baryonic gas particles) and rearranging, we arrive at
Multiplying by and differentiating with respect to yields
If we make the assumption that cold dark matter particles have an isotropic velocity distribution, the same derivation applies to these particles, and their density satisfies the non-linear differential equation
With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus the dark matter density. We could then calculate the velocity dispersion of the dark matter, which is given by
The central density ratio is dependent on the redshift of the cluster and is given by
where is the angular width of the cluster and the proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys.
Planetary geology
The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is a planet, dwarf planet, or small Solar System body. According to the definition of planet that was adopted by the International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body often has the differentiated interior and geology of a world (a planemo), but near-hydrostatic or formerly hydrostatic bodies such as the proto-planet 4 Vesta may also be differentiated, and some hydrostatic bodies (notably Callisto) have not thoroughly differentiated since their formation. Often, the equilibrium shape is an oblate spheroid, as is the case with Earth. However, in the cases of moons in synchronous orbit, nearly unidirectional tidal forces create a scalene ellipsoid. Also, the purported dwarf planet Haumea is scalene because of its rapid rotation, though it may not currently be in equilibrium.
Icy objects were previously believed to need less mass to attain hydrostatic equilibrium than rocky objects. The smallest object that appears to have an equilibrium shape is the icy moon Mimas at 396 km, while the largest icy object known to have an obviously non-equilibrium shape is the icy moon Proteus at 420 km, and the largest rocky bodies in an obviously non-equilibrium shape are the asteroids Pallas and Vesta at about 520 km. However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The smallest body confirmed to be in hydrostatic equilibrium is the dwarf planet Ceres, which is icy, at 945 km, whereas the largest known body to have a noticeable deviation from hydrostatic equilibrium is Iapetus, which is made of mostly permeable ice and almost no rock. At 1,469 km, Iapetus is neither spherical nor ellipsoidal; instead, it has a strange walnut-like shape due to its unique equatorial ridge. Some icy bodies may be in equilibrium at least partly due to a subsurface ocean, which is not the definition of equilibrium used by the IAU (gravity overcoming internal rigid-body forces). Even larger bodies deviate from hydrostatic equilibrium, although they are ellipsoidal: examples are Earth's Moon at 3,474 km (mostly rock), and the planet Mercury at 4,880 km (mostly metal).
In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down because of tidal forces from its moon Weywot. If so, this would resemble the situation of Iapetus, which is too oblate for its current spin. Iapetus is generally still considered a planetary-mass moon nonetheless, though not always.
Solid bodies have irregular surfaces, but local irregularities may be consistent with global equilibrium. For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level of the surrounding crust and so the overall distribution of mass approaches equilibrium.
Atmospheric modeling
In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an upward force called the pressure-gradient force. The force of gravity balances this out, keeps the atmosphere bound to Earth and maintains pressure differences with altitude.
See also
List of gravitationally rounded objects of the Solar System; a list of objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium)
Statics
Two-balloon experiment
| Hydrostatic equilibrium | [
"Physics",
"Astronomy",
"Engineering"
] | 3,110 | [
"Definition of planet",
"Concepts in astrophysics",
"Concepts in astronomy",
"Astrophysics",
"Civil engineering",
"Astronomical controversies",
"Astronomical classification systems",
"Fluid mechanics"
] |
51,784 | https://en.wikipedia.org/wiki/Map%20projection | In cartography, a map projection is any of a broad set of transformations employed to represent the curved two-dimensional surface of a globe on a plane. In a map projection, coordinates, often expressed as latitude and longitude, of locations from the surface of the globe are transformed to coordinates on a plane.
Projection is a necessary step in creating a two-dimensional map and is one of the essential elements of cartography.
All projections of a sphere on a plane necessarily distort the surface in some way. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore, different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties. The study of map projections is primarily about the characterization of their distortions. There is no limit to the number of possible map projections.
More generally, projections are considered in several fields of pure mathematics, including differential geometry, projective geometry, and manifolds. However, the term "map projection" refers specifically to a cartographic projection.
Despite the name's literal meaning, projection is not limited to perspective projections, such as those resulting from casting a shadow on a screen, or the rectilinear image produced by a pinhole camera on a flat film plate. Rather, any mathematical function that transforms coordinates from the curved surface distinctly and smoothly to the plane is a projection. Few projections in practical use are perspective.
Most of this article assumes that the surface to be mapped is that of a sphere. The Earth and other large celestial bodies are generally better modeled as oblate spheroids, whereas small objects such as asteroids often have irregular shapes. The surfaces of planetary bodies can be mapped even if they are too irregular to be modeled well with a sphere or ellipsoid. Therefore, more generally, a map projection is any method of flattening a continuous curved surface onto a plane.
The most well-known map projection is the Mercator projection. This map projection has the property of being conformal. However, it has been criticized throughout the 20th century for enlarging regions further from the equator. By contrast, equal-area projections such as the Sinusoidal projection and the Gall–Peters projection show the correct sizes of countries relative to each other, but distort angles. The National Geographic Society and most atlases favor map projections that compromise between area and angular distortion, such as the Robinson projection and the Winkel tripel projection.
Metric properties of maps
Many properties can be measured on the Earth's surface independently of its geography:
Area
Shape
Direction
Bearing
Distance
Map projections can be constructed to preserve some of these properties at the expense of others. Because the Earth's curved surface is not isometric to a plane, preservation of shapes inevitably requires a variable scale and, consequently, non-proportional presentation of areas. Similarly, an area-preserving projection can not be conformal, resulting in shapes and bearings distorted in most places of the map. Each projection preserves, compromises, or approximates basic metric properties in different ways. The purpose of the map determines which projection should form the base for the map. Because maps have many different purposes, a diversity of projections have been created to suit those purposes.
Another consideration in the configuration of a projection is its compatibility with data sets to be used on the map. Data sets are geographic information; their collection depends on the chosen datum (model) of the Earth. Different datums assign slightly different coordinates to the same location, so in large scale maps, such as those from national mapping systems, it is important to match the datum to the projection. The slight differences in coordinate assignation between different datums is not a concern for world maps or those of large regions, where such differences are reduced to imperceptibility.
Distortion
Carl Friedrich Gauss's Theorema Egregium proved that a sphere's surface cannot be represented on a plane without distortion. The same applies to other reference surfaces used as models for the Earth, such as oblate spheroids, ellipsoids, and geoids. Since any map projection is a representation of one of those surfaces on a plane, all map projections distort.
The classical way of showing the distortion inherent in a projection is to use Tissot's indicatrix. For a given point, using the scale factor h along the meridian, the scale factor k along the parallel, and the angle θ′ between them, Nicolas Tissot described how to construct an ellipse that illustrates the amount and orientation of the components of distortion. By spacing the ellipses regularly along the meridians and parallels, the network of indicatrices shows how distortion varies across the map.
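The factors h and k, together with the local area scale, can be computed numerically for any projection from the partial derivatives of the projected coordinates with respect to longitude and latitude. Below is a minimal sketch for the spherical Mercator projection on a unit sphere, for which one expects h = k = sec φ and an area scale of sec² φ; the function names and finite-difference step are illustrative.

```python
import numpy as np

R = 1.0   # unit-radius sphere for simplicity

def mercator(lon, lat):
    """Spherical Mercator: x = R*lon, y = R*ln(tan(pi/4 + lat/2))."""
    return R * lon, R * np.log(np.tan(np.pi / 4 + lat / 2))

def distortion_factors(project, lon, lat, eps=1e-6):
    """Scale factor h along the meridian, k along the parallel, and the local
    area scale, from numerical partial derivatives of the projection."""
    d_lon = (np.array(project(lon + eps, lat)) - np.array(project(lon - eps, lat))) / (2 * eps)
    d_lat = (np.array(project(lon, lat + eps)) - np.array(project(lon, lat - eps))) / (2 * eps)
    h = np.hypot(*d_lat) / R
    k = np.hypot(*d_lon) / (R * np.cos(lat))
    area = (d_lon[0] * d_lat[1] - d_lon[1] * d_lat[0]) / (R ** 2 * np.cos(lat))
    return h, k, area

# For Mercator one expects h = k = sec(lat) and area scale = sec^2(lat).
for lat_deg in (0, 30, 60):
    h, k, s = distortion_factors(mercator, 0.0, np.radians(lat_deg))
    print(f"lat {lat_deg:2d}°: h = {h:.3f}, k = {k:.3f}, area scale = {s:.3f}")
```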
Other distortion metrics
Many other ways have been described of showing the distortion in projections. Like Tissot's indicatrix, the Goldberg-Gott indicatrix is based on infinitesimals, and depicts flexion and skewness (bending and lopsidedness) distortions.
Rather than the original (enlarged) infinitesimal circle as in Tissot's indicatrix, some visual methods project finite shapes that span a part of the map.
For example, a small circle of fixed radius (e.g., 15 degrees angular radius). Sometimes spherical triangles are used.
In the first half of the 20th century, projecting a human head onto different projections was common to show how distortion varies across one projection as compared to another.
In dynamic media, shapes of familiar coastlines and boundaries can be dragged across an interactive map to show how the projection distorts sizes and shapes according to position on the map.
Another way to visualize local distortion is through grayscale or color gradations whose shade represents the magnitude of the angular deformation or areal inflation. Sometimes both are shown simultaneously by blending two colors to create a bivariate map.
To measure distortion globally across areas instead of at just a single point necessarily involves choosing priorities to reach a compromise. Some schemes use distance distortion as a proxy for the combination of angular deformation and areal inflation; such methods arbitrarily choose what paths to measure and how to weight them in order to yield a single result. Many have been described.
Design and construction
The creation of a map projection involves two steps:
Selection of a model for the shape of the Earth or planetary body (usually choosing between a sphere or ellipsoid). Because the Earth's actual shape is irregular, information is lost in this step.
Transformation of geographic coordinates (longitude and latitude) to Cartesian (x,y) or polar (r, θ) plane coordinates. In large-scale maps, Cartesian coordinates normally have a simple relation to eastings and northings defined as a grid superimposed on the projection. In small-scale maps, eastings and northings are not meaningful, and grids are not superimposed.
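A minimal sketch of the second step for the simple case of a spherical model and the equirectangular (plate carrée) mapping; the radius value, function name and sample point are illustrative.

```python
import math

R_EARTH = 6371000.0   # mean Earth radius in metres; choosing the model is step 1

def plate_carree(lat_deg, lon_deg, lon0_deg=0.0):
    """Step 2: transform geographic coordinates (degrees) to plane coordinates
    (metres) for the equirectangular (plate carrée) projection on a sphere."""
    lat, lon, lon0 = (math.radians(v) for v in (lat_deg, lon_deg, lon0_deg))
    return R_EARTH * (lon - lon0), R_EARTH * lat

# Example: plane coordinates of a point at 55.95° N, 3.19° W.
print(plate_carree(55.95, -3.19))
```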
Some of the simplest map projections are literal projections, as obtained by placing a light source at some definite point relative to the globe and projecting its features onto a specified surface. Although most projections are not defined in this way, picturing the light source-globe model can be helpful in understanding the basic concept of a map projection.
Choosing a projection surface
A surface that can be unfolded or unrolled into a plane or sheet without stretching, tearing or shrinking is called a developable surface. The cylinder, cone and the plane are all developable surfaces. The sphere and ellipsoid do not have developable surfaces, so any projection of them onto a plane will have to distort the image. (To compare, one cannot flatten an orange peel without tearing and warping it.)
One way of describing a projection is first to project from the Earth's surface to a developable surface such as a cylinder or cone, and then to unroll the surface into a plane. While the first step inevitably distorts some properties of the globe, the developable surface can then be unfolded without further distortion.
Aspect of the projection
Once a choice is made between projecting onto a cylinder, cone, or plane, the aspect of the shape must be specified. The aspect describes how the developable surface is placed relative to the globe: it may be normal (such that the surface's axis of symmetry coincides with the Earth's axis), transverse (at right angles to the Earth's axis) or oblique (any angle in between).
Notable lines
The developable surface may also be either tangent or secant to the sphere or ellipsoid. Tangent means the surface touches but does not slice through the globe; secant means the surface does slice through the globe. Moving the developable surface away from contact with the globe never preserves or optimizes metric properties, so that possibility is not discussed further here.
Tangent and secant lines (standard lines) are represented undistorted. If these lines are a parallel of latitude, as in conical projections, it is called a standard parallel. The central meridian is the meridian to which the globe is rotated before projecting. The central meridian (usually written λ0) and a parallel of origin (usually written φ0) are often used to define the origin of the map projection.
Scale
A globe is the only way to represent the Earth with constant scale throughout the entire map in all directions. A map cannot achieve that property for any area, no matter how small. It can, however, achieve constant scale along specific lines.
Some possible properties are:
The scale depends on location, but not on direction. This is equivalent to preservation of angles, the defining characteristic of a conformal map.
Scale is constant along any parallel in the direction of the parallel. This applies for any cylindrical or pseudocylindrical projection in normal aspect.
Combination of the above: the scale depends on latitude only, not on longitude or direction. This applies for the Mercator projection in normal aspect.
Scale is constant along all straight lines radiating from a particular geographic location. This is the defining characteristic of an equidistant projection such as the azimuthal equidistant projection. There are also projections (Maurer's two-point equidistant projection, Close) where true distances from two points are preserved.
Choosing a model for the shape of the body
Projection construction is also affected by how the shape of the Earth or planetary body is approximated. In the following section on projection categories, the earth is taken as a sphere in order to simplify the discussion. However, the Earth's actual shape is closer to an oblate ellipsoid. Whether spherical or ellipsoidal, the principles discussed hold without loss of generality.
Selecting a model for a shape of the Earth involves choosing between the advantages and disadvantages of a sphere versus an ellipsoid. Spherical models are useful for small-scale maps such as world atlases and globes, since the error at that scale is not usually noticeable or important enough to justify using the more complicated ellipsoid. The ellipsoidal model is commonly used to construct topographic maps and for other large- and medium-scale maps that need to accurately depict the land surface. Auxiliary latitudes are often employed in projecting the ellipsoid.
A third model is the geoid, a more complex and accurate representation of Earth's shape coincident with what mean sea level would be if there were no winds, tides, or land. Compared to the best fitting ellipsoid, a geoidal model would change the characterization of important properties such as distance, conformality and equivalence. Therefore, in geoidal projections that preserve such properties, the mapped graticule would deviate from a mapped ellipsoid's graticule. Normally the geoid is not used as an Earth model for projections, however, because Earth's shape is very regular, with the undulation of the geoid amounting to less than 100 m from the ellipsoidal model out of the 6.3 million m Earth radius. For irregular planetary bodies such as asteroids, however, sometimes models analogous to the geoid are used to project maps from.
Other regular solids are sometimes used as generalizations for smaller bodies' geoidal equivalents. For example, Io is better modeled by a triaxial ellipsoid or a prolate spheroid with small eccentricities. Haumea's shape is a Jacobi ellipsoid, with its major axis twice as long as its minor axis and with its middle axis one and a half times as long as its minor axis.
See map projection of the triaxial ellipsoid for further information.
Classification
One way to classify map projections is based on the type of surface onto which the globe is projected. In this scheme, the projection process is described as placing a hypothetical projection surface the size of the desired study area in contact with part of the Earth, transferring features of the Earth's surface onto the projection surface, then unraveling and scaling the projection surface into a flat map. The most common projection surfaces are cylindrical (e.g., Mercator), conic (e.g., Albers), and planar (e.g., stereographic). Many mathematical projections, however, do not neatly fit into any of these three projection methods. Hence other peer categories have been described in the literature, such as pseudoconic, pseudocylindrical, pseudoazimuthal, retroazimuthal, and polyconic.
Another way to classify projections is according to properties of the model they preserve. Some of the more common categories are:
Preserving direction (azimuthal or zenithal), a trait possible only from one or two points to every other point
Preserving shape locally (conformal or orthomorphic)
Preserving area (equal-area or equiareal or equivalent or authalic)
Preserving distance (equidistant), a trait possible only between one or two points and every other point
Preserving shortest route, a trait preserved only by the gnomonic projection
Because the sphere is not a developable surface, it is impossible to construct a map projection that is both equal-area and conformal.
Projections by surface
The three developable surfaces (plane, cylinder, cone) provide useful models for understanding, describing, and developing map projections. However, these models are limited in two fundamental ways. For one thing, most world projections in use do not fall into any of those categories. For another thing, even most projections that do fall into those categories are not naturally attainable through physical projection. As L. P. Lee notes,
Lee's objection refers to the way the terms cylindrical, conic, and planar (azimuthal) have been abstracted in the field of map projections. If maps were projected as in light shining through a globe onto a developable surface, then the spacing of parallels would follow a very limited set of possibilities. Such a cylindrical projection (for example) is one which:
Is rectangular;
Has straight vertical meridians, spaced evenly;
Has straight parallels symmetrically placed about the equator;
Has parallels constrained to where they fall when light shines through the globe onto the cylinder, with the light source someplace along the line formed by the intersection of the prime meridian with the equator, and the center of the sphere.
(If you rotate the globe before projecting then the parallels and meridians will not necessarily still be straight lines. Rotations are normally ignored for the purpose of classification.)
Where the light source emanates along the line described in this last constraint is what yields the differences between the various "natural" cylindrical projections. But the term cylindrical as used in the field of map projections relaxes the last constraint entirely. Instead the parallels can be placed according to any algorithm the designer has decided suits the needs of the map. The famous Mercator projection is one in which the placement of parallels does not arise by projection; instead parallels are placed how they need to be in order to satisfy the property that a course of constant bearing is always plotted as a straight line.
Cylindrical
Normal cylindrical
A normal cylindrical projection is any projection in which meridians are mapped to equally spaced vertical lines and circles of latitude (parallels) are mapped to horizontal lines.
The mapping of meridians to vertical lines can be visualized by imagining a cylinder whose axis coincides with the Earth's axis of rotation. This cylinder is wrapped around the Earth, projected onto, and then unrolled.
By the geometry of their construction, cylindrical projections stretch distances east-west. The amount of stretch is the same at any chosen latitude on all cylindrical projections, and is given by the secant of the latitude as a multiple of the equator's scale. The various cylindrical projections are distinguished from each other solely by their north-south stretching (where latitude is given by φ):
North-south stretching equals east-west stretching (sec φ): The east-west scale matches the north-south scale: conformal cylindrical or Mercator; this distorts areas excessively in high latitudes.
North-south stretching grows with latitude faster than east-west stretching (sec² φ): The cylindric perspective (or central cylindrical) projection; unsuitable because distortion is even worse than in the Mercator projection.
North-south stretching grows with latitude, but less quickly than the east-west stretching: such as the Miller cylindrical projection (sec 4φ/5).
North-south distances neither stretched nor compressed (1): equirectangular projection or "plate carrée".
North-south compression equals the cosine of the latitude (the reciprocal of east-west stretching): equal-area cylindrical. This projection has many named specializations differing only in the scaling constant, such as the Gall–Peters or Gall orthographic (undistorted at the 45° parallels), Behrmann (undistorted at the 30° parallels), and Lambert cylindrical equal-area (undistorted at the equator). Since this projection scales north-south distances by the reciprocal of east-west stretching, it preserves area at the expense of shapes.
In the first case (Mercator), the east-west scale always equals the north-south scale. In the second case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from the equator. Each remaining case has a pair of secant lines—a pair of identical latitudes of opposite sign (or else the equator) at which the east-west scale matches the north-south-scale.
Normal cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases, where the rectangle stretches infinitely tall while retaining constant width.
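The projections just listed differ only in how the vertical coordinate depends on latitude. The sketch below tabulates y(φ) for several normal-aspect cylindrical projections on a unit sphere (with x = λ in every case), using the standard spherical formulas; the sample latitudes are illustrative.

```python
import numpy as np

# Vertical coordinate y(phi) for several normal-aspect cylindrical projections
# on a unit sphere (x = lambda in every case), using standard spherical formulas.
projections = {
    "equirectangular (plate carrée)": lambda phi: phi,
    "Mercator (conformal)":           lambda phi: np.log(np.tan(np.pi / 4 + phi / 2)),
    "central cylindrical":            lambda phi: np.tan(phi),
    "Miller cylindrical":             lambda phi: 1.25 * np.log(np.tan(np.pi / 4 + 0.4 * phi)),
    "Lambert equal-area":             lambda phi: np.sin(phi),
}

for name, y in projections.items():
    values = ", ".join(f"{y(np.radians(d)):6.3f}" for d in (0, 30, 60, 80))
    print(f"{name:32s} y at 0°, 30°, 60°, 80°: {values}")
```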
Transverse cylindrical
A transverse cylindrical projection is a cylindrical projection that in the tangent case uses a great circle along a meridian as contact line for the cylinder.
See: transverse Mercator.
Oblique cylindrical
An oblique cylindrical projection aligns with a great circle, but not the equator and not a meridian.
Pseudocylindrical
Pseudocylindrical projections represent the central meridian as a straight line segment. Other meridians are longer than the central meridian and bow outward, away from the central meridian. Pseudocylindrical projections map parallels as straight lines. Along parallels, each point from the surface is mapped at a distance from the central meridian that is proportional to its difference in longitude from the central meridian. Therefore, meridians are equally spaced along a given parallel. On a pseudocylindrical map, any point further from the equator than some other point has a higher latitude than the other point, preserving north-south relationships. This trait is useful when illustrating phenomena that depend on latitude, such as climate. Examples of pseudocylindrical projections include:
Sinusoidal, which was the first pseudocylindrical projection developed. On the map, as in reality, the length of each parallel is proportional to the cosine of the latitude. The area of any region is true.
Collignon projection, which in its most common forms represents each meridian as two straight line segments, one from each pole to the equator.
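For example, the sinusoidal projection listed above maps x = R (λ − λ0) cos φ and y = R φ, so meridians are spaced along each parallel in proportion to the cosine of the latitude. The sketch below checks numerically that its local area scale is 1 everywhere, i.e. that it is equal-area, assuming a unit sphere; the sample points are illustrative.

```python
import numpy as np

def sinusoidal(lon, lat, lon0=0.0, R=1.0):
    """Sinusoidal projection: x = R*(lon - lon0)*cos(lat), y = R*lat."""
    return R * (lon - lon0) * np.cos(lat), R * lat

# Numerical check that the Jacobian equals the sphere's area element R^2*cos(lat),
# i.e. that the local area scale is 1 everywhere (the projection is equal-area).
eps = 1e-6
for lat_deg in (0, 30, 60, 80):
    lon, lat = 0.5, np.radians(lat_deg)
    dx_dlon = (sinusoidal(lon + eps, lat)[0] - sinusoidal(lon - eps, lat)[0]) / (2 * eps)
    dy_dlon = (sinusoidal(lon + eps, lat)[1] - sinusoidal(lon - eps, lat)[1]) / (2 * eps)
    dx_dlat = (sinusoidal(lon, lat + eps)[0] - sinusoidal(lon, lat - eps)[0]) / (2 * eps)
    dy_dlat = (sinusoidal(lon, lat + eps)[1] - sinusoidal(lon, lat - eps)[1]) / (2 * eps)
    area_scale = (dx_dlon * dy_dlat - dy_dlon * dx_dlat) / np.cos(lat)
    print(f"lat {lat_deg:2d}°: area scale = {area_scale:.6f}")
```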
Hybrid
The HEALPix projection combines an equal-area cylindrical projection in equatorial regions with the Collignon projection in polar areas.
Conic
The term "conic projection" is used to refer to any projection in which meridians are mapped to equally spaced lines radiating out from the apex and circles of latitude (parallels) are mapped to circular arcs centered on the apex.
When making a conic map, the map maker arbitrarily picks two standard parallels. Those standard parallels may be visualized as secant lines where the cone intersects the globe—or, if the map maker chooses the same parallel twice, as the tangent line where the cone is tangent to the globe. The resulting conic map has low distortion in scale, shape, and area near those standard parallels. Distances along the parallels to the north of both standard parallels or to the south of both standard parallels are stretched; distances along parallels between the standard parallels are compressed. When a single standard parallel is used, distances along all other parallels are stretched.
Conic projections that are commonly used are:
Equidistant conic, which keeps parallels evenly spaced along the meridians to preserve a constant distance scale along each meridian, typically the same or similar scale as along the standard parallels.
Albers conic, which adjusts the north-south distance between non-standard parallels to compensate for the east-west stretching or compression, giving an equal-area map.
Lambert conformal conic, which adjusts the north-south distance between non-standard parallels to equal the east-west stretching, giving a conformal map.
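A minimal sketch of the spherical Albers equal-area conic, using the standard formulas with two chosen standard parallels; the default parallels, origin and test point below are illustrative (values of this kind are often used for maps of the conterminous United States).

```python
import math

def albers_conic(lat_deg, lon_deg, lat1_deg=29.5, lat2_deg=45.5,
                 lat0_deg=23.0, lon0_deg=-96.0, R=6371000.0):
    """Spherical Albers equal-area conic. lat1/lat2 are the two standard
    parallels, lat0/lon0 the projection origin; the defaults are illustrative."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    lat1, lat2 = math.radians(lat1_deg), math.radians(lat2_deg)
    n = (math.sin(lat1) + math.sin(lat2)) / 2            # cone constant
    C = math.cos(lat1) ** 2 + 2 * n * math.sin(lat1)
    rho = R * math.sqrt(C - 2 * n * math.sin(lat)) / n
    rho0 = R * math.sqrt(C - 2 * n * math.sin(lat0)) / n
    theta = n * (lon - lon0)
    return rho * math.sin(theta), rho0 - rho * math.cos(theta)

# Easting/northing (metres) of a sample point at 40° N, 100° W.
print(albers_conic(40.0, -100.0))
```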
Pseudoconic
Bonne, an equal-area projection on which most meridians and parallels appear as curved lines. It has a configurable standard parallel along which there is no distortion.
Werner cordiform, upon which distances are correct from one pole, as well as along all parallels.
American polyconic and other projections in the polyconic projection class.
Azimuthal (projections onto a plane)
Azimuthal projections have the property that directions from a central point are preserved and therefore great circles through the central point are represented by straight lines on the map. These projections also have radial symmetry in the scales and hence in the distortions: map distances from the central point are computed by a function r(d) of the true distance d, independent of the angle; correspondingly, circles with the central point as center are mapped into circles which have as center the central point on the map.
The mapping of radial lines can be visualized by imagining a plane tangent to the Earth, with the central point as tangent point.
The radial scale is r′(d) and the transverse scale is r(d)/(R sin(d/R)), where R is the radius of the Earth.
Some azimuthal projections are true perspective projections; that is, they can be constructed mechanically, projecting the surface of the Earth by extending lines from a point of perspective (along an infinite line through the tangent point and the tangent point's antipode) onto the plane:
The gnomonic projection displays great circles as straight lines. Can be constructed by using a point of perspective at the center of the Earth. r(d) = c tan(d/R); so even just a hemisphere is already infinite in extent.
The orthographic projection maps each point on the Earth to the closest point on the plane. Can be constructed from a point of perspective an infinite distance from the tangent point; r(d) = c sin(d/R). Can display up to a hemisphere on a finite circle. Photographs of Earth from far enough away, such as the Moon, approximate this perspective.
Near-sided perspective projection, which simulates the view from space at a finite distance and therefore shows less than a full hemisphere (such as used in The Blue Marble 2012).
The General Perspective projection can be constructed by using a point of perspective outside the Earth. Photographs of Earth (such as those from the International Space Station) give this perspective. It is a generalization of near-sided perspective projection, allowing tilt.
The stereographic projection, which is conformal, can be constructed by using the tangent point's antipode as the point of perspective. r(d) = c tan(d/2R); the scale is c/(2R cos²(d/2R)). Can display nearly the entire sphere's surface on a finite circle. The sphere's full surface requires an infinite map.
Other azimuthal projections are not true perspective projections:
Azimuthal equidistant: r(d) = cd; it is used by amateur radio operators to know the direction to point their antennas toward a point and see the distance to it. Distance from the tangent point on the map is proportional to surface distance on the Earth (for the case where the tangent point is the North Pole, see the flag of the United Nations).
Lambert azimuthal equal-area. Distance from the tangent point on the map is proportional to straight-line distance through the Earth: r(d) = c sin(d/2R).
Logarithmic azimuthal is constructed so that each point's distance from the center of the map is the logarithm of its distance from the tangent point on the Earth. r(d) = c ln(d/d0); locations at a distance less than the constant d0 from the tangent point are not shown. (A short numerical sketch comparing several of these radial functions follows below.)
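The radial functions listed above can be compared numerically. The following is a minimal illustrative sketch only, assuming a spherical Earth of radius R and an arbitrary scale constant c; it evaluates the stated r(d) formulas in terms of the angular distance d/R and is not taken from any cartographic library.

```python
import math

R = 6371.0  # mean Earth radius in km (spherical assumption)
c = 1.0     # arbitrary scale constant

def radial_distances(d_km):
    """Evaluate r(d) for several azimuthal projections at surface distance d."""
    delta = d_km / R  # angular distance from the tangent point; c absorbs R
    return {
        "gnomonic": c * math.tan(delta),               # r = c tan(d/R)
        "stereographic": c * math.tan(delta / 2),      # r = c tan(d/2R)
        "orthographic": c * math.sin(delta),           # r = c sin(d/R)
        "azimuthal equidistant": c * delta,            # r proportional to d
        "Lambert equal-area": c * math.sin(delta / 2), # r = c sin(d/2R)
    }

for d in (1000, 5000, 9000):  # sample great-circle distances in km
    values = ", ".join(f"{name}: {r:.3f}" for name, r in radial_distances(d).items())
    print(f"d = {d} km -> {values}")
```

As the descriptions above suggest, the gnomonic value grows without bound as d approaches a quarter of the Earth's circumference, while the orthographic value remains bounded.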
Polyhedral
Polyhedral map projections use a polyhedron to subdivide the globe into faces, and then project each face to a plane. The best-known polyhedral map projection is Buckminster Fuller's Dymaxion map.
Projections by preservation of a metric property
Conformal
Conformal, or orthomorphic, map projections preserve angles locally, implying that they map infinitesimal circles of constant size anywhere on the Earth to infinitesimal circles of varying sizes on the map. In contrast, mappings that are not conformal distort most such small circles into ellipses of distortion. An important consequence of conformality is that relative angles at each point of the map are correct, and the local scale (although varying throughout the map) in every direction around any one point is constant. These are some conformal projections:
Mercator: Rhumb lines are represented by straight segments
Transverse Mercator
Stereographic: Any circle of a sphere, great and small, maps to a circle or straight line.
Roussilhe
Lambert conformal conic
Peirce quincuncial projection
Adams hemisphere-in-a-square projection
Guyou hemisphere-in-a-square projection
Equal-area
Equal-area maps preserve area measure, generally distorting shapes in order to do so. Equal-area maps are also called equivalent or authalic. These are some projections that preserve area:
Albers conic
Boggs eumorphic
Bonne
Bottomley
Collignon
Cylindrical equal-area
Eckert II, IV and VI
Equal Earth
Gall orthographic (also known as Gall–Peters, or Peters, projection)
Goode's homolosine
Hammer
Hobo–Dyer
Lambert azimuthal equal-area
Lambert cylindrical equal-area
Mollweide
Sinusoidal
Strebe 1995
Snyder's equal-area polyhedral projection, used for geodesic grids.
Tobler hyperelliptical
Werner
Equidistant
If the length of the line segment connecting two projected points on the plane is proportional to the geodesic (shortest surface) distance between the two unprojected points on the globe, then we say that distance has been preserved between those two points. An equidistant projection preserves distances from one or two special points to all other points. The special point or points may get stretched into a line or curve segment when projected. In that case, the point on the line or curve segment closest to the point being measured to must be used to measure the distance.
Plate carrée: Distances from the two poles are preserved, in equatorial aspect.
Azimuthal equidistant: Distances from the center and edge are preserved.
Equidistant conic: Distances from the two poles are preserved, in equatorial aspect.
Werner cordiform: Distances from the North Pole are preserved, in equatorial aspect.
Two-point equidistant: Two "control points" are arbitrarily chosen by the map maker; distances from each control point are preserved.
Gnomonic
Great circles are displayed as straight lines:
Gnomonic projection
Retroazimuthal
Direction to a fixed location B (the bearing at the starting location A of the shortest route) corresponds to the direction on the map from A to B:
Littrow—the only conformal retroazimuthal projection
Hammer retroazimuthal—also preserves distance from the central point
Craig retroazimuthal aka Mecca or Qibla—also has vertical meridians
Compromise projections
Compromise projections give up the idea of perfectly preserving metric properties, seeking instead to strike a balance between distortions, or to simply make things look right. Most of these types of projections distort shape in the polar regions more than at the equator. These are some compromise projections:
Robinson
van der Grinten
Miller cylindrical
Winkel Tripel
Buckminster Fuller's Dymaxion
B. J. S. Cahill's Butterfly Map
Kavrayskiy VII projection
Wagner VI projection
Chamberlin trimetric
Oronce Finé's cordiform
AuthaGraph projection
Suitability of projections for application
The mathematics of projection do not permit any particular map projection to be best for everything. Something will always be distorted. Thus, many projections exist to serve the many uses of maps and their vast range of scales.
Modern national mapping systems typically employ a transverse Mercator or close variant for large-scale maps in order to preserve conformality and low variation in scale over small areas. For smaller-scale maps, such as those spanning continents or the entire world, many projections are in common use according to their fitness for the purpose, such as Winkel tripel, Robinson and Mollweide. Reference maps of the world often appear on compromise projections. Due to distortions inherent in any map of the world, the choice of projection becomes largely one of aesthetics.
Thematic maps normally require an equal area projection so that phenomena per unit area are shown in correct proportion.
However, representing area ratios correctly necessarily distorts shapes more than many maps that are not equal-area.
The Mercator projection, developed for navigational purposes, has often been used in world maps where other projections would have been more appropriate. This problem has long been recognized even outside professional circles; it was, for example, the subject of a 1943 New York Times editorial.
A controversy in the 1980s over the Peters map motivated the American Cartographic Association (now the Cartography and Geographic Information Society) to produce a series of booklets (including Which Map Is Best) designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution recommending against using any rectangular projection (including Mercator and Gall–Peters) for reference maps of the world.
See also
References
Citations
Sources
Fran Evanisko, American River College, lectures for Geography 20: "Cartographic Design for GIS", Fall 2002
Map Projections—PDF versions of numerous projections, created and released into the Public Domain by Paul B. Anderson ... member of the International Cartographic Association's Commission on Map Projections
External links
An Album of Map Projections, U.S. Geological Survey Professional Paper 1453, by John P. Snyder (USGS) and Philip M. Voxland (U. Minnesota), 1989.
A Cornucopia of Map Projections, a visualization of distortion on a vast array of map projections in a single image.
G.Projector, free software can render many projections (NASA GISS).
Color images of map projections and distortion (Mapthematics.com).
Geometric aspects of mapping: map projection (KartoWeb.itc.nl).
Java world map projections, Henry Bottomley (SE16.info).
Map Projections (MathWorld).
MapRef: The Internet Collection of MapProjections and Reference Systems in Europe
PROJ.4 – Cartographic Projections Library.
Projection Reference Table of examples and properties of all common projections (RadicalCartography.net).
Melita Kennedy (Esri).
World Map Projections, Stephen Wolfram based on work by Yu-Sung Chang (Wolfram Demonstrations Project).
Compare Map Projections
"the true size" page show size of countries without distortion from Mercator projection
Cartography
Infographics
Descriptive geometry
Geodesy | Map projection | [
"Mathematics"
] | 6,685 | [
"Map projections",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
51,834 | https://en.wikipedia.org/wiki/Geneva%20Protocol | The Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or other Gases, and of Bacteriological Methods of Warfare, usually called the Geneva Protocol, is a treaty prohibiting the use of chemical and biological weapons in international armed conflicts. It was signed at Geneva on 17 June 1925 and entered into force on 8 February 1928. It was registered in League of Nations Treaty Series on 7 September 1929. The Geneva Protocol is a protocol to the Convention for the Supervision of the International Trade in Arms and Ammunition and in Implements of War signed on the same date, and followed the Hague Conventions of 1899 and 1907.
It prohibits the use of "asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices" and "bacteriological methods of warfare". This is now understood to be a general prohibition on chemical weapons and biological weapons between state parties, but has nothing to say about production, storage or transfer. Later treaties did cover these aspects – the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).
A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying to other parties and that these obligations would cease to apply if the prohibited weapons were used against them.
Negotiation history
In the Hague Conventions of 1899 and 1907, the use of dangerous chemical agents was outlawed. In spite of this, the First World War saw large-scale chemical warfare. France used tear gas in 1914, but the first large-scale successful deployment of chemical weapons was by the German Empire in Ypres, Belgium in 1915, when chlorine gas was released as part of a German attack at the Battle of Gravenstafel. Following this, a chemical arms race began, with the United Kingdom, Russia, Austria-Hungary, the United States, and Italy joining France and Germany in the use of chemical weapons.
This resulted in the development of a range of horrific chemicals affecting lungs, skin, or eyes. Some were intended to be lethal on the battlefield, like hydrogen cyanide, and efficient methods of deploying agents were invented. At least 124,000 tons were produced during the war. In 1918, about one grenade out of three was filled with dangerous chemical agents. Around 500,000 to 1.3 million casualties of the conflict were attributed to the use of gas, and the psychological impact on troops may have been far greater. A few thousand civilians also became casualties as collateral damage or due to production accidents.
The Treaty of Versailles included provisions that banned Germany from either manufacturing or importing chemical weapons. Similar treaties banned chemical weapons in the First Austrian Republic, the Kingdom of Bulgaria, and the Kingdom of Hungary, all of which belonged to the losing side, the Central Powers. The Russian Bolsheviks and Britain continued to use chemical weapons in the Russian Civil War and possibly in the Middle East in 1920.
Three years after World War I, the Allies wanted to reaffirm the Treaty of Versailles, and in 1922 the United States introduced the Treaty relating to the Use of Submarines and Noxious Gases in Warfare at the Washington Naval Conference. Four of the war victors, the United States, the United Kingdom, the Kingdom of Italy and the Empire of Japan, gave consent for ratification, but it failed to enter into force as the French Third Republic objected to the submarine provisions of the treaty.
At the 1925 Geneva Conference for the Supervision of the International Traffic in Arms the French suggested a protocol for non-use of poisonous gases. The Second Polish Republic suggested the addition of bacteriological weapons. It was signed on 17 June.
Historical assessment
Eric Croddy, assessing the Protocol in 2005, took the view that the historic record showed it had been largely ineffectual. Specifically it does not prohibit:
use against not-ratifying parties
retaliation using such weapons, so effectively making it a no-first-use agreement
use within a state's own borders in a civil conflict
research and development of such weapons, or stockpiling them
In light of these shortcomings, Jack Beard notes that "the Protocol (...) resulted in a legal framework that allowed states to conduct [biological weapons] research, develop new biological weapons, and ultimately engage in [biological weapons] arms races".
As such, the use of chemical weapons by a state inside its own territory against its own citizens or subjects, as employed by Spain in the Rif War until 1927, by Japan against Seediq indigenous rebels in Taiwan (then part of the Japanese colonial empire) during the 1930 Musha Incident, by Iraq against ethnic Kurdish civilians in the 1988 attack on Halabja during the Iran–Iraq War, and by Syrian government or opposition forces during the Syrian civil war, did not breach the Geneva Protocol; nor did use against Black Lives Matter protesters in the United States.
Despite the U.S. having been a proponent of the protocol, the U.S. military and American Chemical Society lobbied against it, causing the U.S. Senate not to ratify the protocol until 1975, the same year when the United States ratified the Biological Weapons Convention.
Violations
Several state parties have deployed chemical weapons in combat in spite of the treaty. Italy used mustard gas against the Ethiopian Empire in the Second Italo-Ethiopian War. In World War II, Germany employed chemical weapons in combat on several occasions along the Black Sea, notably in Sevastopol, where toxic smoke was used to force Russian resistance fighters out of caverns below the city. German forces also used asphyxiating gas in the catacombs of Odesa in November 1941, following their capture of the city, and again in late May 1942 during the Battle of the Kerch Peninsula in eastern Crimea, in an attack perpetrated by the Wehrmacht's Chemical Forces and organized by a special detail of SS troops with the help of a field engineer battalion. After the battle in mid-May 1942, the Germans gassed and killed almost 3,000 of the besieged and non-evacuated Red Army soldiers and Soviet civilians hiding in a series of caves and tunnels in the nearby Adzhimushkay quarry.
During the 1980-1988 Iran-Iraq War, Iraq is known to have employed a variety of chemical weapons against Iranian forces. Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the war.
Subsequent interpretation of the protocol
In 1966, United Nations General Assembly resolution 2162B called, without any dissent, for all states to strictly observe the protocol. In 1969, United Nations General Assembly resolution 2603 (XXIV) declared that the prohibitions on use of chemical and biological weapons in international armed conflicts, as embodied in the protocol (though restated in a more general form), were generally recognized rules of international law. Following this, there was discussion of whether the main elements of the protocol form part of customary international law, and this is now widely accepted to be the case.
There have been differing interpretations over whether the protocol covers the use of harassing agents, such as adamsite and tear gas, and defoliants and herbicides, such as Agent Orange, in warfare. The 1977 Environmental Modification Convention prohibits the military use of environmental modification techniques having widespread, long-lasting or severe effects. Many states do not regard this as a complete ban on the use of herbicides in warfare, but it does require case-by-case consideration. The 1993 Chemical Weapons Convention effectively banned riot control agents from being used as a method of warfare, though still permitting it for riot control.
In recent times, the protocol has been interpreted to cover non-international armed conflicts as well as international ones. In 1995, an appellate chamber of the International Criminal Tribunal for the former Yugoslavia stated that "there had undisputedly emerged a general consensus in the international community on the principle that the use of chemical weapons is also prohibited in internal armed conflicts." In 2005, the International Committee of the Red Cross concluded that customary international law includes a ban on the use of chemical weapons in internal as well as international conflicts.
However, such views drew criticism from legal authors. They noted that most chemical arms control agreements stem from the context of international conflicts. Furthermore, they argued, the application of customary international law to banning chemical warfare in non-international conflicts fails to meet two requirements: state practice and opinio juris. Jillian Blake and Aqsa Mahmud cited the periodic use of chemical weapons in non-international conflicts since the end of WWI (as stated above), as well as the lack of existing international humanitarian law (such as the Geneva Conventions) and of national legislation and manuals prohibiting their use in such conflicts. Anne Lorenzat stated the 2005 ICRC study was rooted in "political and operational issues rather than legal ones".
State parties
To become party to the Protocol, states must deposit an instrument with the government of France (the depositary power). Thirty-eight states originally signed the Protocol. France was the first signatory to ratify the Protocol on 10 May 1926. El Salvador, the final signatory to ratify the Protocol, did so on 26 February 2008. As of April 2021, 146 states have ratified, acceded to, or succeeded to the Protocol, most recently Colombia on 24 November 2015.
Reservations
A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying with respect to other parties to the Protocol and/or that these obligations would cease to apply with respect to any state, or its allies, which used the prohibited weapons. Several Arab states also declared that their ratification did not constitute recognition of, or diplomatic relations with, Israel, or that the provision of the Protocol were not binding with respect to Israel.
Generally, reservations not only modify treaty provisions for the reserving party, but also symmetrically modify the provisions for previously ratifying parties in dealing with the reserving party. Numerous states have subsequently withdrawn their reservations, including the former Czechoslovakia in 1990 prior to its dissolution; the Russian reservation on biological weapons, which had "preserved the right to retaliate in kind if attacked" with them, was withdrawn by President Yeltsin.
According to the Vienna Convention on Succession of States in respect of Treaties, states which succeed to a treaty after gaining independence from a state party "shall be considered as maintaining any reservation to that treaty which was applicable at the date of the succession of States in respect of the territory to which the succession of States relates unless, when making the notification of succession, it expresses a contrary intention or formulates a reservation which relates to the same subject matter as that reservation." While some states have explicitly either retained or renounced their reservations inherited on succession, states which have not clarified their position on their inherited reservations are listed as "implicit" reservations.
Non-signatory states
The remaining UN member states and UN observers that have not acceded or succeeded to the Protocol are:
Chemical weapons prohibitions
References
Further reading
Bunn, George. "Gas and germ warfare: international legal history and present status." Proceedings of the National Academy of Sciences of the United States of America 65.1 (1970): 253+. online
Webster, Andrew. "Making Disarmament Work: The implementation of the international disarmament provisions in the League of Nations Covenant, 1919–1925." Diplomacy and Statecraft 16.3 (2005): 551–569.
External links
The text of the protocol
Weapons of War: Poison Gas
Biological warfare
Chemical warfare
Chemical weapons demilitarization
Arms control treaties
Human rights instruments
Hague Conventions of 1899 and 1907
Treaties concluded in 1925
Treaties entered into force in 1928
Treaties of the Democratic Republic of Afghanistan
Treaties of the People's Socialist Republic of Albania
Treaties of Algeria
Treaties of the People's Republic of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Australia
Treaties of the First Austrian Republic
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belgium
Treaties of the People's Republic of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of the military dictatorship in Brazil
Treaties of the Kingdom of Bulgaria
Treaties of Burkina Faso
Treaties of the People's Republic of Kampuchea
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chile
Treaties of the Republic of China (1912–1949)
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Denmark
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of the Kingdom of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Estonia
Treaties of the Ethiopian Empire
Treaties of Fiji
Treaties of Finland
Treaties of the French Third Republic
Treaties of the Gambia
Treaties of the Weimar Republic
Treaties of Ghana
Treaties of the Kingdom of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea-Bissau
Treaties of the Holy See
Treaties of the Hungarian People's Republic
Treaties of Iceland
Treaties of British India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Mandatory Iraq
Treaties of Ireland
Treaties of Israel
Treaties of the Kingdom of Italy (1861–1946)
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kenya
Treaties of North Korea
Treaties of South Korea
Treaties of Kuwait
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Republic
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Malta
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Morocco
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Pakistan
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of the Second Polish Republic
Treaties of the Estado Novo (Portugal)
Treaties of Qatar
Treaties of the Kingdom of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of the Union of South Africa
Treaties of the Soviet Union
Treaties of Spain under the Restoration
Treaties of the Dominion of Ceylon
Treaties of the Democratic Republic of the Sudan
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tanzania
Treaties of Thailand
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Venezuela
Treaties of Vietnam
Treaties of the Yemen Arab Republic
Treaties of South Yemen
Treaties extended to Curaçao and Dependencies
Treaties extended to Greenland
Treaties extended to the Faroe Islands
Treaties extended to the Dutch East Indies
Treaties extended to Surinam (Dutch colony)
Treaties concluded in Geneva | Geneva Protocol | [
"Chemistry",
"Biology"
] | 2,989 | [
"Biological warfare",
"Chemical weapons demilitarization",
"nan",
"Chemical weapons"
] |
51,836 | https://en.wikipedia.org/wiki/Chemical%20Weapons%20Convention | The Chemical Weapons Convention (CWC), officially the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction, is an arms control treaty administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), an intergovernmental organization based in The Hague, The Netherlands. The treaty entered into force on 29 April 1997. It prohibits the use of chemical weapons, and the large-scale development, production, stockpiling, or transfer of chemical weapons or their precursors, except for very limited purposes (research, medical, pharmaceutical or protective). The main obligation of member states under the convention is to effect this prohibition, as well as the destruction of all current chemical weapons. All destruction activities must take place under OPCW verification.
193 states have become parties to the CWC and accept its obligations. Israel has signed but not ratified the agreement, while three other UN member states (Egypt, North Korea and South Sudan) have neither signed nor acceded to the treaty. Most recently, the State of Palestine deposited its instrument of accession to the CWC on 17 May 2018. In September 2013, Syria acceded to the convention as part of an agreement for the destruction of Syria's chemical weapons.
As of February 2021, 98.39% of the world's declared chemical weapons stockpiles had been destroyed. The convention has provisions for systematic evaluation of chemical production facilities, as well as for investigations of allegations of use and production of chemical weapons based on the intelligence of other state parties.
Some chemicals which have been used extensively in warfare but have numerous large-scale industrial uses (such as phosgene) are highly regulated; however, certain notable exceptions exist. Chlorine gas is highly toxic, but being a pure element and widely used for peaceful purposes, is not officially listed as a chemical weapon. Certain state powers (e.g. the former Assad regime of Syria) have continued regularly to manufacture such chemicals and deploy them in combat munitions. Although these chemicals are not specifically listed as controlled by the CWC, the use of any toxic chemical as a weapon (when used to produce fatalities solely or mainly through its toxic action) is in and of itself forbidden by the treaty. Other chemicals, such as white phosphorus, are highly toxic but are legal under the CWC when they are used by military forces for reasons other than their toxicity.
History
The CWC augments the Geneva Protocol of 1925, which bans the use of chemical and biological weapons in international armed conflicts, but not their development or possession. The CWC also includes extensive verification measures such as on-site inspections, in stark contrast to the 1975 Biological Weapons Convention (BWC), which lacks a verification regime.
After several changes of name and composition, the Eighteen Nation Committee on Disarmament (ENDC) evolved into the Conference on Disarmament (CD) in 1984. On 3 September 1992, the CD submitted to the U.N. General Assembly its annual report, which contained the text of the Chemical Weapons Convention. The General Assembly approved the convention on 30 November 1992, and the U.N. Secretary-General then opened the convention for signature in Paris on 13 January 1993. The CWC remained open for signature until its entry into force on 29 April 1997, 180 days after Hungary deposited the 65th instrument of ratification at the UN.
Organisation for the Prohibition of Chemical Weapons (OPCW)
The convention is administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), which acts as the legal platform for specification of the CWC provisions. The Conference of the States Parties is mandated to change the CWC and pass regulations on the implementation of CWC requirements. The Technical Secretariat of the organization conducts inspections to ensure compliance of member states. These inspections target destruction facilities (where constant monitoring takes place during destruction), chemical weapons production facilities which have been dismantled or converted for civil use, as well as inspections of the chemical industry. The Secretariat may furthermore conduct "investigations of alleged use" of chemical weapons and give assistance after use of chemical weapons.
The 2013 Nobel Peace Prize was awarded to the organization because it had, with the Chemical Weapons Convention, "defined the use of chemical weapons as a taboo under international law" according to Thorbjørn Jagland, Chairman of the Norwegian Nobel Committee.
Key points of the Convention
Prohibition of production and use of chemical weapons
Destruction (or monitored conversion to other functions) of chemical weapons production facilities
Destruction of all chemical weapons (including chemical weapons abandoned outside the state parties territory)
Assistance between State Parties and the OPCW in the case of use of chemical weapons
An OPCW inspection regime for the production of chemicals which might be converted to chemical weapons
International cooperation in the peaceful use of chemistry in relevant areas
Controlled substances
The convention distinguishes three classes of controlled substance, chemicals that can either be used as weapons themselves or used in the manufacture of weapons. The classification is based on the quantities of the substance produced commercially for legitimate purposes. Each class is split into Part A, which are chemicals that can be used directly as weapons, and Part B, which are chemicals useful in the manufacture of chemical weapons. Separate from the precursors, the convention defines toxic chemicals as "[a]ny chemical which through its chemical action on life processes can cause death, temporary incapacitation or permanent harm to humans or animals. This includes all such chemicals, regardless of their origin or of their method of production, and regardless of whether they are produced in facilities, in munitions or elsewhere."
Schedule 1 chemicals have few, or no uses outside chemical weapons. These may be produced or used for research, medical, pharmaceutical or chemical weapon defence testing purposes but production at sites producing more than 100 grams per year must be declared to the OPCW. A country is limited to possessing a maximum of 1 tonne of these materials. Examples are sulfur mustard and nerve agents, and substances which are solely used as precursor chemicals in their manufacture. A few of these chemicals have very small scale non-military applications, for example, milligram quantities of nitrogen mustard are used to treat certain cancers.
Schedule 2 chemicals have legitimate small-scale applications. Manufacture must be declared and there are restrictions on export to countries that are not CWC signatories. An example is thiodiglycol which can be used in the manufacture of mustard agents, but is also used as a solvent in inks.
Schedule 3 chemicals have large-scale uses apart from chemical weapons. Plants which manufacture more than 30 tonnes per year must be declared and can be inspected, and there are restrictions on export to countries which are not CWC signatories. Examples of these substances are phosgene (the most lethal chemical weapon employed in WWI), which has been used as a chemical weapon but which is also a precursor in the manufacture of many legitimate organic compounds (e.g. pharmaceutical agents and many common pesticides), and triethanolamine, used in the manufacture of nitrogen mustard but also commonly used in toiletries and detergents.
Many of the chemicals named in the schedules are simply examples from a wider class, defined with Markush-like language. For example, all chemicals in the class "O-Alkyl (<=C10, incl. cycloalkyl) alkyl (Me, Et, n-Pr or i-Pr) phosphonofluoridates" are controlled, despite only a few named examples being given, such as Soman.
This can make it more challenging for companies to identify whether chemicals they handle are subject to the CWC, especially Schedule 2 and 3 chemicals (such as alkylphosphorus chemicals). For example, Amgard 1045 is a flame retardant, but falls within Schedule 2B as part of the alkylphosphorus chemical class. This approach is also used in controlled drug legislation in many countries and such provisions are often termed "class-wide controls" or "generic statements".
Due to the added complexity these statements bring to identifying regulated chemicals, many companies choose to carry out these assessments computationally, examining a chemical's structure using in silico tools that compare it to the legislative statements, either with in-house systems maintained by the company or by using commercial compliance software.
A treaty party may declare a "single small-scale facility" that produces up to 1 tonne of Schedule 1 chemicals for research, medical, pharmaceutical or protective purposes each year, and also another facility may produce 10 kg per year for protective testing purposes. An unlimited number of other facilities may produce Schedule 1 chemicals, subject to a total 10 kg annual limit, for research, medical or pharmaceutical purposes, but any facility producing more than 100 grams must be declared.
The treaty also deals with carbon compounds called in the treaty "discrete organic chemicals", the majority of which exhibit moderate-high direct toxicity or can be readily converted into compounds with toxicity sufficient for practical use as a chemical weapon. These are any carbon compounds apart from long chain polymers, oxides, sulfides and metal carbonates, such as organophosphates. The OPCW must be informed of, and can inspect, any plant producing (or expecting to produce) more than 200 tonnes per year, or 30 tonnes if the chemical contains phosphorus, sulfur or fluorine, unless the plant solely produces explosives or hydrocarbons.
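As a purely illustrative sketch (not legal guidance, and using only the simplified thresholds paraphrased in the paragraphs above rather than the treaty text, which contains many further conditions), the declaration thresholds described here could be expressed as follows:

```python
def declaration_required(schedule, tonnes_per_year, contains_psf=False):
    """Very simplified sketch of CWC-style declaration thresholds.

    schedule: 1, 2, 3, or "DOC" (discrete organic chemical outside the schedules)
    tonnes_per_year: annual production at the plant
    contains_psf: True if a DOC contains phosphorus, sulfur or fluorine
    Thresholds are paraphrased from the prose above and omit many details
    (aggregation rules, concentration cut-offs, purpose criteria, ...).
    """
    if schedule == 1:
        return tonnes_per_year > 0.0001   # production above 100 g/year must be declared
    if schedule == 2:
        return True                        # manufacture must be declared
    if schedule == 3:
        return tonnes_per_year > 30        # plants above 30 t/year must be declared
    if schedule == "DOC":
        return tonnes_per_year > (30 if contains_psf else 200)
    raise ValueError("unknown schedule")

print(declaration_required(3, 45))             # True: above 30 t/year
print(declaration_required("DOC", 50, True))   # True: PSF-containing DOC above 30 t/year
print(declaration_required("DOC", 50))         # False: below the 200 t/year DOC threshold
```

A real compliance assessment would also have to handle mixtures, aggregation across a plant site and the purpose of production, none of which this sketch attempts.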
Category definitions
Chemical weapons are divided into three categories:
Category 1 - based on Schedule 1 substances
Category 2 - based on non-Schedule 1 substances
Category 3 - devices and equipment designed to use chemical weapons, without the substances themselves
Member states
Before the CWC came into force in 1997, 165 states signed the convention, allowing them to ratify the agreement after obtaining domestic approval. Following the treaty's entry into force, it was closed for signature and the only method for non-signatory states to become a party was through accession. As of March 2021, 193 states, representing over 98 percent of the world's population, are party to the CWC. Of the four United Nations member states that are not parties to the treaty, Israel has signed but not ratified the treaty, while Egypt, North Korea, and South Sudan have neither signed nor acceded to the convention. Taiwan, though not a member state, has stated on 27 August 2002 that it fully complies with the treaty.
Key organizations of member states
Member states are represented at the OPCW by their Permanent Representative. This function is generally combined with the function of Ambassador. For the preparation of OPCW inspections and preparation of declarations, member states have to constitute a National Authority.
World stockpile of chemical weapons
A total of 72,304 metric tonnes of chemical agent and 97 production facilities have been declared to the OPCW.
Treaty deadlines
The treaty set up several steps with deadlines toward complete destruction of chemical weapons, with a procedure for requesting deadline extensions. No country reached total elimination by the original treaty date although several have finished under allowed extensions.
Progress of destruction
At the end of 2019, 70,545 of 72,304 metric tonnes (97.51%) of chemical agent had been verifiably destroyed, as had more than 57% (4.97 million) of chemical munitions and containers.
Seven state parties have completed the destruction of their declared stockpiles: Albania, India, Iraq, Libya, Syria, the United States, and an unspecified state party (believed to be South Korea). Russia also completed the destruction of its declared stockpile. According to the US Arms Control Association, the poisoning of Sergei and Yulia Skripal in 2018 and the poisoning of Alexei Navalny in 2020 indicated that Russia maintained an illicit chemical weapons program.
Japan and China in October 2010 began the destruction of World War II era chemical weapons abandoned by Japan in China by means of mobile destruction units and reported destruction of 35,203 chemical weapons (75% of the Nanjing stockpile).
Iraqi stockpile
The U.N. Security Council ordered the dismantling of Iraq's chemical weapon stockpile in 1991. By 1998, UNSCOM inspectors had accounted for the destruction of 88,000 filled and unfilled chemical munitions, over 690 metric tons of weaponized and bulk chemical agents, approximately 4,000 tonnes of precursor chemicals, and 980 pieces of key production equipment. The UNSCOM inspectors left in 1998.
In 2009, before Iraq joined the CWC, the OPCW reported that the United States military had destroyed almost 5,000 old chemical weapons in open-air detonations since 2004. These weapons, produced before the 1991 Gulf War, contained sarin and mustard agents but were so badly corroded that they could not have been used as originally intended.
When Iraq joined the CWC in 2009, it declared "two bunkers with filled and unfilled chemical weapons munitions, some precursors, as well as five former chemical weapons production facilities" according to OPCW Director General Rogelio Pfirter. The bunker entrances were sealed with 1.5 meters of reinforced concrete in 1994 under UNSCOM supervision. As of 2012, the plan to destroy the chemical weapons was still being developed, in the face of significant difficulties. In 2014, ISIS took control of the site.
On 13 March 2018, the Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW), Ambassador Ahmet Üzümcü, congratulated the Government of Iraq on the completion of the destruction of the country's chemical weapons remnants.
Syrian destruction
Following the August 2013 Ghouta chemical attack, Syria, which had long been suspected of possessing chemical weapons, acknowledged them in September 2013 and agreed to put them under international supervision. On 14 September Syria deposited its instrument of accession to the CWC with the United Nations as the depositary and agreed to its provisional application pending entry into force effective 14 October. An accelerated destruction schedule was devised by Russia and the United States on 14 September, and was endorsed by United Nations Security Council Resolution 2118 and the OPCW Executive Council Decision EC-M-33/DEC.1. Their deadline for destruction was the first half of 2014. Syria gave the OPCW an inventory of its chemical weapons arsenal and began its destruction in October 2013, 2 weeks before its formal entry into force, while applying the convention provisionally. All declared Category 1 materials were destroyed by August 2014. However, the Khan Shaykhun chemical attack in April 2017 indicated that undeclared stockpiles probably remained in the country. A chemical attack on Douma occurred on 7 April 2018 that killed at least 49 civilians with scores injured, and which has been blamed on the Assad government.
Controversy arose in November 2019 over the OPCW's finding on the Douma chemical weapons attack when Wikileaks published emails by an OPCW staff member saying a report on this incident "misrepresents the facts" and contains "unintended bias". The OPCW staff member questioned the report's finding that OPCW's inspectors had "sufficient evidence at this time to determine that chlorine, or another reactive chlorine-containing chemical, was likely released from cylinders". The staff member alleged this finding was "highly misleading and not supported by the facts" and said he would attach his own differing observations if this version of the report was released. On 25 November 2019, OPCW Director General Fernando Arias, in a speech to the OPCW's annual conference in The Hague, defended the Organization's report on the Douma incident, stating "While some of these diverse views continue to circulate in some public discussion forums, I would like to reiterate that I stand by the independent, professional conclusion" of the probe.
Financial support for destruction
Financial support for the Albanian and Libyan stockpile destruction programmes was provided by the United States. Russia received support from a number of countries, including the United States, the United Kingdom, Germany, the Netherlands, Italy and Canada; with some $2 billion given by 2004. Costs for Albania's program were approximately US$48 million. The United States has spent $20 billion and expected to spend a further $40 billion.
Known chemical weapons production facilities
Fourteen states parties declared chemical weapons production facilities (CWPFs):
1 non-disclosed state party (referred to as "A State Party" in OPCW communications; said to be South Korea)
Currently all 97 declared production facilities have been deactivated and certified as either destroyed (74) or converted (23) to civilian use.
See also
Related international law
Australia Group of countries and the European Commission that helps member nations identify exports which need to be controlled so as not to contribute to the spread of chemical and biological weapons
1990 US-Soviet Arms Control Agreement
General-purpose criterion, a concept in international law that broadly governs international agreements with respect to chemical weapons
Geneva Protocol, a treaty prohibiting the use of chemical and biological weapons among signatory states in international armed conflicts
Worldwide treaties for other types of weapons of mass destruction
Biological Weapons Convention (BWC) (states parties)
Nuclear Non-Proliferation Treaty (NPT) (states parties)
Treaty on the Prohibition of Nuclear Weapons (TPNW) (states parties)
Chemical weapons
Chemical warfare
Weapons of mass destruction
Tear gas
Related remembrance day
Day of Remembrance for all Victims of Chemical Warfare
References
External links
Full text of the Chemical Weapons Convention, OPCW
Online text of the Chemical Weapons Convention: Articles, Annexes including Chemical Schedules, OPCW
Fact Sheets , OPCW
Chemical Weapons Convention: Ratifying Countries, OPCW
Chemical Weapons Convention Website, United States
The Chemical Weapons Convention at a Glance , Arms Control Association
Chemical Warfare Chemicals and Precursors, Chemlink Pty Ltd, Australia
Introductory note by Michael Bothe, procedural history note and audiovisual material on the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction in the Historic Archives of the United Nations Audiovisual Library of International Law
Lecture by Santiago Oñate Laborde entitled The Chemical Weapons Convention: an Overview in the Lecture Series of the United Nations Audiovisual Library of International Law
Arms control treaties
Chemical warfare
Human rights instruments
Chemical weapons demilitarization
Non-proliferation treaties
Treaties concluded in 1993
Treaties entered into force in 1997
Treaties of the Afghan Transitional Administration
Treaties of Albania
Treaties of Algeria
Treaties of Andorra
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Brunei
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Myanmar
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of the Democratic Republic of the Congo
Treaties of the Cook Islands
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Denmark
Treaties of the Dominican Republic
Treaties of Djibouti
Treaties of Dominica
Treaties of Ecuador
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of Ethiopia
Treaties of the Federated States of Micronesia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Haiti
Treaties of the Holy See
Treaties of Honduras
Treaties of Hungary
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Iraq
Treaties of Ireland
Treaties of Italy
Treaties of Jamaica
Treaties of Jordan
Treaties of Japan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of North Macedonia
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Namibia
Treaties of Nauru
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Niue
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Samoa
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of Somalia
Treaties of South Africa
Treaties of South Korea
Treaties of Spain
Treaties of Sri Lanka
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Tanzania
Treaties of Thailand
Treaties of Timor-Leste
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Tuvalu
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties establishing intergovernmental organizations
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties extended to Guernsey
Treaties extended to Jersey
Treaties extended to the Isle of Man
Treaties extended to Anguilla
Treaties extended to Bermuda
Treaties extended to the British Antarctic Territory
Treaties extended to the British Indian Ocean Territory
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to South Georgia and the South Sandwich Islands
Treaties extended to Akrotiri and Dhekelia
Treaties extended to the Turks and Caicos Islands
Treaties extended to Greenland
Treaties extended to the Faroe Islands | Chemical Weapons Convention | [
"Chemistry"
] | 4,650 | [
"Chemical weapons demilitarization",
"nan",
"Chemical weapons"
] |
5,540,555 | https://en.wikipedia.org/wiki/Yotari | The yotari mouse is an autosomal recessive mutant. It has a mutated disabled homolog 1 (Dab1) gene. This mutant mouse is recognized by unstable gait ("Yota-ru" in Japanese means "unstable gait") and tremor and by early deaths around the time of weaning. The cytoarchitectures of cerebellar and cerebral cortices and hippocampal formation of the yotari mouse are abnormal. These malformations resemble those of reeler mouse.
References
Molecular neuroscience
Molecular genetics | Yotari | [
"Chemistry",
"Biology"
] | 119 | [
"Molecular neuroscience",
"Molecular genetics",
"Molecular biology"
] |
5,540,651 | https://en.wikipedia.org/wiki/Microwave%20transmission | Microwave transmission is the transmission of information by electromagnetic waves with wavelengths in the microwave frequency range of 300 MHz to 300 GHz (1 m - 1 mm wavelength) of the electromagnetic spectrum. Microwave signals are normally limited to the line of sight, so long-distance transmission using these signals requires a series of repeaters forming a microwave relay network. It is possible to use microwave signals in over-the-horizon communications using tropospheric scatter, but such systems are expensive and generally used only in specialist roles.
Although an experimental microwave telecommunication link across the English Channel was demonstrated in 1931, the development of radar in World War II provided the technology for practical exploitation of microwave communication. During the war, the British Army introduced the Wireless Set No. 10, which used microwave relays to multiplex eight telephone channels over long distances. A link across the English Channel allowed General Bernard Montgomery to remain in continual contact with his group headquarters in London.
In the post-war era, the development of microwave technology was rapid, which led to the construction of several transcontinental microwave relay systems in North America and Europe. In addition to carrying thousands of telephone calls at a time, these networks were also used to send television signals for cross-country broadcast, and later, computer data. Communication satellites took over the television broadcast market during the 1970s and 80s, and the introduction of long-distance fibre optic systems in the 1980s and especially 90s led to the rapid rundown of the relay networks, most of which are abandoned.
In recent years, there has been an explosive increase in use of the microwave spectrum by new telecommunication technologies such as wireless networks, and direct-broadcast satellites which broadcast television and radio directly into consumers' homes. Larger line-of-sight links are once again popular for handing connections between mobile telephone towers, although these are generally not organized into long relay chains.
Uses
Microwaves are widely used for point-to-point communications because their small wavelength allows conveniently-sized antennas to direct them in narrow beams, which can be pointed directly at the receiving antenna. This use of tightly-focused direct beams allows microwave transmitters in the same area to use the same frequencies, without interfering with each other as lower frequency radio waves would. This frequency reuse conserves scarce radio spectrum bandwidth. Another advantage is that the high frequency of microwaves gives the microwave band a very large information-carrying capacity; the microwave band has a bandwidth 30 times that of all the rest of the radio spectrum below it. A disadvantage is that microwaves are limited to line of sight propagation; they cannot pass around hills or mountains as lower frequency radio waves can.
Microwave radio transmission is commonly used in point-to-point communication systems on the surface of the Earth, in satellite communications, and in deep space radio communications. Other parts of the microwave radio band are used for radars, radio navigation systems, sensor systems, and radio astronomy.
The next higher band of the radio spectrum, between 30 GHz and 300 GHz, is called the "millimeter wave" band because its wavelengths range from 10 mm down to 1 mm. Radio waves in the millimeter wave band are strongly attenuated by the gases of the atmosphere, which limits their practical transmission distance to a few kilometers, not enough for long-distance communication. The electronic technologies needed in the millimeter wave band are also in an earlier state of development than those of the microwave band.
Wireless transmission of information
One-way and two-way telecommunication using communications satellite
Terrestrial microwave relay links in telecommunications networks including backbone or backhaul carriers in cellular networks
More recently, microwaves have been used for wireless power transmission.
Microwave radio relay
Microwave radio relay is a technology widely used in the 1950s and 1960s for transmitting information, such as long-distance telephone calls and television programs between two terrestrial points on a narrow beam of microwaves. In microwave radio relay, a microwave transmitter and directional antenna transmits a narrow beam of microwaves carrying many channels of information on a line of sight path to another relay station where it is received by a directional antenna and receiver, forming a fixed radio connection between the two points. The link was often bidirectional, using a transmitter and receiver at each end to transmit data in both directions. The requirement of a line of sight limits the separation between stations to the visual horizon, about . For longer distances, the receiving station could function as a relay, retransmitting the received information to another station along its journey. Chains of microwave relay stations were used to transmit telecommunication signals over transcontinental distances. Microwave relay stations were often located on tall buildings and mountaintops, with their antennas on towers to get maximum range.
Beginning in the 1950s, networks of microwave relay links, such as the AT&T Long Lines system in the U.S., carried long-distance telephone calls and television programs between cities. The first system, dubbed TDX and built by AT&T, connected New York and Boston in 1947 with a series of eight radio relay stations. Through the 1950s, they deployed a network of a slightly improved version across the U.S., known as TD2. These included long daisy-chained links that traversed mountain ranges and spanned continents. The launch of communication satellites in the 1970s provided a cheaper alternative. Much of the transcontinental traffic is now carried by satellites and optical fibers, but microwave relay remains important for shorter distances.
Planning
Because in microwave transmission the waves travel in narrow beams confined to a line-of-sight path from one antenna to the other, they do not interfere with other microwave equipment, so nearby microwave links can use the same frequencies. The antennas must therefore be highly directional (high gain), and are installed in elevated locations such as large radio towers in order to be able to avoid the obstructions closer to the ground and transmit across long distances. Typical types of antenna used in radio relay link installations are parabolic antennas, dielectric lens, and horn-reflector antennas, which have a diameter of up to . Highly directive antennas permit an economical use of the available frequency spectrum, despite long transmission distances.
Because of the high frequencies used, a line-of-sight path between the stations is required. Additionally, in order to avoid attenuation of the beam, an area around the beam called the first Fresnel zone must be free from obstacles. Obstacles in the signal field cause unwanted attenuation. High mountain peaks or ridges are often ideal positions for the antennas.
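To make the Fresnel-zone clearance requirement concrete, the radius of the n-th Fresnel zone at a point along the path can be computed from the link frequency and the distances to the two antennas using the standard formula r = sqrt(n·λ·d1·d2/(d1+d2)). The sketch below is illustrative only; the example frequency and hop length are assumed values, not figures from the text.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius(freq_hz, d1_m, d2_m, zone=1):
    """Radius of the n-th Fresnel zone at a point d1 metres from one antenna
    and d2 metres from the other."""
    wavelength = C / freq_hz
    return math.sqrt(zone * wavelength * d1_m * d2_m / (d1_m + d2_m))

# Example: 18 GHz hop of 10 km, potential obstacle at mid-path
r1 = fresnel_radius(18e9, 5_000, 5_000)
print(f"First Fresnel zone radius at mid-path: {r1:.2f} m")  # ~6.5 m
```

Planners typically require that at least roughly 60% of the first Fresnel zone be free of obstructions, although the exact clearance rule depends on the design standard used.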
In addition to the use of conventional repeaters with back-to-back radios transmitting on different frequencies, obstructions in microwave paths can also be dealt with by using passive repeaters or on-frequency repeaters.
Obstacles, the curvature of the Earth, the geography of the area and reception issues arising from the use of nearby land (such as in manufacturing and forestry) are important issues to consider when planning radio links. In the planning process, it is essential that "path profiles" are produced, which provide information about the terrain and Fresnel zones affecting the transmission path. The presence of a water surface, such as a lake or river, along the path also must be taken into consideration since it can reflect the beam, and the direct and reflected beam can interfere with each other at the receiving antenna, causing multipath fading. Multipath fades are usually deep only in a small spot and a narrow frequency band, so space and/or frequency diversity schemes can be applied to mitigate these effects.
The effects of atmospheric stratification typically cause the radio path to bend downward, so a greater distance is possible because the equivalent Earth radius increases from to about (the 4/3 equivalent radius effect). Rare temperature, humidity and pressure profiles versus height may produce large deviations and distortion of the propagation and affect transmission quality. High-intensity rain and snow, which cause rain fade, must also be considered as an impairment factor, especially at frequencies above 10 GHz. All of the detrimental factors mentioned in this section, collectively known as path loss, make it necessary to compute suitable power margins in order to keep the link operational for a high percentage of time, such as the standard 99.99% or 99.999% used in 'carrier class' services of most telecommunication operators.
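As an illustration of such a margin computation, the sketch below estimates the received power and fade margin of a hypothetical hop from the free-space path loss; all figures (transmit power, antenna gains, feeder loss, receiver threshold) are assumed example values, not data from any particular link.

```python
import math

def free_space_path_loss_dB(distance_km, freq_GHz):
    # FSPL (dB) = 92.45 + 20*log10(d in km) + 20*log10(f in GHz)
    return 92.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_GHz)

def received_power_dBm(tx_dBm, tx_gain_dBi, rx_gain_dBi, distance_km, freq_GHz,
                       extra_losses_dB=0.0):
    # Link budget: transmit power plus antenna gains minus path and miscellaneous losses
    return (tx_dBm + tx_gain_dBi + rx_gain_dBi
            - free_space_path_loss_dB(distance_km, freq_GHz) - extra_losses_dB)

# Hypothetical 40 km hop at 6 GHz with 38 dBi dishes and 3 dB of feeder loss
prx = received_power_dBm(tx_dBm=30, tx_gain_dBi=38, rx_gain_dBi=38,
                         distance_km=40, freq_GHz=6, extra_losses_dB=3)
fade_margin_dB = prx - (-75)       # assumed receiver threshold of -75 dBm
print(round(prx, 1), round(fade_margin_dB, 1))   # about -37.1 dBm and 37.9 dB
```

The resulting margin is then compared against the fading statistics of the hop to check whether an availability target such as 99.99% can be met.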
The longest known microwave radio relay crosses the Red Sea with a hop between Jebel Erba ( a.s.l., , Sudan) and Jebel Dakka ( a.s.l., , Saudi Arabia). The link was built in 1979 by Telettra to transmit 300 telephone channels and one TV signal, in the 2 GHz frequency band. (Hop distance is the distance between two microwave stations).
The previous considerations represent typical problems characterizing terrestrial radio links using microwaves for so-called backbone networks: hop lengths of a few tens of kilometers (typically ) were widely used until the 1990s. Frequency bands were below 10 GHz and, above all, the information to be transmitted was a stream with a fixed capacity block. The target was to supply the requested availability for the whole block (plesiochronous digital hierarchy, PDH, or synchronous digital hierarchy, SDH). Fading and/or multipath affecting the link for short periods during the day had to be counteracted by the diversity architecture. During the 1990s, microwave radio links began to be widely used for urban links in cellular networks. Requirements regarding link distance changed to shorter hops (less than , typically ), and frequency increased to bands between 11 and 43 GHz and more recently, up to 86 GHz (E-band). Furthermore, link planning deals more with intense rainfall and less with multipath, so diversity schemes became less used. Another big change that occurred during the last decade was an evolution toward packet radio transmission. Therefore, new countermeasures, such as adaptive modulation, have been adopted.
The emitted power is regulated for cellular and microwave systems. These microwave transmissions use emitted power typically from 0.03 to 0.30 W, radiated by a parabolic antenna on a narrow beam diverging by a few degrees (1 to 3-4). The microwave channel arrangement is regulated by International Telecommunication Union (ITU-R) and local regulations (ETSI, FCC). In the last decade the dedicated spectrum for each microwave band has become extremely crowded, motivating the use of techniques to increase transmission capacity such as frequency reuse, polarization-division multiplexing, XPIC, MIMO.
History
The history of radio relay communication began in 1898 with the publication by Johann Mattausch in the Austrian journal, Zeitschrift für Elektrotechnik. But his proposal was primitive and not suitable for practical use. The first experiments with radio repeater stations to relay radio signals were done in 1899 by Emile Guarini-Foresio. However the low frequency and medium frequency radio waves used during the first 40 years of radio proved to be able to travel long distances by ground wave and skywave propagation.
In 1931, an Anglo-French consortium headed by Andre C. Clavier demonstrated an experimental microwave relay link across the English Channel using dishes. Telephony, telegraph, and facsimile data was transmitted over the bidirectional 1.7 GHz beams between Dover, UK, and Calais, France. The radiated power, produced by a miniature Barkhausen–Kurz tube located at the dish's focus, was one-half watt. A 1933 military microwave link between airports at St. Inglevert, France, and Lympne, UK, a distance of , was followed in 1935 by a 300 MHz telecommunication link, the first commercial microwave relay system.
The development of radar during World War II provided much of the microwave technology which made practical microwave communication links possible, particularly the klystron oscillator and techniques of designing parabolic antennas. Though not commonly known, the British Army used the Wireless Set Number 10 in this role during World War II. The need for radio relay did not really begin until the 1940s exploitation of microwaves, which traveled by line of sight and so were limited to a propagation distance of about by the visual horizon.
After the war, telephone companies used this technology to build large microwave radio relay networks to carry long-distance telephone calls. During the 1950s a unit of the US telephone carrier, AT&T Long Lines, built a transcontinental system of microwave relay links across the US which grew to carry the majority of US long distance telephone traffic, as well as television network signals. The main motivation in 1946 to use microwave radio instead of cable was that a large capacity could be installed quickly and at less cost. It was expected at that time that the annual operating costs for microwave radio would be greater than for cable. There were two main reasons that a large capacity had to be introduced suddenly: Pent-up demand for long-distance telephone service, because of the hiatus during the war years, and the new medium of television, which needed more bandwidth than radio. The prototype was called TDX and was tested with a connection between New York City and Murray Hill, the location of Bell Laboratories in 1946. The TDX system was set up between New York and Boston in 1947. The TDX was upgraded to the TD2 system, which used [the Morton tube, 416B and later 416C, manufactured by Western Electric] in the transmitters, and then later to TD3 that used solid-state electronics.
The microwave relay links to West Berlin during the Cold War were remarkable: because of the large distance between West Germany and Berlin, they had to be built and operated at the edge of technical feasibility. In addition to the telephone network, microwave relay links were also used for the distribution of TV and radio broadcasts. This included connections from the studios to the broadcasting systems distributed across the country, as well as links between the radio stations, for example for program exchange.
Military microwave relay systems continued to be used into the 1960s, when many of these systems were supplanted with tropospheric scatter or communication satellite systems. When the NATO military arm was formed, much of this existing equipment was transferred to communications groups. The typical communications systems used by NATO during that time period consisted of the technologies which had been developed for use by the telephone carrier entities in host countries. One example from the USA is the RCA CW-20A 1–2 GHz microwave relay system which utilized flexible UHF cable rather than the rigid waveguide required by higher frequency systems, making it ideal for tactical applications. The typical microwave relay installation or portable van had two radio systems (plus backup) connecting two line of sight sites. These radios would often carry 24 telephone channels frequency-division multiplexed on the microwave carrier (i.e. Lenkurt 33C FDM). Any channel could be designated to carry up to 18 teletype communications instead. Similar systems from Germany and other member nations were also in use.
Long-distance microwave relay networks were built in many countries until the 1980s, when the technology lost its share of fixed operation to newer technologies such as fiber-optic cable and communication satellites, which offer a lower cost per bit.
During the Cold War, the US intelligence agencies, such as the National Security Agency (NSA), were reportedly able to intercept Soviet microwave traffic using satellites such as Rhyolite/Aquacade. Much of the beam of a microwave link passes the receiving antenna and radiates toward the horizon, into space. By positioning a geosynchronous satellite in the path of the beam, the microwave beam can be received.
At the turn of the 21st century, microwave radio relay systems were used increasingly in portable radio applications. The technology is particularly suited to this application because of lower operating costs, a more efficient infrastructure, and provision of direct hardware access to the portable radio operator.
Microwave link
A microwave link is a communications system that uses a beam of radio waves in the microwave frequency range to transmit video, audio, or data between two locations, which can be from just a few feet or meters to several miles or kilometers apart. Microwave links are commonly used by television broadcasters to transmit programmes across a country, for instance, or from an outside broadcast back to a studio.
Mobile units can be camera mounted, allowing cameras the freedom to move around without trailing cables. These are often seen on the touchlines of sports fields on Steadicam systems.
Properties of microwave links
Involve line of sight (LOS) communication technology
Affected greatly by environmental constraints, including rain fade
Have very limited penetration capabilities through obstacles such as hills, buildings and trees
Sensitive to high pollen count
Signals can be degraded during Solar proton events
Propagation delays are lower than in fiber optic networks because the speed of light in air is faster than in optical cable
Uses of microwave links
In communications between satellites and base stations
As backbone carriers for cellular systems
In short-range indoor communications
Linking remote and regional telephone exchanges to larger (main) exchanges without the need for copper/optical fibre lines
Measuring the intensity of rain between two locations
To give financial advantage to high frequency traders at one stock exchange via faster knowledge of price changes at a distant exchange
Troposcatter
Terrestrial microwave relay links are limited in distance to the visual horizon, a few tens of miles or kilometers depending on tower height. Tropospheric scatter ("troposcatter" or "scatter") was a technology developed in the 1950s to allow microwave communication links beyond the horizon, to a range of several hundred kilometers. The transmitter radiates a beam of microwaves into the sky, at a shallow angle above the horizon toward the receiver. As the beam passes through the troposphere a small fraction of the microwave energy is scattered back toward the ground by water vapor and dust in the air. A sensitive receiver beyond the horizon picks up this reflected signal. Signal clarity obtained by this method depends on the weather and other factors, and as a result, a high level of technical difficulty is involved in the creation of a reliable over horizon radio relay link. Troposcatter links are therefore only used in special circumstances where satellites and other long-distance communication channels cannot be relied on, such as in military communications.
See also
Wireless power transfer
Fresnel zone
Passive repeater
Radio repeater
Relay (disambiguation)
Transmitter station
Path loss
British Telecom microwave network
Trans Canada Microwave
Antenna array
References
Microwave Radio Transmission Design Guide, Trevor Manning, Artech House, 1999
External links
RF / Microwave Design at Oxford University
AT&T's Microwave Radio-Relay Skyway introduced in 1951
Bell System 1951 magazine ad for Microwave Radio-Relay systems.
RCA vintage magazine ad for Microwave-Radio Relay equipment used for Western Union Telegraph Co.
AT&T Long Lines Microwave Towers Remembered
AT&T Long Lines
IEEE Global History Network Microwave Link Networks (Wollschlager, Anthony)
Electromagnetic radiation
Energy development
Wireless energy transfer
Microwave technology
Wireless networking
Television technology
Television terminology | Microwave transmission | [
"Physics",
"Technology",
"Engineering"
] | 3,820 | [
"Information and communications technology",
"Physical phenomena",
"Television technology",
"Electromagnetic radiation",
"Wireless networking",
"Computer networks engineering",
"Radiation"
] |
5,543,013 | https://en.wikipedia.org/wiki/Ragone%20plot | A Ragone plot ( ) is a plot used for comparing the energy density of various energy-storing devices. On such a chart the values of specific energy (in W·h/kg) are plotted versus specific power (in W/kg). Both axes are logarithmic, which allows comparing performance of very different devices. Ragone plots can reveal information about gravimetric energy density, but do not convey details about volumetric energy density.
The Ragone plot was first used to compare performance of batteries. However, it is suitable for comparing any energy-storage devices, as well as energy devices such as engines, gas turbines, and fuel cells. The plot is named after David V. Ragone.
Conceptually, the vertical axis describes how much energy is available per unit mass, while the horizontal axis shows how quickly that energy can be delivered, otherwise known as power per unit mass. A point in a Ragone plot represents a particular energy device or technology.
The amount of time (in hours) during which a device can be operated at its rated power is given as the ratio between the specific energy (Y-axis) and the specific power (X-axis). This is true regardless of the overall scale of the device, since a larger device would have proportional increases in both power and energy. Consequently, the iso curves (constant operating time) in a Ragone plot are straight lines.
For electrical systems, the following equations are relevant:
specific energy = V·I·t / m and specific power = V·I / m,
where V is voltage (V), I electric current (A), t time (s) and m mass (kg).
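As a minimal illustration of these relations (using hours for time so the results come out directly in W·h/kg and W/kg), the snippet below computes both axes for a hypothetical cell; the voltage, current and mass are made-up example values.

```python
def specific_energy_Wh_per_kg(V, I, t_h, m_kg):
    # E/m = V * I * t / m  (watt-hours per kilogram when t is in hours)
    return V * I * t_h / m_kg

def specific_power_W_per_kg(V, I, m_kg):
    # P/m = V * I / m  (watts per kilogram)
    return V * I / m_kg

# Hypothetical cell: 3.7 V, 2 A constant-current discharge for 1.5 h, 45 g mass
E = specific_energy_Wh_per_kg(3.7, 2.0, 1.5, 0.045)
P = specific_power_W_per_kg(3.7, 2.0, 0.045)
print(round(E, 1), round(P, 1), round(E / P, 2))   # the ratio E/P recovers the 1.5 h operating time
```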
References
Capacitors
Electric battery
Charts | Ragone plot | [
"Physics"
] | 331 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
5,547,122 | https://en.wikipedia.org/wiki/Prony%27s%20method | Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal.
The method
Let f(t) be a signal consisting of N evenly spaced samples. Prony's method fits a function
f̂(t) = Σ_{i=1}^{M} A_i e^{σ_i t} cos(ω_i t + φ_i)
to the observed f(t). After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms:
f̂(t) = Σ_{i=1}^{M} A_i e^{σ_i t} cos(ω_i t + φ_i) = (1/2) Σ_{i=1}^{2M} A_i e^{±jφ_i} e^{λ_i t}
where
λ_i = σ_i ± jω_i are the eigenvalues of the system,
σ_i are the damping components,
ω_i are the angular-frequency components,
φ_i are the phase components,
A_i are the amplitude components of the series,
j is the imaginary unit (j² = −1).
Representations
Prony's method is essentially a decomposition of a signal with complex exponentials via the following process:
Regularly sample so that the -th of samples may be written as
If happens to consist of damped sinusoids, then there will be pairs of complex exponentials such that
where
Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist:
The key to Prony's Method is that the coefficients in the difference equation are related to the following polynomial:
These facts lead to the following three steps within Prony's method:
1) Construct and solve the matrix equation for the values:
Note that if , a generalized matrix inverse may be needed to find the values .
2) After finding the values, find the roots (numerically if necessary) of the polynomial
The -th root of this polynomial will be equal to .
3) With the values, the values are part of a system of linear equations that may be used to solve for the values:
where unique values are used. It is possible to use a generalized matrix inverse if more than samples are used.
Note that solving for the λ_m will yield ambiguities, since only e^{λ_m Δt} (with Δt the sampling interval) was solved for, and e^{λ_m Δt} = e^{(λ_m + j2πk/Δt)Δt} for any integer k. This leads to the same Nyquist sampling criteria that discrete Fourier transforms are subject to.
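A minimal NumPy sketch of the three steps is given below; the function and variable names (prony, M, dt) are illustrative, and the least-squares solves play the role of the generalized matrix inverses mentioned above.

```python
import numpy as np

def prony(x, M, dt):
    """Fit x[n] with 2*M complex exponentials (M damped sinusoids).

    Returns lam (sigma + j*omega per component) and the complex amplitudes B.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    p = 2 * M
    # Step 1: linear-prediction coefficients of the difference equation.
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    b = -x[p:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)     # generalized inverse when overdetermined
    # Step 2: roots of the characteristic polynomial give mu_k = exp(lam_k * dt).
    mu = np.roots(np.concatenate(([1.0], a)))
    lam = np.log(mu) / dt
    # Step 3: Vandermonde least squares for the complex amplitudes.
    V = np.vander(mu, N, increasing=True).T       # V[n, k] = mu_k ** n
    B, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return lam, B

# Example: recover a single damped sinusoid from 100 samples.
dt = 0.01
t = np.arange(100) * dt
sig = 1.5 * np.exp(-0.7 * t) * np.cos(2 * np.pi * 8 * t + 0.3)
lam, B = prony(sig, M=1, dt=dt)
print(np.round(lam, 3))   # real parts near -0.7, imaginary parts near +/- 2*pi*8
```

For real signals the roots occur in complex-conjugate pairs; the damping and angular frequency of each component are the real and imaginary parts of lam, and the amplitudes and phases follow from 2·|B| and arg(B).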
See also
Generalized pencil-of-function method
Computation of Prony decomposition using Autoregression analysis
Application of Prony decomposition in Time-frequency analysis
Notes
References
Signal processing | Prony's method | [
"Technology",
"Engineering"
] | 483 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
5,547,210 | https://en.wikipedia.org/wiki/Lehmstedt%E2%80%93Tanasescu%20reaction | The Lehmstedt–Tanasescu reaction is a method in organic chemistry for the organic synthesis of acridone derivatives (3) from a 2-nitrobenzaldehyde (1) and an arene compound (2):
The reaction is named after two chemists who devoted part of their careers to research into this synthetic method, the German chemist Kurt Lehmstedt and the Romanian chemist Ion Tănăsescu. Variations of the reaction name include Lehmsted–Tănăsescu reaction, Lehmsted–Tănăsescu acridone synthesis and Lehmsted–Tanasescu acridone synthesis.
Reaction mechanism
In the first step of the reaction mechanism the precursor molecule 2-nitrobenzaldehyde 4 is protonated, often by sulfuric acid, to intermediate 5, followed by an electrophilic attack on benzene (other arenes can be used as well). The resulting benzhydrol 6 cyclizes to 7 and finally to compound 8. Treatment of this intermediate with nitrous acid (sodium nitrite and sulfuric acid) leads to the N-nitroso acridone 11 via intermediates 9 and 10. The N-nitroso group is removed by an acid in the final step. The procedure is an example of a one-pot synthesis.
References
Heterocycle forming reactions
Name reactions | Lehmstedt–Tanasescu reaction | [
"Chemistry"
] | 280 | [
"Name reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
5,547,607 | https://en.wikipedia.org/wiki/Mode%20of%20action | In pharmacology and biochemistry, mode of action (MoA) describes a functional or anatomical change, resulting from the exposure of a living organism to a substance. In comparison, a mechanism of action (MOA) describes such changes at the molecular level.
A mode of action is important in classifying chemicals, as it represents an intermediate level of complexity in between molecular mechanisms and physiological outcomes, especially when the exact molecular target has not yet been elucidated or is subject to debate. A mechanism of action of a chemical could be "binding to DNA" while its broader mode of action would be "transcriptional regulation". However, there is no clear consensus and the term mode of action is also often used, especially in the study of pesticides, to describe molecular mechanisms such as action on specific nuclear receptors or enzymes. Despite this, there are classification attempts, such as the HRAC's classification to manage pesticide resistance.
See also
Mechanism of action in pharmaceuticals
Adverse outcome pathway
References
Pharmacodynamics
Medicinal chemistry | Mode of action | [
"Chemistry",
"Biology"
] | 211 | [
"Pharmacology",
"Pharmacodynamics",
"Medicinal chemistry stubs",
"Medicinal chemistry",
"nan",
"Biochemistry",
"Pharmacology stubs"
] |
8,726,320 | https://en.wikipedia.org/wiki/Control%20valve | A control valve is a valve used to control fluid flow by varying the size of the flow passage as directed by a signal from a controller. This enables the direct control of flow rate and the consequential control of process quantities such as pressure, temperature, and liquid level.
In automatic control terminology, a control valve is termed a "final control element".
Operation
The opening or closing of automatic control valves is usually done by electrical, hydraulic or pneumatic actuators. Normally with a modulating valve, which can be set to any position between fully open and fully closed, valve positioners are used to ensure the valve attains the desired degree of opening.
Air-actuated valves are commonly used because of their simplicity, as they only require a compressed air supply, whereas electrically operated valves require additional cabling and switch gear, and hydraulically actuated valves required high pressure supply and return lines for the hydraulic fluid.
The pneumatic control signals are traditionally based on a pressure range of 3–15 psi (0.2–1.0 bar), or more commonly now, an electrical signal of 4–20 mA for industry, or 0–10 V for HVAC systems. Electrical control now often includes a "Smart" communication signal superimposed on the 4–20 mA control current, such that the health and verification of the valve position can be signalled back to the controller. The HART, Fieldbus Foundation, and Profibus are the most common protocols.
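As a small illustration of this signal convention, the sketch below maps a 4–20 mA loop current onto a valve opening for both of the control actions described in the "Control action" section below; a linear valve characteristic is assumed, and real valves may instead be equal-percentage or quick-opening.

```python
def valve_opening_percent(loop_mA, signal_to_open=True):
    """Convert a 4-20 mA control signal to a valve opening in percent."""
    span = (loop_mA - 4.0) / 16.0          # 0.0 at 4 mA, 1.0 at 20 mA
    span = max(0.0, min(1.0, span))        # clamp out-of-range signals
    return 100.0 * span if signal_to_open else 100.0 * (1.0 - span)

print(valve_opening_percent(8.0))                        # 25.0  ("current to open")
print(valve_opening_percent(8.0, signal_to_open=False))  # 75.0  ("current to close")
```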
An automatic control valve consists of three main parts in which each part exist in several types and designs:
Valve actuator – which moves the valve's modulating element, such as ball or butterfly.
Valve positioner – which ensures the valve has reached the desired degree of opening. This overcomes the problems of friction and wear.
Valve body – in which the modulating element, a plug, globe, ball or butterfly, is contained.
Control action
Taking the example of an air-operated valve, there are two control actions possible:
"Air or current to open" – The flow restriction decreases with increased control signal value.
"Air or current to close" – The flow restriction increases with increased control signal value.
There can also be failure to safety modes:
"Air or control signal failure to close" – On failure of compressed air to the actuator, the valve closes under spring pressure or by backup power.
"Air or control signal failure to open" – On failure of compressed air to actuator, the valve opens under spring pressure or by backup power.
The modes of failure operation are requirements of the failure to safety process control specification of the plant. In the case of cooling water it may be to fail open, and the case of delivering a chemical it may be to fail closed.
Valve positioners
The fundamental function of a positioner is to deliver pressurized air to the valve actuator, such that the position of the valve stem or shaft corresponds to the set point from the control system. Positioners are typically used when a valve requires throttling action. A positioner requires position feedback from the valve stem or shaft and delivers pneumatic pressure to the actuator to open and close the valve. The positioner must be mounted on or near the control valve assembly. There are three main categories of positioners, depending on the type of control signal, the diagnostic capability, and the communication protocol: pneumatic, analog, and digital.
Pneumatic positioners
Processing units may use pneumatic pressure signaling as the control set point to the control valves. Pressure is typically modulated between 20.7 and 103 kPa (3 to 15 psig) to move the valve from 0 to 100% position. In a common pneumatic positioner, the position of the valve stem or shaft is compared with the position of a bellows that receives the pneumatic control signal. When the input signal increases, the bellows expands and moves a beam. The beam pivots about an input axis, which moves a flapper closer to the nozzle. The nozzle pressure increases, which increases the output pressure to the actuator through a pneumatic amplifier relay. The increased output pressure to the actuator causes the valve stem to move.
Stem movement is fed back to the beam by means of a cam. As the cam rotates, the beam pivots about the feedback axis to move the flapper slightly away from the nozzle. The nozzle pressure decreases and reduces the output pressure to the actuator. Stem movement continues, backing the flapper away from the nozzle until equilibrium is reached. When the input signal decreases, the bellows contracts (aided by an internal range spring) and the beam pivots about the input axis to move the flapper away from the nozzle. Nozzle decreases and the relay permits the release of diaphragm casing pressure to the atmosphere, which allows the actuator stem to move upward.
Through the cam, stem movement is fed back to the beam to reposition the flapper closer to the nozzle. When equilibrium conditions are obtained, stem movement stops and the flapper is positioned to prevent any further decrease in actuator pressure.
Analog positioners
The second type of positioner is an analog I/P positioner. Most modern processing units use a 4 to 20 mA DC signal to modulate the control valves. This introduces electronics into the positioner design and requires that the positioner convert the electronic current signal into a pneumatic pressure signal (current-to-pneumatic or I/P). In a typical analog I/P positioner, the converter receives a DC input signal and provides a proportional pneumatic output signal through a nozzle/flapper arrangement. The pneumatic output signal provides the input signal to the pneumatic positioner. Otherwise, the design is the same as the pneumatic positioner
Digital positioners
While pneumatic positioners and analog I/P positioners provide basic valve position control, digital valve controllers add another dimension to positioner capabilities. This type of positioner is a microprocessor-based instrument. The microprocessor enables diagnostics and two-way communication to simplify setup and troubleshooting.
In a typical digital valve controller, the control signal is read by the microprocessor, processed by a digital algorithm, and converted into a drive current signal to the I/P converter. The microprocessor performs the position control algorithm rather than a mechanical beam, cam, and flapper assembly. As the control signal increases, the drive signal to the I/P converter increases, increasing the output pressure from the I/P converter. This pressure is routed to a pneumatic amplifier relay and provides two output pressures to the actuator. With increasing control signal, one output pressure always increases and the other output pressure decreases
Double-acting actuators use both outputs, whereas single-acting actuators use only one output. The changing output pressure causes the actuator stem or shaft to move. Valve position is fed back to the microprocessor. The stem continues to move until the correct position is attained. At this point, the microprocessor stabilizes the drive signal to the I/P converter until equilibrium is obtained.
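The sketch below is a highly simplified, purely illustrative stand-in for such a position algorithm: an integral-only controller nudges the drive current to the I/P converter until the fed-back position matches the set point. The gains and the one-line actuator model are invented for the example and do not represent any real positioner.

```python
def positioner_step(setpoint_pct, position_pct, drive_mA,
                    gain=0.05, drive_min=4.0, drive_max=20.0):
    # Compare set point with fed-back position and adjust the I/P drive current.
    drive_mA += gain * (setpoint_pct - position_pct)
    return min(drive_max, max(drive_min, drive_mA))

drive, position = 12.0, 50.0
for _ in range(200):
    drive = positioner_step(75.0, position, drive)
    # Crude first-order actuator: position creeps toward the value implied by the drive.
    position += 0.2 * ((drive - 4.0) / 16.0 * 100.0 - position)
print(round(position, 1))   # settles near 75.0
```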
In addition to the function of controlling the position of the valve, a digital valve controller has two additional capabilities: diagnostics and two-way digital communication.
Widely used communication protocols include HART, FOUNDATION fieldbus, and PROFIBUS.
Advantages of placing a smart positioner on a control valve:
Automatic calibration and configuration of positioner.
Real time diagnostics.
Reduced cost of loop commissioning, including installation and calibration.
Use of diagnostics to maintain loop performance levels.
Improved process control accuracy that reduces process variability.
Types of control valve
Control valves are classified by attributes and features.
Based on the pressure drop profile
High recovery valve: These valves typically regain most of static pressure drop from the inlet to vena contracta at the outlet. They are characterised by a lower recovery coefficient. Examples: butterfly valve, ball valve, plug valve, gate valve
Low recovery valve: These valves typically regain little of the static pressure drop from the inlet to vena contracta at the outlet. They are characterised by a higher recovery coefficient. Examples: globe valve, angle valve
Based on the movement profile of the controlling element
Sliding stem: The valve stem / plug moves in a linear, or straight line motion. Examples: Globe valve, angle valve, wedge type gate valve
Rotary valve: The valve disc rotates. Examples: Butterfly valve, ball valve
Based on the functionality
Control valve: Controls flow parameters proportional to an input signal received from the central control system. Examples: Globe valve, angle valve, ball valve
Shut-off / On-off valve: These valves are either completely open or closed. Examples: Gate valve, ball valve, globe valve, angle valve, pinch valve, diaphragm valve
Check valve: Allows flow only in a single direction
Steam conditioning valve: Regulates the pressure and temperature of inlet media to required parameters at outlet. Examples: Turbine bypass valve, process steam letdown station
Spring-loaded safety valve: Closed by the force of a spring, which retracts to open when the inlet pressure is equal to the spring force
Based on the actuating medium
Manual valve: Actuated by hand wheel
Pneumatic valve: Actuated using a compressible medium like air, hydrocarbon, or nitrogen, with a spring diaphragm, piston cylinder or piston-spring type actuator
Hydraulic valve: Actuated by a non-compressible medium such as water or oil
Electric valve: Actuated by an electric motor
A wide variety of valve types and control operation exist. However, there are two main forms of action, the sliding stem and the rotary.
The most common and versatile types of control valves are sliding-stem globe, V-notch ball, butterfly and angle types. Their popularity derives from rugged construction and the many options available that make them suitable for a variety of process applications. Control valve bodies may be categorized as below:
List of common types of control valve
Sliding stem
Rotary
Other
See also
References
External links
Control Valve Handbook
Fluid Control Research Institute
Valve World Magazine
New era of valve design and engineering
Machine learning based Valve Design Application
Control devices
Valves | Control valve | [
"Physics",
"Chemistry",
"Engineering"
] | 2,113 | [
"Control devices",
"Physical systems",
"Control engineering",
"Valves",
"Hydraulics",
"Piping"
] |
8,726,682 | https://en.wikipedia.org/wiki/Etching%20%28microfabrication%29 | Etching is used in microfabrication to chemically remove layers from the surface of a wafer during manufacturing. Etching is a critically important process module in fabrication, and every wafer undergoes many etching steps before it is complete.
For many etch steps, part of the wafer is protected from the etchant by a "masking" material which resists etching. In some cases, the masking material is a photoresist which has been patterned using photolithography. Other situations require a more durable mask, such as silicon nitride.
Etching media and technology
The two fundamental types of etchants are liquid-phase ("wet") and plasma-phase ("dry"). Each of these exists in several varieties.
Wet etching
The first etching processes used liquid-phase ("wet") etchants. This process is now largely outdated but was used up until the late 1980s when it was superseded by dry plasma etching. The wafer can be immersed in a bath of etchant, which must be agitated to achieve good process control. For instance, buffered hydrofluoric acid (BHF) is used commonly to etch silicon dioxide over a silicon substrate.
Different specialized etchants can be used to characterize the surface etched.
Wet etchants are usually isotropic, which leads to a large bias when etching thick films. They also require the disposal of large amounts of toxic waste. For these reasons, they are seldom used in state-of-the-art processes. However, the photographic developer used for photoresist resembles wet etching.
As an alternative to immersion, single wafer machines use the Bernoulli principle to employ a gas (usually, pure nitrogen) to cushion and protect one side of the wafer while etchant is applied to the other side. It can be done to either the front side or back side. The etch chemistry is dispensed on the top side when in the machine and the bottom side is not affected. This etching method is particularly effective just before "backend" processing (BEOL), where wafers are normally very much thinner after wafer backgrinding, and very sensitive to thermal or mechanical stress. Etching a thin layer of even a few micrometres will remove microcracks produced during backgrinding resulting in the wafer having dramatically increased strength and flexibility without breaking.
Anisotropic wet etching (Orientation dependent etching)
Some wet etchants etch crystalline materials at very different rates depending upon which crystal face is exposed. In single-crystal materials (e.g. silicon wafers), this effect can allow very high anisotropy, as shown in the figure. The term "crystallographic etching" is synonymous with "anisotropic etching along crystal planes".
However, for some non-crystalline materials like glass, there are unconventional ways to etch in an anisotropic manner. One reported approach employs a multistream laminar flow that contains etching and non-etching solutions to fabricate a glass groove. The etching solution at the center is flanked by non-etching solutions, and the area contacting the etching solution is limited by the surrounding non-etching solutions. The etching direction is thereby mainly vertical to the glass surface. The scanning electron microscopy (SEM) images demonstrate the breaking of the conventional theoretical limit of aspect ratio (width/height = 0.5) and contribute a two-fold improvement (width/height = 1).
Several anisotropic wet etchants are available for silicon, all of them hot aqueous caustics. For instance, potassium hydroxide (KOH) displays an etch rate selectivity 400 times higher in <100> crystal directions than in <111> directions. EDP (an aqueous solution of ethylene diamine and pyrocatechol), displays a <100>/<111> selectivity of 17X, does not etch silicon dioxide as KOH does, and also displays high selectivity between lightly doped and heavily boron-doped (p-type) silicon. Use of these etchants on wafers that already contain CMOS integrated circuits requires protecting the circuitry. KOH may introduce mobile potassium ions into silicon dioxide, and EDP is highly corrosive and carcinogenic, so care is required in their use. Tetramethylammonium hydroxide (TMAH) presents a safer alternative than EDP, with a 37X selectivity between {100} and {111} planes in silicon.
Etching a (100) silicon surface through a rectangular hole in a masking material, like a hole in a layer of silicon nitride, creates a pit with flat sloping {111}-oriented sidewalls and a flat (100)-oriented bottom. The {111}-oriented sidewalls have an angle to the surface of the wafer of arctan(√2) ≈ 54.7°.
If the etching is continued "to completion", i.e. until the flat bottom disappears, the pit becomes a trench with a V-shaped cross-section. If the original rectangle was a perfect square, the pit when etched to completion displays a pyramidal shape.
The undercut, δ, under an edge of the masking material is given by:
,
where Rxxx is the etch rate in the <xxx> direction, T is the etch time, D is the etch depth and S is the anisotropy of the material and etchant.
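Since the exact expression above is not reproduced here, the sketch below only illustrates the kind of quantities involved: the self-terminating V-groove depth for a (100) wafer follows from the ~54.7° sidewall angle, while the undercut is approximated simply as the <111> rate times the etch time; the KOH-like rates are example values.

```python
import math

SIDEWALL_DEG = math.degrees(math.atan(math.sqrt(2)))   # {111} sidewall angle, about 54.7 deg

def v_groove_depth_um(mask_opening_um):
    # Depth of a V-groove etched to completion through a rectangular mask opening
    return (mask_opening_um / 2.0) * math.tan(math.radians(SIDEWALL_DEG))

# Example: KOH-like etch with R100 = 1.0 um/min and 400:1 <100>/<111> selectivity
R100_um_min, S = 1.0, 400.0
R111_um_min = R100_um_min / S
T_min = 60.0
depth_um = R100_um_min * T_min         # D = R100 * T
undercut_um = R111_um_min * T_min      # approximated here as R111 * T
print(round(SIDEWALL_DEG, 1), round(v_groove_depth_um(50.0), 1),
      round(depth_um, 1), round(undercut_um, 2))
# ~54.7 deg sidewalls, ~35.4 um deep V-groove from a 50 um opening, 60 um depth, 0.15 um undercut
```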
Different etchants have different anisotropies. Below is a table of common anisotropic etchants for silicon:
Plasma etching
Modern very large scale integration (VLSI) processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic.
Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching (DRIE). The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching.
The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.
Ion milling, or sputter etching, uses lower pressures, often as low as 10−4 Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10−3 and 10−1 Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
Figures of merit
If the etch is intended to make a cavity in a material, the depth of the cavity may be controlled approximately using the etching time and the known etch rate. More often, though, etching must entirely remove the top layer of a multilayer structure, without damaging the underlying or masking layers. The etching system's ability to do this depends on the ratio of etch rates in the two materials (selectivity).
Some etches undercut the masking layer and form cavities with sloping sidewalls. The distance of undercutting is called bias. Etchants with large bias are called isotropic, because they erode the substrate equally in all directions. Modern processes greatly prefer anisotropic etches, because they produce sharp, well-controlled features.
Common etch processes used in microfabrication
See also
Chemical-Mechanical Polishing
Ingot sawing
Metal assisted chemical etching
Lift-off (microtechnology)
References
Ibid, "Processes for MicroElectroMechanical Systems (MEMS)"
Inline references
External links
Semiconductor technology
Semiconductor device fabrication
Etching
Microtechnology | Etching (microfabrication) | [
"Materials_science",
"Engineering"
] | 1,879 | [
"Microtechnology",
"Etching (microfabrication)",
"Materials science",
"Semiconductor device fabrication",
"Semiconductor technology"
] |
8,732,068 | https://en.wikipedia.org/wiki/New%20Holland%20Brewing%20Company | New Holland Brewing Company is an American independent craft brewing and distilling company headquartered in Holland, Michigan. It also owns and operates brewpub-style restaurants and spirits-tasting rooms located across West Michigan. The company's craft-style beer brands Dragon's Milk, Tangerine Space Machine, and spirits brands Dragon's Milk Origin, Beer Barrel Bourbon among others, are distributed throughout the United States and exported to Canada, Europe and Asia.
After the sale of Bell's to Kirin, New Holland Brewing Company became the largest craft brewery in the state of Michigan.
History
Brett VanderKamp and Jason Spaulding, the founders of New Holland Brewing Company, grew up together in Midland, Michigan, and later attended Hope College. In college Spaulding and VanderKamp cultivated a love of homebrewing, which would bring them together again shortly after graduation. Their business plan took two years to formulate, but once complete, the pair quickly lined up investors, and in 1997 New Holland was founded in Holland, Michigan.
Originally, their goal was to produce beer that was characteristically unique to Western Michigan. Their beer was well received, and the company increased production to just over in 2006. In 2007, the company increased production to over .
New Holland began distilling bourbon, whiskey, rum, gin and vodka in 2005, and selling it in 2008.
On August 23, 2018, New Holland Brewing Company announced that it will be re-branding its flagship Dragon's Milk Bourbon Barrel-Aged Stout. The company launched the re-branding Dragon's Milk packaging in 2023 alongside new Dragon's Milk items, Dragon's Milk Crimson Keep BA Imperial Red Ale and Dragon's Milk Tales of Gold BA Imperial Golden Ale.
References
Breweries in the United States
American beer brands
Beer brewing companies based in Michigan
Distilleries
Bourbon whiskey
Cocktails
Restaurants in Michigan
Companies based in Michigan
Pub chains
Food- and drink-related organizations
Holland, Michigan
Grand Rapids, Michigan
Battle Creek, Michigan | New Holland Brewing Company | [
"Chemistry"
] | 409 | [
"Distilleries",
"Distillation"
] |
8,732,281 | https://en.wikipedia.org/wiki/GOR%20method | The GOR method (short for Garnier–Osguthorpe–Robson) is an information theory-based method for the prediction of secondary structures in proteins. It was developed in the late 1970s shortly after the simpler Chou–Fasman method. Like Chou–Fasman, the GOR method is based on probability parameters derived from empirical studies of known protein tertiary structures solved by X-ray crystallography. However, unlike Chou–Fasman, the GOR method takes into account not only the propensities of individual amino acids to form particular secondary structures, but also the conditional probability of the amino acid to form a secondary structure given that its immediate neighbors have already formed that structure. The method is therefore essentially Bayesian in its analysis.
Method
The GOR method analyzes sequences to predict alpha helix, beta sheet, turn, or random coil secondary structure at each position based on 17-amino-acid sequence windows. The original description of the method included four scoring matrices of size 17×20, where the columns correspond to the log-odds score, which reflects the probability of finding a given amino acid at each position in the 17-residue sequence. The four matrices reflect the probabilities of the central, ninth amino acid being in a helical, sheet, turn, or coil conformation. In subsequent revisions to the method, the turn matrix was eliminated due to the high variability of sequences in turn regions (particularly over such a large window). The method was considered to give its best results when at least four contiguous residues score as alpha helix before a region is classified as helical, and at least two contiguous residues are required for a beta sheet.
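A sketch of how such window scoring can be applied is shown below; the matrices here are random placeholders just to show the mechanics (real GOR parameters are derived from solved structures), and the three-state simplification follows the later versions that drop the turn matrix.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"        # 20 standard residues
STATES = ["helix", "sheet", "coil"]          # later GOR versions drop the turn matrix

def gor_predict(seq, matrices, window=17):
    """Assign a secondary-structure state to each residue.

    `matrices` maps each state to a (window x 20) array of log-odds scores.
    """
    half = window // 2
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    pred = []
    for i in range(len(seq)):
        scores = {}
        for state, m in matrices.items():
            s = 0.0
            for offset in range(-half, half + 1):
                j = i + offset
                if 0 <= j < len(seq) and seq[j] in idx:
                    s += m[offset + half, idx[seq[j]]]   # sum window log-odds for this state
            scores[state] = s
        pred.append(max(scores, key=scores.get))
    return "".join(p[0].upper() for p in pred)           # e.g. 'H', 'S', 'C' per residue

# Toy run with random matrices, purely to show the mechanics:
rng = np.random.default_rng(0)
mats = {s: rng.normal(size=(17, 20)) for s in STATES}
print(gor_predict("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", mats))
```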
Algorithm
The mathematics and algorithm of the GOR method were based on an earlier series of studies by Robson and colleagues reported mainly in the Journal of Molecular Biology and The Biochemical Journal. The latter describes the information theoretic expansions in terms of conditional information measures. The use of the word "simple" in the title of the GOR paper reflected the fact that the above earlier methods provided proofs and techniques somewhat daunting by being rather unfamiliar in protein science in the early 1970s; even Bayes methods were then unfamiliar and controversial. An important feature of these early studies, which survived in the GOR method, was the treatment of the sparse protein sequence data of the early 1970s by expected information measures. That is, expectations on a Bayesian basis considering the distribution of plausible information measure values given the actual frequencies (numbers of observations). The expectation measures resulting from integration over this and similar distributions may now be seen as composed of "incomplete" or extended zeta functions, e.g. z(s, observed frequency) − z(s, expected frequency) with incomplete zeta function z(s, n) = 1 + (1/2)^s + (1/3)^s + (1/4)^s + … + (1/n)^s. The GOR method used s = 1. Also, in the GOR method and the earlier methods, the measure for the contrary state to e.g. helix H, i.e. ~H, was subtracted from that for H, and similarly for beta sheet, turns, and coil or loop. Thus the method can be seen as employing a zeta function estimate of log predictive odds. An adjustable decision constant could also be applied, which thus implies a decision theory approach; the GOR method allowed the option to use decision constants to optimize predictions for different classes of protein. The expected information measure used as a basis for the information expansion was less important by the time of publication of the GOR method because protein sequence data became more plentiful, at least for the terms considered at that time. Then, for s = 1, the expression z(s, observed frequency) − z(s, expected frequency) approaches the natural logarithm of (observed frequency / expected frequency) as frequencies increase. However, this measure (including use of other values of s) remains important in later more general applications with high-dimensional data, where data for more complex terms in the information expansion are inevitably sparse.
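The following small sketch shows the behaviour described here, treating both frequencies as integer counts for simplicity (the function names are illustrative): for s = 1 the difference of incomplete zeta functions approaches the natural log of the frequency ratio as the counts grow.

```python
from math import log

def incomplete_zeta(s, n):
    # z(s, n) = 1 + (1/2)**s + (1/3)**s + ... + (1/n)**s
    return sum((1.0 / k) ** s for k in range(1, int(n) + 1))

def expected_information(observed, expected, s=1):
    # Expected information measure z(s, observed) - z(s, expected)
    return incomplete_zeta(s, observed) - incomplete_zeta(s, expected)

print(round(expected_information(400, 100), 4), round(log(400 / 100), 4))
# With large counts the measure is close to the log-odds term ln(observed/expected).
```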
See also
List of protein structure prediction software
References
Bioinformatics
Protein methods
Applications of Bayesian inference | GOR method | [
"Chemistry",
"Engineering",
"Biology"
] | 864 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics"
] |
2,196,648 | https://en.wikipedia.org/wiki/Lithium%20hydride | Lithium hydride is an inorganic compound with the formula LiH. This alkali metal hydride is a colorless solid, although commercial samples are grey. Characteristic of a salt-like (ionic) hydride, it has a high melting point, and it is not soluble but reactive with all protic organic solvents. It is soluble and nonreactive with certain molten salts such as lithium fluoride, lithium borohydride, and sodium hydride. With a molar mass of 7.95 g/mol, it is the lightest ionic compound.
Physical properties
LiH is a diamagnetic and an ionic conductor with a conductivity gradually increasing from at 443 °C to 0.18 Ω−1cm−1 at 754 °C; there is no discontinuity in this increase through the melting point. The dielectric constant of LiH decreases from 13.0 (static, low frequencies) to 3.6 (visible-light frequencies). LiH is a soft material with a Mohs hardness of 3.5. Its compressive creep (per 100 hours) rapidly increases from < 1% at 350 °C to > 100% at 475 °C, meaning that LiH cannot provide mechanical support when heated.
The thermal conductivity of LiH decreases with temperature and depends on morphology: the corresponding values are 0.125 W/(cm·K) for crystals and 0.0695 W/(cm·K) for compacts at 50 °C, and 0.036 W/(cm·K) for crystals and 0.0432 W/(cm·K) for compacts at 500 °C. The linear thermal expansion coefficient is 4.2/°C at room temperature.
Synthesis and processing
LiH is produced by treating lithium metal with hydrogen gas: 2 Li + H2 → 2 LiH
This reaction is especially rapid at temperatures above 600 °C. Addition of 0.001–0.003% carbon, and/or increasing temperature/pressure, increases the yield up to 98% at 2-hour residence time. However, the reaction proceeds at temperatures as low as 29 °C. The yield is 60% at 99 °C and 85% at 125 °C, and the rate depends significantly on the surface condition of LiH.
Less common ways of LiH synthesis include thermal decomposition of lithium aluminium hydride (200 °C), lithium borohydride (300 °C), n-butyllithium (150 °C), or ethyllithium (120 °C), as well as several reactions involving lithium compounds of low stability and available hydrogen content.
Chemical reactions yield LiH in the form of lumped powder, which can be compressed into pellets without a binder. More complex shapes can be produced by casting from the melt. Large single crystals (about 80 mm long and 16 mm in diameter) can be then grown from molten LiH powder in hydrogen atmosphere by the Bridgman–Stockbarger technique. They often have bluish color owing to the presence of colloidal Li. This color can be removed by post-growth annealing at lower temperatures (~550 °C) and lower thermal gradients. Major impurities in these crystals are Na (20–200 ppm), O (10–100 ppm), Mg (0.5–6 ppm), Fe (0.5-2 ppm) and Cu (0.5-2 ppm).
Bulk cold-pressed LiH parts can be easily machined using standard techniques and tools to micrometer precision. However, cast LiH is brittle and easily cracks during processing.
A more energy efficient route to form lithium hydride powder is by ball milling lithium metal under high hydrogen pressure. A problem with this method is the cold welding of lithium metal due to the high ductility. By adding small amounts of lithium hydride powder the cold welding can be avoided.
Reactions
LiH powder reacts rapidly with air of low humidity, forming LiOH, and . In moist air the powder ignites spontaneously, forming a mixture of products including some nitrogenous compounds. The lump material reacts with humid air, forming a superficial coating, which is a viscous fluid. This inhibits further reaction, although the appearance of a film of "tarnish" is quite evident. Little or no nitride is formed on exposure to humid air. The lump material, contained in a metal dish, may be heated in air to slightly below 200 °C without igniting, although it ignites readily when touched by an open flame. The surface condition of LiH, presence of oxides on the metal dish, etc., have a considerable effect on the ignition temperature. Dry oxygen does not react with crystalline LiH unless heated strongly, when an almost explosive combustion occurs.
LiH is highly reactive towards water and other protic reagents: LiH + H2O → LiOH + H2
LiH is less reactive with water than Li and thus is a much less powerful reducing agent for water, alcohols, and other media containing reducible solutes. This is true for all the binary saline hydrides.
LiH pellets slowly expand in moist air, forming LiOH; however, the expansion rate is below 10% within 24 hours in a pressure of 2 Torr of water vapor. If moist air contains carbon dioxide, then the product is lithium carbonate. LiH reacts with ammonia, slowly at room temperature, but the reaction accelerates significantly above 300 °C. LiH reacts slowly with higher alcohols and phenols, but vigorously with lower alcohols.
LiH reacts with sulfur dioxide to give the dithionite: 2 LiH + 2 SO2 → Li2S2O4 + H2
though above 50 °C the product is lithium sulfide instead.
LiH reacts with acetylene to form lithium carbide and hydrogen. With anhydrous organic acids, phenols and acid anhydrides, LiH reacts slowly, producing hydrogen gas and the lithium salt of the acid. With water-containing acids, LiH reacts faster than with water. Many reactions of LiH with oxygen-containing species yield LiOH, which in turn irreversibly reacts with LiH at temperatures above 300 °C: LiH + LiOH → Li2O + H2
Lithium hydride is rather unreactive at moderate temperatures with or . It is, therefore, used in the synthesis of other useful hydrides, e.g.,
Applications
Hydrogen storage and fuel
With a hydrogen content in proportion to its mass three times that of NaH, LiH has the highest hydrogen content of any hydride. LiH is periodically of interest for hydrogen storage, but applications have been thwarted by its stability to decomposition. Thus removal of H2 requires temperatures above the 700 °C used for its synthesis, and such temperatures are expensive to create and maintain. The compound was once tested as a fuel component in a model rocket.
Precursor to complex metal hydrides
LiH is not usually a hydride-reducing agent, except in the synthesis of hydrides of certain metalloids. For example, silane is produced in the reaction of lithium hydride and silicon tetrachloride by the Sundermeyer process: SiCl4 + 4 LiH → SiH4 + 4 LiCl
Lithium hydride is used in the production of a variety of reagents for organic synthesis, such as lithium aluminium hydride (LiAlH4) and lithium borohydride (LiBH4). Triethylborane reacts to give superhydride (lithium triethylborohydride, LiEt3BH).
In nuclear chemistry and physics
Lithium hydride (LiH) is sometimes a desirable material for the shielding of nuclear reactors, with the isotope lithium-6 (Li-6), and it can be fabricated by casting.
Lithium deuteride
Lithium deuteride, in the form of lithium-7 deuteride ( or 7LiD), is a good moderator for nuclear reactors, because deuterium (2H or D) has a lower neutron absorption cross-section than ordinary hydrogen or protium (1H) does, and the cross-section for 7Li is also low, decreasing the absorption of neutrons in a reactor. 7Li is preferred for a moderator because it has a lower neutron capture cross-section, and it also forms less tritium (3H or T) under bombardment with neutrons.
The corresponding lithium-6 deuteride ( or 6LiD) is the primary fusion fuel in thermonuclear weapons. In hydrogen warheads of the Teller–Ulam design, a nuclear fission trigger explodes to heat and compress the lithium-6 deuteride, and to bombard the 6LiD with neutrons to produce tritium in an exothermic reaction: 6Li + n → 4He + 3H
The deuterium and tritium then fuse to produce helium, one neutron, and 17.59 MeV of free energy in the form of gamma rays, kinetic energy, etc. Tritium has a favorable reaction cross section. The helium is an inert byproduct.
2H + 3H → 4He + n.
Before the Castle Bravo nuclear weapons test in 1954, it was thought that only the less common isotope 6Li would breed tritium when struck with fast neutrons. The Castle Bravo test showed (accidentally) that the more plentiful 7Li also does so under extreme conditions, albeit by an endothermic reaction.
Safety
LiH reacts violently with water to give hydrogen gas and LiOH, which is caustic. Consequently, LiH dust can explode in humid air, or even in dry air due to static electricity. At concentrations of in air the dust is extremely irritating to the mucous membranes and skin and may cause an allergic reaction. Because of the irritation, LiH is normally rejected rather than accumulated by the body.
Some lithium salts, which can be produced in LiH reactions, are toxic. LiH fire should not be extinguished using carbon dioxide, carbon tetrachloride, or aqueous fire extinguishers; it should be smothered by covering with a metal object or graphite or dolomite powder. Sand is less suitable, as it can explode when mixed with burning LiH, especially if not dry. LiH is normally transported in oil, using containers made of ceramic, certain plastics or steel, and is handled in an atmosphere of dry argon or helium. Nitrogen can be used, but not at elevated temperatures, as it reacts with lithium. LiH normally contains some metallic lithium, which corrodes steel or silica containers at elevated temperatures.
References
External links
University of Southampton, Mountbatten Centre for International Studies, Nuclear History Working Paper No5.
CDC - NIOSH Pocket Guide to Chemical Hazards
Lithium compounds
Metal hydrides
Nuclear materials
Nuclear fusion fuels
Superbases
Rock salt crystal structure | Lithium hydride | [
"Physics",
"Chemistry"
] | 2,160 | [
"Superbases",
"Inorganic compounds",
"Reducing agents",
"Materials",
"Nuclear materials",
"Metal hydrides",
"Bases (chemistry)",
"Matter"
] |
2,197,956 | https://en.wikipedia.org/wiki/Thorium-232 | Thorium-232 () is the main naturally occurring isotope of thorium, with a relative abundance of 99.98%. It has a half life of 14.05 billion years, which makes it the longest-lived isotope of thorium. It decays by alpha decay to radium-228; its decay chain terminates at stable lead-208.
Thorium-232 is a fertile material; it can capture a neutron to form thorium-233, which subsequently undergoes two successive beta decays to uranium-233, which is fissile. As such, it has been used in the thorium fuel cycle in nuclear reactors; various prototype thorium-fueled reactors have been designed. However, as of 2024, thorium fuel has not been widely adopted for commercial-scale nuclear power.
Natural occurrence
The half-life of thorium-232 (14 billion years) is more than three times the age of the Earth; thorium-232 therefore occurs in nature as a primordial nuclide. Other thorium isotopes occur in nature in much smaller quantities as intermediate products in the decay chains of uranium-238, uranium-235, and thorium-232.
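A short calculation illustrates why: with a 14.05-billion-year half-life and taking the age of the Earth as about 4.54 billion years (an assumed round figure), roughly 80% of the Earth's original thorium-232 has yet to decay.

```python
from math import exp, log

half_life_yr = 14.05e9        # thorium-232 half-life
age_of_earth_yr = 4.54e9      # assumed value for the age of the Earth

decay_constant = log(2) / half_life_yr
fraction_remaining = exp(-decay_constant * age_of_earth_yr)
print(round(fraction_remaining, 2))   # ~0.8
```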
Some minerals that contain thorium include apatite, sphene, zircon, allanite, monazite, pyrochlore, thorite, and xenotime.
Decay
Thorium-232 has a half-life of 14 billion years and mainly decays by alpha decay to radium-228 with a decay energy of 4.0816 MeV. The decay chain follows the thorium series, which terminates at stable lead-208. The intermediates in the thorium-232 decay chain are all relatively short-lived; the longest-lived intermediate decay products are radium-228 and thorium-228, with half lives of 5.75 years and 1.91 years, respectively. All other intermediate decay products have half lives of less than four days.
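As a small illustration of these figures, the half-life quoted above can be turned into a decay constant and an approximate specific activity using the standard relations λ = ln 2 / T½ and A = λN. This is a sketch only; Avogadro's number and a molar mass of 232 g/mol are assumed values, not figures from the text.

```python
import math

# Sketch: decay constant and specific activity of thorium-232 from its
# half-life, using lambda = ln(2)/T_half and A = lambda * N.
T_HALF_YEARS = 14.05e9        # half-life quoted in the article
SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23           # assumed constant
MOLAR_MASS_G = 232.0          # assumed molar mass, g/mol

decay_constant = math.log(2) / (T_HALF_YEARS * SECONDS_PER_YEAR)  # per second
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
specific_activity = decay_constant * atoms_per_gram               # decays/s per gram

print(f"decay constant: {decay_constant:.3e} s^-1")
print(f"specific activity: {specific_activity:.0f} Bq/g")   # on the order of 4,000 Bq/g
```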
The following table lists the intermediate decay products in the thorium-232 decay chain:
Rare decay modes
Although thorium-232 mainly decays by alpha decay, it also undergoes spontaneous fission, though only about 1.1 × 10−9 % of the time. In addition, it is capable of cluster decay, splitting into ytterbium-182, neon-24, and neon-26; the upper limit for the branching ratio of this decay mode is 2.78%. Double beta decay to uranium-232 is also theoretically possible, but has not been observed.
Use in nuclear power
Thorium-232 is not fissile; it therefore cannot be used directly as fuel in nuclear reactors. However, 232Th is fertile: it can capture a neutron to form 233Th, which undergoes beta decay with a half-life of 21.8 minutes to 233Pa. This nuclide subsequently undergoes beta decay with a half-life of 27 days to form fissile 233U.
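A short sketch of the timescale of this breeding chain is given below. It treats the 21.8-minute thorium-233 step as effectively instantaneous and ignores further neutron capture on protactinium-233, both simplifying assumptions, and simply applies exponential decay with the 27-day half-life quoted above.

```python
import math

# Sketch: fraction of protactinium-233 converted to uranium-233 over time,
# assuming the short thorium-233 step is instantaneous and ignoring further
# neutron capture on Pa-233.
PA233_HALF_LIFE_DAYS = 27.0
lam = math.log(2) / PA233_HALF_LIFE_DAYS

for days in (27, 90, 180, 365):
    fraction_u233 = 1.0 - math.exp(-lam * days)
    print(f"after {days:3d} days: {fraction_u233:.1%} of the Pa-233 has become U-233")
# after roughly a year essentially all of the protactinium has decayed to fissile U-233
```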
One potential advantage of a thorium-based nuclear fuel cycle is that thorium is three times more abundant than uranium, the current fuel for commercial nuclear reactors. It is also more difficult to produce material suitable for nuclear weapons from the thorium fuel cycle compared to the uranium fuel cycle. Some proposed designs for thorium-fueled nuclear reactors include the molten salt reactor and a fast neutron reactor, among others. Although thorium-based nuclear reactors have been proposed since the 1960s and several prototype reactors have been built, there has been relatively little research on the thorium fuel cycle compared to the more established uranium fuel cycle; thorium-based nuclear power has not seen large-scale commercial use as of 2024. Nevertheless, some countries such as India have actively pursued thorium-based nuclear power.
References
Actinides
Isotopes of thorium
Fertile materials
IARC Group 1 carcinogens
Radionuclides used in radiometric dating | Thorium-232 | [
"Chemistry"
] | 773 | [
"Isotopes of thorium",
"Isotopes",
"Radionuclides used in radiometric dating"
] |
2,199,445 | https://en.wikipedia.org/wiki/Gravity%20train | A gravity train is a theoretical means of transportation for purposes of commuting between two points on the surface of a sphere, by following a straight tunnel connecting the two points through the interior of the sphere.
In a large body such as a planet, this train could be left to accelerate using just the force of gravity, since during the first half of the trip (from the point of departure until the middle), the downward pull towards the center of gravity would pull it towards the destination. During the second half of the trip, the acceleration would oppose the direction of motion, but, ignoring the effects of friction, the speed gained during the first half of the trajectory would be exactly cancelled by this deceleration, and as a result, the train's speed would reach zero at approximately the moment the train reached its destination.
Origin of the concept
In the 17th century, British scientist Robert Hooke presented the idea of an object accelerating inside a planet in a letter to Isaac Newton. A gravity train project was seriously presented to the French Academy of Sciences in the 19th century. The same idea was proposed, without calculation, by Lewis Carroll in 1893 in Sylvie and Bruno Concluded. The idea was rediscovered in the 1960s when physicist Paul Cooper published a paper in the American Journal of Physics suggesting that gravity trains be considered for a future transportation project.
Mathematical considerations
Under the assumption of a spherical planet with uniform density, and ignoring relativistic effects as well as friction, a gravity train has the following properties:
The duration of a trip depends only on the density of the planet and the gravitational constant, but not on the diameter of the planet.
The maximum speed is reached at the middle point of the trajectory.
For gravity trains between points which are not the antipodes of each other, the following hold:
The shortest time tunnel through a homogeneous earth is a hypocycloid; in the special case of two antipodal points, the hypocycloid degenerates to a straight line.
All straight-line gravity trains on a given planet take exactly the same amount of time to complete a journey (that is, no matter where on the surface the two endpoints of its trajectory are located).
On the planet Earth specifically, since a gravity train's movement is the projection of a very-low-orbit satellite's movement onto a line, it has the following parameters:
The travel time equals 2530.30 seconds (nearly 42.2 minutes, half the period of a low Earth orbit satellite), assuming Earth were a perfect sphere of uniform density.
By taking into account the realistic density distribution inside the Earth, as known from the preliminary reference Earth model, the expected fall-through time is reduced from 42 to 38 minutes.
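A minimal numerical check of these figures, assuming the uniform-density model, is sketched below. It uses the half-period formula T = π√(R/g) for the resulting simple harmonic motion, with nominal values for the Earth's radius and surface gravity rather than figures taken from this article.

```python
import math

# Sketch: travel time of a straight-through gravity train for a
# uniform-density spherical Earth. R and g are nominal Earth figures.
R_EARTH = 6.371e6   # m
G_SURFACE = 9.81    # m/s^2

travel_time = math.pi * math.sqrt(R_EARTH / G_SURFACE)   # half-period, seconds
max_speed = math.sqrt(G_SURFACE * R_EARTH)               # speed at the center, m/s

print(f"travel time: {travel_time:.0f} s ({travel_time / 60:.1f} minutes)")
print(f"maximum speed at the center: {max_speed / 1000:.1f} km/s")
# about 2,530 s (42 minutes) and roughly 7.9 km/s, matching the figures above
```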
To put some numbers in perspective, the deepest current borehole is the Kola Superdeep Borehole with a true depth of 12,262 meters; covering the distance between London and Paris (350 km) via a hypocycloidal path would require the creation of a hole 111,408 metres deep. Not only is such a depth nine times as great, but it would also necessitate a tunnel that passes through the Earth's mantle.
Mathematical derivation
Using the approximations that the Earth is perfectly spherical and of uniform density ρ, and the fact that within a uniform hollow sphere there is no gravity, the gravitational acceleration experienced by a body within the Earth is proportional to the ratio of the distance r from the center to the Earth's radius R. This is because being underground at distance r from the center is like being on the surface of a planet of radius r, nested within a hollow sphere which contributes nothing.
On the surface, r = R, so the gravitational acceleration is g. Hence, the gravitational acceleration at radius r is g r/R.
Diametric path to antipodes
In the case of a straight line through the center of the Earth, the acceleration of the body is equal to that of gravity: it is falling freely straight down. Let x(t) be the body's distance from the center of the Earth along the tunnel. We start falling from rest at the surface, so at time t = 0 we have x = R and dx/dt = 0; the acceleration is always directed towards the center, so

d²x/dt² = −(g/R) x = −ω² x

where ω = √(g/R). This class of problems, where there is a restoring force proportional to the displacement away from zero, has general solutions of the form x(t) = A cos(ωt) + B sin(ωt), and describes simple harmonic motion such as in a spring or pendulum.

In this case A = R and B = 0, so that x(t) = R cos(ωt): we begin at the surface at time zero, and oscillate back and forth forever.

The travel time to the antipodes is half of one cycle of this oscillator, that is, the time for the argument ωt to sweep out π radians. Using simple approximate values of g and R, that time is

T = π/ω = π √(R/g) ≈ 2530 seconds ≈ 42 minutes.
Straight path between two arbitrary points
For the more general case of the straight line path between any two points on the surface of a sphere we calculate the acceleration of the body as it moves frictionlessly along its straight path.
The body travels along AOB, O being the midpoint of the path and the closest point on this path to the center of the Earth, C. At distance x along this path from O, the force of gravity depends on the distance r to the center of the Earth as above. Using the shorthand b for the length OC:

r = √(x² + b²)

The resulting acceleration on the body, because it is on a frictionless inclined surface, is the component of the gravitational acceleration g r/R along the path:

a = −(g r/R)(x/r)

But r cancels, so substituting:

a = −(g/R) x

which is exactly the same, for this new x (the distance along AOB away from O), as for the x in the diametric case along ACD. So the remaining analysis is the same; accommodating the initial condition that the maximal x is the half-length OA of the path, the complete equation of motion is

x(t) = OA · cos(ωt).

The time constant ω is the same as in the diametric case, so the journey time is still 42 minutes; it's just that all the distances and speeds are scaled by the constant factor OA/R.
Dependence on radius of planet
The time constant ω depends only on g/R, so if we expand that using g = GM/R² and M = (4/3)πR³ρ we get

ω = √(g/R) = √(GM/R³) = √(4πGρ/3)
which depends only on the gravitational constant and the density of the planet. The size of the planet is immaterial; the journey time is the same if the density is the same.
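A short sketch of this density-only dependence is given below. Substituting g = (4/3)πGρR into T = π√(R/g) gives T = √(3π/(4Gρ)); the mean densities used are assumed round values for illustration.

```python
import math

# Sketch: gravity-train journey time as a function of mean density alone,
# T = sqrt(3 * pi / (4 * G * rho)), independent of the planet's radius.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def gravity_train_time(density_kg_m3: float) -> float:
    return math.sqrt(3.0 * math.pi / (4.0 * G * density_kg_m3))

for name, rho in (("Earth", 5515.0), ("Mars", 3933.0), ("the Moon", 3344.0)):
    t = gravity_train_time(rho)
    print(f"{name:9s} (rho = {rho:.0f} kg/m^3): {t / 60:.1f} minutes")
# Earth comes out near 42 minutes; less dense bodies give longer trips,
# regardless of their size.
```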
In fiction
In the 2012 movie Total Recall, a gravity train called "The Fall" goes through the center of the Earth to commute between Western Europe and Australia.
See also
Brachistochrone curve
Funicular
Hyperloop
Rail energy storage
Schuler tuning
Colonization of the asteroid belt
Space elevator
References
Description of the concept Gravity train and mathematical solution (Alexandre Eremenko web page at Purdue University).
External links
A simulation of this motion; includes tunnels that do not pass through the center of the earth. Also shows a satellite with same period.
The Gravity Express
To Everywhere in 42 Minutes
Mechanics
Fictional technology
Hypothetical technology
High-speed rail
Train
Differential equations
Travel to the Earth's center | Gravity train | [
"Physics",
"Mathematics",
"Engineering"
] | 1,346 | [
"Mathematical objects",
"Differential equations",
"Equations",
"Mechanics",
"Mechanical engineering"
] |
14,543,350 | https://en.wikipedia.org/wiki/Hydroxymethylglutaryl-CoA%20synthase | In biochemistry, hydroxymethylglutaryl-CoA synthase or HMG-CoA synthase is an enzyme which catalyzes the reaction in which acetyl-CoA condenses with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA). This reaction comprises the second step in the mevalonate-dependent isoprenoid biosynthesis pathway. HMG-CoA is an intermediate in both cholesterol synthesis and ketogenesis. This reaction is overactivated in patients with diabetes mellitus type 1 if left untreated, due to prolonged insulin deficiency and the exhaustion of substrates for gluconeogenesis and the TCA cycle, notably oxaloacetate. This results in shunting of excess acetyl-CoA into the ketone synthesis pathway via HMG-CoA, leading to the development of diabetic ketoacidosis.
The 3 substrates of this enzyme are acetyl-CoA, H2O, and acetoacetyl-CoA, whereas its two products are (S)-3-hydroxy-3-methylglutaryl-CoA and CoA.
In humans, the protein is encoded by the HMGCS1 gene on chromosome 5.
Classification
This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer.
Nomenclature
The systematic name of this enzyme class is acetyl-CoA:acetoacetyl-CoA C-acetyltransferase (thioester-hydrolysing, carboxymethyl-forming). Other names in common use include (S)-3-hydroxy-3-methylglutaryl-CoA acetoacetyl-CoA-lyase, (CoA-acetylating), 3-hydroxy-3-methylglutaryl CoA synthetase, 3-hydroxy-3-methylglutaryl coenzyme A synthase, 3-hydroxy-3-methylglutaryl coenzyme A synthetase, 3-hydroxy-3-methylglutaryl-CoA synthase, 3-hydroxy-3-methylglutaryl-coenzyme A synthase, beta-hydroxy-beta-methylglutaryl-CoA synthase, HMG-CoA synthase, acetoacetyl coenzyme A transacetase, hydroxymethylglutaryl coenzyme A synthase, and hydroxymethylglutaryl coenzyme A-condensing enzyme.
Mechanism
HMG-CoA synthase contains an important catalytic cysteine residue that acts as a nucleophile in the first step of the reaction: the acetylation of the enzyme by acetyl-CoA (its first substrate) to produce an acetyl-enzyme thioester, releasing the reduced coenzyme A. The subsequent nucleophilic attack on acetoacetyl-CoA (its second substrate) leads to the formation of HMG-CoA.
Biological role
This enzyme participates in 3 metabolic pathways: synthesis and degradation of ketone bodies, valine, leucine and isoleucine degradation, and butanoate metabolism.
Species distribution
HMG-CoA synthase occurs in eukaryotes, archaea, and certain bacteria.
Eukaryotes
In vertebrates, there are two different isozymes of the enzyme (cytosolic and mitochondrial); in humans the cytosolic form has only 60.6% amino acid identity with the mitochondrial form of the enzyme. HMG-CoA synthase is also found in other eukaryotes such as insects, plants, and fungi.
Cytosolic
The cytosolic form is the starting point of the mevalonate pathway, which leads to cholesterol and other sterolic and isoprenoid compounds.
Mitochondrial
The mitochondrial form is responsible for the biosynthesis of ketone bodies. The gene for the mitochondrial form of the enzyme has three sterol regulatory elements in the 5' flanking region. These elements are responsible for decreased transcription of the message responsible for enzyme synthesis when dietary cholesterol is high in animals; the same is observed for 3-hydroxy-3-methylglutaryl-CoA reductase and the low density lipoprotein receptor.
Bacteria
In bacteria, isoprenoid precursors are generally synthesised via an alternative, non-mevalonate pathway; however, a number of Gram-positive pathogens utilise a mevalonate pathway involving HMG-CoA synthase that parallels the one found in eukaryotes.
Structural studies
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes , , , and .
External links
References
EC 2.3.3
Protein families
Human proteins | Hydroxymethylglutaryl-CoA synthase | [
"Biology"
] | 1,013 | [
"Protein families",
"Protein classification"
] |
14,543,886 | https://en.wikipedia.org/wiki/LPAR3 | Lysophosphatidic acid receptor 3 also known as LPA3 is a protein that in humans is encoded by the LPAR3 gene. LPA3 is a G protein-coupled receptor that binds the lipid signaling molecule lysophosphatidic acid (LPA).
Function
This gene encodes a member of the G protein-coupled receptor family, as well as the EDG family of proteins. This protein functions as a cellular receptor for lysophosphatidic acid and mediates lysophosphatidic acid-evoked calcium mobilization. This receptor couples predominantly to G(q/11) alpha proteins.
Evolution
Paralogues
Source:
LPAR1
LPAR2
S1PR1
S1PR3
S1PR4
S1PR2
S1PR5
CNR1
GPR3
MC5R
GPR6
GPR12
MC4R
CNR2
MC3R
MC1R
MC2R
GPR119
See also
Lysophospholipid receptor
References
Further reading
External links
G protein-coupled receptors | LPAR3 | [
"Chemistry"
] | 222 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,544,789 | https://en.wikipedia.org/wiki/Tele-epidemiology | Tele-epidemiology is the application of telecommunications to epidemiological research and application, including space-based and internet-based systems.
Tele-epidemiology applies satellite communication systems to investigate or support investigations of infectious disease outbreaks, including disease reemergence. In this application, space-based systems (e.g. GIS, GPS, SPOT5) use natural indices and in-situ data (e.g. NDVI, Meteosat, Envisat) to assess health risk to human and animal populations. Space-based applications of tele-epidemiology extend to health surveillance and health emergency response.
Internet-based applications of tele-epidemiology include sourcing of epidemiological data in generating internet reports and real-time disease mapping. This entails gathering and structuring epidemiological data from news and social media outlets, and mapping or reporting this data for application with research or public health organizations. Examples of such applications include HealthMap and ProMED-mail, two web-based services that map and e-mail global cases of disease outbreak, respectively.
The United Nations Office for Outer Space Affairs often uses the broader term telehealth for applications that link communication and information technologies, such as telesurgery and telenursing, to healthcare administration.
Clinical applications
Provides real-time information about disease prevalence across populations to public health, physicians and citizens, globally.
Diminishes communicable disease risk by mobilizing local medical efforts to respond to disease outbreaks, especially in vulnerable populations.
Enhances the ability of managing the proliferation of communicable pathogens.
Can be used as a management tool in public health to discover, assess, and act on epidemiological data. For example, gathering and identifying disease-relevant risk factors helps to identify treatment interventions and implement the prevention strategies that could lessen the effects of the outbreak on the general population and improve clinical outcomes at the individual patient level.
Could prove useful to commerce, travelers, public health agencies and federal governments, and diplomatic efforts.
Public health agencies and federal governments might take advantage of Tele-epidemiology for predicting the propagation of communicable diseases.
Provides users and governments with information for early warning systems.
Non-clinical applications
Applications of tele-epidemiology are not being used frequently in clinical settings.
The use of space-based systems is important for research and public health efforts, though these activities are driven largely by secondary or tertiary organizations, not the public health agencies themselves.
Relevant data can be used for research and is widely accessible through existing internet outlets.
Data can be disseminated through internet reports of disease outbreak for real-time disease mapping for public use. The application of HealthMap and ProMED-mail demonstrate considerable global health utility and accessibility for users from both the public and private domains.
Internet-based platforms can be used by the general public to determine local and international disease outbreaks. Consumers can also contribute their own epidemiologically relevant data to these services.
Advantages
Space-based tele-epidemiological initiatives, using satellites, are able to gather environmental information relevant to tracking disease outbreaks. S2E, a French multidisciplinary consortium on spatial surveillance of epidemics, has used satellites to garner relevant information on vegetation, meteorology and hydrology. This information, in concert with clinical data from humans and animals, can be used to construct predictive mathematical models that may allow for the forecasting of disease outbreaks.
Web-based tele-epidemiological services are able to aggregate information from several disparate sources to provide information on disease surveillance and potential disease outbreaks. Both ProMED-mail and Healthmap collect information in several different languages to gather worldwide epidemiological information. These services are both free and allow both health care professionals and laypeople to access reliable disease outbreak information from around the world and in real-time.
Disadvantages
Space-based methodologies require investment of resources for the collection and management of epidemiological information; as such, these systems may not be affordable or technologically feasible for developing countries that need assistance tracking disease outbreaks. Further, the success of space-based methodologies is predicated on the collection of accurate ground-based data by qualified public health professionals. This may not be possible in developing countries because they lack basic laboratory and epidemiological resources
Web-based tele-epidemiological initiatives have a unique set of challenges that are different from those experienced by space-based methodologies. HealthMap, in an effort to provide comprehensive worldwide information, contains information from a variety of sources including eyewitness accounts, online news and validated official reports. As a result, the site necessarily relies upon third-party information, the accuracy of which it cannot guarantee.
See also
Landscape epidemiology
Satellite imagery
Telehealth
Telematics
Telemedicine
Telenursing
Teleophthalmology
References
Epidemiology
Human geography
Telehealth
Health informatics | Tele-epidemiology | [
"Biology",
"Environmental_science"
] | 1,041 | [
"Health informatics",
"Epidemiology",
"Environmental social science",
"Human geography",
"Medical technology"
] |
14,546,072 | https://en.wikipedia.org/wiki/Melting-point%20depression | This article deals with melting/freezing point depression due to very small particle size. For depression due to the mixture of another compound, see freezing-point depression.
Melting-point depression is the phenomenon of reduction of the melting point of a material with a reduction of its size. This phenomenon is very prominent in nanoscale materials, which melt at temperatures hundreds of degrees lower than bulk materials.
Introduction
The melting temperature of a bulk material is not dependent on its size. However, as the dimensions of a material decrease towards the atomic scale, the melting temperature scales with the material dimensions. The decrease in melting temperature can be on the order of tens to hundreds of degrees for metals with nanometer dimensions.
Melting-point depression is most evident in nanowires, nanotubes and nanoparticles, which all melt at lower temperatures than bulk amounts of the same material. Changes in melting point occur because nanoscale materials have a much larger surface-to-volume ratio than bulk materials, drastically altering their thermodynamic and thermal properties.
Melting-point depression was mostly studied for nanoparticles, owing to their ease of fabrication and theoretical modeling. The melting temperature of a nanoparticle decreases sharply as the particle reaches critical diameter, usually < 50 nm for common engineering metals.
Melting point depression is a very important issue for applications involving nanoparticles, as it decreases the functional range of the solid phase. Nanoparticles are currently used or proposed for prominent roles in catalyst, sensor, medicinal, optical, magnetic, thermal, electronic, and alternative energy applications. Nanoparticles must be in a solid state to function at elevated temperatures in several of these applications.
Measurement techniques
Two techniques allow measurement of the melting point of a nanoparticle. The electron beam of a transmission electron microscope (TEM) can be used to melt nanoparticles. The melting temperature is estimated from the beam intensity, while changes in the diffraction conditions indicate the phase transition from solid to liquid. This method allows direct viewing of nanoparticles as they melt, making it possible to test and characterize samples with a wider distribution of particle sizes. The TEM limits the pressure range at which melting point depression can be tested.
More recently, researchers developed nanocalorimeters that directly measure the enthalpy and melting temperature of nanoparticles. Nanocalorimeters provide the same data as bulk calorimeters; however, additional calculations must account for the presence of the substrate supporting the particles. A narrow size distribution of nanoparticles is required, since the procedure does not allow users to view the sample during the melting process. There is no way to characterize the exact size of melted particles during the experiment.
History
Melting point depression was predicted in 1909 by Pawlow. It was directly observed inside an electron microscope in the 1960s–70s for nanoparticles of Pb, Au, and In.
Physics
Nanoparticles have a much greater surface-to-volume ratio than bulk materials. The increased surface-to-volume ratio means surface atoms have a much greater effect on the chemical and physical properties of a nanoparticle. Surface atoms bind in the solid phase with less cohesive energy because they have fewer neighboring atoms in close proximity compared to atoms in the bulk of the solid. Each chemical bond an atom shares with a neighboring atom provides cohesive energy, so atoms with fewer bonds and neighboring atoms have lower cohesive energy. The cohesive energy of the nanoparticle has been theoretically calculated as a function of particle size according to Equation 1.
Where: D = nanoparticle size
d = atomic size
Eb = cohesive energy of bulk
As Equation 1 shows, the effective cohesive energy of a nanoparticle approaches that of the bulk material as the material extends beyond the atomic size range (D>>d).
Atoms located at or near the surface of the nanoparticle have reduced cohesive energy due to a reduced number of cohesive bonds. An atom experiences an attractive force with all nearby atoms according to the Lennard-Jones potential.
The cohesive energy of an atom is directly related to the thermal energy required to free the atom from the solid. According to Lindemann's criterion, the melting temperature of a material is proportional to its cohesive energy, av (TM=Cav). Since atoms near the surface have fewer bonds and reduced cohesive energy, they require less energy to free from the solid phase. Melting point depression of high surface-to-volume ratio materials results from this effect. For the same reason, surfaces of nanomaterials can melt at lower temperatures than the bulk material.
The theoretical size-dependent melting point of a material can be calculated through classical thermodynamic analysis. The result is the Gibbs–Thomson equation shown in Equation 2.
Where: TMB = bulk melting temperature
σsl = solid–liquid interface energy
Hf = Bulk heat of fusion
ρs = density of solid
d = particle diameter
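As an illustration of how strongly this scales with size, the sketch below evaluates a commonly used form of the Gibbs–Thomson relation, TM(d) = TMB(1 − 4σsl/(Hf ρs d)). Both the exact form assumed for Equation 2 and the material parameters (rough values for gold) are assumptions made for the example, not figures from the text.

```python
# Sketch: size-dependent melting point from an assumed Gibbs-Thomson form,
# T_M(d) = T_MB * (1 - 4*sigma_sl / (H_f * rho_s * d)).
# The parameters below are rough, illustrative values for gold.
T_MB = 1337.0        # bulk melting temperature, K
SIGMA_SL = 0.27      # solid-liquid interface energy, J/m^2
H_F = 6.37e4         # heat of fusion, J/kg
RHO_S = 1.93e4       # solid density, kg/m^3

def melting_point(diameter_nm: float) -> float:
    d = diameter_nm * 1e-9   # convert to metres
    return T_MB * (1.0 - 4.0 * SIGMA_SL / (H_F * RHO_S * d))

for d_nm in (100, 50, 20, 10, 5):
    print(f"d = {d_nm:3d} nm  ->  T_M = {melting_point(d_nm):6.0f} K")
# the depression grows roughly as 1/d, becoming large below ~20 nm
```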
Semiconductor/covalent nanoparticles
Equation 2 gives the general relation between the melting point of a metal nanoparticle and its diameter. However, recent work indicates the melting point of semiconductor and covalently bonded nanoparticles may have a different dependence on particle size. The covalent character of the bonds changes the melting physics of these materials. Researchers have demonstrated that Equation 3 more accurately models melting point depression in covalently bonded materials.
Where: TMB=bulk melting temperature
c=materials constant
d=particle diameter
Equation 3 indicates that melting point depression is less pronounced in covalent nanoparticles due to the quadratic nature of particle size dependence in the melting Equation.
Proposed mechanisms
The specific melting process for nanoparticles is currently unknown. The scientific community currently accepts several mechanisms as possible models of nanoparticle melting. Each of the corresponding models effectively matches experimental data for the melting of nanoparticles. Three of the four models detailed below derive the melting temperature in a similar form using different approaches based on classical thermodynamics.
Liquid drop model
The liquid drop model (LDM) assumes that an entire nanoparticle transitions from solid to liquid at a single temperature. This feature distinguishes the model, as the other models predict melting of the nanoparticle surface prior to the bulk atoms. If the LDM is true, a solid nanoparticle should function over a greater temperature range than other models predict. The LDM assumes that the surface atoms of a nanoparticle dominate the properties of all atoms in the particle. The cohesive energy of the particle is identical for all atoms in the nanoparticle.
The LDM represents the binding energy of nanoparticles as a function of the free energies of the volume and surface. Equation 4 gives the normalized, size-dependent melting temperature of a material according to the liquid-drop model.
Where: σsv=solid-vapor interface energy
σlv=liquid-vapor interface energy
Hf=Bulk heat of fusion
ρs=density of solid
ρl=density of liquid
d=diameter of nanoparticle
Liquid shell nucleation model
The liquid shell nucleation model (LSN) predicts that a surface layer of atoms melts prior to the bulk of the particle. The melting temperature of a nanoparticle is a function of its radius of curvature according to the LSN. Large nanoparticles melt at greater temperatures as a result of their larger radius of curvature.
The model calculates melting conditions as a function of two competing order parameters using Landau potentials. One order parameter represents a solid nanoparticle, while the other represents the liquid phase. Each of the order parameters is a function of particle radius.
The parabolic Landau potentials for the liquid and solid phases are calculated at a given temperature, with the lesser Landau potential assumed to be the equilibrium state at any point in the particle. In the temperature range of surface melting, the results show that the Landau curve of the ordered state is favored near the center of the particle while the Landau curve of the disordered state is smaller near the surface of the particle.
The Landau curves intersect at a specific radius from the center of the particle. The distinct intersection of the potentials means the LSN predicts a sharp, unmoving interface between the solid and liquid phases at a given temperature. The exact thickness of the liquid layer at a given temperature is the equilibrium point between the competing Landau potentials.
Equation 5 gives the condition at which an entire nanoparticle melts according to the LSN model.
Where: d0=atomic diameter
Liquid nucleation and growth model
The liquid nucleation and growth model (LNG) treats nanoparticle melting as a surface-initiated process. The surface melts initially, and the liquid-solid interface quickly advances through the entire nanoparticle. The LNG defines melting conditions through the Gibbs-Duhem relations, yielding a melting temperature function dependent on the interfacial energies between the solid and liquid phases, the volumes and surface areas of each phase, and the size of the nanoparticle. The model calculations show that the liquid phase forms at lower temperatures for smaller nanoparticles. Once the liquid phase forms, the free energy conditions quickly change and favor melting. Equation 6 gives the melting conditions for a spherical nanoparticle according to the LNG model.
Bond-order-length-strength (BOLS) model
The bond-order-length-strength (BOLS) model employs an atomistic approach to explain melting point depression. The model focuses on the cohesive energy of individual atoms rather than a classical thermodynamic approach. The BOLS model calculates the melting temperature for individual atoms from the sum of their cohesive bonds. As a result, the BOLS predicts the surface layers of a nanoparticle melt at lower temperatures than the bulk of the nanoparticle.
The BOLS mechanism states that if one bond breaks, the remaining neighbouring ones become shorter and stronger. The cohesive energy, or the sum of bond energies, of the less coordinated atoms determines the thermal stability, including melting, evaporation and other phase transitions. The lowered coordination number (CN) changes the equilibrium bond length between atoms near the surface of the nanoparticle. The bonds relax towards equilibrium lengths, increasing the cohesive energy per bond between atoms, independent of the exact form of the specific interatomic potential. However, the integrated cohesive energy for surface atoms is much lower than for bulk atoms due to the reduced coordination number and an overall decrease in cohesive energy.
Using a core–shell configuration, the melting point depression of nanoparticles is dominated by the outermost two atomic layers, yet atoms in the core interior retain their bulk nature.
The BOLS model and the core–shell structure have been applied to other size dependencies of nanostructures such as the mechanical strength, chemical and thermal stability, lattice dynamics (optical and acoustic phonons), photon emission and absorption, electronic core-level shift and work function modulation, magnetism at various temperatures, and dielectrics due to electron polarization, etc. Reproduction of experimental observations of the above-mentioned size dependency has been realized. Quantitative information, such as the energy level of an isolated atom and the vibration frequency of an individual dimer, has been obtained by matching the BOLS predictions to the measured size dependency.
Particle shape
Nanoparticle shape impacts the melting point of a nanoparticle. Facets, edges and deviations from a perfect sphere all change the magnitude of melting point depression. These shape changes affect the surface-to-volume ratio, which affects the cohesive energy and thermal properties of a nanostructure. Equation 7 gives a general shape-corrected formula for the theoretical melting point of a nanoparticle based on its size and shape.
Where: c=materials constant
z=shape parameter of particle
The shape parameter is 1 for a sphere and 3/2 for a very long wire, indicating that melting-point depression is suppressed in nanowires compared to nanoparticles. Past experimental data show that nanoscale tin platelets melt within a narrow range of 10 °C of the bulk melting temperature. The melting point depression of these platelets was suppressed compared to spherical tin nanoparticles.
Substrate
Several nanoparticle melting simulations theorize that the supporting substrate affects the extent of melting-point depression of a nanoparticle. These models account for energetic interactions between the substrate materials. A free nanoparticle, as many theoretical models assume, has a different melting temperature (usually lower) than a supported particle due to the absence of cohesive energy between the nanoparticle and substrate. However, measurement of the properties of a freestanding nanoparticle remains impossible, so the extent of the interactions cannot be verified through an experiment. Ultimately, substrates currently support nanoparticles for all nanoparticle applications, so substrate/nanoparticle interactions are always present and must impact melting point depression.
Solubility
Within the size–pressure approximation, which considers the stress induced by the surface tension and the curvature of the particle, it was shown that the size of the particle affects the composition and temperature of a eutectic point (Fe-C), the solubility of C in Fe and Fe:Mo nanoclusters.
Reduced solubility can affect the catalytic properties of nanoparticles. In fact, it has been shown that size-induced instability of Fe-C mixtures represents the thermodynamic limit for the thinnest nanotube that can be grown from Fe nanocatalysts.
See also
Freezing-point depression
Thermoporometry and cryoporometry
References
Phase transitions | Melting-point depression | [
"Physics",
"Chemistry"
] | 2,821 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
14,546,804 | https://en.wikipedia.org/wiki/Terrace%20ledge%20kink%20model | In chemistry, the terrace ledge kink (TLK) model, which is also referred to as the terrace step kink (TSK) model, describes the thermodynamics of crystal surface formation and transformation, as well as the energetics of surface defect formation. It is based upon the idea that the energy of an atom’s position on a crystal surface is determined by its bonding to neighboring atoms and that transitions simply involve the counting of broken and formed bonds. The TLK model can be applied to surface science topics such as crystal growth, surface diffusion, roughening, and vaporization.
History
The TLK model is credited as having originated from papers published in the 1920s by the German chemist Walther Kossel and the Bulgarian chemist Ivan Stranski.
Definitions
Depending on the position of an atom on a surface, it can be referred to by one of several names. Figure 1 illustrates the names for the atomic positions and point defects on a surface for a simple cubic lattice.
Figure 2 shows a scanning tunneling microscopy topographic image of a step edge that shows many of the features in Figure 1. Figure 3 shows a crystal surface with steps, kinks, adatoms, and vacancies in a closely packed crystalline material, which resembles the surface featured in Figure 2.
Although intuitively evident, it has only recently been explicitly recognized that the attachment of crystal building units to kink positions plays a pivotal role in perpetuating the crystal's symmetry. At a kink position, the attaching unit does not form all its potential bonds; rather, it forms only half the bonds in each given direction. These bonds are grouped in such a way in order to create a concave structure, which naturally accommodates the incoming building unit. This unique arrangement not only minimizes the system's free energy but also aligns the new unit with the symmetry of the underlying lattice. Consequently, kink positions serve as the primary sites where the crystal's structural order is reproduced and propagated, enabling the transition from microscopic nucleation to a macroscopic, ordered crystal form. This subtle yet fundamental mechanism distinguishes kink-mediated growth from other aggregation processes and underscores its critical role in maintaining the uniformity and symmetry of growing crystals.
Thermodynamics
The energy required to remove an atom from the surface depends on the number of bonds to other surface atoms which must be broken. For a simple cubic lattice in this model, each atom is treated as a cube and bonding occurs at each face, giving a coordination number of 6 nearest neighbors. Second-nearest neighbors in this cubic model are those that share an edge and third-nearest neighbors are those that share corners. The number of neighbors, second-nearest neighbors, and third-nearest neighbors for each of the different atom positions are given in Table 1.
Most crystals, however, are not arranged in a simple cubic lattice. The same ideas apply for other types of lattices where the coordination number is not six, but these are not as easy to visualize and work with in theory, so the remainder of the discussion will focus on simple cubic lattices. Table 2 indicates the number of neighboring atoms for a bulk atom in some other crystal lattices.
The kink site is of special importance when evaluating the thermodynamics of a variety of phenomena. This site is also referred to as the “half-crystal position” and energies are evaluated relative to this position for processes such as adsorption, surface diffusion, and sublimation. The term “half-crystal” comes from the fact that the kink site has half the number of neighboring atoms as an atom in the crystal bulk, regardless of the type of crystal lattice.
For example, the formation energy for an adatom—ignoring any crystal relaxation—is calculated by subtracting the energy of an adatom from the energy of the kink atom.
This can be understood as the breaking of all of the kink atom's bonds to remove the atom from the surface and then reforming the adatom interactions. This is equivalent to a kink atom diffusing away from the rest of the step to become a step adatom and then diffusing away from the adjacent step onto the terrace to become an adatom. In the case where all interactions are ignored except for those with nearest neighbors, the formation energy for an adatom, expressed in terms of the nearest-neighbor bond energy in the crystal, is given by Equation 2.
This can be extended to a variety of situations, such as the formation of an adatom-surface vacancy pair on a terrace, which would involve the removal of a surface atom from the crystal and placing it as an adatom on the terrace. This is described by Equation 3.
The energy of sublimation would simply be the energy required to remove an atom from the kink site. This can be envisioned as the surface being disassembled one terrace at a time by removing atoms from the edge of each step, which is the kink position. It has been demonstrated that the application of an external electric field will induce the formation of additional kinks in a surface, which then leads to a faster rate of evaporation from the surface.
Temperature dependence of defect coverage
The number of adatoms present on a surface is temperature dependent. The relationship between the surface adatom concentration and the temperature at equilibrium is described by equation 4, where n0 is the total number of surface sites per unit area:
This can be extended to find the equilibrium concentration of other types of surface point defects as well. To do so, the energy of the defect in question is simply substituted into the above equation in the place of the energy of adatom formation.
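A minimal sketch of this bond-counting picture is given below, assuming a simple cubic lattice with nearest-neighbor interactions only (the usual nearest-neighbor counts for that lattice) and a Boltzmann form n_ad/n0 = exp(−Ef/kBT) for the equilibrium adatom coverage. The 0.3 eV bond energy is an illustrative assumption, not a value from the text.

```python
import math

# Sketch: nearest-neighbour bond counting on a simple cubic surface and the
# resulting equilibrium adatom coverage, n_ad/n0 = exp(-E_f / (k_B * T)).
K_B_EV = 8.617e-5          # Boltzmann constant, eV/K
BOND_ENERGY_EV = 0.3       # assumed nearest-neighbour bond energy

NEAREST_NEIGHBOURS = {     # usual counts for a simple cubic lattice
    "adatom": 1,
    "step adatom": 2,
    "kink (half-crystal)": 3,
    "step-edge atom": 4,
    "surface atom": 5,
    "bulk atom": 6,
}

# Adatom formation energy: move an atom from a kink site to an adatom site,
# breaking (3 - 1) = 2 nearest-neighbour bonds.
e_form = (NEAREST_NEIGHBOURS["kink (half-crystal)"]
          - NEAREST_NEIGHBOURS["adatom"]) * BOND_ENERGY_EV

for temperature in (300, 600, 900, 1200):
    coverage = math.exp(-e_form / (K_B_EV * temperature))
    print(f"T = {temperature:4d} K: fractional adatom coverage ~ {coverage:.2e}")
```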
References
Thermodynamic models
Chemical thermodynamics | Terrace ledge kink model | [
"Physics",
"Chemistry"
] | 1,167 | [
"Thermodynamic models",
"Chemical thermodynamics",
"Thermodynamics"
] |
14,547,147 | https://en.wikipedia.org/wiki/Architectural%20engineer%20%28PE%29 | Architectural Engineer (PE) is a professional engineering designation in the United States. The architectural engineer applies the knowledge and skills of broader engineering disciplines to the design, construction, operation, maintenance, and renovation of buildings and their component systems while paying careful attention to their effects on the surrounding environment.
With the establishment of a specific "Architectural Engineering" NCEES professional engineering registration examination in the 1990s and first offering in April 2003, architectural engineering is now recognized as a distinct engineering discipline in the United States.
Note that in the United States, architectural engineering is not to be confused with "architectural engineering technology", which is a different field. In the United States, architectural engineering technologists tend to be engineering technicians who use CAD technology as drafters or technical assistants and who do not hold a license to practice either architecture or engineering; they are usually hired by larger construction firms or developers who prefer to cut out architectural design and maintain high costs of construction for standard processes and common building materials. In Europe, Canada, South Africa and other countries, by contrast, architectural technologists have a role similar to that of architects and architectural engineers.
Areas of focus
Architecture (if licensed as an Architect)
Structural engineering
Construction engineering
Construction management
Project management
Green building
Heating, ventilation and air conditioning (HVAC)
Plumbing and piping (hydronics)
Energy management
Fire protection engineering
Building power systems
Lighting
Building transportation systems
Acoustics, noise & vibration control
A common combined specialization is Mechanical, Electrical and Plumbing, better known by its abbreviation MEP. An MEP design engineer has experience in HVAC, lighting/electrical, and plumbing systems' analysis and design.
Some topics of special interest
Building construction
Building Information Modeling (BIM)
Efficient energy use, Energy conservation or Energy demand management
Renewable energy
Solar energy
Green buildings
Intelligent buildings
Autonomous buildings
Indoor air quality
Thermal comfort
Educational institutions offering bachelor's degrees in architectural engineering
Programs accredited by the Engineering Accreditation Commission (EAC) of ABET and that are members of Architectural Engineering Institute (AEI) are denoted below.
California Polytechnic State University, San Luis Obispo, California (ABET, AEI)
Drexel University, Philadelphia, Pennsylvania (ABET, AEI)
Illinois Institute of Technology, Chicago, Illinois (ABET, AEI)
Kansas State University, Manhattan, Kansas (ABET, AEI)
Lawrence Technological University, Southfield, Michigan (ABET)
Milwaukee School of Engineering, Milwaukee, Wisconsin (ABET, AEI)
North Carolina A&T State University, Greensboro, North Carolina (ABET, AEI)
Oklahoma State University, Stillwater, Oklahoma (ABET, AEI)
Oregon State University, Corvallis, Oregon
Penn State University, State College, Pennsylvania (ABET, AEI)
Tennessee State University, Nashville, Tennessee (ABET, AEI)
Texas A&M University, College Station, Texas
Texas A&M University, Kingsville, Kingsville, Texas (ABET, AEI)
University of Alabama, Tuscaloosa, Alabama (ABET)
University of Arizona, Tucson, Arizona
University of Arkansas at Little Rock, Little Rock, Arkansas (ABET)
University of Cincinnati, Cincinnati, Ohio (ABET)
University of Colorado at Boulder, Boulder, Colorado (ABET, AEI)
University of Detroit Mercy, Detroit, Michigan (ABET)
University of Kansas, Lawrence, Kansas (ABET, AEI)
University of Miami, Miami, Florida (ABET, AEI)
Missouri University of Science and Technology, Rolla, Missouri (ABET, AEI)
University of Nebraska at Omaha, Omaha, Nebraska (ABET, AEI)
University of Oklahoma, Norman, Oklahoma (ABET)
University of Texas at Arlington, Arlington, Texas
University of Texas at Austin, Austin, Texas (ABET, AEI)
University of Wyoming, Laramie, Wyoming (ABET, AEI)
Worcester Polytechnic Institute, Worcester, Massachusetts (ABET)
See also
Accreditation Board for Engineering and Technology
American Society of Heating, Refrigerating and Air-Conditioning Engineers
American Society of Plumbing Engineers
Architectural Engineering Institute
Architectural technologist
Associated General Contractors of America
Illuminating Engineering Society of North America
National Society of Professional Engineers
Society of Fire Protection Engineers
Structural engineering
U.S. Green Building Council
References
Building engineering
Engineering occupations
Engineering | Architectural engineer (PE) | [
"Engineering"
] | 856 | [
"Building engineering",
"Architecture occupations",
"Civil engineering",
"Architecture"
] |
14,547,183 | https://en.wikipedia.org/wiki/Ribosomal%20frameshift | Ribosomal frameshifting, also known as translational frameshifting or translational recoding, is a biological phenomenon that occurs during translation that results in the production of multiple, unique proteins from a single mRNA. The process can be programmed by the nucleotide sequence of the mRNA and is sometimes affected by the secondary, 3-dimensional mRNA structure. It has been described mainly in viruses (especially retroviruses), retrotransposons and bacterial insertion elements, and also in some cellular genes.
Small molecules, proteins, and nucleic acids have also been found to stimulate levels of frameshifting. In December 2023, it was reported that the in vitro-transcribed (IVT) mRNA of the BNT162b2 (Pfizer–BioNTech) anti-COVID-19 vaccine can cause ribosomal frameshifting.
Process overview
Proteins are translated by reading tri-nucleotides on the mRNA strand, also known as codons, from one end of the mRNA to the other (from the 5' to the 3' end) starting with the amino acid methionine as the start (initiation) codon AUG. Each codon is translated into a single amino acid. The code itself is considered degenerate, meaning that a particular amino acid can be specified by more than one codon. However, a shift of any number of nucleotides that is not divisible by 3 in the reading frame will cause subsequent codons to be read differently. This effectively changes the ribosomal reading frame.
Sentence example
In this example, the following sentence of three-letter words makes sense when read from the beginning:
|Start|THE CAT AND THE MAN ARE FAT ...
|Start|123 123 123 123 123 123 123 ...
However, if the reading frame is shifted by one letter to between the T and H of the first word (effectively a +1 frameshift when considering the 0 position to be the initial position of T),
T|Start|HEC ATA NDT HEM ANA REF AT...
-|Start|123 123 123 123 123 123 12...
then the sentence reads differently, making no sense.
DNA example
In this example, the following sequence is a region of the human mitochondrial genome with the two overlapping genes MT-ATP8 and MT-ATP6.
When read from the beginning, these codons make sense to a ribosome and can be translated into amino acids (AA) under the vertebrate mitochondrial code:
|Start|AAC GAA AAT CTG TTC GCT TCA ...
|Start|123 123 123 123 123 123 123 ...
| AA | N E N L F A S ...
However, let's change the reading frame by starting one nucleotide downstream (effectively a "+1 frameshift" when considering the 0 position to be the initial position of A):
A|Start|ACG AAA ATC TGT TCG CTT CA...
-|Start|123 123 123 123 123 123 12...
| AA | T K I C S L ...
Because of this +1 frameshifting, the DNA sequence is read differently. The different codon reading frame therefore yields different amino acids.
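The regrouping of codons caused by a frameshift can be illustrated with a short script. The sketch below splits the example sequence into codons at two different offsets; the lookup table is only a small excerpt of the vertebrate mitochondrial code, limited to the codons that actually appear above.

```python
# Sketch: how a +1 shift regroups the same nucleotides into different codons.
# The lookup table covers only the codons appearing in the example above.
MITO_CODE_EXCERPT = {
    "AAC": "N", "GAA": "E", "AAT": "N", "CTG": "L", "TTC": "F", "GCT": "A", "TCA": "S",
    "ACG": "T", "AAA": "K", "ATC": "I", "TGT": "C", "TCG": "S", "CTT": "L",
}

SEQUENCE = "AACGAAAATCTGTTCGCTTCA"

def codons(seq: str, offset: int):
    """Split a sequence into 3-letter codons starting at the given offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

for offset in (0, 1):
    frame = codons(SEQUENCE, offset)
    amino_acids = " ".join(MITO_CODE_EXCERPT.get(c, "?") for c in frame)
    print(f"frame +{offset}: {' '.join(frame)}")
    print(f"          {amino_acids}")
# frame +0 yields N E N L F A S; frame +1 yields T K I C S L, as above
```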
Effect
In the case of a translating ribosome, a frameshift can result either in a nonsense mutation (a premature stop codon after the frameshift) or in the creation of a completely new protein after the frameshift. In the case where a frameshift results in nonsense, the nonsense-mediated mRNA decay (NMD) pathway may destroy the mRNA transcript, so frameshifting would serve as a method of regulating the expression level of the associated gene.
If a novel or off-target protein is produced, it can trigger other unknown consequences.
Function in viruses and eukaryotes
In viruses this phenomenon may be programmed to occur at particular sites and allows the virus to encode multiple types of proteins from the same mRNA. Notable examples include HIV-1 (human immunodeficiency virus), RSV (Rous sarcoma virus) and the influenza virus (flu), which all rely on frameshifting to create a proper ratio of 0-frame (normal translation) and "trans-frame" (encoded by frameshifted sequence) proteins. Its use in viruses is primarily for compacting more genetic information into a shorter amount of genetic material.
In eukaryotes it appears to play a role in regulating gene expression levels by generating premature stops and producing nonfunctional transcripts.
Types of frameshifting
The most common type of frameshifting is −1 frameshifting or programmed −1 ribosomal frameshifting (−1 PRF). Other, rarer types of frameshifting include +1 and −2 frameshifting. −1 and +1 frameshifting are believed to be controlled by different mechanisms, which are discussed below. Both mechanisms are kinetically driven.
Programmed −1 ribosomal frameshifting
In −1 frameshifting, the ribosome slips back one nucleotide and continues translation in the −1 frame. There are typically three elements that comprise a −1 frameshift signal: a slippery sequence, a spacer region, and an RNA secondary structure. The slippery sequence fits a X_XXY_YYH motif, where XXX is any three identical nucleotides (though some exceptions occur), YYY typically represents UUU or AAA, and H is A, C or U. Because the structure of this motif contains 2 adjacent 3-nucleotide repeats it is believed that −1 frameshifting is described by a tandem slippage model, in which the ribosomal P-site tRNA anticodon re-pairs from XXY to XXX and the A-site anticodon re-pairs from YYH to YYY simultaneously. These new pairings are identical to the 0-frame pairings except at their third positions. This difference does not significantly disfavor anticodon binding because the third nucleotide in a codon, known as the wobble position, has weaker tRNA anticodon binding specificity than the first and second nucleotides. In this model, the motif structure is explained by the fact that the first and second positions of the anticodons must be able to pair perfectly in both the 0 and −1 frames. Therefore, nucleotides 2 and 1 must be identical, and nucleotides 3 and 2 must also be identical, leading to a required sequence of 3 identical nucleotides for each tRNA that slips.
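A simple way to scan a transcript for candidate heptamers matching this motif is a regular expression, as in the sketch below. The pattern encodes only the X_XXY_YYH motif described above; a genuine −1 frameshift signal also requires the spacer and downstream RNA structure, so matches are no more than candidates. The example sequence is made up for demonstration.

```python
import re

# Sketch: scanning an mRNA sequence for candidate X_XXY_YYH slippery sites,
# with XXX three identical nucleotides, YYY either UUU or AAA, and H one of
# A, C or U. The lookahead allows overlapping candidates to be reported.
SLIPPERY_PATTERN = re.compile(r"(?=([ACGU])\1\1(UUU|AAA)[ACU])")

def find_slippery_sites(mrna: str):
    mrna = mrna.upper().replace("T", "U")
    return [(m.start(), mrna[m.start():m.start() + 7])
            for m in SLIPPERY_PATTERN.finditer(mrna)]

# The HIV-1 slippery heptamer U UUU UUA, embedded here in a made-up flanking
# sequence purely for demonstration.
example = "GCAAUUUUUUAGGGAAGAUCUGGCC"
print(find_slippery_sites(example))   # [(4, 'UUUUUUA')]
```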
+1 ribosomal frameshifting
The slippery sequence for a +1 frameshift signal does not have the same motif, and instead appears to function by pausing the ribosome at a sequence encoding a rare amino acid. Ribosomes do not translate proteins at a steady rate, regardless of the sequence. Certain codons take longer to translate, because the tRNAs matching different codons are not equally abundant in the cytosol. Due to this lag, small sections of codon sequence can control the rate of ribosomal frameshifting. Specifically, the ribosome must pause to wait for the arrival of a rare tRNA, and this increases the kinetic favorability of the ribosome and its associated tRNA slipping into the new frame. In this model, the change in reading frame is caused by a single tRNA slip rather than two.
Controlling mechanisms
Ribosomal frameshifting may be controlled by mechanisms found in the mRNA sequence (cis-acting). This generally refers to a slippery sequence, an RNA secondary structure, or both. A −1 frameshift signal consists of both elements separated by a spacer region typically 5–9 nucleotides long. Frameshifting may also be induced by other molecules which interact with the ribosome or the mRNA (trans-acting).
Frameshift signal elements
Slippery sequence
Slippery sequences can potentially make the reading ribosome "slip" and skip a number of nucleotides (usually only 1) and read a completely different frame thereafter. In programmed −1 ribosomal frameshifting, the slippery sequence fits a X_XXY_YYH motif, where XXX is any three identical nucleotides (though some exceptions occur), YYY typically represents UUU or AAA, and H is A, C or U. In the case of +1 frameshifting, the slippery sequence contains codons for which the corresponding tRNA is more rare, and the frameshift is favored because the codon in the new frame has a more common associated tRNA. One example of a slippery sequence is the polyA on mRNA, which is known to induce ribosome slippage even in the absence of any other elements.
RNA secondary structure
Efficient ribosomal frameshifting generally requires the presence of an RNA secondary structure to enhance the effects of the slippery sequence. The RNA structure (which can be a stem-loop or pseudoknot) is thought to pause the ribosome on the slippery site during translation, forcing it to relocate and continue replication from the −1 position. It is believed that this occurs because the structure physically blocks movement of the ribosome by becoming stuck in the ribosome mRNA tunnel. This model is supported by the fact that strength of the pseudoknot has been positively correlated with the level of frameshifting for associated mRNA.
Below are examples of predicted secondary structures for frameshift elements shown to stimulate frameshifting in a variety of organisms. The majority of the structures shown are stem-loops, with the exception of the ALIL (apical loop-internal loop) pseudoknot structure. In these images, the larger and incomplete circles of mRNA represent linear regions. The secondary "stem-loop" structures, where "stems" are formed by a region of mRNA base pairing with another region on the same strand, are shown protruding from the linear DNA. The linear region of the HIV ribosomal frameshift signal contains a highly conserved UUU UUU A slippery sequence; many of the other predicted structures contain candidates for slippery sequences as well.
The mRNA sequences in the images can be read according to a set of guidelines. While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity which are used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry (IUPAC) are as follows:
These symbols are also valid for RNA, except with U (uracil) replacing T (thymine).
Trans-acting elements
Small molecules, proteins, and nucleic acids have been found to stimulate levels of frameshifting. For example, the mechanism of a negative feedback loop in the polyamine synthesis pathway is based on polyamine levels stimulating an increase in +1 frameshifts, which results in production of an inhibitory enzyme. Certain proteins which are needed for codon recognition or which bind directly to the mRNA sequence have also been shown to modulate frameshifting levels. MicroRNA (miRNA) molecules may hybridize to an RNA secondary structure and affect its strength.
See also
Antizyme RNA frameshifting stimulation element
Coronavirus frameshifting stimulation element
DnaX ribosomal frameshifting element
Frameshift mutation
HIV ribosomal frameshift signal
Insertion sequence IS1222 ribosomal frameshifting element
Recode database
Ribosomal pause
Slippery sequence
References
External links
Wise2 — aligns a protein against a DNA sequence allowing frameshifts and introns
FastY — compare a DNA sequence to a protein sequence database, allowing gaps and frameshifts
Path — tool that compares two frameshift proteins (back-translation principle)
Recode2 — Database of recoded genes, including those that require programmed Translational frameshift.
RNA
Gene expression
Cis-regulatory RNA elements
Genetics | Ribosomal frameshift | [
"Chemistry",
"Biology"
] | 2,468 | [
"Genetics",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,548,330 | https://en.wikipedia.org/wiki/Dark%20star%20%28dark%20matter%29 | A dark star is a hypothetical type of star that may have existed early in the universe before conventional stars were able to form and thrive.
Properties
The dark stars would be composed mostly of normal matter, like modern stars, but a high concentration of neutralino dark matter present within them would generate heat via annihilation reactions between the dark-matter particles. This heat would prevent such stars from collapsing into the relatively compact and dense sizes of modern stars and therefore prevent nuclear fusion among the 'normal' matter atoms from being initiated.
Under this model, a dark star is predicted to be an enormous cloud of molecular hydrogen and helium ranging between 1 and 960 astronomical units (AU) in radius; its surface temperature would be around 10,000 K. It is expected that they would grow over time and reach very large masses, up until the point where they exhaust the dark matter needed to sustain them, after which they would collapse.
In the unlikely event that dark stars have endured to the modern era, they could be detectable by their emissions of gamma rays, neutrinos, and antimatter and would be associated with clouds of cold molecular hydrogen gas that normally would not harbor such energetic, extreme, and rare particles.
Possible dark star candidates
In April 2023, a study investigated four extremely redshifted objects discovered by the James Webb Space Telescope. Their study suggested that three of these four, namely JADES-GS-z13-0, JADES-GS-z12-0, and JADES-GS-z11-0, are consistent with being point sources, and further suggested that the only point sources which could exist in this time and be bright enough to be observed at these phenomenal distances and redshifts (z = 10–13) were supermassive dark stars in the early universe, powered by dark matter annihilation. Their spectral analysis of the objects suggested that they were between 500,000 and 1 million solar masses, as well as having a luminosity of billions of Suns; they would also likely be huge, possibly with radii surpassing 10,000 solar radii, far exceeding the size of the largest modern stars.
See also
Population III star
Supermassive star
Quasi-star
Primordial black hole
References
Further reading
External links
Star types
Star
Dark concepts in astrophysics
Hypothetical stars
Black holes | Dark star (dark matter) | [
"Physics",
"Astronomy"
] | 479 | [
"Dark matter",
"Black holes",
"Unsolved problems in astronomy",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astrophysics",
"Dark concepts in astrophysics",
"Astronomical objects",
"Density",
"Astronomical classification systems",
"Ex... |
17,339,449 | https://en.wikipedia.org/wiki/RNA%20immunoprecipitation%20chip | RIP-chip (RNA immunoprecipitation chip) is a molecular biology technique which combines RNA immunoprecipitation with a microarray. The purpose of this technique is to identify which RNA sequences interact with a particular RNA binding protein of interest in vivo. It can also be used to determine relative levels of gene expression, to identify subsets of RNAs which may be co-regulated, or to identify RNAs that may have related functions. This technique provides insight into the post-transcriptional gene regulation which occurs between RNA and RNA binding proteins.
Procedural Overview
Collect and lyse the cells of interest.
Isolate all RNA fragments and the proteins bound to them from the solution.
Immunoprecipitate the protein of interest. The solution containing the protein-bound RNAs is washed over beads which have been conjugated to antibodies. These antibodies are designed to bind to the protein of interest. They pull the protein (and any RNA fragments that are specifically bound to it) out of the solution which contains the rest of the cell contents.
Dissociate the protein-bound RNA from the antibody-bead complex. Then, use a centrifuge to separate the protein-bound RNA from the heavier antibody-bead complexes, keeping the protein-bound RNA and discarding the beads.
Dissociate the RNA from the protein of interest.
Isolate the RNA fragments from the protein using a centrifuge.
Use Reverse Transcription PCR to convert the RNA fragments into cDNA (DNA that is complementary to the RNA fragments).
Fluorescently label these cDNA fragments.
Prepare the gene chip. This is a small chip that has DNA sequences bound to it in known locations. These DNA sequences correspond to all of the known genes in the genome of the organism that the researcher is working with (or a subset of genes that the researcher is interested in). The cDNA sequences that have been collected will be complementary to some of these DNA sequences, as the cDNAs represent a subset of the RNAs transcribed from the genome.
Allow the cDNA fragments to competitively hybridize to the DNA sequences bound to the chip.
Detection of the fluorescent signal from the cDNA bound to the chip tells researchers which gene(s) on the chip were hybridized to the cDNA.
The genes fluorescently identified by the chip analysis are the genes whose RNA interacts with the original protein of interest. The strength of the fluorescent signal for a particular gene can indicate how much of that particular RNA was present in the original sample, which indicates the expression level of that gene.
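As a minimal sketch of this readout step, the following Python snippet (with hypothetical probe intensities and an arbitrary threshold; real analyses use normalization and statistics) calls genes whose signal is well above background as candidate targets of the immunoprecipitated protein.

```python
# Sketch of the RIP-chip readout step with hypothetical data: probes whose
# fluorescence exceeds a fold-over-background cutoff are called as candidates.
signals = {            # fluorescence intensity per gene probe (arbitrary units)
    "GENE_A": 1520.0,
    "GENE_B": 87.0,
    "GENE_C": 640.0,
    "GENE_D": 95.0,
}
background = 100.0     # e.g. median intensity of negative-control probes
fold_cutoff = 3.0      # enrichment threshold; a judgment call in practice

candidates = {gene: signal / background
              for gene, signal in signals.items()
              if signal / background >= fold_cutoff}
for gene, fold in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {fold:.1f}-fold over background")
```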
Development and Similar Techniques
Previous techniques aiming to understand protein-RNA interactions included RNA electrophoretic mobility shift assays and UV-crosslinking followed by RT-PCR; however, such selective analysis cannot be used when the bound RNAs are not yet known. To resolve this, RIP-chip combines RNA immunoprecipitation, to isolate RNA molecules interacting with specific proteins, with a microarray, which can elucidate the identity of the RNAs participating in this interaction. Alternatives to RIP-chip include:
RIP-seq: Involves sequencing the RNAs that were pulled down using high-throughput sequencing rather than analyzing them with a microarray. Zhao et al. (2010) combined the RNA immunoprecipitation procedure with RNA sequencing. Using specific antibodies (α-Ezh2), they immunoprecipitated nuclear RNA isolated from mouse ES cells, and subsequently sequenced the pulled-down RNA using the next-generation sequencing platform Illumina.
CLIP: The RNA binding protein is cross-linked to the RNA by UV light prior to lysis, which is followed by RNA fragmentation, immunoprecipitation, high-salt washes, SDS-PAGE, membrane transfer, proteinase digestion, cDNA library preparation and sequencing in order to identify the direct RNA binding sites. CLIP was first combined with high-throughput sequencing in HITS-CLIP to determine Nova–RNA binding sites in the mouse brain, and in iCLIP, which enabled amplification of truncated cDNAs and introduced the use of UMIs.
ChIP-on-chip: A similar technique which detects the binding of proteins to genomic DNA rather than RNA.
References
Genetics techniques
Microarrays
RNA
Protein methods | RNA immunoprecipitation chip | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 888 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Protein methods",
"Protein biochemistry",
"Genetic engineering",
"Bioinformatics",
"Molecular biology techniques"
] |
17,347,452 | https://en.wikipedia.org/wiki/Green%E2%80%93Davies%E2%80%93Mingos%20rules | In organometallic chemistry, the Green–Davies–Mingos rules predict the regiochemistry for nucleophilic addition to 18-electron metal complexes containing multiple unsaturated ligands. The rules were published in 1978 by organometallic chemists Stephen G. Davies, Malcolm Green, and Michael Mingos. They describe how and where unsaturated hydrocarbon generally become more susceptibile to nucleophilic attack upon complexation.
Rule 1
Nucleophilic attack is preferred on even-numbered polyenes (even hapticity).
Rule 2
Nucleophiles preferentially add to acyclic polyenes rather than cyclic polyenes.
Rule 3
Nucleophiles preferentially add to even-hapticity polyene ligands at a terminus.
Nucleophiles add to odd-hapticity acyclic polyene ligands at a terminal position if the metal is highly electrophilic, otherwise they add at an internal site.
Simplified: even before odd and open before closed
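For readers who find it easier to follow a decision procedure, the sketch below encodes the three rules as stated above; the ligand model (hapticity and an open/closed flag) is a deliberate simplification introduced here and is not part of the original publication.

```python
# Schematic encoding of the Green–Davies–Mingos rules as stated above; the
# ligand description (hapticity, open vs closed) is a simplified assumption.
from dataclasses import dataclass

@dataclass
class PolyeneLigand:
    name: str
    hapticity: int      # number of carbons bound to the metal
    cyclic: bool        # closed (cyclic) vs open (acyclic) pi system

def preferred_ligand(ligands):
    """Rules 1 and 2: attack even hapticity before odd, open before closed."""
    return min(ligands, key=lambda L: (L.hapticity % 2, L.cyclic))

def attack_position(ligand, metal_strongly_electrophilic=False):
    """Rule 3: even polyenes at a terminus; odd open polyenes at a terminus
    only for strongly electrophilic metals, otherwise internally."""
    if ligand.hapticity % 2 == 0:
        return "terminal"
    if not ligand.cyclic:
        return "terminal" if metal_strongly_electrophilic else "internal"
    return "not covered by rule 3 as stated"

butadiene = PolyeneLigand("eta4-butadiene", 4, cyclic=False)
cp = PolyeneLigand("eta5-cyclopentadienyl", 5, cyclic=True)

target = preferred_ligand([butadiene, cp])
print(target.name, "->", attack_position(target))   # eta4-butadiene -> terminal
```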
The following is a diagram showing the reactivity trends of even/odd hapticity and open/closed π-ligands.
The metal center is electron withdrawing. This effect is enhanced if the metal is also attached to a carbonyl. Electron poor metals do not back bond well to the carbonyl. The more electron withdrawing the metal is, the more triple bond character the CO ligand has. This gives the ligand a higher force constant. The force constant found for a ligated carbonyl can therefore be taken as representative of the activation a π ligand would experience if it replaced the CO ligand in the same complex.
Nucleophilic addition does not occur if kCO* (the effective force constant for the CO ligand) is below a threshold value.
The following figure shows a ligated metal attached to a carbonyl group. This group has a partial positive charge and therefore is susceptible to nucleophilic attack. If the ligand represented by Ln were a π-ligand, it would be activated toward nucleophilic attack as well.
Incoming nucleophilic attack happens at one of the termini of the π-system in the figure below:
In this example the ring system can be thought of as analogous to 1,3-butadiene. Following the Green–Davies–Mingos rules, since butadiene is an open π-ligand of even hapticity, nucleophilic attack will occur at one of the terminal positions of the π-system. This occurs because the LUMO of butadiene has larger lobes on the ends rather than the internal positions.
Effects of types of ligands on regiochemistry of attack
Nucleophilic attack occurs at the terminal position of allyl ligands when a π-accepting ligand is present.
If σ-donating ligands are present, they push electron density onto the allyl ligand and attack occurs at the internal position.
Effects of asymmetrical ligands
When asymmetrical allyl ligands are present attack occurs at the more substituted position.
In this case the attack will occur on the carbon with both R groups attached to it since that is the more substituted position.
Uses in synthesis
Nucleophilic addition to π ligands can be used in synthesis. One example is the preparation of cyclic metal compounds: nucleophiles add to the central carbon of the π ligand, producing a metallacyclobutane.
Internal attack
References
Reaction mechanisms | Green–Davies–Mingos rules | [
"Chemistry"
] | 709 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
331,731 | https://en.wikipedia.org/wiki/Plaster | Plaster is a building material used for the protective or decorative coating of walls and ceilings and for moulding and casting decorative elements. In English, "plaster" usually means a material used for the interiors of buildings, while "render" commonly refers to external applications. The term stucco refers to plasterwork that is worked in some way to produce relief decoration, rather than flat surfaces.
The most common types of plaster mainly contain either gypsum, lime, or cement, but all work in a similar way. The plaster is manufactured as a dry powder and is mixed with water to form a stiff but workable paste immediately before it is applied to the surface. The reaction with water liberates heat through crystallization and the hydrated plaster then hardens.
Plaster can be relatively easily worked with metal tools and sandpaper and can be moulded, either on site or in advance, and worked pieces can be put in place with adhesive. Plaster is suitable for finishing rather than load-bearing, and when thickly applied for decoration may require a hidden supporting framework.
Forms of plaster have several other uses. In medicine, plaster orthopedic casts are still often used for supporting set broken bones. In dentistry, plaster is used to make dental models by pouring the material into dental impressions. Various types of models and moulds are made with plaster. In art, lime plaster is the traditional matrix for fresco painting; the pigments are applied to a thin wet top layer of plaster and fuse with it so that the painting is actually in coloured plaster. In the ancient world, as well as the sort of ornamental designs in plaster relief that are still used, plaster was also widely used to create large figurative reliefs for walls, though few of these have survived.
History
Plaster was first used as a building material and for decoration in the Middle East at least 7,000 years ago. In Egypt, gypsum was burned in open fires, crushed into powder, and mixed with water to create plaster, used as a mortar between the blocks of pyramids and to provide a smooth wall facing. In Jericho, a cult arose where human skulls were decorated with plaster and painted to appear lifelike. The Romans brought plaster-work techniques to Europe.
Types
Clay plaster
Clay plaster is a mixture of clay, sand and water often with the addition of plant fibers for tensile strength over wood lath.
Clay plaster has been used around the world at least since antiquity. Settlers in the American colonies used clay plaster on the interiors of their houses: "Interior plastering in the form of clay antedated even the building of houses of frame, and must have been visible in the inside of wattle filling in those earliest frame houses in which … wainscot had not been indulged. Clay continued in use long after the adoption of laths and brick filling for the frame." Where lime was not easily accessible it was rationed and usually substituted with clay as a binder. In Martin E. Weaver's seminal work he says, "Mud plaster consists of clay or earth which is mixed with water to give a 'plastic' or workable consistency. If the clay mixture is too plastic it will shrink, crack and distort on drying. Sand, fine gravels and fibres were added to reduce the concentrations of fine clay particles which were the cause of the excessive shrinkage." Manure was often added for its fibre content. In some building techniques straw or grass was used as reinforcement.
In the earliest European settlers' plasterwork, a mud plaster was used. McKee wrote of a circa 1675 Massachusetts contract that specified that the plasterer "Is to lath and siele the four rooms of the house betwixt the joists overhead with a coat of lime and haire upon the clay; also to fill the gable ends of the house with ricks and plaister them with clay. 5. To lath and plaster partitions of the house with clay and lime, and to fill, lath, and plaister them with lime and haire besides; and to siele and lath them overhead with lime; also to fill, lath, and plaster the kitchen up to the wall plate on every side. 6. The said Daniel Andrews is to find lime, bricks, clay, stone, haire, together with laborers and workmen." Records of the New Haven colony in 1641 mention clay and hay as well as lime and hair. In German houses of Pennsylvania the use of clay persisted.
Old Economy Village is one such German settlement. The early nineteenth-century utopian village in present-day Ambridge, Pennsylvania, used clay plaster substrate exclusively in the brick and wood frame high architecture of the Feast Hall, Great House and other large and commercial structures, as well as in the brick, frame and log dwellings of the society members. The use of clay in plaster and in laying brickwork appears to have been a common practice at the time, not just in the construction of Economy Village, which was founded in 1824. Specifications for the construction of "Lock keepers houses on the Chesapeake and Ohio Canal", written about 1828, require stone walls to be laid with clay mortar, "excepting 3 inches on the outside of the walls … which (are) to be good lime mortar and well pointed." The choice of clay was due to its low cost, but also its availability. At Economy, root cellars dug under the houses yielded clay and sand (stone), or the nearby Ohio river yielded washed sand from its sand bars; lime outcroppings and oyster shell supplied the lime kiln.
The surrounding forests of the new village of Economy provided straight grain, old-growth oak trees for lath. Hand split lath starts with a log of straight grained wood of the required length. The log is split into quarters and then smaller and smaller bolts with wedges and a sledge. When small enough, a froe and mallet were used to split away narrow strips of lath. Farm animals provided hair and manure for the float coat of plaster. Fields of wheat and grains provided straw and hay to reinforce the clay plaster. But there was no uniformity in clay plaster recipes.
Manure provides fiber for tensile strength as well as protein adhesive. Unlike casein used with lime plaster, hydrogen bonds of manure proteins are weakened by moisture. With braced timber-framed structures clay plaster was used on interior walls and ceilings as well as exterior walls as the wall cavity and exterior cladding isolated the clay plaster from moisture penetration. Application of clay plaster in brick structures risked water penetration from failed mortar joints on the exterior brick walls. In Economy Village, the rear and middle wythes of brick dwelling walls are laid in a clay and sand mortar with the front wythe bedded in a lime and sand mortar to provide a weather proof seal to protect from water penetration. This allowed a rendering of clay plaster and setting coat of thin lime and fine sand on exterior-walled rooms.
Split lath was nailed with square cut lath nails, one into each framing member. With hand split lath the plasterer had the luxury of making lath to fit the cavity being plastered. Lengths of lath two to six foot are not uncommon at Economy Village. Hand split lath is not uniform like sawn lath. The straightness or waviness of the grain affected the thickness or width of each lath, and thus the spacing of the lath. The clay plaster rough coat varied to cover the irregular lath. Window and door trim as well as the mudboard (baseboard) acted as screeds. With the variation of the lath thickness and use of coarse straw and manure, the clay coat of plaster was thick in comparison to later lime-only and gypsum plasters. In Economy Village, the lime top coats are thin veneers often an eighth inch or less attesting to the scarcity of limestone supplies there.
Clay plasters with their lack of tensile and compressive strength fell out of favor as industrial mining and technology advances in kiln production led to the exclusive use of lime and then gypsum in plaster applications. However, clay plasters still exist after hundreds of years clinging to split lath on rusty square nails. The wall variations and roughness reveal a hand-made and pleasing textured alternative to machine-made modern substrate finishes. But clay plaster finishes are rare and fleeting. According to Martin Weaver, "Many of North America's historic building interiors … are all too often … one of the first things to disappear in the frenzy of demolition of interiors which has unfortunately come to be a common companion to 'heritage preservation' in the guise of building rehabilitation."
Gypsum plaster (plaster of Paris)
Gypsum plaster, also known as plaster of Paris, is a white powder consisting of calcium sulfate hemihydrate. The natural form of the compound is the mineral bassanite.
Etymology
The name "plaster of Paris" was given because it was originally made by heating gypsum from a large deposit at Montmartre, a hill in the north end of Paris.
Chemistry
Gypsum plaster, gypsum powder, or plaster of Paris, is produced by heating gypsum to about 120–180 °C (248–356 °F) in a kiln:
CaSO4·2H2O + heat → CaSO4·½H2O + 1½ H2O (released as steam).
Plaster of Paris has a remarkable property of setting into a hard mass on wetting with water.
CaSO4·½H2O + 1½ H2O → CaSO4·2H2O
Plaster of Paris is stored in moisture-proof containers, because the presence of moisture can cause slow setting of plaster of Paris by bringing about its hydration, which will make it useless after some time.
When the dry plaster powder is mixed with water, it rehydrates over time into gypsum. The setting of plaster slurry starts about 10 minutes after mixing and is complete in about 45 minutes. The setting of plaster of Paris is accompanied by a slight expansion of volume. It is used in making casts for statues, toys, and more. The initial matrix consists mostly of orthorhombic crystals: the kinetic product. Over the next 72 hours, the rhombic crystals give way to an interlocking mass of monoclinic crystal needles, and the plaster increases in hardness and strength. If plaster or gypsum is heated to between 130 °C (266 °F) and 180 °C (350 °F), hemihydrate is formed, which will also re-form as gypsum if mixed with water.
On heating to 180 °C (350 °F), the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05) is produced. γ-anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C (480 °F), the completely anhydrous form called β-anhydrite or dead burned plaster is formed.
Uses of gypsum plaster
for making surfaces like the walls of a house smooth before painting them and for making ornamental designs on the ceilings of houses and other buildings. (see Plaster In decorative architecture)
for making toys, decorative materials, cheap ornaments, cosmetics, and black-board chalk.
a fire-proofing material. (see Plaster in Fire protection)
an orthopedic cast is used in hospitals for setting fractured bones in the right position to ensure correct healing and avoid nonunion. It keeps the fractured bone straight. It is used in this way, because when plaster of Paris is mixed with a proper quantity of water and applied around the fractured limb, it sets into a hard mass, thereby keeping the bones in a fixed position. It is also used for making casts in dentistry. (see Plaster in Medicine)
chemistry laboratory for sealing air-gaps in apparatus when air-tight arrangement is required.
Lime plaster
Lime plaster is a mixture of calcium hydroxide and sand (or other inert fillers). Carbon dioxide in the atmosphere causes the plaster to set by transforming the calcium hydroxide into calcium carbonate (limestone). Whitewash is based on the same chemistry.
To make lime plaster, limestone (calcium carbonate) is heated above approximately 850 °C (1600 °F) to produce quicklime (calcium oxide). Water is then added to produce slaked lime (calcium hydroxide), which is sold as a wet putty or a white powder. Additional water is added to form a paste prior to use. The paste may be stored in airtight containers. When exposed to the atmosphere, the calcium hydroxide very slowly turns back into calcium carbonate through reaction with atmospheric carbon dioxide, causing the plaster to increase in strength.
Lime plaster was a common building material for wall surfaces in a process known as lath and plaster, whereby a series of wooden strips on a studwork frame was covered with a semi-dry plaster that hardened into a surface. The plaster used in most lath and plaster construction was mainly lime plaster, with a cure time of about a month. To stabilize the lime plaster during curing, small amounts of plaster of Paris were incorporated into the mix. Because plaster of Paris sets quickly, "retardants" were used to slow setting time enough to allow workers to mix large working quantities of lime putty plaster. A modern form of this method uses expanded metal mesh over wood or metal structures, which allows a great freedom of design as it is adaptable to both simple and compound curves. Today this building method has been partly replaced with drywall, also composed mostly of gypsum plaster. In both these methods, a primary advantage of the material is that it is resistant to a fire within a room and so can assist in reducing or eliminating structural damage or destruction provided the fire is promptly extinguished.
Lime plaster is used for frescoes, where pigments, diluted in water, are applied to the still wet plaster.
The USA and Iran are the main plaster producers in the world.
Cement plaster
Cement plaster is a mixture of suitable plaster, sand, Portland cement and water which is normally applied to masonry interiors and exteriors to achieve a smooth surface. Interior surfaces sometimes receive a final layer of gypsum plaster. Walls constructed with stock bricks are normally plastered while face brick walls are not plastered. Various cement-based plasters are also used as proprietary spray fireproofing products. These usually use vermiculite as lightweight aggregate. Heavy versions of such plasters are also in use for exterior fireproofing, to protect LPG vessels, pipe bridges and vessel skirts.
Cement plaster was first introduced in America around 1909 and was often called by the generic name adamant plaster after a prominent manufacturer of the time. The advantages of cement plaster noted at that time were its strength, hardness, quick setting time and durability.
Heat-resistant plaster
Heat-resistant plaster is a building material used for coating walls and chimney breasts and for use as a fire barrier in ceilings. Its purpose is to replace conventional gypsum plasters in cases where the temperature can get too high for gypsum plaster to stay on the wall or ceiling.
An example of a heat-resistant plaster composition is a mixture of Portland cement, gypsum, lime, exfoliated insulating aggregate (perlite and vermiculite or mica), phosphate shale, and small amounts of adhesive binder (such as Gum karaya), and a detergent agent (such as sodium dodecylbenzene sulfonate).
Applications
In decorative architecture
Plaster may also be used to create complex detailing for use in room interiors. These may be geometric (simulating wood or stone) or naturalistic (simulating leaves, vines, and flowers). These are also often used to simulate wood or stone detailing found in more substantial buildings.
Nowadays this material is also used for false ceilings. Here, the powder form is converted into sheet form and the sheet is then attached to the basic ceiling with the help of fasteners. It is done in various designs containing various combinations of lights and colors. The common use of this plaster can be seen in the construction of houses. Post-construction, direct painting is possible (which is commonly seen in French architecture), but elsewhere plaster is used. The walls are painted with the plaster, which (in some countries) is nothing but calcium carbonate. After drying, the calcium carbonate plaster turns white and then the wall is ready to be painted. Elsewhere in the world, such as the UK, ever finer layers of plaster are added on top of the plasterboard (or sometimes the brick wall directly) to give a smooth brown polished texture ready for painting.
Art
Mural paintings are commonly painted onto a plaster secondary support. Some, like Michelangelo's Sistine Chapel ceiling, are executed in fresco, meaning they are painted on a thin layer of wet plaster, called intonaco; the pigments sink into this layer so that the plaster itself becomes the medium holding them, which accounts for the excellent durability of fresco. Additional work may be added a secco on top of the dry plaster, though this is generally less durable.
Plaster (often called stucco in this context) is a far easier material for making reliefs than stone or wood, and was widely used for large interior wall-reliefs in Egypt and the Near East from antiquity into Islamic times (latterly for architectural decoration, as at the Alhambra), Rome, and Europe from at least the Renaissance, as well as probably elsewhere. However, it needs very good conditions to survive long in unmaintained buildings – Roman decorative plasterwork is mainly known from Pompeii and other sites buried by ash from Mount Vesuvius.
Plaster may be cast directly into a damp clay mold. In creating this piece molds (molds designed for making multiple copies) or waste molds (for single use) would be made of plaster. This "negative" image, if properly designed, may be used to produce clay productions, which when fired in a kiln become terra cotta building decorations, or these may be used to create cast concrete sculptures. If a plaster positive was desired this would be constructed or cast to form a durable image artwork. As a model for stonecutters this would be sufficient. If intended for producing a bronze casting the plaster positive could be further worked to produce smooth surfaces. An advantage of this plaster image is that it is relatively cheap; should a patron approve of the durable image and be willing to bear further expense, subsequent molds could be made for the creation of a wax image to be used in lost wax casting, a far more expensive process. In lieu of producing a bronze image suitable for outdoor use the plaster image may be painted to resemble a metal image; such sculptures are suitable only for presentation in a weather-protected environment.
Plaster expands while hardening then contracts slightly just before hardening completely. This makes plaster excellent for use in molds, and it is often used as an artistic material for casting. Plaster is also commonly spread over an armature (form), made of wire mesh, cloth, or other materials; a process for adding raised details. For these processes, limestone or acrylic based plaster may be employed, known as stucco.
Products composed mainly of plaster of Paris and a small amount of Portland cement are used for casting sculptures and other art objects as well as molds. Considerably harder and stronger than straight plaster of Paris, these products are for indoor use only as they degrade in moist conditions.
Medicine
Plaster is widely used as a support for broken bones; a bandage impregnated with plaster is moistened and then wrapped around the damaged limb, setting into a close-fitting yet easily removed tube, known as an orthopedic cast.
Plaster is also used in preparation for radiotherapy when fabricating individualized immobilization shells for patients. Plaster bandages are used to construct an impression of a patient's head and neck, and liquid plaster is used to fill the impression and produce a plaster bust. The transparent material polymethyl methacrylate (Plexiglas, Perspex) is then vacuum formed over this bust to create a clear face mask which will hold the patient's head steady while radiation is being delivered.
In dentistry, plaster is used for mounting casts or models of oral tissues. These diagnostic and working models are usually made from dental stone, a stronger, harder and denser derivative of plaster which is manufactured from gypsum under pressure. Plaster is also used to invest and flask wax dentures, the wax being subsequently removed by "burning out," and replaced with flowable denture base material. The typically acrylic denture base then cures in the plaster investment mold. Plaster investments can withstand the high heat and pressure needed to ensure a rigid denture base. Moreover, in dentistry there are five types of gypsum products, classified by their consistency and uses: impression plaster (type 1), model plaster (type 2), and dental stones (types 3, 4 and 5).
In orthotics and prosthetics, plaster bandages traditionally were used to create impressions of the patient's limb (or residuum). This negative impression was then, itself, filled with plaster of Paris, to create a positive model of the limb and used in fabricating the final medical device.
In addition, dentures (false teeth) are made by first taking a dental impression using a soft, pliable material that can be removed from around the teeth and gums without loss of fidelity and using the impression to create a wax model of the teeth and gums. The model is used to create a plaster mold (which is heated so the wax melts and flows out) and the denture materials are injected into the mold. After a curing period, the mold is opened and the dentures are cleaned up and polished.
Fire protection
Plasters have been in use in passive fire protection, as fireproofing products, for many decades.
Gypsum plaster releases water vapor when exposed to flame, acting to slow the spread of the fire, for as much as an hour or two depending on thickness. Plaster also provides some insulation to retard heat flow into structural steel elements, that would otherwise lose their strength and collapse in a fire. Early versions of protective plasters often contain asbestos fibres, which since have been outlawed in many industrialized nations.
Recent plasters for fire protection either contain cement or gypsum as binding agents as well as mineral wool or glass fiber to add mechanical strength.
Vermiculite, polystyrene beads or chemical expansion agents are often added to decrease the density of the finished product and increase thermal insulation.
One differentiates between interior and exterior fireproofing. Interior products are typically less substantial, with lower densities and lower cost. Exterior products have to withstand harsher environmental conditions. A rough surface is typically forgiven inside of buildings as dropped ceilings often hide them. Fireproofing plasters are losing ground to more costly intumescent and endothermic products, simply on technical merit. Trade jurisdiction on unionized construction sites in North America remains with the plasterers, regardless of whether the plaster is decorative in nature or is used in passive fire protection. Cementitious and gypsum based plasters tend to be endothermic. Fireproofing plasters are closely related to firestop mortars. Most firestop mortars can be sprayed and tooled very well, due to the fine detail work that is required of firestopping.
3D printing
Powder bed and inkjet head 3D printing is commonly based on the reaction of gypsum plaster with water, where the water is selectively applied by the inkjet head.
Gallery
Safety issues
The chemical reaction that occurs when plaster is mixed with water is exothermic. When plaster sets, it can reach temperatures of more than 60 °C (140 °F) and, in large volumes, can burn the skin. In January 2007, a secondary school student in Lincolnshire, England sustained third-degree burns after encasing her hands in a bucket of plaster as part of a school art project.
Plaster that contains powdered silica or asbestos presents health hazards if inhaled repeatedly. Asbestos is a known irritant when inhaled and can cause cancer, especially in people who smoke, and inhalation can also cause asbestosis. Inhaled silica can cause silicosis and (in very rare cases) can encourage the development of cancer. Persons working regularly with plaster containing these additives should take precautions to avoid inhaling powdered plaster, cured or uncured.
People can be exposed to plaster of Paris in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for plaster of Paris exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
See also
References
External links
Building materials
Wallcoverings
Sculpture materials
Calcium compounds
Hydrates
Plastering
Impression material | Plaster | [
"Physics",
"Chemistry",
"Engineering"
] | 5,227 | [
"Building engineering",
"Coatings",
"Hydrates",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
332,090 | https://en.wikipedia.org/wiki/Computably%20enumerable%20set | In computability theory, a set S of natural numbers is called computably enumerable (c.e.), recursively enumerable (r.e.), semidecidable, partially decidable, listable, provable or Turing-recognizable if:
There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.
Or, equivalently,
There is an algorithm that enumerates the members of S. That means that its output is a list of all the members of S: s1, s2, s3, ... . If S is infinite, this algorithm will run forever, but each element of S will be returned after a finite amount of time. Note that these elements do not have to be listed in a particular way, say from smallest to largest.
The first condition suggests why the term semidecidable is sometimes used. More precisely, if a number is in the set, one can decide this by running the algorithm, but if the number is not in the set, the algorithm can run forever, and no information is returned. A set that is "completely decidable" is a computable set. The second condition suggests why computably enumerable is used. The abbreviations c.e. and r.e. are often used, even in print, instead of the full phrase.
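A concrete, if informal, illustration of a semidecision procedure: the Python function below halts and answers "yes" exactly when its search succeeds, and may in principle run forever otherwise, which is all that semidecidability promises. (The particular set chosen here, numbers whose Collatz iteration reaches 1, is only an illustrative assumption.)

```python
# Sketch of a semidecider: halts with True when n is in the set; gives no
# answer (loops) if the search never succeeds. Example set: positive integers
# whose Collatz sequence reaches 1.
def semidecide(n: int) -> bool:
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True          # reached only if the search terminates

print(semidecide(27))    # halts with True; no halting guarantee in general
```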
In computational complexity theory, the complexity class containing all computably enumerable sets is RE. In recursion theory, the lattice of c.e. sets under inclusion is denoted $\mathcal{E}$.
Formal definition
A set S of natural numbers is called computably enumerable if there is a partial computable function whose domain is exactly S, meaning that the function is defined if and only if its input is a member of S.
Equivalent formulations
The following are all equivalent properties of a set S of natural numbers:
Semidecidability:
The set S is computably enumerable. That is, S is the domain (co-range) of a partial computable function.
The set S is $\Sigma^0_1$ (referring to the arithmetical hierarchy).
There is a partial computable function f such that f(x) = 0 if x is in S, and f(x) is undefined (does not halt) if x is not in S.
Enumerability:
The set S is the range of a partial computable function.
The set S is the range of a total computable function, or empty. If S is infinite, the function can be chosen to be injective.
The set S is the range of a primitive recursive function or empty. Even if S is infinite, repetition of values may be necessary in this case.
Diophantine:
There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over the natural numbers such that $x \in S$ if and only if there exist natural numbers a, b, c, d, e, f, g, h, i with $p(x, a, b, c, d, e, f, g, h, i) = 0$. (The number of bound variables in this definition is the best known so far; it might be that a lower number can be used to define all Diophantine sets.)
There is a polynomial from the integers to the integers such that the set S contains exactly the non-negative numbers in its range.
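As a simple illustration (an added example, not part of the characterization above): the set of even numbers is Diophantine, since $x \in S$ if and only if there exists a natural number a with $p(x, a) = x - 2a = 0$.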
The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing.
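The following Python sketch shows the dovetailing idea: given a step-bounded acceptance test derived from a semidecision procedure, it enumerates exactly the members of the set, each after finitely many iterations. (The step-bounded test and the example set of perfect squares are illustrative assumptions.)

```python
# Dovetailing sketch: accepts(n, steps) should return True once the underlying
# semidecision procedure for S would accept n within `steps` steps of work.
# The generator then lists every member of S after finitely many iterations.
from typing import Callable, Iterator

def dovetail(accepts: Callable[[int, int], bool]) -> Iterator[int]:
    emitted = set()
    stage = 0
    while True:
        stage += 1                     # at stage k, try inputs 0..k-1 for k steps
        for n in range(stage):
            if n not in emitted and accepts(n, stage):
                emitted.add(n)
                yield n

# Illustrative set: perfect squares, "accepted" once a witness i with i*i == n
# is found among the first `steps` candidates.
def accepts_square(n: int, steps: int) -> bool:
    return any(i * i == n for i in range(steps))

gen = dovetail(accepts_square)
print([next(gen) for _ in range(6)])   # [0, 1, 4, 9, 16, 25]
```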
The Diophantine characterizations of a computably enumerable set, while not as straightforward or intuitive as the first definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert's Tenth Problem. Diophantine sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence was only remarked more than three decades after the introduction of computably enumerable sets).
Examples
Every computable set is computably enumerable, but it is not true that every computably enumerable set is computable. For computable sets, the algorithm must also say if an input is not in the set – this is not required of computably enumerable sets.
A recursively enumerable language is a computably enumerable subset of a formal language.
The set of all provable sentences in an effectively presented axiomatic system is a computably enumerable set.
Matiyasevich's theorem states that every computably enumerable set is a Diophantine set (the converse is trivially true).
The simple sets are computably enumerable but not computable.
The creative sets are computably enumerable but not computable.
Any productive set is not computably enumerable.
Given a Gödel numbering $\varphi$ of the computable functions, the set $\{\langle x, y \rangle : \varphi_x(y)\downarrow\}$ (where $\langle x, y \rangle$ is the Cantor pairing function and $\varphi_x(y)\downarrow$ indicates that $\varphi_x(y)$ is defined) is computably enumerable. This set encodes the halting problem as it describes the input parameters for which each Turing machine halts.
Given a Gödel numbering $\varphi$ of the computable functions, the set $\{\langle x, y, z \rangle : \varphi_x(y) = z\}$ is computably enumerable. This set encodes the problem of deciding a function value.
Given a partial function f from the natural numbers into the natural numbers, f is a partial computable function if and only if the graph of f, that is, the set of all pairs $(x, f(x))$ such that f(x) is defined, is computably enumerable.
Properties
If A and B are computably enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers mapped to a single natural number with the Cantor pairing function) are computably enumerable sets. The preimage of a computably enumerable set under a partial computable function is a computably enumerable set.
A set is called co-computably-enumerable or co-c.e. if its complement is computably enumerable. Equivalently, a set is co-r.e. if and only if it is at level $\Pi^0_1$ of the arithmetical hierarchy. The complexity class of co-computably-enumerable sets is denoted co-RE.
A set A is computable if and only if both A and the complement of A are computably enumerable.
Some pairs of computably enumerable sets are effectively separable and some are not.
Remarks
According to the Church–Turing thesis, any effectively calculable function is calculable by a Turing machine, and thus a set S is computably enumerable if and only if there is some algorithm which yields an enumeration of S. This cannot be taken as a formal definition, however, because the Church–Turing thesis is an informal conjecture rather than a formal axiom.
The definition of a computably enumerable set as the domain of a partial function, rather than the range of a total computable function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other texts use the definition in terms of enumerations, which is equivalent for computably enumerable sets.
See also
RE (complexity)
Recursively enumerable language
Arithmetical hierarchy
References
Rogers, H. The Theory of Recursive Functions and Effective Computability. MIT Press.
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987.
Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–1181.
Computability theory
Theory of computation | Computably enumerable set | [
"Mathematics"
] | 1,609 | [
"Computability theory",
"Mathematical logic"
] |
332,264 | https://en.wikipedia.org/wiki/Computable%20set | In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not.
A set which is not computable is called noncomputable or undecidable.
A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set.
Formal definition
A subset S of the natural numbers is called computable if there exists a total computable function f such that f(x) = 1 if x is in S and f(x) = 0 if x is not in S. In other words, the set S is computable if and only if its indicator function is computable.
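As a concrete sketch of such an indicator function (using the prime numbers, which the examples below note form a computable set), the decider here always halts and answers membership:

```python
# Sketch of a computable set via a total indicator function: the decider
# always halts and answers membership in S = {prime numbers}.
def indicator(n: int) -> int:
    """Returns 1 if n is in S (prime), 0 otherwise; halts on every input."""
    if n < 2:
        return 0
    return 1 if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)) else 0

print([n for n in range(20) if indicator(n) == 1])   # [2, 3, 5, 7, 11, 13, 17, 19]
```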
Examples and non-examples
Examples:
Every finite or cofinite subset of the natural numbers is computable. This includes these special cases:
The empty set is computable.
The entire set of natural numbers is computable.
Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable.
The subset of prime numbers is computable.
A recursive language is a computable subset of a formal language.
The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems.
Non-examples:
The set of Turing machines that halt is not computable.
The isomorphism class of two finite simplicial complexes is not computable.
The set of busy beaver champions is not computable.
Hilbert's tenth problem is not computable.
Properties
If A is a computable set then the complement of A is a computable set. If A and B are computable sets then A ∩ B, A ∪ B and the image of A × B under the Cantor pairing function are computable sets.
A is a computable set if and only if A and the complement of A are both computably enumerable (c.e.). The preimage of a computable set under a total computable function is a computable set. The image of a computable set under a total computable bijection is computable. (In general, the image of a computable set under a computable function is c.e., but possibly not computable).
A is a computable set if and only if it is at level $\Delta^0_1$ of the arithmetical hierarchy.
A is a computable set if and only if it is either the range of a nondecreasing total computable function, or the empty set. The image of a computable set under a nondecreasing total computable function is computable.
See also
Decidability (logic)
Recursively enumerable language
Recursive language
Recursion
References
Cutland, N. Computability. Cambridge University Press, Cambridge–New York, 1980.
Rogers, H. The Theory of Recursive Functions and Effective Computability. MIT Press.
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987.
External links
Computability theory
Theory of computation | Computable set | [
"Mathematics"
] | 788 | [
"Computability theory",
"Mathematical logic"
] |
333,170 | https://en.wikipedia.org/wiki/Fluctuation%20theorem | The fluctuation theorem (FT), which originated from statistical mechanics, deals with the relative probability that the entropy of a system which is currently away from thermodynamic equilibrium (i.e., maximum entropy) will increase or decrease over a given amount of time. While the second law of thermodynamics predicts that the entropy of an isolated system should tend to increase until it reaches equilibrium, it became apparent after the discovery of statistical mechanics that the second law is only a statistical one, suggesting that there should always be some nonzero probability that the entropy of an isolated system might spontaneously decrease; the fluctuation theorem precisely quantifies this probability.
Statement
Roughly, the fluctuation theorem relates to the probability distribution of the time-averaged irreversible entropy production, denoted $\overline{\Sigma}_t$. The theorem states that, in systems away from equilibrium over a finite time t, the ratio between the probability that $\overline{\Sigma}_t$ takes on a value A and the probability that it takes the opposite value, −A, will be exponential in At.
In other words, for a finite non-equilibrium system in a finite time, the FT gives a precise mathematical expression for the probability that entropy will flow in a direction opposite to that dictated by the second law of thermodynamics.
Mathematically, the FT is expressed as:
$$\frac{\Pr(\overline{\Sigma}_t = A)}{\Pr(\overline{\Sigma}_t = -A)} = e^{At}.$$
This means that as the time or system size increases (since $\overline{\Sigma}_t$ is extensive), the probability of observing an entropy production opposite to that dictated by the second law of thermodynamics decreases exponentially. The FT is one of the few expressions in non-equilibrium statistical mechanics that is valid far from equilibrium.
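To get a feel for the numbers, the short sketch below (with illustrative, made-up magnitudes and entropy production measured in units of the Boltzmann constant) evaluates the ratio exp(At) for a nanoscale and a macroscopic system.

```python
# Illustrative magnitudes only: the FT ratio exp(A t) for small vs large systems,
# with the time-averaged entropy production A expressed in units of k_B per second.
import math

A_nano, t = 2.0, 1.0                # colloidal-bead-scale experiment
print(math.exp(A_nano * t))         # ~7.4: "wrong-way" runs seen roughly 1 time in 8

A_macro = 1e20                      # macroscopic dissipation rate
print(f"exp({A_macro * t:.0e}) is astronomically large (it overflows a float),")
print("so a macroscopic second-law violation is never observed in practice.")
```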
Note that the FT does not state that the second law of thermodynamics is wrong or invalid. The second law of thermodynamics is a statement about macroscopic systems. The FT is more general. It can be applied to both microscopic and macroscopic systems. When applied to macroscopic systems, the FT is equivalent to the second law of thermodynamics.
History
The FT was first proposed and tested using computer simulations, by Denis Evans, E.G.D. Cohen and Gary Morriss in 1993. The first derivation was given by Evans and Debra Searles in 1994. Since then, much mathematical and computational work has been done to show that the FT applies to a variety of statistical ensembles. The first laboratory experiment that verified the validity of the FT was carried out in 2002. In this experiment, a plastic bead was pulled through a solution by a laser. Fluctuations in the velocity were recorded that were opposite to what the second law of thermodynamics would dictate for macroscopic systems. In 2020, observations at high spatial and spectral resolution of the solar photosphere have shown that solar turbulent convection satisfies the symmetries predicted by the fluctuation relation at a local level.
Second law inequality
A simple consequence of the fluctuation theorem given above is that if we carry out an arbitrarily large ensemble of experiments from some initial time t=0, and perform an ensemble average of time averages of the entropy production, then an exact consequence of the FT is that the ensemble average cannot be negative for any value of the averaging time t:
$$\left\langle \overline{\Sigma}_t \right\rangle \ge 0 \quad \text{for all } t.$$
This inequality is called the second law inequality. This inequality can be proved for systems with time dependent fields of arbitrary magnitude and arbitrary time dependence.
It is important to understand what the second law inequality does not imply. It does not imply that the ensemble averaged entropy production is non-negative at all times. This is untrue, as consideration of the entropy production in a viscoelastic fluid subject to a sinusoidal time dependent shear rate shows (e.g., rogue waves). In this example the ensemble average of the time integral of the entropy production over one cycle is however nonnegative – as expected from the second law inequality.
Nonequilibrium partition identity
Another remarkably simple and elegant consequence of the fluctuation theorem is the so-called "nonequilibrium partition identity" (NPI):
$$\left\langle e^{-\overline{\Sigma}_t\, t} \right\rangle = 1 \quad \text{for all } t.$$
Thus in spite of the second law inequality, which might lead you to expect that the average would decay exponentially with time, the exponential probability ratio given by the FT exactly cancels the negative exponential in the average above leading to an average which is unity for all time.
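A quick Monte Carlo check of the cancellation described above, under the simplifying assumption that the integrated dissipation is Gaussian (in which case the FT forces its variance to equal twice its mean):

```python
# NPI sanity check under an assumed Gaussian distribution for X = Sigma_t * t.
# The FT ratio p(X=a)/p(X=-a) = exp(a) pins the variance at 2*mean, and then
# <exp(-X)> evaluates to 1 even though <X> itself is positive.
import math
import random

mean = 3.0
sigma = math.sqrt(2 * mean)               # variance fixed by the fluctuation theorem
xs = [random.gauss(mean, sigma) for _ in range(200_000)]

print(sum(math.exp(-x) for x in xs) / len(xs))   # ~ 1.0  (the NPI)
print(sum(xs) / len(xs))                         # ~ 3.0  (> 0, second law inequality)
```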
Implications
There are many important implications from the fluctuation theorem. One is that small machines (such as nanomachines or even mitochondria in a cell) will spend part of their time actually running in "reverse". What is meant by "reverse" is that it is possible to observe that these small molecular machines are able to generate work by taking heat from the environment. This is possible because there exists a symmetry relation in the work fluctuations associated with the forward and reverse changes a system undergoes as it is driven away from thermal equilibrium by the action of an external perturbation, which is a result predicted by the Crooks fluctuation theorem. The environment itself continuously drives these molecular machines away from equilibrium and the fluctuations it generates over the system are very relevant because the probability of observing an apparent violation of the second law of thermodynamics becomes significant at this scale.
This is counterintuitive because, from a macroscopic point of view, it would describe complex processes running in reverse. For example, a jet engine running in reverse, taking in ambient heat and exhaust fumes to generate kerosene and oxygen. Nevertheless, the size of such a system makes this observation almost impossible to occur. Such a process is possible to be observed microscopically because, as it has been stated above, the probability of observing a "reverse" trajectory depends on system size and is significant for molecular machines if an appropriate measurement instrument is available. This is the case with the development of new biophysical instruments such as the optical tweezers or the atomic force microscope. Crooks fluctuation theorem has been verified through RNA folding experiments.
Dissipation function
Strictly speaking the fluctuation theorem refers to a quantity known as the dissipation function. In thermostatted nonequilibrium states that are close to equilibrium, the long time average of the dissipation function is equal to the average entropy production. However the FT refers to fluctuations rather than averages. The dissipation function is defined as
$$\overline{\Omega}_t\, t \equiv \int_0^t \Omega(\Gamma(s))\, \mathrm{d}s = \ln\!\left[\frac{f(\Gamma(0),0)}{f(\Gamma(t),0)}\right] + \frac{\Delta Q(\Gamma(0),t)}{kT},$$
where k is the Boltzmann constant, $f(\Gamma,0)$ is the initial (t = 0) distribution of molecular states $\Gamma$, and $\Gamma(t)$ is the molecular state arrived at after time t, under the exact time reversible equations of motion. $f(\Gamma(t),0)$ is the initial distribution of those time evolved states.
Note: in order for the FT to be valid we require that $f(\Gamma(t),0) \ne 0$ for every initial state with $f(\Gamma(0),0) \ne 0$; that is, every time-evolved state must be assigned nonzero probability by the initial distribution. This condition is known as the condition of ergodic consistency. It is widely satisfied in common statistical ensembles - e.g. the canonical ensemble.
The system may be in contact with a large heat reservoir in order to thermostat the system of interest. If this is the case, $\Delta Q(\Gamma(0),t)$ is the heat lost to the reservoir over the time (0,t) and T is the absolute equilibrium temperature of the reservoir. With this definition of the dissipation function the precise statement of the FT simply replaces entropy production with the dissipation function in each of the FT equations above.
Example: If one considers electrical conduction across an electrical resistor in contact with a large heat reservoir at temperature T, then the dissipation function is
$$\Omega = \frac{J\, \Delta V\, V}{kT},$$
the total electric current density J multiplied by the voltage drop across the circuit, $\Delta V$, and the system volume V, divided by the absolute temperature T of the heat reservoir times the Boltzmann constant. Thus the dissipation function is easily recognised as the Ohmic work done on the system divided by the temperature of the reservoir. Close to equilibrium the long time average of this quantity is (to leading order in the voltage drop) equal to the average spontaneous entropy production per unit time. However, the fluctuation theorem applies to systems arbitrarily far from equilibrium where the definition of the spontaneous entropy production is problematic.
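Putting illustrative numbers into the Ohmic picture (all values below are hypothetical), the dissipation rate is simply the Ohmic power divided by kT, i.e. an entropy production rate in units of the Boltzmann constant per second:

```python
# Hypothetical numbers for a small resistor: Ohmic power divided by k_B*T gives
# the average dissipation (entropy production) rate in units of k_B per second.
k_B = 1.380649e-23       # J/K
current = 1.0e-9         # A
voltage_drop = 1.0e-3    # V
T = 300.0                # K

power = current * voltage_drop          # Ohmic work done on the system per second, W
rate = power / (k_B * T)                # dissipation per second, in units of k_B
print(f"{rate:.2e} k_B per second")     # ~2.4e8
```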
Relation to Loschmidt's paradox
The second law of thermodynamics, which predicts that the entropy of an isolated system out of equilibrium should tend to increase rather than decrease or stay constant, stands in apparent contradiction with the time-reversible equations of motion for classical and quantum systems. The time reversal symmetry of the equations of motion show that if one films a given time dependent physical process, then playing the movie of that process backwards does not violate the laws of mechanics. It is often argued that for every forward trajectory in which entropy increases, there exists a time reversed anti trajectory where entropy decreases, thus if one picks an initial state randomly from the system's phase space and evolves it forward according to the laws governing the system, decreasing entropy should be just as likely as increasing entropy. It might seem that this is incompatible with the second law of thermodynamics which predicts that entropy tends to increase. The problem of deriving irreversible thermodynamics from time-symmetric fundamental laws is referred to as Loschmidt's paradox.
The mathematical derivation of the fluctuation theorem and in particular the second law inequality shows that, for a nonequilibrium process, the ensemble averaged value for the dissipation function will be greater than zero. This result requires causality, i.e. that cause (the initial conditions) precede effect (the value taken on by the dissipation function). This is clearly demonstrated in section 6 of that paper, where it is shown how one could use the same laws of mechanics to extrapolate backwards from a later state to an earlier state, and in this case the fluctuation theorem would lead us to predict the ensemble average dissipation function to be negative, an anti-second law. This second prediction, which is inconsistent with the real world, is obtained using an anti-causal assumption. That is to say that effect (the value taken on by the dissipation function) precedes the cause (here the later state has been incorrectly used for the initial conditions). The fluctuation theorem shows how the second law is a consequence of the assumption of causality. When we solve a problem we set the initial conditions and then let the laws of mechanics evolve the system forward in time, we don't solve problems by setting the final conditions and letting the laws of mechanics run backwards in time.
Summary
The fluctuation theorem is of fundamental importance to non-equilibrium statistical mechanics.
The FT (together with the universal causation proposition) gives a generalisation of the second law of thermodynamics which includes, as a special case, the conventional second law. It is then easy to prove the Second Law Inequality and the Nonequilibrium Partition Identity. When combined with the central limit theorem, the FT also implies the Green–Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green–Kubo relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, scientists have not yet been able to derive the equations for nonlinear response theory from the FT.
The FT does not imply or require that the distribution of time averaged dissipation be Gaussian. There are many examples known where the distribution of time averaged dissipation is non-Gaussian and yet the FT (of course) still correctly describes the probability ratios.
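For concreteness, the probability ratio referred to here can be written in the Evans–Searles form; the symbol used below for the time-averaged dissipation function is chosen only for illustration and may differ from the notation used earlier in this article:

$$\frac{P(\bar{\Omega}_t = A)}{P(\bar{\Omega}_t = -A)} = e^{A t}$$

so that positive time-averaged dissipation is exponentially more probable than negative dissipation of the same magnitude, whatever the shape of the distribution.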
Lastly, the theoretical constructs used to prove the FT can be applied to nonequilibrium transitions between two different equilibrium states. When this is done, the so-called Jarzynski equality, or nonequilibrium work relation, can be derived. This equality shows how equilibrium free energy differences can be computed or measured (in the laboratory) from nonequilibrium path integrals. Previously, quasi-static (equilibrium) paths were required.
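The Jarzynski equality states that the exponential average of the work over nonequilibrium realizations recovers the equilibrium free-energy difference, ⟨e^(−βW)⟩ = e^(−βΔF). The sketch below checks this numerically for a toy case in which the work values are drawn from a Gaussian distribution (for which the equality holds exactly when the mean work exceeds ΔF by βσ²/2); all numerical values are hypothetical and the snippet is not a simulation of any particular physical system.

```python
# Toy numerical check of the Jarzynski equality <exp(-beta*W)> = exp(-beta*dF)
# for a Gaussian work distribution. Values are hypothetical.
import numpy as np

beta = 1.0          # inverse temperature (1/k_B T), arbitrary units
delta_F = 2.0       # assumed free-energy difference
sigma = 1.5         # spread of the work distribution

rng = np.random.default_rng(0)
# For a Gaussian work distribution the mean must exceed delta_F by beta*sigma^2/2
# (the average dissipated work) for the equality to hold exactly.
work = rng.normal(loc=delta_F + beta * sigma**2 / 2, scale=sigma, size=1_000_000)

dF_estimate = -np.log(np.mean(np.exp(-beta * work))) / beta
print(f"Estimated dF = {dF_estimate:.3f} (expected {delta_F})")
```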
The reason why the fluctuation theorem is so fundamental is that its proof requires so little. It requires:
knowledge of the mathematical form of the initial distribution of molecular states,
that all time-evolved final states at time t must be present with nonzero probability in the distribution of initial states (t = 0) – the so-called condition of ergodic consistency – and
an assumption of time reversal symmetry.
In regard to the latter "assumption", while the equations of motion of quantum dynamics may be time-reversible, quantum processes are nondeterministic by nature. What state a wave function collapses into cannot be predicted mathematically, and, further, the unpredictability of a quantum system comes not from the myopia of an observer's perception but from the intrinsically nondeterministic nature of the system itself.
In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. p → −p (T-symmetry).
In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.
Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process.
See also
Linear response function
Green's function (many-body theory)
Loschmidt's paradox
Le Chatelier's principle – a nineteenth-century principle that defied a mathematical proof until the advent of the fluctuation theorem.
Crooks fluctuation theorem – a transient fluctuation theorem relating the dissipated work in nonequilibrium transformations to free-energy differences.
Jarzynski equality – another nonequilibrium equality closely related to the fluctuation theorem and to the second law of thermodynamics
Green–Kubo relations – there is a deep connection between the fluctuation theorem and the Green–Kubo relations for linear transport coefficients – like shear viscosity or thermal conductivity
Ludwig Boltzmann
Thermodynamics
Brownian motor
Notes
References
Statistical mechanics theorems
Physical paradoxes
Non-equilibrium thermodynamics | Fluctuation theorem | [
"Physics",
"Mathematics"
] | 2,937 | [
"Theorems in dynamical systems",
"Non-equilibrium thermodynamics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Dynamical systems",
"Statistical mechanics",
"Physics theorems"
] |
333,420 | https://en.wikipedia.org/wiki/Archimedes%27%20principle | Archimedes' principle (also spelled Archimedes's principle) states that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially, is equal to the weight of the fluid that the body displaces. Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse.
Explanation
In On Floating Bodies, Archimedes suggested that (c. 246 BC):
Archimedes' principle allows the buoyancy of any floating object partially or fully immersed in a fluid to be calculated. The downward force on the object is simply its weight. The upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the magnitudes of the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant—that is, it remains in place without either rising or sinking. In simple words, Archimedes' principle states that, when a body is partially or completely immersed in a fluid, it experiences an apparent loss in weight that is equal to the weight of the fluid displaced by the immersed part of the body.
Formula
Consider a cuboid immersed in a fluid, its top and bottom faces orthogonal to the direction of gravity (assumed constant across the cube's stretch). The fluid will exert a normal force on each face, but only the normal forces on top and bottom will contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height (difference in depth of submersion). Multiplying the pressure difference by the area of a face gives a net force on the cuboid—the buoyancy—equaling in size the weight of the fluid displaced by the cuboid. By summing up sufficiently many arbitrarily small cuboids this reasoning may be extended to irregular shapes, and so, whatever the shape of the submerged body, the buoyant force is equal to the weight of the displaced fluid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). The weight of the object in the fluid is reduced because of the force acting on it, which is called upthrust. In simple terms, the principle states that the buoyant force (Fb) on an object is equal to the weight of the fluid displaced by the object, or the density (ρ) of the fluid multiplied by the submerged volume (V) times the gravity (g).
We can express this relation in the equation:
$$F_b = \rho V g$$
where $F_b$ denotes the buoyant force applied onto the submerged object, $\rho$ denotes the density of the fluid, $V$ represents the volume of the displaced fluid and $g$ is the acceleration due to gravity.
Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into the water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea-floor. It is generally easier to lift an object through the water than it is to pull it out of the water.
For a fully submerged object, Archimedes' principle can be reformulated as follows:
$$\text{apparent immersed weight} = \text{weight of object} - \text{weight of displaced fluid}$$
then inserted into the quotient of weights, which has been expanded by the mutual volume
$$\frac{\rho_\text{object}}{\rho_\text{fluid}} = \frac{\text{weight}}{\text{weight of displaced fluid}}$$
yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volume:
$$\frac{\rho_\text{object}}{\rho_\text{fluid}} = \frac{\text{weight}}{\text{weight} - \text{apparent immersed weight}}$$
(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
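As a quick numerical sketch of hydrostatic weighing, the code below applies the quotient-of-weights relation to the rock example given earlier in this article (10 newtons in vacuum, 7 newtons apparent weight when immersed); the helper name is made up for the illustration.

```python
# Relative density from hydrostatic weighing:
#   rho_object / rho_fluid = weight / (weight - apparent_immersed_weight)
def relative_density(weight, apparent_immersed_weight):
    return weight / (weight - apparent_immersed_weight)

# Rock example from the text: 10 N in vacuum, 7 N when suspended in water.
print(relative_density(10.0, 7.0))  # ~3.33, i.e. the rock is ~3.33x denser than water
```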
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the opposite direction to the car's acceleration. However, due to buoyancy, the balloon is pushed "out of the way" by the air and will drift in the same direction as the car's acceleration.
When an object is immersed in a liquid, the liquid exerts an upward force, known as the buoyant force, which is equal to the weight of the displaced liquid. The net force acting on the object, then, is equal to the difference between the weight of the object ('down' force) and the weight of displaced liquid ('up' force). Equilibrium, or neutral buoyancy, is achieved when these two weights (and thus forces) are equal.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:
$$\mathbf{f} + \operatorname{div}\sigma = 0$$
where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:
$$\sigma_{ij} = -p\,\delta_{ij}.$$
Here $\delta_{ij}$ is the Kronecker delta. Using this the above equation becomes:
$$\mathbf{f} = \nabla p.$$
Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:
$$\mathbf{f} = -\nabla\Phi.$$
Then:
$$\nabla(p + \Phi) = 0 \quad\Longrightarrow\quad p + \Phi = \text{constant}.$$
Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so $\Phi = -\rho_f g z$, where g is the gravitational acceleration and $\rho_f$ is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is
$$p = \rho_f g z.$$
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
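To connect this with the cuboid argument given earlier, the sketch below computes the hydrostatic pressure at the top and bottom faces of a submerged cuboid and shows that the resulting net upward force equals the fluid density times g times the cuboid's volume; the dimensions and density values are hypothetical.

```python
# Pressure difference across a submerged cuboid and the resulting buoyant force.
# All numerical values are hypothetical.
RHO_FLUID = 1000.0  # kg/m^3 (fresh water)
G = 9.81            # m/s^2

def pressure(depth_m):
    """Hydrostatic pressure p = rho_f * g * z (taking p = 0 at the surface)."""
    return RHO_FLUID * G * depth_m

top_depth, height, face_area = 2.0, 0.5, 0.25   # m, m, m^2
net_upward_force = (pressure(top_depth + height) - pressure(top_depth)) * face_area
print(net_upward_force)                          # N
print(RHO_FLUID * G * (height * face_area))      # equals rho_f * g * V, same value
```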
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:
$$\mathbf{B} = \oint \sigma \, d\mathbf{A}.$$
The surface integral can be transformed into a volume integral with the help of the Gauss theorem:
$$\mathbf{B} = \int \operatorname{div}\sigma \, dV = -\int \mathbf{f}\, dV = -\rho_f \mathbf{g} \int dV = -\rho_f \mathbf{g} V$$
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid doesn't exert force on the part of the body which is outside of it.
The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude:
$$B = \rho_f V_\text{disp}\, g$$
where $\rho_f$ is the density of the fluid, $V_\text{disp}$ is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
$$\rho_f V_\text{disp}\, g.$$
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes' principle is applicable, and is thus the sum of the buoyancy force and the object's weight:
$$F_\text{net} = 0 = m g - \rho_f V_\text{disp}\, g$$
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore:
$$m g = \rho_f V_\text{disp}\, g$$
and therefore
$$m = \rho_f V_\text{disp}$$
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, are independent of the gravitational field regardless of geographic location.
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:
$$T = \rho_f V g - m g$$
When a sinking object settles on the solid floor, it experiences a normal force of:
$$N = m g - \rho_f V g$$
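Assuming the standard expressions for these restraint forces (buoyancy minus weight for a tethered floater, weight minus buoyancy for a sunken object), the sketch below evaluates them for hypothetical object volumes and masses.

```python
# Restraint forces on a fully submerged object (hypothetical values).
RHO_FLUID = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def tension_to_hold_down(volume_m3, mass_kg):
    """Tension needed to keep an otherwise-floating object fully submerged."""
    return RHO_FLUID * volume_m3 * G - mass_kg * G

def normal_force_on_floor(volume_m3, mass_kg):
    """Normal force from the floor on a fully submerged object that sinks."""
    return mass_kg * G - RHO_FLUID * volume_m3 * G

print(tension_to_hold_down(volume_m3=0.010, mass_kg=4.0))   # buoyant object: ~58.9 N of tension
print(normal_force_on_floor(volume_m3=0.001, mass_kg=3.0))  # dense object: ~19.6 N from the floor
```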
Another possible way of calculating the buoyancy of an object is to find its apparent weight in air (in newtons) and its apparent weight in the water (in newtons). The force of buoyancy acting on the object is then given by:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The final result would be measured in newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
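As a rough check of this claim, the buoyant force from air relative to an object's true weight is simply the ratio of the air density to the object's density; the short sketch below evaluates this ratio for a couple of assumed densities (air taken as roughly 1.2 kg/m³ at room conditions).

```python
# Relative error from neglecting air buoyancy when weighing an object in air:
#   buoyant force / true weight = rho_air / rho_object
RHO_AIR = 1.2  # kg/m^3, approximate value at room conditions

for name, rho in [("aluminium (~2700 kg/m^3)", 2700.0), ("water-density solid", 1000.0)]:
    print(f"{name}: {100 * RHO_AIR / rho:.3f}% of true weight")
```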
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Refinements
Archimedes' principle does not consider the surface tension (capillarity) acting on the body. Moreover, Archimedes' principle has been found to break down in complex fluids.
There is an exception to Archimedes' principle known as the bottom (or side) case. This occurs when a side of the object is touching the bottom (or side) of the vessel it is submerged in, and no liquid seeps in along that side. In this case, the net force has been found to be different from Archimedes' principle, as, since no fluid seeps in on that side, the symmetry of pressure is broken.
Principle of flotation
Archimedes' principle shows the buoyant force and displacement of fluid. However, the concept of Archimedes' principle can be applied when considering why objects float. Proposition 5 of Archimedes' treatise On Floating Bodies states that
In other words, for an object floating on a liquid surface (like a boat) or floating submerged in a fluid (like a submarine in water or dirigible in air) the weight of the displaced liquid equals the weight of the object. Thus, only in the special case of floating does the buoyant force acting on an object equal the object's weight. Consider a 1-ton block of solid iron. As iron is nearly eight times as dense as water, it displaces only 1/8 ton of water when submerged, which is not enough to keep it afloat. Suppose the same iron block is reshaped into a bowl. It still weighs 1 ton, but when it is put in water, it displaces a greater volume of water than when it was a block. The deeper the iron bowl is immersed, the more water it displaces, and the greater the buoyant force acting on it. When the buoyant force equals 1 ton, it will sink no farther.
When any boat displaces a weight of water equal to its own weight, it floats. This is often called the "principle of flotation": A floating object displaces a weight of fluid equal to its own weight. Every ship, submarine, and dirigible must be designed to displace a weight of fluid at least equal to its own weight. A 10,000-ton ship's hull must be built wide enough, long enough and deep enough to displace 10,000 tons of water and still have some hull above the water to prevent it from sinking. It needs extra hull to fight waves that would otherwise fill it and, by increasing its mass, cause it to submerge. The same is true for vessels in air: a dirigible that weighs 100 tons needs to displace 100 tons of air. If it displaces more, it rises; if it displaces less, it falls. If the dirigible displaces exactly its weight, it hovers at a constant altitude.
While they are related to it, the principle of flotation and the concept that a submerged object displaces a volume of fluid equal to its own volume are not Archimedes' principle. Archimedes' principle, as stated above, equates the buoyant force to the weight of the fluid displaced.
One common point of confusion regarding Archimedes' principle is the meaning of displaced volume. Common demonstrations involve measuring the rise in water level when an object floats on the surface in order to calculate the displaced water. This measurement approach fails with a buoyant submerged object because the rise in the water level is directly related to the volume of the object and not the mass (except if the effective density of the object equals exactly the fluid density).
Eureka
Archimedes reportedly exclaimed "Eureka" after he realized how to detect whether a crown is made of impure gold. While he did not use Archimedes' principle in the widespread tale and used displaced water only for measuring the volume of the crown, there is an alternative approach using the principle: Balance the crown and pure gold on a scale in the air and then put the scale into water. According to Archimedes' principle, if the density of the crown differs from the density of pure gold, the scale will get out of balance under water.
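The sketch below illustrates the underwater-balance idea: two objects of equal mass but different density have different apparent weights when fully submerged, since the apparent weight is m g (1 − ρ_water/ρ_object). The gold density is an approximate reference value and the adulterated-crown density is hypothetical.

```python
# Underwater balance test: equal masses, different densities -> unequal apparent weights.
G = 9.81
RHO_WATER = 1000.0   # kg/m^3

def apparent_weight(mass_kg, rho_object):
    """Apparent weight of a fully submerged object: m*g*(1 - rho_water/rho_object)."""
    return mass_kg * G * (1.0 - RHO_WATER / rho_object)

mass = 1.0                             # kg, same for the crown and the reference gold
print(apparent_weight(mass, 19300.0))  # pure gold, density ~19300 kg/m^3
print(apparent_weight(mass, 14000.0))  # hypothetical adulterated crown, lower density
# The lower-density crown weighs less under water, so the balance tips toward the gold.
```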
See also
Phragmen's voting rules – a ballot load balancing method analogous to the idea of Archimedes' principle.
References
External links
Fluid dynamics
Principle
Force
Buoyancy
Scientific laws | Archimedes' principle | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,679 | [
"Force",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mass",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Scientific laws",
"Piping",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
333,981 | https://en.wikipedia.org/wiki/Nucleomorph | Nucleomorphs are small, vestigial eukaryotic nuclei found between the inner and outer pairs of membranes in certain plastids. They are thought to be vestiges of red and green algal nuclei that were engulfed by a larger eukaryote. Because the nucleomorph lies between two sets of membranes, nucleomorphs support the endosymbiotic theory and are evidence that the plastids containing them are complex plastids. Having two sets of membranes indicate that the plastid, a prokaryote, was engulfed by a eukaryote, an alga, which was then engulfed by another eukaryote, the host cell, making the plastid an example of secondary endosymbiosis.
Organisms with known nucleomorphs
As of 2007, only two monophyletic groups of organisms are known to contain plastids with a vestigial nucleus or nucleomorph: the cryptomonads of the supergroup Cryptista and the chlorarachniophytes of the supergroup Rhizaria, both of which have examples of sequenced nucleomorph genomes. Studies of the genomic organization and of the molecular phylogeny have shown that the nucleomorph of the cryptomonads used to be the nucleus of a red alga, whereas the nucleomorph of the chlorarchniophytes was the nucleus of a green alga. In both groups of organisms the plastids originate from engulfed photoautotrophic eukaryotes.
Of the two known plastids that contain nucleomorphs, both have four membranes, the nucleomorph residing in the periplastidial compartment, evidence of being engulfed by a eukaryote through phagocytosis.
In 2020, genetic work identified the plastid in Lepidodinium and two previously undescribed dinoflagellates ("MGD" and "TGD") as being most closely related to the green alga Pedinomonas. The observation of a nucleomorph in Lepidodinium is controversial, but MGD and TGD are proven to have DNA-containing nucleomorphs. The transcriptomes of the nucleomorphs have been sequenced. One slight issue in understanding the sequence of evolution is that although the phylogenetic tree built from Lepidodinium-MGD-TGD's plastid is monophyletic, the tree built from their host-nucleus DNA is not, implying that they might have acquired very similar algae independently.
Structure
A cryptomonad nucleomorph is typically much smaller than the host nucleus. A relatively large portion of its size is devoted to the nucleolus, which contains its own ribosomes and rRNA. There seem to be nuclear pores observable by imaging, but genetic work has failed to find any protein appropriate for forming the nuclear pore complex.
There is one nucleomorph per plastid. The nucleomorph divides before the accompanying plastid. The dividing nucleomorph lacks a mitotic spindle, and the nucleomorph envelope persists throughout division.
Between the plastid and the cytoplasm of the host there are four membranes: the inner and outer membranes of the chloroplast, the periplastid membrane, and the epiplastid membrane. The epiplastid membrane is encrusted with ribosomes (in cryptomonads) and is in many ways similar to an endoplasmic reticulum, hence the name "chloroplast endoplasmic reticulum" (cER). Plastid-targeted proteins encoded in the host genome must cross all four membranes to reach the plastid. First they use classic secretory signal peptides to cross the epiplastid membrane. Then the symbiont-specific ERAD-like machinery (SELMA) – encoded in the nucleomorph as a repurposed ERAD – pulls the protein from the epiplastid space (or the lumen of the cER) into the periplastid space (the cytoplasm of the symbiont). The standard chloroplast transit peptide then acts to cross the remaining two layers via the TIC/TOC complex.
The chlorarachniophytes, on the other hand, have no cER, so the initial import into the epiplastid space must occur by some other mechanism. It is only known that their plastid-targeted proteins are prefixed by both a signal peptide and a chloroplast-targeting peptide, much like in cryptomonads. Based on research done on apicomplexans, which also have four membranes but no cER, it is possible that the protein is first sent into the ER and then delivered to the epiplastid space by the endomembrane sorting system. Some sort of pore may then move the protein into the periplastid space, but there seems to be no SELMA-like pore in this group. It is only known that the TIC/TOC complex exists for crossing the last two layers.
Nucleomorph genome
Nucleomorphs represent some of the smallest genomes ever sequenced. After the red or green alga was engulfed by a cryptomonad or chlorarachniophyte, respectively, its genome was reduced. The nucleomorph genomes of both cryptomonads and chlorarachniophytes converged upon a similar size from larger genomes. They retained only three chromosomes and many genes were transferred to the nucleus of the host cell, while others were lost entirely. Chlorarachniophytes contain a nucleomorph genome that is diploid and cryptomonads contain a nucleomorph genome that is tetraploid. The unique combination of host cell and complex plastid results in cells with four genomes: two prokaryotic genomes (mitochondrion and plastid of the red or green algae) and two eukaryotic genomes (nucleus of host cell and nucleomorph).
The model cryptomonad Guillardia theta became an important focus for scientists studying nucleomorphs. Its complete nucleomorph sequence was published in 2001, coming in at 551 Kbp. The G. theta sequence gave insight as to what genes were retained in nucleomorphs. Most of the genes that moved to the host cell involved protein synthesis, leaving behind a compact genome with mostly single-copy “housekeeping” genes (affecting transcription, translation, protein folding and degradation and splicing) and no mobile elements. The genome contains 513 genes, 465 of which code for protein. Thirty genes are considered “plastid” genes, coding for plastid proteins. It has three chromosomes with eukaryotic telomeres subtended by rRNA.
The genome sequence of another organism, the chlorarachniophyte Bigelowiella natans indicates that its nucleomorph is probably the vestigial nucleus of a green alga, whereas the nucleomorph in G. theta probably came from a red alga. The B. natans genome is smaller than that of G. theta, with about 373 Kbp and contains 293 protein-coding genes as compared to the 465 genes in G. theta. B. natans also only has 17 genes that code for plastid proteins, again fewer than G. theta. Comparisons between the two organisms have shown that B. natans contains significantly more introns (852) than G. theta (17). B. natans also had smaller introns, ranging from 18-21 bp, whereas G. theta’s introns ranged from 42-52 bp.
Both the genomes of B. natans and G. theta display evidence of genome reduction besides elimination of genes and tiny size, including elevated composition of adenine (A) and thymine (T), and high substitution rates.
Persistence of nucleomorphs
There are no recorded instances of vestigial nuclei in any other secondary plastid-containing organisms, yet they have been retained independently in the cryptomonads and chlorarachniophytes. Plastid gene transfer happens frequently in many organisms, and it is unusual that these nucleomorphs have not disappeared entirely. One theory as to why these nucleomorphs have not disappeared as they have in other groups is that introns present in nucleomorphs are not recognized by host spliceosomes because they are too small and therefore cannot be cut and later incorporated into host DNA.
Nucleomorphs also often code for many of their own critical functions, like transcription and translation. Some say that as long as there exists a gene in the nucleomorph that codes for proteins necessary for the plastid’s functioning that are not produced by the host cell, the nucleomorph will persist. The cryptomonad nucleomorph also codes for genes that function in plastid maintenance.
In cryptophytes and chlorarachniophytes all DNA transfer between the nucleomorph and host genome seems to have ceased, but the process is still going on in a few dinoflagellates (MGD and TGD).
Tertiary endosymbiosis
The standard nucleomorph is the result of secondary endosymbiosis: a cyanobacterium first became the chloroplast of ancestral plants, which diverged into green and red algae among other groups; the algal cell is then captured by another eukaryote. The chloroplast is surrounded by 4 membranes: 2 layers resulting from the primary, and 2 resulting from the secondary. When the nucleus of the algal endosymbiont remains, it's called a "nucleomorph".
Most tertiary endosymbiosis events end up with only the plastid retained. However, in the case of dinotoms (i.e. those having diatom endosymbionts), the symbiont's nucleus appears to be of normal size with a large amount of DNA, surrounded by plenty of cytoplasm. The symbiont even has its own DNA-containing mitochondria. As a result, the organism has two eukaryotic genomes and three prokaryotic-derived organelle genomes.
See also
Endosymbiont
References
External links
Insight into the Diversity and Evolution of the Cryptomonad Nucleomorph Genome
Cryptophyta at NCBI taxbrowser
Cercozoa at NCBI taxbrowser
According to GenBank release 164 (Feb 2008), there are 13 Cercozoa and 181 Cryptophyta entries (an entry is the submission of a sequence to the DDBJ/EMBL/GenBank public database of sequences). Most sequenced organisms were:
Guillardia theta: 54;
Rhodomonas salina: 18;
Cryptomonas sp.: 15;
Chlorarachniophyceae sp.:10;
Cryptomonas paramecium: 9;
Cryptomonas erosa: 7.
Organelles
Plant physiology
Mitochondrial genetics
Microbiology
Algae
Phycology
Evolution
Symbiosis
Endosymbiotic events | Nucleomorph | [
"Chemistry",
"Biology"
] | 2,437 | [
"Plant physiology",
"Behavior",
"Algae",
"Symbiosis",
"Plants",
"Endosymbiotic events",
"Biological interactions",
"Microbiology",
"Phycology",
"Microscopy"
] |
333,996 | https://en.wikipedia.org/wiki/Ultrametric%20space | In mathematics, an ultrametric space is a metric space in which the triangle inequality is strengthened to for all , , and . Sometimes the associated metric is also called a non-Archimedean metric or super-metric.
Formal definition
An ultrametric on a set $M$ is a real-valued function
$$d : M \times M \to \mathbb{R}$$
(where $\mathbb{R}$ denotes the real numbers), such that for all $x, y, z \in M$:
$d(x, y) \ge 0$;
$d(x, y) = d(y, x)$ (symmetry);
$d(x, x) = 0$;
if $d(x, y) = 0$ then $x = y$;
$d(x, z) \le \max\{d(x, y),\, d(y, z)\}$ (strong triangle inequality or ultrametric inequality).
An ultrametric space is a pair $(M, d)$ consisting of a set $M$ together with an ultrametric $d$ on $M$, which is called the space's associated distance function (also called a metric).
If $d$ satisfies all of the conditions except possibly condition 4 then $d$ is called an ultrapseudometric on $M$. An ultrapseudometric space is a pair $(M, d)$ consisting of a set $M$ and an ultrapseudometric $d$ on $M$.
In the case when $M$ is an Abelian group (written additively) and $d$ is generated by a length function $\|\cdot\|$ (so that $d(x, y) = \|x - y\|$), the last property can be made stronger using the Krull sharpening to:
$$\|x + y\| \le \max\{\|x\|, \|y\|\}$$
with equality if $\|x\| \ne \|y\|$.
We want to prove that if $\|x + y\| \le \max\{\|x\|, \|y\|\}$, then the equality occurs if $\|x\| \ne \|y\|$. Without loss of generality, let us assume that $\|x\| > \|y\|$. This implies that $\|x + y\| \le \|x\|$. But we can also compute $\|x\| = \|(x + y) - y\| \le \max\{\|x + y\|, \|y\|\}$. Now, the value of $\max\{\|x + y\|, \|y\|\}$ cannot be $\|y\|$, for if that is the case, we have $\|x\| \le \|y\|$ contrary to the initial assumption. Thus, $\max\{\|x + y\|, \|y\|\} = \|x + y\|$, and $\|x\| \le \|x + y\|$. Using the initial inequality, we have $\|x + y\| \le \|x\|$ and therefore $\|x + y\| = \|x\|$.
Properties
From the above definition, one can conclude several typical properties of ultrametrics. For example, for all $x, y, z \in M$, at least one of the three equalities $d(x, y) = d(y, z)$ or $d(x, z) = d(y, z)$ or $d(x, y) = d(x, z)$ holds. That is, every triple of points in the space forms an isosceles triangle, so the whole space is an isosceles set.
Defining the (open) ball of radius $r > 0$ centred at $x$ as $B(x; r) := \{y \mid d(x, y) < r\}$, we have the following properties:
Every point inside a ball is its center, i.e. if $d(x, y) < r$ then $B(x; r) = B(y; r)$.
Intersecting balls are contained in each other, i.e. if $B(x; r) \cap B(y; s)$ is non-empty then either $B(x; r) \subseteq B(y; s)$ or $B(y; s) \subseteq B(x; r)$.
All balls of strictly positive radius are both open and closed sets in the induced topology. That is, open balls are also closed, and closed balls (replace $<$ with $\le$) are also open.
The set of all open balls with radius $r$ and center in a closed ball of radius $r > 0$ forms a partition of the latter, and the mutual distance of two distinct open balls is (greater or) equal to $r$.
Proving these statements is an instructive exercise. All directly derive from the ultrametric triangle inequality. Note that, by the second statement, a ball may have several center points that have non-zero distance. The intuition behind such seemingly strange effects is that, due to the strong triangle inequality, distances in ultrametrics do not add up.
Examples
The discrete metric is an ultrametric.
The p-adic numbers form a complete ultrametric space.
Consider the set of words of arbitrary length (finite or infinite), Σ*, over some alphabet Σ. Define the distance between two different words to be $2^{-n}$, where n is the first place at which the words differ. The resulting metric is an ultrametric.
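As a small illustration of this example, the sketch below implements the $2^{-n}$ word distance and spot-checks the strong triangle inequality on a few strings; the function name is made up for the illustration.

```python
# Ultrametric on words: distance 2^(-n), where n is the first position at which
# the two words differ (distance 0 if the words are equal).
from itertools import zip_longest

def word_distance(a: str, b: str) -> float:
    if a == b:
        return 0.0
    for n, (ca, cb) in enumerate(zip_longest(a, b), start=1):
        if ca != cb:
            return 2.0 ** (-n)
    return 0.0  # unreachable for distinct words

# Spot-check the strong triangle inequality d(x, z) <= max(d(x, y), d(y, z)).
words = ["cat", "car", "cart", "dog"]
for x in words:
    for y in words:
        for z in words:
            assert word_distance(x, z) <= max(word_distance(x, y), word_distance(y, z))
print("strong triangle inequality holds on the sample")
```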
The set of words with glued ends of the length n over some alphabet Σ is an ultrametric space with respect to the p-close distance. Two words x and y are p-close if any substring of p consecutive letters (p < n) appears the same number of times (which could also be zero) both in x and y.
If $r = (r_n)$ is a sequence of real numbers decreasing to zero, then $|x|_r := \limsup_{n\to\infty} |x_n|^{r_n}$ induces an ultrametric on the space of all complex sequences for which it is finite. (Note that this is not a seminorm since it lacks homogeneity. If the $r_n$ are allowed to be zero, one should use here the rather unusual convention that $0^0 = 0$.)
If G is an edge-weighted undirected graph, all edge weights are positive, and d(u,v) is the weight of the minimax path between u and v (that is, the largest weight of an edge, on a path chosen to minimize this largest weight), then the vertices of the graph, with distance measured by d, form an ultrametric space, and all finite ultrametric spaces may be represented in this way.
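A compact way to compute such minimax path distances on a small graph is a Floyd–Warshall-style pass in which addition is replaced by max and summation by min; the sketch below does this for a hypothetical weighted graph and spot-checks the ultrametric inequality.

```python
# Minimax path distance on an edge-weighted undirected graph, computed with a
# Floyd-Warshall-style recurrence: d[i][j] = min(d[i][j], max(d[i][k], d[k][j])).
# The example graph is hypothetical.
INF = float("inf")

def minimax_distances(n, edges):
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return d

edges = [(0, 1, 4.0), (1, 2, 2.0), (2, 3, 7.0), (0, 3, 9.0)]
d = minimax_distances(4, edges)
# Spot-check the strong triangle inequality on this connected graph.
for i in range(4):
    for j in range(4):
        for k in range(4):
            assert d[i][j] <= max(d[i][k], d[k][j])
print(d[0][2])  # 4.0: the path 0-1-2 has maximum edge weight 4
```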
Applications
A contraction mapping may then be thought of as a way of approximating the final result of a computation (which can be guaranteed to exist by the Banach fixed-point theorem). Similar ideas can be found in domain theory. p-adic analysis makes heavy use of the ultrametric nature of the p-adic metric.
In condensed matter physics, the self-averaging overlap between spins in the SK Model of spin glasses exhibits an ultrametric structure, with the solution given by the full replica symmetry breaking procedure first outlined by Giorgio Parisi and coworkers. Ultrametricity also appears in the theory of aperiodic solids.
In taxonomy and phylogenetic tree construction, ultrametric distances are also utilized by the UPGMA and WPGMA methods. These algorithms require a constant-rate assumption and produce trees in which the distances from the root to every branch tip are equal. When DNA, RNA and protein data are analyzed, the ultrametricity assumption is called the molecular clock.
Models of intermittency in three dimensional turbulence of fluids make use of so-called cascades, and in discrete models of dyadic cascades, which have an ultrametric structure.
In geography and landscape ecology, ultrametric distances have been applied to measure landscape complexity and to assess the extent to which one landscape function is more important than another.
References
Bibliography
Further reading
.
External links
Metric geometry
Metric spaces | Ultrametric space | [
"Mathematics"
] | 1,166 | [
"Mathematical structures",
"Space (mathematics)",
"Metric spaces"
] |
334,290 | https://en.wikipedia.org/wiki/Neutralino | In supersymmetry, the neutralino is a hypothetical particle. In the Minimal Supersymmetric Standard Model (MSSM), a popular model of realization of supersymmetry at a low energy, there are four neutralinos that are fermions and are electrically neutral, the lightest of which is stable in an R-parity conserved scenario of MSSM. They are typically labeled (the lightest), , and (the heaviest) although sometimes is also used when is used to refer to charginos.
These four states are composites of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical to its antiparticle.
Expected behavior
If they exist, these particles would only interact with the weak vector bosons, so they would not be directly produced at hadron colliders in copious numbers. They would primarily appear as particles in cascade decays (decays that happen in multiple steps) of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade-decays end up decaying into this particle which leaves the detector unseen and its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a neutral Z boson to a lighter neutralino or through a charged W boson to a light chargino:
In both cases the decay chain ends with missing energy, carried off by the lightest neutralino, together with the additional visible decay products emitted at each step.
The mass splittings between the different neutralinos will dictate which patterns of decays are allowed.
Up to present, neutralinos have never been observed or detected in an experiment.
Origins in supersymmetric theories
In supersymmetry models, all Standard Model particles have partner particles with the same quantum numbers except for the quantum number spin, which differs by 1/2 from its partner particle. Since the superpartners of the Z boson (zino), the photon (photino) and the neutral Higgs boson (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP), though other particles may also take on this role.
Phenomenology
The exact properties of each neutralino will depend on the details of the mixing (e.g. whether they are more higgsino-like or gaugino-like), but they tend to have masses at the weak scale (100 GeV ~ 1 TeV) and couple to other particles with strengths characteristic of the weak interaction. In this way, except for mass, they are phenomenologically similar to neutrinos, and so are not directly observable in particle detectors at accelerators.
In models in which R-parity is conserved and the lightest of the four neutralinos is the LSP, the lightest neutralino is stable and is eventually produced in the decay chain of all other superpartners. In such cases supersymmetric processes at accelerators are characterized by the expectation of a large discrepancy in energy and momentum between the visible initial and final state particles, with this energy being carried off by a neutralino which departs the detector unnoticed.
This is an important signature to discriminate supersymmetry from Standard Model backgrounds.
Relationship to dark matter
As a heavy, stable particle, the lightest neutralino is an excellent candidate to form the universe's cold dark matter. In many models the lightest neutralino can be produced thermally in the hot early universe and leave approximately the right relic abundance to account for the observed dark matter. A lightest neutralino with a mass of roughly the weak scale is the leading weakly interacting massive particle (WIMP) dark matter candidate.
Neutralino dark matter could be observed experimentally in nature either indirectly or directly. For indirect observation, gamma ray and neutrino telescopes look for evidence of neutralino annihilation in regions of high dark matter density such as the galactic or solar centre. For direct observation, special purpose experiments such as the Cryogenic Dark Matter Search (CDMS) seek to detect the rare impacts of WIMPs in terrestrial detectors. These experiments have begun to probe interesting supersymmetric parameter space, excluding some models for neutralino dark matter, and upgraded experiments with greater sensitivity are under development.
See also
List of hypothetical particles
Weakly interacting slender particle
References
Dark matter
Fermions
Supersymmetric quantum field theory
Hypothetical elementary particles | Neutralino | [
"Physics",
"Materials_science",
"Astronomy"
] | 1,031 | [
"Dark matter",
"Symmetry",
"Unsolved problems in astronomy",
"Supersymmetric quantum field theory",
"Concepts in astronomy",
"Fermions",
"Unsolved problems in physics",
"Subatomic particles",
"Condensed matter physics",
"Exotic matter",
"Hypothetical elementary particles",
"Supersymmetry",
"... |
334,816 | https://en.wikipedia.org/wiki/Route%20of%20administration | In pharmacology and toxicology, a route of administration is the way by which a drug, fluid, poison, or other substance is taken into the body.
Routes of administration are generally classified by the location at which the substance is applied. Common examples include oral and intravenous administration. Routes can also be classified based on where the target of action is. Action may be topical (local), enteral (system-wide effect, but delivered through the gastrointestinal tract), or parenteral (systemic action, but is delivered by routes other than the GI tract). Route of administration and dosage form are aspects of drug delivery.
Classification
Routes of administration are usually classified by application location (or exposition).
The route or course the active substance takes from application location to the location where it has its target effect is usually rather a matter of pharmacokinetics (concerning the processes of uptake, distribution, and elimination of drugs). Exceptions include the transdermal or transmucosal routes, which are still commonly referred to as routes of administration.
The location of the target effect of active substances is usually rather a matter of pharmacodynamics (concerning, for example, the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof is local.
Topical administration is sometimes defined as both a local application location and local pharmacodynamic effect, and sometimes merely as a local application location regardless of location of the effects.
By application location
Enteral/gastrointestinal route
Through the gastrointestinal tract is sometimes termed enteral or enteric administration (literally meaning 'through the intestines'). Enteral/enteric administration usually includes oral (through the mouth) and rectal (into the rectum) administration, in the sense that these are taken up by the intestines. However, uptake of drugs administered orally may also occur already in the stomach, and as such gastrointestinal (along the gastrointestinal tract) may be a more fitting term for this route of administration. Furthermore, some application locations often classified as enteral, such as sublingual (under the tongue) and sublabial or buccal (between the cheek and gums/gingiva), are taken up in the proximal part of the gastrointestinal tract without reaching the intestines. Strictly enteral administration (directly into the intestines) can be used for systemic administration, as well as local (sometimes termed topical), such as in a contrast enema, whereby contrast media are infused into the intestines for imaging. However, for the purposes of classification based on location of effects, the term enteral is reserved for substances with systemic effects.
Many drugs as tablets, capsules, or drops are taken orally. Administration methods directly into the stomach include those by gastric feeding tube or gastrostomy. Substances may also be placed into the small intestines, as with a duodenal feeding tube and enteral nutrition. Enteric coated tablets are designed to dissolve in the intestine, not the stomach, because the drug present in the tablet causes irritation in the stomach.
The rectal route is an effective route of administration for many medications, especially those used at the end of life. The walls of the rectum absorb many medications quickly and effectively. Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, which allows for greater bio-availability of many medications than that of the oral route. Rectal mucosa is highly vascularized tissue that allows for rapid and effective absorption of medications. A suppository is a solid dosage form suited for rectal administration. In hospice care, a specialized rectal catheter, designed to provide comfortable and discreet administration of ongoing medications, provides a practical way to deliver and retain liquid formulations in the distal rectum, giving health practitioners a way to leverage the established benefits of rectal administration. The Murphy drip is an example of rectal infusion.
Parenteral route
The parenteral route is any route that is not enteral (par- + enteral).
Parenteral administration can be performed by injection, that is, using a needle (usually a hypodermic needle) and a syringe, or by the insertion of an indwelling catheter.
Locations of application of parenteral administration include:
Central nervous system:
Epidural (synonym: peridural) (injection or infusion into the epidural space), e.g. epidural anesthesia.
Intracerebral (into the cerebrum) administration by direct injection into the brain. Used in experimental research of chemicals and as a treatment for malignancies of the brain. The intracerebral route can also disrupt the blood–brain barrier so that it no longer holds up against substances delivered by subsequent routes.
Intracerebroventricular (into the cerebral ventricles) administration into the ventricular system of the brain. One use is as a last line of opioid treatment for terminal cancer patients with intractable cancer pain.
Epicutaneous (application onto the skin). It can be used both for local effect as in allergy testing and typical local anesthesia, as well as systemic effects when the active substance diffuses through skin in a transdermal route.
Sublingual and buccal medication administration is a way of giving someone medicine orally (by mouth). Sublingual administration is when medication is placed under the tongue to be absorbed by the body. The word "sublingual" means "under the tongue." Buccal administration involves placement of the drug between the gums and the cheek. These medications can come in the form of tablets, films, or sprays. Many drugs are designed for sublingual administration, including cardiovascular drugs, steroids, barbiturates, opioid analgesics with poor gastrointestinal bioavailability, enzymes and, increasingly, vitamins and minerals.
Extra-amniotic administration, between the endometrium and fetal membranes.
Nasal administration (through the nose) can be used for topically acting substances, as well as for insufflation of e.g. decongestant nasal sprays to be taken up along the respiratory tract. Such substances are also called inhalational, e.g. inhalational anesthetics.
Intra-arterial (into an artery), e.g. vasodilator drugs in the treatment of vasospasm and thrombolytic drugs for treatment of embolism.
Intra-articular, into a joint space. It is generally performed by joint injection. It is mainly used for symptomatic relief in osteoarthritis.
Intracardiac (into the heart), e.g. adrenaline during cardiopulmonary resuscitation (no longer commonly performed).
Intracavernous injection, an injection into the base of the penis.
Intradermal, (into the skin itself) is used for skin testing some allergens, and also for mantoux test for tuberculosis.
Intralesional (into a skin lesion), is used for local skin lesions, e.g. acne medication.
Intramuscular (into a muscle), e.g. many vaccines, antibiotics, and long-term psychoactive agents. Recreationally the colloquial term 'muscling' is used.
Intraocular, into the eye, e.g., some medications for glaucoma or eye neoplasms.
Intraosseous infusion (into the bone marrow) is, in effect, an indirect intravenous access because the bone marrow drains directly into the venous system. This route is occasionally used for drugs and fluids in emergency medicine and pediatrics when intravenous access is difficult.
Intraperitoneal, (infusion or injection into the peritoneum) e.g. peritoneal dialysis.
Intrathecal (into the spinal canal) is most commonly used for spinal anesthesia and chemotherapy.
Intrauterine.
Intravaginal administration, in the vagina.
Intravenous (into a vein), e.g. many drugs, total parenteral nutrition.
Intravesical infusion is into the urinary bladder.
Intravitreal, through the eye.
Subcutaneous (under the skin). This generally takes the form of subcutaneous injection, e.g. with insulin. Skin popping is a slang term that includes subcutaneous injection, and is usually used in association with recreational drugs. In addition to injection, it is also possible to slowly infuse fluids subcutaneously in the form of hypodermoclysis.
Transdermal (diffusion through the intact skin for systemic rather than topical distribution), e.g. transdermal patches such as fentanyl in pain therapy, nicotine patches for treatment of addiction and nitroglycerine for treatment of angina pectoris.
Perivascular administration (perivascular medical devices and perivascular drug delivery systems are conceived for local application around a blood vessel during open vascular surgery).
Transmucosal (diffusion through a mucous membrane), e.g. insufflation (snorting) of cocaine, sublingual, i.e. under the tongue, sublabial, i.e. between the lips and gingiva, and oral spray or vaginal suppository for nitroglycerine.
Topical route
The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local.
In other cases, topical is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution.
If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One such medication is the antibiotic vancomycin, which cannot be absorbed in the gastrointestinal tract and is used orally only as a treatment for Clostridioides difficile colitis.
Choice of routes
The choice of route of drug administration is governed by various factors:
Physical and chemical properties of the drug. The physical state may be solid, liquid, or gas; the chemical properties include solubility, stability, pH, and irritancy.
Site of desired action: the action may be localised and approachable or generalised and not approachable.
Rate and extent of absorption of the drug from different routes.
Effect of digestive juices and the first pass metabolism of drugs.
Condition of the patient.
In acute situations, in emergency medicine and intensive care medicine, drugs are most often given intravenously. This is the most reliable route, as in acutely ill patients the absorption of substances from the tissues and from the digestive tract can often be unpredictable due to altered blood flow or bowel motility.
Convenience
Enteral routes are generally the most convenient for the patient, as no punctures or sterile procedures are necessary. Enteral medications are therefore often preferred in the treatment of chronic disease. However, some drugs can not be used enterally because their absorption in the digestive tract is low or unpredictable. Transdermal administration is a comfortable alternative; there are, however, only a few drug preparations that are suitable for transdermal administration.
Desired target effect
Identical drugs can produce different results depending on the route of administration. For example, some drugs are not significantly absorbed into the bloodstream from the gastrointestinal tract and their action after enteral administration is therefore different from that after parenteral administration. This can be illustrated by the action of naloxone (Narcan), an antagonist of opiates such as morphine. Naloxone counteracts opiate action in the central nervous system when given intravenously and is therefore used in the treatment of opiate overdose. The same drug, when swallowed, acts exclusively on the bowels; it is here used to treat constipation under opiate pain therapy and does not affect the pain-reducing effect of the opiate.
Oral
The oral route is generally the most convenient and costs the least. However, some drugs can cause gastrointestinal tract irritation. For drugs that come in delayed release or time-release formulations, breaking the tablets or capsules can lead to more rapid delivery of the drug than intended. The oral route is limited to formulations containing small molecules only while biopharmaceuticals (usually proteins) would be digested in the stomach and thereby become ineffective. Biopharmaceuticals have to be given by injection or infusion. However, recent research found various ways to improve oral bioavailability of these drugs. In particular permeation enhancers, ionic liquids, lipid-based nanocarriers, enzyme inhibitors and microneedles have shown potential.
Oral administration is often denoted "PO" from "per os", the Latin for "by mouth".
The bioavailability of oral administration is affected by the amount of drug that is absorbed across the intestinal epithelium and first-pass metabolism.
Oral mucosal
The oral mucosa is the mucous membrane lining the inside of the mouth.
Buccal
Buccally administered medication is achieved by placing the drug between gums and the inner lining of the cheek. In comparison with sublingual tissue, buccal tissue is less permeable resulting in slower absorption.
Sublabial
Sublingual
Sublingual administration is fulfilled by placing the drug between the tongue and the lower surface of the mouth. The sublingual mucosa is highly permeable and thereby provides access to the underlying expansive network composed of capillaries, leading to rapid drug absorption.
Intranasal
Drug administration via the nasal cavity yields rapid drug absorption and therapeutic effects. This is because drug absorbed through the nasal passages does not pass through the gut before entering the capillaries and then the systemic circulation, and this absorption route also allows transport of drugs into the central nervous system via the olfactory and trigeminal nerve pathways.
Intranasal absorption is limited by low lipophilicity, enzymatic degradation within the nasal cavity, large molecular size, and rapid mucociliary clearance from the nasal passages, which explains the low risk of systemic exposure to a drug absorbed intranasally.
Local
By delivering drugs almost directly to the site of action, the risk of systemic side effects is reduced.
Skin absorption (dermal absorption), for example, delivers the drug directly to the skin and, potentially, to the systemic circulation. However, skin irritation may result, and for some forms such as creams or lotions, the dosage is difficult to control. Upon contact with the skin, the drug penetrates into the dead stratum corneum and can afterwards reach the viable epidermis, the dermis, and the blood vessels.
Parenteral
The term parenteral is from Greek para- 'beside' + enteron 'intestine' + -al, so named because it encompasses routes of administration that bypass the intestine. However, in common English the term has mostly been used to describe the four most well-known routes of injection.
The term injection encompasses intravenous (IV), intramuscular (IM), subcutaneous (SC) and intradermal (ID) administration.
Parenteral administration generally acts more rapidly than topical or enteral administration, with onset of action often occurring in 15–30 seconds for IV, 10–20 minutes for IM and 15–30 minutes for SC. They also have essentially 100% bioavailability and can be used for drugs that are poorly absorbed or ineffective when they are given orally. Some medications, such as certain antipsychotics, can be administered as long-acting intramuscular injections. Ongoing IV infusions can be used to deliver continuous medication or fluids.
Disadvantages of injections include potential pain or discomfort for the patient and the requirement of trained staff using aseptic techniques for administration. However, in some cases, patients are taught to self-inject, such as SC injection of insulin in patients with insulin-dependent diabetes mellitus. As the drug is delivered to the site of action extremely rapidly with IV injection, there is a risk of overdose if the dose has been calculated incorrectly, and there is an increased risk of side effects if the drug is administered too rapidly.
Respiratory tract
Mouth inhalation
Inhaled medications can be absorbed quickly and act both locally and systemically. Proper technique with inhaler devices is necessary to achieve the correct dose. Some medications can have an unpleasant taste or irritate the mouth.
In general, only 20–50% of a pulmonary-delivered dose of powdery particles is deposited in the lung upon mouth inhalation. The remaining 50–70% of aerosolized particles, which do not deposit, are cleared out of the lung upon exhalation.
An inhaled powdery particle that is >8 μm is structurally predisposed to depositing in the central and conducting airways (conducting zone) by inertial impaction.
An inhaled powdery particle that is between 3 and 8 μm in diameter tends to deposit largely in the transitional zones of the lung by sedimentation.
An inhaled powdery particle that is <3 μm in diameter is structurally predisposed to depositing primarily in the respiratory regions of the peripheral lung via diffusion.
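The three size thresholds above amount to a simple classification rule. The following minimal Python sketch (not part of the source article; the function name and return strings are illustrative assumptions) encodes those thresholds:

```python
def likely_deposition_region(diameter_um: float) -> str:
    """Classify where an inhaled powdery particle is structurally predisposed
    to deposit, using the aerodynamic-diameter thresholds described above."""
    if diameter_um > 8:
        return "central and conducting airways (inertial impaction)"
    elif diameter_um >= 3:
        return "transitional zones (sedimentation)"
    else:
        return "peripheral respiratory regions (diffusion)"

if __name__ == "__main__":
    for d in (10.0, 5.0, 1.0):
        print(f"{d} um -> {likely_deposition_region(d)}")
```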
Particles that deposit in the upper and central airways are generally absorbed systemically to a great extent, because they are only partially removed by mucociliary clearance; the cleared fraction is swallowed with the transported mucus and absorbed orally, where first-pass metabolism or incomplete absorption (with loss via the fecal route) can reduce bioavailability. This should not be taken to suggest that inhaled particles are no more hazardous than swallowed particles; it merely signifies that a combination of both routes may occur for some particles, regardless of particle size or the lipophilicity/hydrophilicity of the particle surface.
Nasal inhalation
Nasal inhalation of a substance is almost identical to mouth inhalation, except that some of the drug is absorbed intranasally rather than in the oral cavity before entering the airways. Both methods deposit varying amounts of the substance in their respective initial cavities, and the amount of mucus in either cavity affects how much of the substance is swallowed. The rate of inhalation usually determines how much of the substance enters the lungs: faster inhalation results in more rapid absorption because more of the substance reaches the lungs. Substances in a form that resists absorption in the lung are likely to resist absorption in the nasal passages and the oral cavity as well, and are often even more resistant to absorption once they fail to be absorbed in those cavities and are swallowed.
Research
Neural drug delivery is the next step beyond the basic addition of growth factors to nerve guidance conduits. Drug delivery systems allow the rate of growth factor release to be regulated over time, which is critical for creating an environment more closely representative of in vivo development environments.
See also
ADME
Catheter
Dosage form
Drug injection
Ear instillation
Hypodermic needle
Intravenous marijuana syndrome
List of medical inhalants
Nanomedicine
Absorption (pharmacology)
References
External links
The 10th US-Japan Symposium on Drug Delivery Systems
FDA Center for Drug Evaluation and Research Data Standards Manual: Route of Administration.
FDA Center for Drug Evaluation and Research Data Standards Manual: Dosage Form.
A.S.P.E.N. American Society for Parenteral and Enteral Nutrition
Drugs
Pharmacokinetics | Route of administration | [
"Chemistry"
] | 4,148 | [
"Pharmacology",
"Pharmacokinetics",
"Products of chemical industry",
"Routes of administration",
"Chemicals in medicine",
"Drugs"
] |
334,820 | https://en.wikipedia.org/wiki/Subcutaneous%20administration | Subcutaneous administration is the insertion of medications beneath the skin either by injection or infusion.
A subcutaneous injection is administered as a bolus into the subcutis, the layer of skin directly below the dermis and epidermis, collectively referred to as the cutis. The instruments are usually a hypodermic needle and a syringe. Subcutaneous injections are highly effective in administering medications such as insulin, morphine, diacetylmorphine and goserelin. Subcutaneous administration may be abbreviated as SC, SQ, subcu, sub-Q, SubQ, or subcut. Subcut is the preferred abbreviation to reduce the risk of misunderstanding and potential errors.
Subcutaneous tissue has few blood vessels and so drugs injected into it are intended for slow, sustained rates of absorption, often with some amount of depot effect. Compared with other routes of administration, it is slower than intramuscular injections but still faster than intradermal injections. Subcutaneous infusion (as opposed to subcutaneous injection) is similar but involves a continuous drip from a bag and line, as opposed to injection with a syringe.
Medical uses
A subcutaneous injection is administered into the fatty tissue of the subcutaneous tissue, located below the dermis and epidermis. They are commonly used to administer medications, especially those which cannot be administered by mouth as they would not be absorbed from the gastrointestinal tract. A subcutaneous injection is absorbed slower than a substance injected intravenously or into a muscle, but faster than a medication administered by mouth.
Medications
Medications commonly administered via subcutaneous injection or infusion include insulin, live vaccines, monoclonal antibodies, and heparin. These medications cannot be administered orally as the molecules are too large to be absorbed in the intestines. Subcutaneous injections can also be used when the increased bioavailability and more rapid effects over oral administration are preferred. They are also the easiest form of parenteral administration of medication to perform by lay people, and are associated with less adverse effects such as pain or infection than other forms of injection.
Insulin
Perhaps the most common medication administered subcutaneously is insulin. While attempts have been made since the 1920s to administer insulin orally, the large size of the molecule has made it difficult to create a formulation with absorption and predictability that comes close to subcutaneous injections of insulin. People with type 1 diabetes almost all require insulin as part of their treatment regimens, and a smaller proportion of people with type 2 diabetes do as well — with tens of millions of prescriptions per year in the United States alone.
Insulin historically was injected from a vial using a syringe and needle, but may also be administered subcutaneously using devices such as injector pens or insulin pumps. An insulin pump consists of a catheter which is inserted into the subcutaneous tissue, and then secured in place to allow insulin to be administered multiple times through the same injection site.
Recreational drug use
Subcutaneous injection may also be used by people to (self-) administer recreational drugs. This can be referred to as skin popping. In some cases, the administration of illicit drugs in this way is associated with unsafe practices leading to infections and other adverse effects. In rare cases, this results in serious side effects such as AA amyloidosis. Recreational drugs reported to be administered subcutaneously have included cocaine, mephedrone, and amphetamine derivatives such as PMMA.
Contraindications
Contraindications to subcutaneous injections primarily depend on the specific medication being administered. Doses which would require more than 2 mL to be injected at once are not administered subcutaneously. Medications which may cause necrosis or otherwise be damaging or irritating to tissues should also not be administered subcutaneously. An injection should not be given at a specific site if there is inflammation or skin damage in the area.
Risks and complications
With normal doses of medicine (less than 2 mL in volume), complications or adverse effects are very rare. The most common adverse reactions after subcutaneous injections are administered are termed "injection site reactions". This term encompasses any combination of redness, swelling, itching, bruising, or other irritation that does not spread beyond the immediate vicinity of the injection. Injection site reactions may be minimized if repeated injections are necessary by moving the injection site at least one inch from previous injections, or using a different injection location altogether. There may also be specific complications associated with the specific medication being administered.
Medication-specific
Due to the frequency of injections required for the administration of insulin products via subcutaneous injection, insulin is associated with the development of lipohypertrophy and lipoatrophy. This can lead to slower or incomplete absorption from the injection site. Rotating the injection site is the primary method of preventing changes in tissue structure from insulin administration. Heparin-based anticoagulants injected subcutaneously may cause hematoma and bruising around the injection site due to their anticoagulant effect. This includes heparin and low molecular weight heparin products such as enoxaparin. There is some low certainty evidence that administering the injection more slowly may decrease the pain from heparin injections, but not the risk of or extent of bruising. Subcutaneous heparin-based anticoagulation may also lead to necrosis of the surrounding skin or lesions, most commonly when injected in the abdomen.
Many medications have the potential to cause local lesions or swelling due to the irritating effect the medications have on the skin and subcutaneous tissues. This includes medications such as apomorphine and hyaluronic acid injected as a filler, which may cause the area to appear bruised. Hyaluronic acid "bruising" may be treated using injections of hyaluronidase enzyme around the location.
Other common medication-specific side effects include pain, burning or stinging, warmth, rash, flushing, or multiple of these reactions at the injection site, collectively termed "injection site reactions". This is seen with the subcutaneous injection of triptans for migraine headache, medroxyprogesterone acetate for contraception, as well as many monoclonal antibodies. In most cases, injection site reactions are self-limiting and resolve on their own after a short time without treatment, and do not require the medication to be discontinued.
The administration of vaccines subcutaneously is also associated with injection site reactions. This includes the BCG vaccine which is associated with a specific scar appearance which can be used as evidence of prior vaccination. Other subcutaneous vaccines, many of which are live vaccines including the MMR vaccine and the varicella vaccine, which may cause fever and rash, as well as a feeling of general malaise for a day or two following the vaccination.
Technique
Subcutaneous injections are performed by cleaning the area to be injected followed by an injection, usually at a 45-degree angle to the skin when using a syringe and needle, or at a 90-degree angle (perpendicular) if using an injector pen. The appropriate injection angle is based on the length of needle used, and the depth of the subcutaneous fat in the skin of the specific person. A 90-degree angle is always used for medications such as heparin. If administered at an angle, the skin and underlying tissue may be pinched upwards prior to injection. The injection is administered slowly, lasting about 10 seconds per milliliter of fluid injected, and the needle may be left in place for 10 seconds following injection to ensure the medicine is fully injected.
Equipment
The gauge of the needle used can range from 25 gauge to 27 gauge, while the length can vary between -inch to -inch for injections using a syringe and needle. For subcutaneous injections delivered using devices such as injector pens, the needle used may be as thin as 34 gauge (commonly 30–32 gauge), and as short as 3.5 mm (commonly 3.5 mm to 5 mm). Subcutaneous injections can also be delivered via a pump system which uses a cannula inserted under the skin. The specific needle size/length, as well as appropriateness of a device such as a pen or pump, is based on the characteristics of a person's skin layers.
Locations
Commonly used injection sites include:
The outer area of the upper arm.
The abdomen, avoiding a 2-inch circle around the navel.
The front of the thigh, between 4 inches from the top of the thigh and 4 inches above the knee.
The upper back.
The upper area of the buttock, just behind the hip bone.
The choice of specific injection site is based on the medication being administered, with heparin almost always being administered in the abdomen, as well as preference. Injections administered frequently or repeatedly should be administered in a different location each time, either within the same general site or a different site, but at least one inch away from recent injections.
Self-administration
As opposed to intramuscular or intravenous injections, subcutaneous injections can be easily performed by people with minor skill and training required. The injection sites for self-injection of medication are the same as for injection by a healthcare professional, and the skill can be taught to patients using pictures, videos, or models of the subcutaneous tissue for practice. People who are to self-inject medicine subcutaneously should be trained how to evaluate and rotate the injection site if complications or contraindications arise. Self-administration by subcutaneous injection generally does not require disinfection of the skin outside of a hospital setting as the risk of infection is extremely low, but instead it is recommended to ensure that the site and person's hands are simply clean prior to administration.
Infusion
Subcutaneous infusion, also known as interstitial infusion or hypodermoclysis, is a form of subcutaneous (under the skin) administration of fluids to the body, often saline or glucose solutions. It is the infusion counterpart of subcutaneous injection with a syringe.
Subcutaneous infusion can be used where a slow rate of fluid uptake is required compared to intravenous infusion. Typically, it is limited to 1 mL per minute, although it is possible to increase this by using two sites simultaneously. The chief advantages of subcutaneous infusion over intravenous infusion are that it is cheap and can be administered by non-medical personnel with minimal supervision. It is therefore particularly suitable for home care. The enzyme hyaluronidase can be added to the fluid to improve absorption during the infusion.
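To make the rate limit above concrete, the following minimal Python sketch (an illustration, not from the source; the volume is an arbitrary example value) estimates the time needed to deliver a given volume at roughly 1 mL per minute per site:

```python
def infusion_time_minutes(volume_ml: float, sites: int = 1,
                          rate_ml_per_min_per_site: float = 1.0) -> float:
    """Estimated delivery time assuming a fixed rate per infusion site."""
    return volume_ml / (sites * rate_ml_per_min_per_site)

volume = 500.0  # hypothetical volume in mL
print(f"one site:  {infusion_time_minutes(volume, 1):.0f} min")  # 500 min
print(f"two sites: {infusion_time_minutes(volume, 2):.0f} min")  # 250 min
```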
Subcutaneous infusion can be sped up by applying it to multiple sites simultaneously. The technique was pioneered by Evan O'Neill Kane in 1900. Kane was looking for a technique that was as fast as intravenous infusion but not so risky to use on trauma patients in unhygienic conditions in the field.
See also
Intramuscular injection
Intravenous injection
Intradermal injection
References
Dosage forms
Routes of administration
Injection (medicine) | Subcutaneous administration | [
"Chemistry"
] | 2,310 | [
"Pharmacology",
"Routes of administration"
] |
334,821 | https://en.wikipedia.org/wiki/Intramuscular%20injection | Intramuscular injection, often abbreviated IM, is the injection of a substance into a muscle. In medicine, it is one of several methods for parenteral administration of medications. Intramuscular injection may be preferred because muscles have larger and more numerous blood vessels than subcutaneous tissue, leading to faster absorption than subcutaneous or intradermal injections. Medication administered via intramuscular injection is not subject to the first-pass metabolism effect which affects oral medications.
Common sites for intramuscular injections include the deltoid muscle of the upper arm and the gluteal muscle of the buttock. In infants, the vastus lateralis muscle of the thigh is commonly used. The injection site must be cleaned before administering the injection, and the injection is then administered in a fast, darting motion to decrease the discomfort to the individual. The volume to be injected in the muscle is usually limited to 2–5 milliliters, depending on injection site. A site with signs of infection or muscle atrophy should not be chosen. Intramuscular injections should not be used in people with myopathies or those with trouble clotting.
Intramuscular injections commonly result in pain, redness, and swelling or inflammation around the injection site. These side effects are generally mild and last no more than a few days at most. Rarely, nerves or blood vessels around the injection site can be damaged, resulting in severe pain or paralysis. If proper technique is not followed, intramuscular injections can result in localized infections such as abscesses and gangrene. While historically aspiration, or pulling back on the syringe before injection, was recommended to prevent inadvertent administration into a vein, it is no longer recommended for most injection sites by some countries.
Uses
Intramuscular injection is commonly used for medication administration. Medication administered in the muscle is generally quickly absorbed in the bloodstream, and avoids the first pass metabolism which occurs with oral administration. The medication may not be considered 100% bioavailable as it must still be absorbed from the muscle, which occurs over time. An intramuscular injection is less invasive than an intravenous injection and also generally takes less time, as the site of injection (a muscle versus a vein) is much larger. Medications administered in the muscle may also be administered as depot injections, which provide slow, continuous release of medicine over a longer period of time. Certain substances, including ketamine, may be injected intramuscularly for recreational purposes. Disadvantages of intramuscular administration include skill and technique required, pain from injection, anxiety or fear (especially in children), and difficulty in self-administration which limits its use in outpatient medicine.
Vaccines, especially inactivated vaccines, are commonly administered via intramuscular injection. However, it has been estimated that for every vaccine injected intramuscularly, 20 injections are given to administer drugs or other therapy. This can include medications such as antibiotics, immunoglobulin, and hormones such as testosterone and medroxyprogesterone. In a case of severe allergic reaction, or anaphylaxis, a person may use an epinephrine autoinjector to self-administer epinephrine into the muscle.
Contraindications
Because an intramuscular injection can be used to administer many types of medications, specific contraindications depend in large part on the medication being administered. Injections of medications are necessarily more invasive than other forms of administration such as by mouth or topical and require training to perform appropriately, without which complications can arise regardless of the medication being administered. For this reason, unless there are desired differences in rate of absorption, time to onset, or other pharmacokinetic parameters in the specific situation, a less invasive form of drug administration (usually by mouth) is preferred.
Intramuscular injections are generally avoided in people with low platelet count or clotting problems, to prevent harm due to potential damage to blood vessels during the injection. They are also not recommended in people who are in hypovolemic shock, or have myopathy or muscle atrophy, as these conditions may alter the absorption of the medication. The damage to the muscle caused by an intramuscular injections may interfere with the accuracy of certain cardiac tests for people with suspected myocardial infarction and for this reason other methods of administration are preferred in such instances. In people with an active myocardial infarction, the decrease in circulation may result in slower absorption from an IM injection. Specific sites of administration may also be contraindicated if the desired injection site has an infection, swelling, or inflammation. Within a specific site of administration, the injection should not be given directly over irritation or redness, birthmarks or moles, or areas with scar tissue.
Risks and complications
As an injection necessitates piercing the skin, there is a risk of infection from bacteria or other organisms present in the environment or on the skin before the injection. This risk is minimized by using proper aseptic technique in preparing the injection and sanitizing the injection site before administration. Intramuscular injections may also cause an abscess or gangrene at the injection site, depending on the specific medication and amount administered. There is also a risk of nerve or vascular injury if a nerve or blood vessel is inadvertently hit during injection. If single-use or sterilized equipment is not used, there is the risk of transmission of infectious disease between users, or to a practitioner who inadvertently injures themselves with a used needle, termed a needlestick injury.
Site-specific complications
Injections into the deltoid site in the arm can result in unintentional damage to the radial and axillary nerves. In rare cases when not performed properly, the injection may result in shoulder dysfunction. The most frequent complications of a deltoid injection include pain, redness, and inflammation around the injection site, which are almost always mild and last only a few days at most.
The dorsogluteal site of injection is associated with a higher risk of skin and tissue trauma, muscle fibrosis or contracture, hematoma, nerve palsy, paralysis, and infections such as abscesses and gangrene. Furthermore, injection in the gluteal muscle poses a risk for damage to the sciatic nerve, which may cause shooting pain or a sensation of burning. Sciatic nerve damage can also affect a person's ability to move their foot on the affected side, and other parts of the body controlled by the nerve. Damage to the sciatic nerve can be prevented by using the ventrogluteal site instead, and by selecting an appropriate size and length of needle for the injection.
Technique
An intramuscular injection can be administered in multiple different muscles of the body. Common sites for intramuscular injection include: deltoid, dorsogluteal, rectus femoris, vastus lateralis and ventrogluteal muscles. Sites that are bruised, tender, red, swollen, inflamed or scarred are generally avoided. The specific medication and amount being administered will influence the decision of the specific muscle chosen for injection.
The injection site is first cleaned using an antimicrobial and allowed to dry. The injection is performed in a quick, darting motion perpendicular to the skin, at an angle between 72 and 90 degrees. The practitioner will stabilize the needle with one hand while using their other hand to depress the plunger to slowly inject the medication – a rapid injection causes more discomfort. The needle is withdrawn at the same angle inserted. Gentle pressure may be applied with gauze if bleeding occurs. Pressure or gentle massage of the muscle following injection may reduce the risk of pain.
Aspiration
Aspirating for blood to rule out injecting into a blood vessel is not recommended by the US CDC, Public Health Agency of Canada, or Norway Institute of Public Health, as the injection sites do not contain large blood vessels and aspiration results in greater pain. There is no evidence that aspiration is useful to increase safety of intramuscular injections when injecting in a site other than the dorsogluteal site.
Aspiration was recommended by the Danish Health Authority for COVID-19 vaccines for a time to investigate the potential rare risk of blood clotting and bleeding, but it is no longer a recommendation.
Z-track method
The Z-track method is a method of administering an IM injection that prevents the medication being tracked through the subcutaneous tissue, sealing the medication in the muscle, and minimizing irritation from the medication. Using the Z-track technique, the skin is pulled laterally, away from the injection site, before the injection; then the medication is injected, the needle is withdrawn, and the skin is released. This method can be used if the overlying tissue can be displaced.
Injection sites
The deltoid muscle in the outer portion of the upper arm is used for injections of small volume, usually equal to or less than 1 mL. This includes most intramuscular vaccinations. It is not recommended to use the deltoid for repeated injections due to its small area, which makes it difficult to space out injections from each other. The deltoid site is located by locating the lower edge of the acromion process, and injecting in the area which forms an upside down triangle with its base at the acromion process and its midpoint in line with the armpit. An injection into the deltoid muscle is commonly administered using a 1-inch long needle, but may use a -inch long needle for younger people or very frail elderly people.
The ventrogluteal site on the hip is used for injections which require a larger volume to be administered, greater than 1 mL, and for medications which are known to be irritating, viscous, or oily. It is also used to administer narcotic medications, antibiotics, sedatives and anti-emetics. The ventrogluteal site is located in a triangle formed by the anterior superior iliac spine and the iliac crest, and may be located using a hand as a guide. The ventrogluteal site is less painful for injection than other sites such as the deltoid site.
The vastus lateralis site is used for infants less than 7 months old and people who are unable to walk or who have loss of muscular tone. The site is located by dividing the front thigh into thirds vertically and horizontally to form nine squares; the injection is administered in the outer middle square. This site is also the usual site of administration for epinephrine autoinjectors, which are used in the outer thigh, corresponding to the location of the vastus lateralis muscle.
The dorsogluteal site of the buttock site is not routinely used due to its location near major blood vessels and nerves, as well as having inconsistent depth of adipose tissue. Many injections in this site do not penetrate deep enough under the skin to be correctly administered in the muscle. While current evidence-based practice recommends against using this site, many healthcare providers still use this site, often due to a lack of knowledge about alternative sites for injection.
This site is located by dividing the buttock into four using a cross shape, and administering the injection in the upper outer quadrant. This is the only intramuscular injection site for which aspiration is recommended of the syringe before injection, due to higher likelihood of accidental intravenous administration in this area. However, aspiration is not recommended by the Centers for Disease Control and Prevention, which considers it outdated for any intramuscular injection.
Special populations
Some populations require a different injection site, needle length, or technique. In very young or weak elderly patients, a normal-length needle may be too long to inject properly. In these patients, a shorter needle is indicated to avoid injecting too deeply. It is also recommended to consider using the anterolateral thigh as an injection site in infants under one year old.
To help infants and children cooperate with injection administration, the Advisory Committee on Immunization Practices in the United States recommends using distractions, giving something sweet, and rocking the baby side to side. In people who are overweight, a 1.5-inch needle may be used to ensure the injection is given below the subcutaneous layer of skin, while a -inch needle may be used for people who weigh under . In any case, the skin does not need to be pinched up before injecting when the appropriate length needle is used.
History
Injections into muscular tissue may have taken place as early as the year 500 AD. Beginning in the late 1800s, the procedure began to be described in more detail and techniques began to be developed by physicians. In the early days of intramuscular injections, the procedure was performed almost exclusively by physicians. After the introduction of antibiotics in the middle of the 20th century, nurses began preparing equipment for intramuscular injections as part of their delegated duties from physicians, and by 1961 they had "essentially taken over the procedure". Until this delegation became virtually universal, there were no uniform procedures or education for nurses in proper administration of intramuscular injections, and complications from improper injection were common.
Intramuscular injections began to be used for administration of vaccines for diphtheria in 1923, whooping cough in 1926, and tetanus in 1927. By the 1970s, researchers and instructors began forming guidance on injection site and technique to reduce the risk of injection complications and side effects such as pain. Also in the early 1970s, botulinum toxin began to be injected into muscles to intentionally paralyze them for therapeutic reasons, and later for cosmetic reasons. Until the 2000s, aspiration after inserting the needle was recommended as a safety measure, to ensure the injection was being administered in a muscle and not inadvertently in a vein. However, this is no longer recommended as evidence shows no safety benefit and it lengthens the time taken for injection, which causes more pain.
Veterinary medicine
In animals common sites for intramuscular injection include the quadriceps, the lumbodorsal muscles, and the triceps muscle.
See also
Subcutaneous injection
Intradermal injection
Intravenous injection
References
External links
Prevention and Control of Influenza, Recommendations of ACIP
Medical treatments
Routes of administration
Dosage forms
Injection (medicine)
Muscular system | Intramuscular injection | [
"Chemistry"
] | 2,960 | [
"Pharmacology",
"Routes of administration"
] |
334,955 | https://en.wikipedia.org/wiki/Therapeutic%20index | The therapeutic index (TI; also referred to as therapeutic ratio) is a quantitative measurement of the relative safety of a drug with regard to risk of overdose. It is a comparison of the amount of a therapeutic agent that causes toxicity to the amount that causes the therapeutic effect. The related terms therapeutic window or safety window refer to a range of doses optimized between efficacy and toxicity, achieving the greatest therapeutic benefit without resulting in unacceptable side-effects or toxicity.
Classically, for clinical indications of an approved drug, TI refers to the ratio of the dose of the drug that causes adverse effects at an incidence/severity not compatible with the targeted indication (e.g. toxic dose in 50% of subjects, TD50) to the dose that leads to the desired pharmacological effect (e.g. efficacious dose in 50% of subjects, ED50). In contrast, in a drug development setting TI is calculated based on plasma exposure levels.
In the early days of pharmaceutical toxicology, TI was frequently determined in animals as lethal dose of a drug for 50% of the population (LD50) divided by the minimum effective dose for 50% of the population (ED50). In modern settings, more sophisticated toxicity endpoints are used.
For many drugs, severe toxicities in humans occur at sublethal doses, which limit their maximum dose. A higher safety-based therapeutic index is preferable to a lower one; an individual would have to take a much higher dose of a drug to reach the lethal threshold than the dose taken to induce the therapeutic effect of the drug. Conversely, a lower efficacy-based therapeutic index is preferable to a higher one; an individual would have to take a higher dose of a drug to reach the toxic threshold than the dose taken to induce the therapeutic effect of the drug.
Generally, a drug or other therapeutic agent with a narrow therapeutic range (i.e. having little difference between toxic and therapeutic doses) may have its dosage adjusted according to measurements of its blood levels in the person taking it. This may be achieved through therapeutic drug monitoring (TDM) protocols. TDM is recommended for use in the treatment of psychiatric disorders with lithium due to its narrow therapeutic range.
Types
Based on efficacy and safety of drugs, there are two types of therapeutic index:
Safety-based therapeutic index
$\mathrm{TI}_{\text{safety}} = \frac{\mathrm{LD}_{50}}{\mathrm{ED}_{50}}$

It is desirable for the value of LD50 to be as large as possible, to decrease the risk of lethal effects and increase the therapeutic window. In the above formula, TI increases as the difference between LD50 and ED50 increases; hence, a higher safety-based therapeutic index indicates a larger therapeutic window, and vice versa.
Efficacy-based therapeutic index
$\mathrm{TI}_{\text{efficacy}} = \frac{\mathrm{ED}_{50}}{\mathrm{TD}_{50}}$

Ideally the ED50 is as low as possible for faster drug response and a larger therapeutic window, whereas a drug's TD50 is ideally as large as possible to decrease the risk of toxic effects. In the above equation, the greater the difference between ED50 and TD50 (with TD50 exceeding ED50), the lower the value of TI. Hence, a lower efficacy-based therapeutic index indicates a larger therapeutic window.
Protective index
Similar to safety-based therapeutic index, the protective index uses TD50 (median toxic dose) in place of LD50.
For many substances, toxicity can occur at levels far below lethal effects (that cause death), and thus, if toxicity is properly specified, the protective index is often more informative about a substance's relative safety. Nevertheless, the safety-based therapeutic index (LD50/ED50) is still useful as it can be considered an upper bound of the protective index, and the former also has the advantages of objectivity and easier comprehension.
Since the protective index (PI) is calculated as TD50 divided by ED50, it can be mathematically expressed that:

$\mathrm{TI}_{\text{efficacy}} = \frac{\mathrm{ED}_{50}}{\mathrm{TD}_{50}} = \frac{1}{\mathrm{PI}}$

which means that the efficacy-based therapeutic index is the reciprocal of the protective index.
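As a minimal numerical illustration (the dose values below are hypothetical and not from the article), the following Python sketch computes the three quantities defined above and confirms the reciprocal relation:

```python
ed50 = 10.0    # hypothetical median effective dose (mg)
td50 = 200.0   # hypothetical median toxic dose (mg)
ld50 = 500.0   # hypothetical median lethal dose (mg)

ti_safety = ld50 / ed50         # safety-based therapeutic index (LD50/ED50)
ti_efficacy = ed50 / td50       # efficacy-based therapeutic index (ED50/TD50)
protective_index = td50 / ed50  # protective index (TD50/ED50)

print(f"safety-based TI   = {ti_safety:.1f}")            # 50.0
print(f"efficacy-based TI = {ti_efficacy:.3f}")          # 0.050
print(f"protective index  = {protective_index:.1f}")     # 20.0
print(f"1 / PI            = {1 / protective_index:.3f}") # 0.050, equal to the efficacy-based TI
```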
All the above types of therapeutic index can be used in both pre-clinical trials and clinical trials.
Drug development
A low efficacy-based therapeutic index (ED50/TD50) and a high safety-based therapeutic index (LD50/ED50) are preferable for a drug to have a favorable efficacy vs safety profile. At the early discovery/development stage, the clinical TI of a drug candidate is unknown. However, understanding the preliminary TI of a drug candidate is of utmost importance as early as possible since TI is an important indicator of the probability of successful development. Recognizing drug candidates with potentially suboptimal TI at the earliest possible stage helps to initiate mitigation or potentially re-deploy resources.
TI is the quantitative relationship between pharmacological efficacy and toxicological safety of a drug, without considering the nature of pharmacological or toxicological endpoints themselves. However, to convert a calculated TI into something useful, the nature and limitations of pharmacological and/or toxicological endpoints must be considered. Depending on the intended clinical indication, the associated unmet medical need and/or the competitive situation, more or less weight can be given to either the safety or efficacy of a drug candidate in order to create a well balanced indication-specific efficacy vs safety profile.
In general, it is the exposure of a given tissue to drug (i.e. drug concentration over time), rather than dose, that drives the pharmacological and toxicological effects. For example, at the same dose there may be marked inter-individual variability in exposure due to polymorphisms in metabolism, DDIs or differences in body weight or environmental factors. These considerations emphasize the importance of using exposure instead of dose to calculate TI. To account for delays between exposure and toxicity, the TI for toxicities that occur after multiple dose administrations should be calculated using the exposure to drug at steady state rather than after administration of a single dose.
A review published by Muller and Milton in Nature Reviews Drug Discovery critically discusses TI determination and interpretation in a translational drug development setting for both small molecules and biotherapeutics.
Range of therapeutic indices
The therapeutic index varies widely among substances, even within a related group.
For instance, the opioid painkiller remifentanil is very forgiving, offering a therapeutic index of 33,000:1, while Diazepam, a benzodiazepine sedative-hypnotic and skeletal muscle relaxant, has a less forgiving therapeutic index of 100:1. Morphine is even less so with a therapeutic index of 70.
Less safe are cocaine (a stimulant and local anaesthetic) and ethanol (colloquially, the "alcohol" in alcoholic beverages, a widely available sedative consumed worldwide): the therapeutic indices for these substances are 15:1 and 10:1, respectively. Paracetamol, alternatively known by its trade names Tylenol or Panadol, also has a therapeutic index of 10.
Even less safe are drugs such as digoxin, a cardiac glycoside; its therapeutic index is approximately 2:1.
Other examples of drugs with a narrow therapeutic range, which may require drug monitoring both to achieve therapeutic levels and to minimize toxicity, include dimercaprol, theophylline, warfarin and lithium carbonate.
Some antibiotics and antifungals require monitoring to balance efficacy with minimizing adverse effects, including: gentamicin, vancomycin, amphotericin B (nicknamed 'amphoterrible' for this very reason), and polymyxin B.
Cancer radiotherapy
Radiotherapy aims to shrink tumors and kill cancer cells using high-energy radiation. The energy arises from x-rays, gamma rays, or charged or heavy particles. The therapeutic ratio in radiotherapy for cancer treatment is determined by the maximum radiation dose for killing cancer cells and the minimum radiation dose causing acute or late morbidity in cells of normal tissues. Both of these parameters have sigmoidal dose–response curves. In a favorable case, the dose–response of tumor tissue is greater than that of normal tissue at the same dose, meaning that the treatment is effective on tumors and does not cause serious morbidity to normal tissue. Conversely, if the responses of the two tissues overlap, treatment is highly likely to cause serious morbidity to normal tissue while remaining ineffective against the tumor. The mechanism of radiation therapy is categorized as direct or indirect radiation. Both direct and indirect radiation induce DNA mutation or chromosomal rearrangement during the repair process. Direct radiation creates a DNA free radical from radiation energy deposition that damages DNA. Indirect radiation occurs from radiolysis of water, creating a free hydroxyl radical, hydronium and an electron. The hydroxyl radical transfers its radical to DNA, or, together with hydronium and an electron, can damage the base region of DNA.
Cancer cells cause an imbalance of signals in the cell cycle. G1 and G2/M arrest were found to be major checkpoints in irradiated human cells. G1 arrest delays the repair mechanism before synthesis of DNA in S phase and mitosis in M phase, suggesting it is a key checkpoint for survival of cells. G2/M arrest occurs when cells need to repair after S phase but before mitotic entry. It is known that S phase is the most resistant to radiation and M phase is the most sensitive to radiation. p53, a tumor suppressor protein that plays a role in G1 and G2/M arrest, enabled the understanding of the cell cycle through radiation. For example, irradiation of myeloid leukemia cells leads to an increase in p53 and a decrease in the level of DNA synthesis. Patients with ataxia telangiectasia have hypersensitivity to radiation due to delayed accumulation of p53. In this case, cells are able to replicate without repair of their DNA, becoming prone to incidence of cancer.
Irradiation of a tissue induces a response in both irradiated and non-irradiated cells. It was found that even cells up to 50–75 cell diameters distant from irradiated cells exhibit a phenotype of enhanced genetic instability such as micronucleation. This suggests an effect on cell-to-cell communication such as paracrine and juxtacrine signaling. Normal cells do not lose their DNA repair mechanism, whereas cancer cells often lose it during radiotherapy. However, the high-energy radiation can override the ability of damaged normal cells to repair themselves, leading to an additional risk of carcinogenesis. This suggests a significant risk associated with radiation therapy. Thus, it is desirable to improve the therapeutic ratio during radiotherapy. Employing IG-IMRT, protons and heavy ions are likely to minimize the dose to normal tissues by altered fractionation. Molecular targeting of the DNA repair pathway can lead to radiosensitization or radioprotection. Examples are direct and indirect inhibitors acting on DNA double-strand breaks. Direct inhibitors target proteins (PARP family) and kinases (ATM, DNA-PKCs) that are involved in DNA repair. Indirect inhibitors target tumor cell signaling proteins such as EGFR and insulin growth factor.
The effective therapeutic index can be affected by targeting, in which the therapeutic agent is concentrated in its desirable area of effect. For example, in radiation therapy for cancerous tumors, shaping the radiation beam precisely to the profile of a tumor in the "beam's eye view" can increase the delivered dose without increasing toxic effects, though such shaping might not change the therapeutic index. Similarly, chemotherapy or radiotherapy with infused or injected agents can be made more efficacious by attaching the agent to an oncophilic substance, as in peptide receptor radionuclide therapy for neuroendocrine tumors and in chemoembolization or radioactive microspheres therapy for liver tumors and metastases. This concentrates the agent in the targeted tissues and lowers its concentration in others, increasing efficacy and lowering toxicity.
Safety ratio
Sometimes the term safety ratio is used, particularly when referring to psychoactive drugs used for non-therapeutic purposes, e.g. recreational use. In such cases, the effective dose is the amount and frequency that produces the desired effect, which can vary, and can be greater or less than the therapeutically effective dose.
The Certain Safety Factor, also referred to as the Margin of Safety (MOS), is the ratio of the dose that is lethal to 1% of the population to the dose that is effective in 99% of the population (LD1/ED99). This is a better safety index than the LD50 for materials that have both desirable and undesirable effects, because it factors in the ends of the spectrum where doses may be necessary to produce a response in one person but can, at the same dose, be lethal in another.
Synergistic effect
A therapeutic index does not consider drug interactions or synergistic effects. For example, the risk associated with benzodiazepines increases significantly when taken with alcohol, opiates, or stimulants when compared with being taken alone. Therapeutic index also does not take into account the ease or difficulty of reaching a toxic or lethal dose. This is more of a consideration for recreational drug users, as the purity can be highly variable.
Therapeutic window
The therapeutic window (or pharmaceutical window) of a drug is the range of drug dosages which can treat disease effectively without having toxic effects. Medication with a small therapeutic window must be administered with care and control, frequently measuring blood concentration of the drug, to avoid harm. Medications with narrow therapeutic windows include theophylline, digoxin, lithium, and warfarin.
Optimal biological dose
Optimal biological dose (OBD) is the quantity of a drug that will most effectively produce the desired effect while remaining in the range of acceptable toxicity.
Maximum tolerated dose
The maximum tolerated dose (MTD) refers to the highest dose of a radiological or pharmacological treatment that will produce the desired effect without unacceptable toxicity. The purpose of administering MTD is to determine whether long-term exposure to a chemical might lead to unacceptable adverse health effects in a population, when the level of exposure is not sufficient to cause premature mortality due to short-term toxic effects. The maximum dose is used, rather than a lower dose, to reduce the number of test subjects (and, among other things, the cost of testing), to detect an effect that might occur only rarely. This type of analysis is also used in establishing chemical residue tolerances in foods. Maximum tolerated dose studies are also done in clinical trials.
MTD is an essential aspect of a drug's profile. All modern healthcare systems dictate a maximum safe dose for each drug, and generally have numerous safeguards (e.g. insurance quantity limits and government-enforced maximum quantity/time-frame limits) to prevent the prescription and dispensing of quantities exceeding the highest dosage which has been demonstrated to be safe for members of the general patient population.
Patients are often unable to tolerate the theoretical MTD of a drug due to the occurrence of side-effects which are not innately a manifestation of toxicity (not considered to severely threaten a patient's health) but cause the patient sufficient distress and/or discomfort to result in non-compliance with treatment. Such examples include emotional "blunting" with antidepressants, pruritus with opiates, and blurred vision with anticholinergics.
See also
Drug titration – process of finding the correct dose of a drug
Effective dose
EC50
IC50
LD50
Hormesis
References
Pharmacokinetics
Life sciences industry | Therapeutic index | [
"Chemistry",
"Biology"
] | 3,160 | [
"Pharmacology",
"Life sciences industry",
"Pharmacokinetics"
] |
3,035,156 | https://en.wikipedia.org/wiki/Fermi%20acceleration | Fermi acceleration, sometimes referred to as diffusive shock acceleration (a subclass of Fermi acceleration), is the acceleration that charged particles undergo when being repeatedly reflected, usually by a magnetic mirror (see also Centrifugal mechanism of acceleration). It receives its name from physicist Enrico Fermi who first proposed the mechanism. This is thought to be the primary mechanism by which particles gain non-thermal energies in astrophysical shock waves. It plays a very important role in many astrophysical models, mainly of shocks including solar flares and supernova remnants.
There are two types of Fermi acceleration: first-order Fermi acceleration (in shocks) and second-order Fermi acceleration (in the environment of moving magnetized gas clouds). In both cases the environment has to be collisionless in order for the mechanism to be effective. This is because Fermi acceleration only applies to particles with energies exceeding the thermal energies, and frequent collisions with surrounding particles will cause severe energy loss and as a result no acceleration will occur.
First order Fermi acceleration
Shock waves typically have moving magnetic inhomogeneities both preceding and following them. Consider the case of a charged particle traveling through the shock wave (from upstream to downstream). If it encounters a moving change in the magnetic field, this can reflect it back through the shock (downstream to upstream) at increased velocity. If a similar process occurs upstream, the particle will again gain energy. These multiple reflections greatly increase its energy. The resulting energy spectrum of many particles undergoing this process (assuming that they do not influence the structure of the shock) turns out to be a power law:
$N(E)\,\mathrm{d}E \propto E^{-\gamma}\,\mathrm{d}E$

where the spectral index $\gamma$ depends, for non-relativistic shocks, only on the compression ratio of the shock.
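For reference, the relation usually quoted for this dependence (the standard test-particle result of diffusive shock acceleration, added here rather than taken from the original text) expresses the spectral index in terms of the shock compression ratio $r$:

$\gamma = \frac{r + 2}{r - 1}$

so a strong non-relativistic shock with $r = 4$ yields $\gamma = 2$.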
The term "First order" comes from the fact that the energy gain per shock crossing is proportional to $v_{\text{shock}}/c$, the velocity of the shock divided by the speed of light.
The injection problem
A mystery of first order Fermi processes is the injection problem. In the environment of a shock, only particles with energies that exceed the thermal energy by much (a factor of a few at least) can cross the shock and 'enter the game' of acceleration. It is presently unclear what mechanism causes the particles to initially have energies sufficiently high to do so.
Second order Fermi acceleration
Second order Fermi acceleration relates to the amount of energy gained during the motion of a charged particle in the presence of randomly moving "magnetic mirrors". So, if the magnetic mirror is moving towards the particle, the particle will end up with increased energy upon reflection. The opposite holds if the mirror is receding. This notion was used by Fermi (1949) to explain the mode of formation of cosmic rays. In this case the magnetic mirror is a moving interstellar magnetized cloud. In a random motion environment, Fermi argued, the probability of a head-on collision is greater than a head-tail collision, so particles would, on average, be accelerated. This random process is now called second-order Fermi acceleration, because the mean energy gain per bounce depends on the mirror velocity squared, $V_{\text{mirror}}^2$.
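The head-on-versus-overtaking argument can be made concrete with a toy Monte Carlo. The following Python sketch (an illustrative 1D, non-relativistic caricature of the process, not a model of the actual astrophysical setting; all numbers are arbitrary) shows that the mean fractional energy gain per elastic bounce is second order in the ratio of mirror speed to particle speed:

```python
import random

def mean_energy_gain_per_bounce(v: float, u: float, n_bounces: int = 200_000) -> float:
    """Average fractional kinetic-energy change per elastic 1D bounce off
    mirrors moving with speed u, for a particle of speed v >> u."""
    random.seed(0)
    total = 0.0
    for _ in range(n_bounces):
        # A head-on encounter is more probable, with probability ~ (v + u);
        # an overtaking encounter has probability ~ (v - u).
        head_on = random.random() < (v + u) / (2.0 * v)
        v_new = v + 2.0 * u if head_on else v - 2.0 * u  # elastic 1D reflection
        total += (v_new**2 - v**2) / v**2
    return total / n_bounces

v, u = 1.0, 0.01  # particle speed and mirror speed (arbitrary units)
print(f"simulated mean gain per bounce:  {mean_energy_gain_per_bounce(v, u):.6f}")
print(f"second-order estimate 8*(u/v)^2: {8 * (u / v) ** 2:.6f}")
```

In this 1D toy the gain from an individual head-on collision is first order in u/v, but averaging over the collision probabilities leaves only the second-order term, mirroring Fermi's argument.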
The resulting energy spectrum anticipated from this physical setup, however, is not universal as in the case of diffusive shock acceleration.
See also
Fermi-Ulam model
Fermi glow
Shock waves in astrophysics
References
External links
David Darling's article on Fermi acceleration
Rieger, Bosch-Ramon and Duffy: Fermi acceleration in astrophysical jets. Astrophys.Space Sci. 309:119-125 (2007)
Fusion power
Dynamics (mechanics)
Cosmic rays
Acceleration | Fermi acceleration | [
"Physics",
"Chemistry",
"Mathematics"
] | 728 | [
"Physical phenomena",
"Physical quantities",
"Acceleration",
"Plasma physics",
"Quantity",
"Fusion power",
"Astrophysics",
"Classical mechanics",
"Motion (physics)",
"Radiation",
"Dynamics (mechanics)",
"Nuclear fusion",
"Wikipedia categories named after physical quantities",
"Cosmic rays"... |
3,036,126 | https://en.wikipedia.org/wiki/Vafa%E2%80%93Witten%20theorem | In theoretical physics, the Vafa–Witten theorem, named after Cumrun Vafa and Edward Witten, is a theorem that shows that vector-like global symmetries (those that transform as expected under reflections) such as isospin and baryon number in vector-like gauge theories like quantum chromodynamics cannot be spontaneously broken as long as the theta angle is zero. This theorem can be proved by showing the exponential fall off of the propagator of fermions.
See also
F-theory
References
Gauge theories
Theorems in quantum mechanics | Vafa–Witten theorem | [
"Physics",
"Mathematics"
] | 116 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Quantum physics stubs",
"Physics theorems"
] |
11,872,111 | https://en.wikipedia.org/wiki/Naturally%20occurring%20radioactive%20material | Naturally occurring radioactive materials (NORM) and technologically enhanced naturally occurring radioactive materials (TENORM) consist of materials, usually industrial wastes or by-products enriched with radioactive elements found in the environment, such as uranium, thorium and potassium and any of their decay products, such as radium and radon. Produced water discharges and spills are a good example of entering NORMs into the surrounding environment.
Natural radioactive elements are present in very low concentrations in Earth's crust, and are brought to the surface through human activities such as oil and gas exploration or mining, and through natural processes like leakage of radon gas to the atmosphere or through dissolution in ground water. Another example of TENORM is coal ash produced from coal burning in power plants. If radioactivity is much higher than background level, handling TENORM may cause problems in many industries and transportation.
NORM in oil and gas exploration
Oil and gas TENORM and/or NORM is created in the production process, when produced fluids from reservoirs carry sulfates up to the surface of the Earth's crust. Some states, such as North Dakota, use the term "diffuse NORM". Barium, calcium and strontium sulfates are larger compounds, and the smaller atoms, such as radium-226 and radium-228, can fit into the empty spaces of the compound and be carried through the produced fluids. As the fluids approach the surface, changes in temperature and pressure cause the barium, calcium, strontium and radium sulfates to precipitate out of solution and form scale on the inside, or on occasion the outside, of the tubulars and/or casing. The use of NORM-contaminated tubulars in the production process does not cause a health hazard if the scale is inside the tubulars and the tubulars remain downhole. Enhanced concentrations of radium-226 and radium-228 and of daughter products such as lead-210 may also occur in sludge that accumulates in oilfield pits, tanks and lagoons. Radon gas in natural gas streams concentrates as NORM in gas processing activities. Radon decays to lead-210, then to bismuth-210 and polonium-210, and stabilizes as lead-206. Radon decay elements occur as a shiny film on the inner surface of inlet lines, treating units, pumps and valves associated with propylene, ethane and propane processing systems.
NORM characteristics vary depending on the nature of the waste. NORM may be created in a crystalline form, which is brittle and thin, and can cause flaking to occur in tubulars. NORM formed in a carbonate matrix can have a density of 3.5 grams per cubic centimeter, which must be taken into account when packing it for transportation. NORM scales may be white or brown solids, or range from thick sludge to dry, flaky substances. NORM may also be found in produced waters from oil and gas production.
Cutting and reaming oilfield pipe, removing solids from tanks and pits, and refurbishing gas processing equipment may expose employees to particles containing increased levels of alpha emitting radionuclides that could pose health risks if inhaled or ingested.
NORM is found in many industries including
The coal industry (mining and combustion)
Metal mining and smelting
Mineral sands (rare earth minerals, titanium and zirconium).
Fertilizer (phosphate) industry
Building industry
Hazards
The hazards associated with NORM are inhalation and ingestion routes of entry as well as external exposure where there has been a significant accumulation of scales. Respirators may be necessary in dry processes, where NORM scales and dust become air borne and have a significant chance to enter the body.
The hazardous elements found in NORM are radium-226, radium-228 and radon-222, as well as daughter products of these radionuclides. The elements are referred to as "bone seekers", which, when inside the body, migrate to bone tissue and concentrate there. This exposure can cause bone cancers and other bone abnormalities. The concentration of radium and other daughter products builds up over time, with several years of excessive exposure. Therefore, from a liability standpoint, an employee who has not had respiratory protection over several years could develop bone or other cancers from NORM exposure and decide to seek compensation, such as medical expenses and lost wages, from the oil company which generated the TENORM and from the employer.
Radium radionuclides emit alpha and beta particles as well as gamma rays. The radiation emitted from a radium-226 atom is 96% alpha particles and 4% gamma rays. The alpha particle is not the most dangerous particle associated with NORM as an external hazard. Alpha particles are identical to helium-4 nuclei. They travel only short distances in air, of 2–3 cm, and cannot penetrate the dead layer of skin on the human body. However, radium and other alpha emitters are "bone seekers" because radium is chemically similar to calcium and substitutes for it in bone mineral. Radium atoms that are not expelled from the body therefore concentrate in bone tissue. The half-life of radium-226 is approximately 1,600 years, so radium retained in the body remains there for the lifetime of the person, a significant length of time over which to cause damage.
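As a rough numerical illustration of the half-life quoted above (the time spans and printed values are illustrative, not from any regulation), the fraction of radium-226 remaining after a given time follows simple exponential decay:

```python
import math

def fraction_remaining(t_years: float, half_life_years: float = 1600.0) -> float:
    """Fraction of a radionuclide remaining after t_years of decay."""
    decay_constant = math.log(2) / half_life_years  # per year
    return math.exp(-decay_constant * t_years)

# Radium-226 retained in the body barely decays on a human timescale:
for t in (10, 50, 80):
    print(f"after {t:2d} years: {fraction_remaining(t):.3f} of the Ra-226 remains")
# roughly 0.996, 0.979 and 0.966 respectively
```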
Beta particles are electrons or positrons and can travel farther than alpha particles in air. They are in the middle of the scale in terms of ionizing potential and penetrating power, being stopped by a few millimeters of plastic. This radiation is a small portion of the total emitted during radium 226 decay. Radium 228 emits beta particles, and is also a concern for human health through inhalation and ingestion.
The gamma rays emitted from radium 226, accounting for 4% of the radiation, are harmful to humans with sufficient exposure. Gamma rays are highly penetrating and some can pass through metals, so Geiger counters or a scintillation probe are used to measure gamma ray exposures when monitoring for NORM.
Alpha and beta particles are harmful once inside the body. Breathing in NORM contaminants from dusts should be prevented by wearing respirators with particulate filters. For properly trained occupational NORM workers, air monitoring and analysis may be necessary. These measurements, the annual limit on intake (ALI) and the derived air concentration (DAC), are calculated values based on the dose an average employee working 2,000 hours a year may receive. The current legal exposure limit in the United States is 1 ALI, or 5 rem. A rem, or roentgen equivalent man, is a measure of radiation dose absorbed by parts of the body over an extended period of time. A DAC is the airborne concentration of alpha- and beta-emitting radionuclides to which an average employee performing light work is exposed over 2,000 working hours. If an employee is exposed to over 10% of an ALI (500 mrem), then the employee's dose must be documented in accordance with federal and state regulations.
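A minimal sketch of the kind of bookkeeping described above, assuming the figures quoted in this section (1 ALI corresponding to 5 rem, a 2,000-hour working year, and a documentation threshold of 10% of an ALI); the function name and example exposure are illustrative, not a substitute for the applicable regulations:

```python
def ali_fraction(dac_fraction_breathed: float, hours_exposed: float,
                 working_hours_per_year: float = 2000.0) -> float:
    """Fraction of one ALI received: average fraction of a DAC breathed
    times hours of exposure, divided by the 2,000-hour working year."""
    return dac_fraction_breathed * hours_exposed / working_hours_per_year

ALI_REM = 5.0          # 1 ALI corresponds to 5 rem (figure quoted above)
DOC_THRESHOLD = 0.10   # documentation required above 10% of an ALI

exposure = ali_fraction(dac_fraction_breathed=0.25, hours_exposed=1200)
dose_rem = exposure * ALI_REM
print(f"{exposure:.0%} of an ALI, about {dose_rem * 1000:.0f} mrem")
print("documentation required" if exposure > DOC_THRESHOLD else "below threshold")
# 15% of an ALI, about 750 mrem -> documentation required
```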
Regulation
United States
NORM is not federally regulated in the United States. The Nuclear Regulatory Commission (NRC) has jurisdiction over only a relatively narrow spectrum of radioactive materials, and although the Environmental Protection Agency (EPA) has jurisdiction over NORM, no federal entity has implemented NORM regulations. As a result, NORM is regulated variably by the states.
United Kingdom
In the UK regulation is via the Environmental Permitting (England and Wales) Regulations 2010.
This defines two types of NORM activity:
Type 1 NORM industrial activity means:
(a) the production and use of thorium, or thorium compounds, and the production of products where thorium is deliberately added; or
(b) the production and use of uranium or uranium compounds, and the production of products where uranium is deliberately added
Type 2 NORM industrial activity means:
(a) the extraction, production and use of rare earth elements and rare earth element alloys;
(b) the mining and processing of ores other than uranium ore;
(c) the production of oil and gas;
(d) the removal and management of radioactive scales and precipitates from equipment associated with industrial activities;
(e) any industrial activity utilising phosphate ore;
(f) the manufacture of titanium dioxide pigments;
(g) the extraction and refining of zircon and manufacture of zirconium compounds;
(h) the production of tin, copper, aluminium, zinc, lead and iron and steel;
(i) any activity related to coal mine de-watering plants;
(j) china clay extraction;
(k) water treatment associated with provision of drinking water;
or
(l) the remediation of contamination from any type 1 NORM industrial activity or any of the activities listed above.
An activity which involves the processing of radionuclides of natural terrestrial or cosmic origin for their radioactive, fissile or fertile properties is not a type 1 NORM industrial activity or a type 2 NORM industrial activity.
See also
Background radiation, ionizing radiation constantly present in the natural environment of the Earth
Environmental radioactivity
References
External links
North Dakota Department of Health
NORM Technology Connection, Interstate Oil and Gas Compact Commission
Radiation Quick Reference Guide, Domestic Nuclear Detection Office
Naturally Occurring Radioactive Materials from the World Nuclear Association
UK guidance on Radioactive Substances Regulation For the Environmental Permitting (England and Wales) Regulations 2010:Defra
Radioactive waste
By-products
Environmental impact of fossil fuels
Environmental impact of mining
Water pollution | Naturally occurring radioactive material | [
"Chemistry",
"Technology",
"Environmental_science"
] | 1,889 | [
"Water pollution",
"Hazardous waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Radioactive waste"
] |
164,321 | https://en.wikipedia.org/wiki/Secondary%20circulation | In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation.
See also
Hough function
Primitive equations
Secondary flow
References
Geophysics
Physical oceanography
Atmospheric dynamics
Fluid mechanics | Secondary circulation | [
"Physics",
"Chemistry",
"Engineering"
] | 150 | [
"Applied and interdisciplinary physics",
"Atmospheric dynamics",
"Civil engineering",
"Physical oceanography",
"Geophysics",
"Fluid mechanics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
164,402 | https://en.wikipedia.org/wiki/Magnetic%20dipole | In electromagnetism, a magnetic dipole is the limit of either a closed loop of electric current or a pair of poles as the size of the source is reduced to zero while keeping the magnetic moment constant.
It is a magnetic analogue of the electric dipole, but the analogy is not perfect. In particular, a true magnetic monopole, the magnetic analogue of an electric charge, has never been observed in nature. However, magnetic monopole quasiparticles have been observed as emergent properties of certain condensed matter systems. Moreover, one form of magnetic dipole moment is associated with a fundamental quantum property—the spin of elementary particles.
Because magnetic monopoles do not exist, the magnetic field at a large distance from any static magnetic source looks like the field of a dipole with the same dipole moment. For higher-order sources (e.g. quadrupoles) with no dipole moment, their field decays towards zero with distance faster than a dipole field does.
External magnetic field produced by a magnetic dipole moment
In classical physics, the magnetic field of a dipole is calculated as the limit of either a current loop or a pair of charges as the source shrinks to a point while keeping the magnetic moment m constant. For the current loop, this limit is most easily derived from the vector potential:

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\times\hat{\mathbf{r}}}{r^2},$$

where μ0 is the vacuum permeability constant and 4πr² is the surface area of a sphere of radius r.
The magnetic flux density (strength of the B-field) is then

$$\mathbf{B}(\mathbf{r}) = \nabla\times\mathbf{A} = \frac{\mu_0}{4\pi}\left[\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}\right].$$
Alternatively one can obtain the scalar potential first from the magnetic pole limit,

$$\psi(\mathbf{r}) = \frac{\mathbf{m}\cdot\hat{\mathbf{r}}}{4\pi r^2},$$

and hence the magnetic field strength (or strength of the H-field) is

$$\mathbf{H}(\mathbf{r}) = -\nabla\psi = \frac{1}{4\pi}\left[\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}\right] = \frac{\mathbf{B}}{\mu_0}.$$
The magnetic field strength is symmetric under rotations about the axis of the magnetic moment.
In spherical coordinates, with the magnetic moment aligned with the z-axis so that $\mathbf{m} = m\,\hat{\mathbf{z}}$, the field can more simply be expressed as

$$\mathbf{B}(r,\theta) = \frac{\mu_0 m}{4\pi r^3}\left(2\cos\theta\,\hat{\mathbf{r}} + \sin\theta\,\hat{\boldsymbol{\theta}}\right).$$
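A minimal numerical sketch of the far-field expression above, assuming SI units and an illustrative moment of 1 A·m² (the function name and sample points are not part of the article):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_B(r_vec: np.ndarray, m_vec: np.ndarray) -> np.ndarray:
    """Magnetic flux density of a point dipole m_vec at displacement r_vec.

    Implements B = (mu0 / 4 pi) [3 r_hat (r_hat . m) - m] / r^3,
    valid outside the source region."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(r_hat, m_vec) - m_vec) / r**3

m = np.array([0.0, 0.0, 1.0])                  # 1 A*m^2 along z
print(dipole_B(np.array([0.0, 0.0, 0.1]), m))  # on-axis point, 10 cm away
print(dipole_B(np.array([0.1, 0.0, 0.0]), m))  # equatorial point, 10 cm away
# The on-axis field is twice as strong as, and antiparallel to, the equatorial one.
```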
Internal magnetic field of a dipole
The two models for a dipole (current loop and magnetic poles) give the same predictions for the magnetic field far from the source. However, inside the source region they give different predictions. The magnetic field between poles is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction. Clearly, the limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material.
If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\left[\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}\right] + \frac{2\mu_0}{3}\,\mathbf{m}\,\delta^3(\mathbf{r}),$$

where $\delta^3(\mathbf{r})$ is the Dirac delta function in three dimensions. Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole.
If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole-charge and distance constant, the limiting field is

$$\mathbf{H}(\mathbf{r}) = \frac{1}{4\pi}\left[\frac{3\hat{\mathbf{r}}(\hat{\mathbf{r}}\cdot\mathbf{m}) - \mathbf{m}}{r^3}\right] - \frac{1}{3}\,\mathbf{m}\,\delta^3(\mathbf{r}).$$

These fields are related by $\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})$, where

$$\mathbf{M}(\mathbf{r}) = \mathbf{m}\,\delta^3(\mathbf{r})$$

is the magnetization.
Forces between two magnetic dipoles
The force exerted by one dipole moment m1 on another, m2, separated in space by a vector r can be calculated using:

$$\mathbf{F} = \frac{3\mu_0}{4\pi r^4}\left[(\hat{\mathbf{r}}\times\mathbf{m}_1)\times\mathbf{m}_2 + (\hat{\mathbf{r}}\times\mathbf{m}_2)\times\mathbf{m}_1 - 2\hat{\mathbf{r}}(\mathbf{m}_1\cdot\mathbf{m}_2) + 5\hat{\mathbf{r}}\,\bigl((\hat{\mathbf{r}}\times\mathbf{m}_1)\cdot(\hat{\mathbf{r}}\times\mathbf{m}_2)\bigr)\right],$$

or

$$\mathbf{F} = \frac{3\mu_0}{4\pi r^4}\left[(\hat{\mathbf{r}}\cdot\mathbf{m}_1)\,\mathbf{m}_2 + (\hat{\mathbf{r}}\cdot\mathbf{m}_2)\,\mathbf{m}_1 + (\mathbf{m}_1\cdot\mathbf{m}_2)\,\hat{\mathbf{r}} - 5\,(\hat{\mathbf{r}}\cdot\mathbf{m}_1)(\hat{\mathbf{r}}\cdot\mathbf{m}_2)\,\hat{\mathbf{r}}\right],$$

where r is the distance between the dipoles and $\hat{\mathbf{r}} = \mathbf{r}/r$. The force acting on m1 is in the opposite direction.

The torque of one dipole on the other can be obtained from the formula

$$\boldsymbol{\tau} = \mathbf{m}_2 \times \mathbf{B}_1,$$

where $\mathbf{B}_1$ is the field produced by dipole $\mathbf{m}_1$ at the position of $\mathbf{m}_2$.
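A minimal sketch that evaluates the dot-product form of the force and the torque numerically, assuming SI units (the example moments and separation are illustrative only):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field(r_vec, m_vec):
    """B-field of dipole m_vec evaluated at displacement r_vec from it."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(r_hat, m_vec) - m_vec) / r**3

def dipole_dipole_force(r_vec, m1, m2):
    """Force on dipole m2 due to dipole m1, separated by r_vec (from m1 to m2)."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    pref = 3 * MU0 / (4 * np.pi * r**4)
    return pref * (np.dot(r_hat, m1) * m2 + np.dot(r_hat, m2) * m1
                   + np.dot(m1, m2) * r_hat
                   - 5 * np.dot(r_hat, m1) * np.dot(r_hat, m2) * r_hat)

m1 = np.array([0.0, 0.0, 1.0])    # A*m^2
m2 = np.array([0.0, 0.0, 1.0])
r12 = np.array([0.0, 0.0, 0.05])  # m2 sits 5 cm above m1 on the z-axis

F = dipole_dipole_force(r12, m1, m2)
tau = np.cross(m2, dipole_field(r12, m1))   # torque on m2
print(F)    # points in -z: coaxial, aligned dipoles attract
print(tau)  # zero: m2 is already parallel to the local field
```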
Dipolar fields from finite sources
The magnetic scalar potential produced by a finite source, but external to it, can be represented by a multipole expansion. Each term in the expansion is associated with a characteristic moment and a potential having a characteristic rate of decrease with distance from the source. Monopole moments have a rate of decrease, dipole moments have a rate, quadrupole moments have a rate, and so on. The higher the order, the faster the potential drops off. Since the lowest-order term observed in magnetic sources is the dipole term, it dominates at large distances. Therefore, at large distances any magnetic source looks like a dipole of the same magnetic moment.
Notes
References
Magnetostatics
Magnetism
Electric and magnetic fields in matter | Magnetic dipole | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 842 | [
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
164,483 | https://en.wikipedia.org/wiki/Scattering | In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena.
Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors.
The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.
Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg.
Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path.
Single and multiple scattering
When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory.
Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions.
With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization.
Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft.
Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately.
The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.
Theory
Scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future".
The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.
Attenuation due to scattering
When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time, I, i.e. that

$$\frac{dI}{dx} = -Q\,I,$$

where Q is an interaction coefficient and x is the distance traveled in the target.
The above ordinary first-order differential equation has solutions of the form:

$$I = I_o\, e^{-Q\,\Delta x} = I_o\, e^{-\Delta x/\lambda} = I_o\, e^{-\eta\,\sigma\,\Delta x} = I_o\, e^{-\rho\,\Delta x/\tau},$$

where Io is the initial flux, path length Δx ≡ x − xo, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = ησ = ρ/τ.
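A small sketch of these conversions, assuming illustrative values (a 1 cm⁻¹ interaction coefficient, a target density of 2.7 g/cm³ and a made-up number density; none of the numbers come from the article):

```python
import math

Q = 1.0        # interaction coefficient, 1/cm
rho = 2.7      # target mass density, g/cm^3
eta = 6.0e22   # number of scattering centers per cm^3 (illustrative)

lam = 1.0 / Q            # interaction mean free path, cm
sigma = Q / eta          # area cross-section per target, cm^2
tau = rho / Q            # density mean free path, g/cm^2

def transmitted_fraction(dx_cm: float) -> float:
    """Fraction of the unscattered beam surviving a path length dx_cm."""
    return math.exp(-Q * dx_cm)

print(f"lambda = {lam} cm, sigma = {sigma:.2e} cm^2, tau = {tau} g/cm^2")
print(f"after 2 mean free paths: {transmitted_fraction(2 * lam):.3f} survives")
# e^-2 ~ 0.135 of the beam remains unscattered after two mean free paths
```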
In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm−1) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10−24 cm2), density mean free path (e.g. τ in grams/cm2), and its reciprocal the mass attenuation coefficient (e.g. in cm2/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed instead.
Elastic and inelastic scattering
The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.
The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.
The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.
Mathematical framework
In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".
Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.
An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.
Theoretical physics
In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.
In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
Electromagnetics
Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering.
Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone.
Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006).
Models of light scattering can be divided into three domains based on a dimensionless size parameter, α, which is defined as:

$$\alpha = \frac{\pi D_p}{\lambda},$$

where πDp is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are (a short numerical sketch follows the list):
α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light);
α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres);
α ≫ 1: geometric scattering (particle much larger than wavelength of light).
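A minimal classification helper, assuming the conventional (and somewhat arbitrary) cut-offs for α ≪ 1 and α ≫ 1 are taken as 0.1 and 10 for illustration:

```python
import math

def size_parameter(diameter_m: float, wavelength_m: float) -> float:
    """Dimensionless size parameter alpha = pi * D_p / lambda."""
    return math.pi * diameter_m / wavelength_m

def scattering_regime(alpha: float) -> str:
    """Rough regime classification; the 0.1 and 10 thresholds are illustrative."""
    if alpha < 0.1:
        return "Rayleigh"
    if alpha > 10:
        return "geometric"
    return "Mie"

# Green light (530 nm) scattering off particles of various sizes:
for d in (1e-9, 10e-9, 500e-9, 50e-6):
    a = size_parameter(d, 530e-9)
    print(f"D_p = {d:.0e} m -> alpha = {a:8.3f} -> {scattering_regime(a)}")
# nanometre-scale particles fall in the Rayleigh regime, wavelength-sized ones
# in the Mie regime, and 50-micrometre droplets in the geometric regime.
```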
Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence.
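As a worked illustration of the 1/λ⁴ relation mentioned above, with 450 nm and 700 nm used as rough stand-ins for blue and red light:

```python
# Relative Rayleigh scattering strength scales as 1/lambda^4.
blue_nm, red_nm = 450.0, 700.0
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red.")
# (700/450)^4 ~ 5.9, which is why the clear sky appears blue.
```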
For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes.
Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar. This shift involves a slight change in energy.
At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy.
For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer.
Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves passing an electric field through a liquid, which makes the particles move. The bigger the charge on the particles, the faster they move.
See also
Attenuation#Light scattering
Backscattering
Bragg diffraction
Brillouin scattering
Characteristic mode analysis
Compton scattering
Coulomb scattering
Deep scattering layer
Diffuse sky radiation
Doppler effect
Dynamic Light Scattering
Electron diffraction
Electron scattering
Electrophoretic light scattering
Extinction
Haag–Ruelle scattering theory
Kikuchi line
Levinson's theorem
Light scattering by particles
Linewidth
Mie scattering
Mie theory
Molecular scattering
Mott scattering
Neutron scattering
Phase space measurement with forward modeling
Photon diffusion
Powder diffraction
Raman scattering
Rayleigh scattering
Resonances in scattering from potentials
Rutherford scattering
Small-angle scattering
Scattering amplitude
Scattering from rough surfaces
Scintillation (physics)
S-Matrix
Tyndall effect
Thomson scattering
Wolf effect
X-ray crystallography
References
External links
Research group on light scattering and diffusion in complex systems
Multiple light scattering from a photonic science point of view
Neutron Scattering Web
Neutron and X-Ray Scattering
World directory of neutron scattering instruments
Scattering and diffraction
Optics Classification and Indexing Scheme (OCIS), Optical Society of America, 1997
Lectures of the European school on theoretical methods for electron and positron induced chemistry, Prague, Feb. 2005
E. Koelink, Lectures on scattering theory, Delft the Netherlands 2006
Physical phenomena
Atomic physics
Nuclear physics
Particle physics
Radar theory
Scattering, absorption and radiative transfer (optics) | Scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,024 | [
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Nuclear physics",
"Quantum mechanics",
"Scattering",
"Atomic physics",
"Particle physics",
"Condensed matter physics",
"Atomic",
" molecular",
" and optical physics"
] |
164,572 | https://en.wikipedia.org/wiki/Dissipation | In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, in conduction and radiation from one body to another, the entropy varies with temperature (reduces the capacity of the combination of the two bodies to do work), but never decreases in an isolated system.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
Definition
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
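A one-line numerical illustration of this relation, using Joule heating in a resistor as the irreversible process (the current, resistance and temperature are arbitrary example values):

```python
current_a = 2.0        # A
resistance_ohm = 10.0  # ohm
temperature_k = 300.0  # K, temperature at which the heat is released

dissipated_power_w = current_a**2 * resistance_ohm          # P = I^2 R = 40 W
entropy_production_w_per_k = dissipated_power_w / temperature_k

print(f"dissipated power:   {dissipated_power_w:.1f} W")
print(f"entropy production: {entropy_production_w_per_k:.3f} W/K")
# Multiplying the entropy production rate (~0.133 W/K) by the local
# temperature (300 K) recovers the dissipated power (40 W).
```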
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, exactly which one describes the actual particular occurrence of the process of interest being unknown. This includes friction and hammering, and all similar forces that result in decoherency of energy, that is, conversion of coherent or directed energy flow into an undirected or more isotropic distribution of energy.
Energy
"The conversion of mechanical energy into heat is called energy dissipation." – François Roddier The term is also applied to the loss of energy due to generation of unwanted heat in electric and electronic circuits.
Computational physics
In computational physics, numerical dissipation (also known as "Numerical diffusion") refers to certain side-effects that may occur as a result of a numerical solution to a differential equation. When the pure advection equation, which is free of dissipation, is solved by a numerical approximation method, the energy of the initial wave may be reduced in a way analogous to a diffusional process. Such a method is said to contain 'dissipation'. In some cases, "artificial dissipation" is intentionally added to improve the numerical stability characteristics of the solution.
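A minimal illustration of numerical dissipation, assuming a first-order upwind discretisation of the pure advection equation ∂u/∂t + c ∂u/∂x = 0 on a periodic domain (the grid size, speed and initial pulse are arbitrary choices):

```python
import numpy as np

# First-order upwind scheme for du/dt + c du/dx = 0 on a periodic domain.
nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                      # Courant number 0.5: stable, but diffusive
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial Gaussian pulse

u0_max = u.max()
for _ in range(400):                   # advect the pulse once around the domain
    u = u - c * dt / dx * (u - np.roll(u, 1))

# The exact solution merely translates the pulse, so its peak should stay at 1.
print(f"initial peak: {u0_max:.3f}, peak after 400 steps: {u.max():.3f}")
# The reduced peak height is numerical dissipation: amplitude lost to the
# diffusion-like truncation error of the scheme, not to any physical process.
```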
Mathematics
A formal, mathematical definition of dissipation, as commonly used in the mathematical study of measure-preserving dynamical systems, is given in the article wandering set.
Examples
In hydraulic engineering
Dissipation is the process of converting mechanical energy of downward-flowing water into thermal and acoustical energy. Various devices are designed in stream beds to reduce the kinetic energy of flowing waters to reduce their erosive potential on banks and river bottoms. Very often, these devices look like small waterfalls or cascades, where water flows vertically or over riprap to lose some of its kinetic energy.
Irreversible processes
Important examples of irreversible processes are:
Heat flow through a thermal resistance
Fluid flow through a flow resistance
Diffusion (mixing)
Chemical reactions
Electrical current flow through an electrical resistance (Joule heating).
Waves or oscillations
Waves or oscillations lose energy over time, typically through friction or turbulence. In many cases, the "lost" energy raises the temperature of the system. For example, a wave that loses amplitude is said to dissipate. The precise nature of the effects depends on the nature of the wave: an atmospheric wave, for instance, may dissipate close to the surface due to friction with the land mass, and at higher levels due to radiative cooling.
History
The concept of dissipation was introduced in the field of thermodynamics by William Thomson (Lord Kelvin) in 1852. Lord Kelvin deduced that a subset of the above-mentioned irreversible dissipative processes will occur unless a process is governed by a "perfect thermodynamic engine". The processes that Lord Kelvin identified were friction, diffusion, conduction of heat and the absorption of light.
See also
Entropy production
General equation of heat transfer
Flood control
Principle of maximum entropy
Two-dimensional gas
References
Thermodynamic processes
Thermodynamic entropy
Non-equilibrium thermodynamics
Dynamical systems | Dissipation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,016 | [
"Physical quantities",
"Non-equilibrium thermodynamics",
"Thermodynamic processes",
"Thermodynamic entropy",
"Entropy",
"Mechanics",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
164,598 | https://en.wikipedia.org/wiki/Radiative%20cooling | In the study of heat transfer, radiative cooling is the process by which a body loses heat by thermal radiation. As Planck's law describes, every physical body spontaneously and continuously emits electromagnetic radiation.
Radiative cooling has been applied in various contexts throughout human history, including ice making in India and Iran, heat shields for spacecraft, and architecture. In 2014, a scientific breakthrough in the use of photonic metamaterials made daytime radiative cooling possible. It has since been proposed as a strategy to mitigate local and global warming caused by greenhouse gas emissions, an approach known as passive daytime radiative cooling.
Terrestrial radiative cooling
Mechanism
Infrared radiation can pass through dry, clear air in the wavelength range of 8–13 μm. Materials that can absorb energy and radiate it in those wavelengths exhibit a strong cooling effect. Materials that can also reflect 95% or more of sunlight in the 200 nanometres to 2.5 μm range can exhibit cooling even in direct sunlight.
Earth's energy budget
The Earth-atmosphere system is radiatively cooled, emitting long-wave (infrared) radiation which balances the absorption of short-wave (visible light) energy from the sun.
Convective transport of heat, and evaporative transport of latent heat are both important in removing heat from the surface and distributing it in the atmosphere. Pure radiative transport is more important higher up in the atmosphere. Diurnal and geographical variation further complicate the picture.
The large-scale circulation of the Earth's atmosphere is driven by the difference in absorbed solar radiation per square meter, as the sun heats the Earth more in the Tropics, mostly because of geometrical factors. The atmospheric and oceanic circulation redistributes some of this energy as sensible heat and latent heat partly via the mean flow and partly via eddies, known as cyclones in the atmosphere. Thus the tropics radiate less to space than they would if there were no circulation, and the poles radiate more; however in absolute terms the tropics radiate more energy to space.
Nocturnal surface cooling
Radiative cooling is commonly experienced on cloudless nights, when heat is radiated into outer space from Earth's surface, or from the skin of a human observer. The effect is well-known among amateur astronomers.
The effect can be experienced by comparing skin temperature from looking straight up into a cloudless night sky for several seconds, to that after placing a sheet of paper between the face and the sky. Since outer space radiates at a temperature of about 3 K, and the sheet of paper radiates at about 300 K (around room temperature), the sheet of paper radiates more heat to the face than does the darkened cosmos. The effect is blunted by Earth's surrounding atmosphere, and particularly the water vapor it contains, so the apparent temperature of the sky is far warmer than outer space. The sheet does not block the cold, but instead reflects heat to the face and radiates the heat of the face that it just absorbed.
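A rough Stefan–Boltzmann estimate of this effect, treating the skin, the paper and the sky as ideal black bodies at the temperatures quoted above (a deliberately crude assumption that ignores the atmosphere's own emission; the skin temperature is an illustrative value):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_loss(t_skin_k: float, t_surroundings_k: float) -> float:
    """Net power per unit area radiated from skin towards its surroundings,
    assuming both behave as black bodies (emissivity 1)."""
    return SIGMA * (t_skin_k**4 - t_surroundings_k**4)

t_skin = 306.0   # ~33 degC skin temperature (illustrative)
print(f"facing deep space (~3 K): {net_radiative_loss(t_skin, 3.0):6.1f} W/m^2")
print(f"facing paper (~300 K):    {net_radiative_loss(t_skin, 300.0):6.1f} W/m^2")
# Facing the paper, most of the outgoing radiation is returned, so the skin
# feels noticeably warmer than when it faces the cold night sky.
```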
The same radiative cooling mechanism can cause frost or black ice to form on surfaces exposed to the clear night sky, even when the ambient temperature does not fall below freezing.
Kelvin's estimate of the Earth's age
The term radiative cooling is generally used for local processes, though the same principles apply to cooling over geological time, which was first used by Kelvin to estimate the age of the Earth (although his estimate ignored the substantial heat released by radioisotope decay, not known at the time, and the effects of convection in the mantle).
Astronomy
Radiative cooling is one of the few ways an object in space can give off energy. In particular, white dwarf stars are no longer generating energy by fusion or gravitational contraction, and have no solar wind. So the only way their temperature changes is by radiative cooling. This makes their temperature as a function of age very predictable, so by observing the temperature, astronomers can deduce the age of the star.
Applications
Climate change
Architecture
Cool roofs combine high solar reflectance with high infrared emittance, thereby simultaneously reducing heat gain from the sun and increasing heat removal through radiation. Radiative cooling thus offers potential for passive cooling for residential and commercial buildings. Traditional building surfaces, such as paint coatings, brick and concrete have high emittances of up to 0.96. They radiate heat into the sky to passively cool buildings at night. If made sufficiently reflective to sunlight, these materials can also achieve radiative cooling during the day.
The most common radiative coolers found on buildings are white cool-roof paint coatings, which have solar reflectances of up to 0.94, and thermal emittances of up to 0.96. The solar reflectance of the paints arises from optical scattering by the dielectric pigments embedded in the polymer paint resin, while the thermal emittance arises from the polymer resin. However, because typical white pigments like titanium dioxide and zinc oxide absorb ultraviolet radiation, the solar reflectances of paints based on such pigments do not exceed 0.95.
In 2014, researchers developed the first daytime radiative cooler using a multi-layer thermal photonic structure that selectively emits long wavelength infrared radiation into space, and can achieve 5 °C sub-ambient cooling under direct sunlight. Later researchers developed paintable porous polymer coatings, whose pores scatter sunlight to give solar reflectance of 0.96-0.99 and thermal emittance of 0.97. In experiments under direct sunlight, the coatings achieve 6 °C sub-ambient temperatures and cooling powers of 96 W/m2.
Other notable radiative cooling strategies include dielectric films on metal mirrors, and polymer or polymer composites on silver or aluminum films. Silvered polymer films with solar reflectances of 0.97 and thermal emittance of 0.96, which remain 11 °C cooler than commercial white paints under the mid-summer sun, were reported in 2015. Researchers explored designs with dielectric silicon dioxide or silicon carbide particles embedded in polymers that are translucent in the solar wavelengths and emissive in the infrared. In 2017, an example of this design with resonant polar silica microspheres randomly embedded in a polymeric matrix, was reported. The material is translucent to sunlight and has infrared emissivity of 0.93 in the infrared atmospheric transmission window. When backed with silver coating, the material achieved a midday radiative cooling power of 93 W/m2 under direct sunshine along with high-throughput, economical roll-to-roll manufacturing.
Heat shields
High emissivity coatings that facilitate radiative cooling may be used in reusable thermal protection systems (RTPS) in spacecraft and hypersonic aircraft. In such heat shields a high emissivity material, such as molybdenum disilicide (MoSi2) is applied on a thermally insulating ceramic substrate. In such heat shields high levels of total emissivity, typically in the range 0.8 - 0.9, need to be maintained across a range of high temperatures. Planck's law dictates that at higher temperatures the radiative emission peak shifts to lower wavelengths (higher frequencies), influencing material selection as a function of operating temperature. In addition to effective radiative cooling, radiative thermal protection systems should provide damage tolerance and may incorporate self-healing functions through the formation of a viscous glass at high temperatures.
James Webb Space Telescope
The James Webb Space Telescope uses radiative cooling to reach its operation temperature of about 50 K. To do this, its large reflective sunshield blocks radiation from the Sun, Earth, and Moon. The telescope structure, kept permanently in shadow by the sunshield, then cools by radiation.
Nocturnal ice making in early India and Iran
Before the invention of artificial refrigeration technology, ice making by nocturnal cooling was common in both India and Iran.
In India, such apparatuses consisted of a shallow ceramic tray with a thin layer of water, placed outdoors with a clear exposure to the night sky. The bottom and sides were insulated with a thick layer of hay. On a clear night the water would lose heat by radiation upwards. Provided the air was calm and not too far above freezing, heat gain from the surrounding air by convection was low enough to allow the water to freeze.
In Iran, this involved making large flat ice pools, which consisted of a reflection pool of water built on a bed of highly insulative material surrounded by high walls. The high walls provided protection against convective warming, the insulative material of the pool walls would protect against conductive heating from the ground, the large flat plane of water would then permit evaporative and radiative cooling to take place.
Types
The three basic types of radiant cooling are direct, indirect, and fluorescent:
Direct radiant cooling - In a building designed to optimize direct radiative cooling, the building roof acts as a heat sink to absorb the daily internal loads. The roof is the most effective heat sink because it is the largest surface exposed to the night sky. Radiative heat transfer with the night sky removes heat from the building roof, thus cooling the building structure. Roof ponds are an example of this strategy. The roof pond design became popular with the development of the Skytherm system designed by Harold Hay in 1977. There are various designs and configurations for the roof pond system, but the concept is the same for all designs. The roof uses water, either in plastic bags filled with water or in an open pond, as the heat sink, while a system of movable insulation panels regulates the mode of heating or cooling. During daytime in the summer, the water on the roof is protected from solar radiation and the ambient air temperature by the movable insulation, which allows it to serve as a heat sink and absorb the heat generated inside through the ceiling. At night, the panels are retracted to allow nocturnal radiation between the roof pond and the night sky, thus removing the stored heat. In winter, the process is reversed so that the roof pond is allowed to absorb solar radiation during the day and release it during the night into the space below.
Indirect radiant cooling - A heat transfer fluid removes heat from the building structure through radiative heat transfer with the night sky. A common design for this strategy involves a plenum between the building roof and the radiator surface. Air is drawn into the building through the plenum, cooled by the radiator, and cools the mass of the building structure. During the day, the building mass acts as a heat sink.
Fluorescent radiant cooling - An object can be made fluorescent: it will then absorb light at some wavelengths, but radiate the energy away again at other, selected wavelengths. By selectively radiating heat in the infrared atmospheric window, a range of frequencies in which the atmosphere is unusually transparent, an object can effectively use outer space as a heat sink, and cool to well below ambient air temperature.
See also
Heat shield
Optical solar reflector, used for thermal control of spacecraft
Passive cooling
Radiative forcing
Stefan–Boltzmann law
Terrestrial albedo effect
Urban heat island
Urban thermal plume
References
Thermodynamics
Atmospheric radiation | Radiative cooling | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,304 | [
"Thermodynamics",
"Dynamical systems"
] |
164,600 | https://en.wikipedia.org/wiki/General%20circulation%20model | A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."
Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.
Terminology
The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.
Atmospheric and oceanic models
Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.
Structure
General Circulation Models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, which make mixing assumptions, GCMs divide the atmosphere and/or oceans into grids of discrete "cells" that represent computational units. Processes internal to a cell, such as convection, that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells.
Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.
Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs). They may include atmospheric chemistry.
AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:
surface pressure
horizontal components of velocity in layers
temperature and water vapor in layers
radiation, split into solar/short wave and terrestrial/infrared/long wave
parameters for:
convection
land surface processes
albedo
hydrology
cloud cover
A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
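A minimal sketch of such a diagnostic step, assuming a dry atmosphere in hydrostatic balance and a single mean layer temperature (the numbers are illustrative, not taken from any particular model):

```python
import math

R_D = 287.0   # gas constant for dry air, J kg^-1 K^-1
G = 9.81      # gravitational acceleration, m s^-2

def pressure_at_height(p_surface_pa: float, height_m: float, mean_temp_k: float) -> float:
    """Diagnose pressure at a given height from the predicted surface pressure
    and the predicted mean temperature of the layer, using hydrostatic balance:
    p(z) = p_s * exp(-g z / (R_d * T_mean))."""
    return p_surface_pa * math.exp(-G * height_m / (R_D * mean_temp_k))

p_5p5km = pressure_at_height(p_surface_pa=101325.0, height_m=5500.0, mean_temp_k=255.0)
print(f"diagnosed pressure at 5.5 km: {p_5p5km / 100:.0f} hPa")
# roughly 480 hPa for these illustrative values; the diagnosed pressure field is
# then used to compute the pressure gradient force in the prognostic wind equations.
```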
OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.
AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.
Grid
The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude / longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution are more often used. The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a Gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 degrees in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees respectively. These resolutions are lower than is typically used for weather forecasting. Ocean resolutions tend to be higher, for example HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
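A small back-of-the-envelope check of the variable count quoted above, using the stated HadCM3 atmospheric grid and four prognostic variables per grid point:

```python
# HadCM3 atmosphere: 96 x 73 horizontal points, 19 vertical levels,
# four "basic" prognostic variables (u, v, T, Q) per grid point.
lon_points, lat_points, levels, variables = 96, 73, 19, 4

total = lon_points * lat_points * levels * variables
print(f"{total:,} basic variables")   # 532,608 -> "approximately 500,000"
```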
For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.
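The pole problem can be made concrete with a rough sketch of the advective CFL limit on a regular latitude/longitude grid. The grid spacing and wind speed below are assumed, representative values rather than settings of any actual model.

```python
# Illustrative sketch of why converging meridians near the poles force a
# smaller stable time step (CFL condition) on a regular lat/lon grid.
import math

R_EARTH = 6.371e6          # Earth radius, m
DLON = math.radians(2.5)   # assumed zonal grid spacing
U_MAX = 100.0              # assumed fastest zonal wind / wave speed, m/s

def max_stable_dt(lat_deg):
    """Largest advective time step allowed by dt <= dx / u at this latitude."""
    dx = R_EARTH * math.cos(math.radians(lat_deg)) * DLON
    return dx / U_MAX

for lat in (0, 60, 85, 89):
    print(f"lat {lat:2d} deg: dt_max ~ {max_stable_dt(lat):8.1f} s")
# The limit collapses toward zero near the pole, which is why polar filtering,
# rotated grids, or quasi-uniform (e.g. icosahedral) grids are used instead.
```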
Flux buffering
Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models each using an implicit flux from the other component that differed from what that component could actually produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might be unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use them. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.
Convection
Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used, although a variety of different schemes are now in use. Clouds are also typically handled with a parameter, for a similar lack of scale. Limited understanding of clouds has constrained the success of this strategy, though this reflects gaps in cloud physics rather than an inherent shortcoming of the method.
Software
Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.
Projections
Coupled AOGCMs use transient climate simulations to project climate changes under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year. Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.
Future scenarios do not include unforeseeable events such as volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.
Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.
Emissions scenarios
For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C. Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.
In 2008 a study made climate projections using several emission scenarios. In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.
Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.
Model accuracy
AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedbacks. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.
Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.
A debate arose over how to reconcile the model prediction that upper-air (tropospheric) warming should be greater than surface warming with observations, some of which appeared to show otherwise; it was resolved in favour of the models after the observational data were revised.
Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface. In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.
Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.
In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes.
The precise magnitude of future changes in climate is still uncertain; for the end of the 21st century (2071 to 2100), under SRES scenario A2, the change in global average surface air temperature (SAT) from AOGCMs relative to 1961 to 1990 is +3.0 °C (5.4 °F) and the range is +1.3 to +4.5 °C (+2.3 to +8.1 °F).
The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.
Relation to weather forecasting
The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.
Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts typically span a few days to a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast; typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days, the models can also be run at higher vertical and horizontal resolutions than climate models. Currently the ECMWF global forecast model runs at a considerably finer grid spacing than typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a higher-resolution mesoscale model covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models instead of grid models. Spectral models are often used for global models because some computations can be performed faster, thus reducing run times.
Computations
Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.
All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.
Three-dimensional (more properly, four-dimensional, since time is also considered) GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parametrisations for processes such as convection that occur on scales too small to be resolved directly.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models.
Models range in complexity:
A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy (a minimal sketch of such a zero-dimensional model follows this list)
This can be expanded vertically (radiative-convective models), or horizontally
Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
Box models treat flows across and within ocean basins.
Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.
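As a concrete illustration of the simplest end of this range, here is a minimal sketch of a zero-dimensional radiant heat transfer model, in which the planet is a single point whose thermal emission balances the absorbed sunlight. The solar constant and albedo values are common textbook assumptions, not outputs of any model discussed here.

```python
# Zero-dimensional energy balance: (S0/4)(1 - albedo) = sigma * T^4.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2 (assumed)
ALBEDO = 0.3             # planetary albedo (assumed)

def equilibrium_temperature(solar_constant=S0, albedo=ALBEDO):
    """Effective emission temperature of a single-point planet."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

print(f"{equilibrium_temperature():.1f} K")  # ~255 K; the warmer observed
# surface reflects the greenhouse effect, which this single-point model omits.
```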
Comparison with other climate models
Earth-system models of intermediate complexity (EMICs)
The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and a time step of half a day. Its oceanic submodel is MOM-3 (the Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.
Radiative-convective models (RCM)
One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.
Earth system models
GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied.
History
In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined. In 1996, efforts began to model soil and vegetation types. Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements. The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.
See also
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM) (in the US)
Earth Simulator
Global Environmental Multiscale Model
Ice-sheet model
Intermediate General Circulation Model
NCAR
Prognostic variable
Charney Report
References
Further reading
External links
IPCC AR5, Evaluation of Climate Models
Media resources including videos, animations, podcasts and transcripts on climate models
GFDL's Flexible Modeling System containing code for the climate models
Program for climate model diagnosis and intercomparison (PCMDI/CMIP)
National Operational Model Archive and Distribution System (NOMADS)
Hadley Centre for Climate Prediction and Research model info
NCAR/UCAR Community Climate System Model (CESM)
Climate prediction, community modeling
NASA/GISS, primary research GCM model
EDGCM/NASA: Educational Global Climate Modeling
NOAA/GFDL
MAOAM: Martian Atmosphere Observation and Modeling / MPI & MIPT
Numerical climate and weather models
Climate forcing
Computational science
Climate change
Articles containing video clips | General circulation model | [
"Mathematics"
] | 4,396 | [
"Computational science",
"Applied mathematics"
] |
164,605 | https://en.wikipedia.org/wiki/Geopotential%20height | Geopotential height or geopotential altitude is a vertical coordinate referenced to Earth's mean sea level (assumed zero geopotential) that represents the work involved in lifting one unit of mass over one unit of length through a hypothetical space in which the acceleration of gravity is assumed constant.
In SI units, a geopotential height difference of one meter implies the vertical transport of a parcel of one kilogram; adopting the standard gravity value (9.80665 m/s2), it corresponds to a constant work or potential energy difference of 9.80665 joules.
Geopotential height differs from geometric height (as given by a tape measure) because Earth's gravity is not constant, varying markedly with altitude and latitude; thus, a 1-m geopotential height difference implies a different vertical distance in physical space: "the unit-mass must be lifted higher at the equator than at the pole, if the same amount of work is to be performed".
It is a useful concept in meteorology, climatology, and oceanography; it also remains a historical convention in aeronautics as the altitude used for calibration of aircraft barometric altimeters.
Definition
Geopotential $\Phi$ is the gravitational potential energy per unit mass at geometric elevation $z$:
$$\Phi(z) = \int_0^z g(\phi, z')\, dz'$$
where $g(\phi, z)$ is the acceleration due to gravity, $\phi$ is latitude, and $z$ is the geometric elevation.
Geopotential height may be obtained by normalizing the geopotential by the standard acceleration of gravity:
$$Z = \frac{\Phi(z)}{g_0}$$
where $g_0$ = 9.80665 m/s2, the standard gravity at mean sea level. Expressed in differential form,
$$g_0\, dZ = g(\phi, z)\, dz.$$
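For a feel of how geometric and geopotential altitude differ in practice, the following sketch uses the common spherical-Earth approximation in which gravity decreases with height as the inverse square of the distance from the Earth's centre and the latitude dependence is ignored. The Earth radius value is the nominal one used in standard-atmosphere work and is an assumption of this sketch.

```python
# Spherical-Earth approximation: g(z) = g0 * (r_e / (r_e + z))^2, so
# H = r_e * z / (r_e + z). Latitude variation of gravity is neglected.
R_E = 6356766.0  # nominal Earth radius used in standard-atmosphere work, m

def geopotential_altitude(z):
    """Geopotential altitude H (m) for geometric altitude z (m)."""
    return R_E * z / (R_E + z)

def geometric_altitude(h):
    """Inverse relation: geometric altitude z (m) for geopotential H (m)."""
    return R_E * h / (R_E - h)

print(geopotential_altitude(10000.0))  # ~9984 m, slightly less than 10 km
```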
Role in planetary fluids
Geopotential height plays an important role in atmospheric and oceanographic studies.
The differential form above may be substituted into the hydrostatic equation and ideal gas law in order to relate pressure to ambient temperature and geopotential height for measurement by barometric altimeters regardless of latitude or geometric elevation:
$$\frac{dP}{P} = -\frac{g_0}{R\,T(Z)}\, dZ$$
where $P(Z)$ and $T(Z)$ are ambient pressure and temperature, respectively, as functions of geopotential height, and $R$ is the specific gas constant. For the subsequent definite integral, the simplification obtained by assuming a constant value of gravitational acceleration is the sole reason for defining the geopotential altitude.
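A minimal illustration of that simplification: with geopotential height as the vertical coordinate and a constant layer temperature assumed, the relation above integrates to a simple exponential. The surface pressure, temperature and gas constant below are illustrative values.

```python
# Isothermal layer: P(Z) = P0 * exp(-g0 * Z / (R * T)), with Z the
# geopotential height. Values are illustrative only.
import math

G0 = 9.80665    # standard gravity, m s^-2
R_D = 287.05    # specific gas constant for dry air, J kg^-1 K^-1

def pressure_at(Z, p0=101325.0, T=250.0):
    """Pressure (Pa) at geopotential height Z (m) for an isothermal layer."""
    return p0 * math.exp(-G0 * Z / (R_D * T))

print(pressure_at(5500.0))  # roughly 48 kPa, near the 500 hPa level
```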
Usage
Geophysical sciences such as meteorology often prefer to express the horizontal pressure gradient force as the gradient of geopotential along a constant-pressure surface, because then it has the properties of a conservative force. For example, the primitive equations that weather forecast models solve use hydrostatic pressure as a vertical coordinate, and express the slopes of those pressure surfaces in terms of geopotential height.
A plot of geopotential height for a single pressure level in the atmosphere shows the troughs and ridges (highs and lows) which are typically seen on upper air charts. The geopotential thickness between pressure levels – difference of the 850 hPa and 1000 hPa geopotential heights for example – is proportional to mean virtual temperature in that layer. Geopotential height contours can be used to calculate the geostrophic wind, which is faster where the contours are more closely spaced and tangential to the geopotential height contours.
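The geostrophic diagnostic described above can be sketched for a gridded height field. The synthetic height field, grid spacing and latitude here are assumptions chosen only to make the example run; numpy is used for the finite differences.

```python
# Geostrophic wind from a geopotential height field Z (m) on one pressure
# level: u_g = -(g0/f) dZ/dy, v_g = (g0/f) dZ/dx. Synthetic example field.
import numpy as np

G0 = 9.80665
OMEGA = 7.2921e-5                          # Earth's rotation rate, s^-1
f = 2 * OMEGA * np.sin(np.radians(45.0))   # Coriolis parameter at 45 deg N

dx = dy = 100e3                            # assumed grid spacing, m
y, x = np.meshgrid(np.arange(10) * dy, np.arange(10) * dx, indexing="ij")
Z = 5600.0 - 1e-4 * y                      # height decreasing poleward

dZdy, dZdx = np.gradient(Z, dy, dx)
u_g = -(G0 / f) * dZdy                     # zonal geostrophic wind, m/s
v_g = (G0 / f) * dZdx                      # meridional geostrophic wind, m/s
print(u_g[5, 5], v_g[5, 5])                # ~9.5 m/s westerly, ~0 m/s
```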
The United States National Weather Service defines geopotential height as:
See also
Atmospheric model
Above mean sea level
Dynamic height, a similar quantity used in geodesy, based on a slightly different gravity value
References
Further reading
Hofmann-Wellenhof, B. and Moritz, H. "Physical Geodesy", 2005. .
Eskinazi, S. "Fluid Mechanics and Thermodynamics of our Environment", 1975. .
External links
Atmospheric dynamics
Vertical position
fr:Hauteur du géopotentiel | Geopotential height | [
"Physics",
"Chemistry"
] | 736 | [
"Vertical position",
"Atmospheric dynamics",
"Physical quantities",
"Distance",
"Fluid dynamics"
] |
164,610 | https://en.wikipedia.org/wiki/Latent%20heat | Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition, like melting or condensation.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics.
When a body is heated at constant temperature by thermal radiation in a microwave field for example, it may expand by an amount described by its latent heat with respect to volume or latent heat of expansion, or increase its pressure by an amount described by its latent heat with respect to pressure.
Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas.
In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor.
If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.
The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.
Meteorology
In meteorology, latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has been commonly measured with the Bowen ratio technique, or more recently since the mid-1900s by the eddy covariance method.
History
Background
Evaporative cooling
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. No heat was withdrawn from the ether, yet the ether boiled while its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually falling well below the freezing point of water, while another thermometer showed that the room temperature remained constant. In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day."
Latent heat
The English word latent comes from Latin latēns, meaning lying hidden. The term latent heat was introduced into calorimetry around 1750 by Joseph Black. Commissioned by producers of Scotch whisky to find the ideal quantities of fuel and water for their distilling process, Black studied changes of the system, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath.
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate whether heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone.
Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which was, say, melted from ice, whereas the other was heated from merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than the other sample, thus melting the ice absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied, thus it was "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required).
Quantifying latent heat
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice now stored, as it were, an additional 8 “degrees of heat” in a form which Black called sensible heat, manifested as temperature, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were, so to speak, stored as latent heat, not manifesting itself. (In modern thermodynamics the idea of heat contained has been abandoned, so sensible heat and latent heat have been redefined. They do not reside anywhere.)
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”).
Finally Black increased the temperature of and vaporized respectively two equal masses of water through even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale.
James Prescott Joule
Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and the sensible heat as an energy that was indicated by the thermometer, relating the latter to thermal energy.
Specific latent heat
A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property:
$$L = \frac{Q}{m}$$
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.
From this definition, the latent heat for a given mass of a substance is calculated by $Q = mL$ (a brief worked example follows the symbol definitions below)
where:
Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU),
m is the mass of the substance (in kg or in lb), and
L is the specific latent heat for a particular substance (in kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization.
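A brief worked example of this relation, using widely quoted round-number latent heats for water: about 334 kJ/kg for fusion and about 2257 kJ/kg for vaporization at 100 °C. These values are typical textbook figures taken as given for illustration.

```python
# Worked example of Q = m * L for water.
L_FUSION_WATER = 334.0         # kJ/kg (typical textbook value)
L_VAPORIZATION_WATER = 2257.0  # kJ/kg at 100 degC (typical textbook value)

def latent_heat_energy(mass_kg, specific_latent_heat_kj_per_kg):
    """Energy (kJ) absorbed or released in the phase change of mass_kg."""
    return mass_kg * specific_latent_heat_kj_per_kg

print(latent_heat_energy(2.0, L_FUSION_WATER))        # 668 kJ to melt 2 kg of ice
print(latent_heat_energy(2.0, L_VAPORIZATION_WATER))  # 4514 kJ to boil 2 kg of water
```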
Table of specific latent heats
The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases.
Specific latent heat for condensation of water in clouds
The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function:
$$L_{\text{water}}(T) \approx \left(2500.8 - 2.36\,T + 0.0016\,T^2 - 0.00006\,T^3\right)\ \text{kJ/kg},$$
where the temperature $T$ is taken to be the numerical value in °C.
For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function:
$$L_{\text{ice}}(T) \approx \left(2834.1 - 0.29\,T - 0.004\,T^2\right)\ \text{kJ/kg}.$$
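The fits above can be evaluated with a few lines of code. The coefficients are simply those in the formulas quoted above, and the functions are approximations valid only over the stated temperature ranges.

```python
# Evaluate the empirical latent-heat fits quoted above (kJ/kg, T in degC).
def latent_heat_condensation(t_celsius):
    """Approximate latent heat of condensation of water, valid -25..40 degC."""
    t = t_celsius
    return 2500.8 - 2.36 * t + 0.0016 * t**2 - 0.00006 * t**3

def latent_heat_sublimation(t_celsius):
    """Approximate latent heat of sublimation/deposition of ice, -40..0 degC."""
    t = t_celsius
    return 2834.1 - 0.29 * t - 0.004 * t**2

print(latent_heat_condensation(0.0))    # ~2500.8 kJ/kg
print(latent_heat_sublimation(-10.0))   # ~2836.6 kJ/kg
```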
Variation with temperature (or pressure)
As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
See also
Bowen ratio
Eddy covariance flux (eddy correlation, eddy flux)
Sublimation (physics)
Specific heat capacity
Enthalpy of fusion
Enthalpy of vaporization
Ton of refrigeration – the power required to freeze or melt 2000 lb of water in 24 hours
Notes
References
Thermochemistry
Atmospheric thermodynamics
Thermodynamics
Physical phenomena | Latent heat | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,280 | [
"Thermochemistry",
"Physical phenomena",
"Thermodynamics",
"Dynamical systems"
] |
164,631 | https://en.wikipedia.org/wiki/Wavenumber%E2%80%93frequency%20diagram | A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots are also called f–k plots, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
Origins
In general, the relationship between wavelength $\lambda$, frequency $f$, and the phase velocity $v_p$ of a sinusoidal wave is:
$$v_p = \lambda f$$
Using the wavenumber ($k = 2\pi/\lambda$) and angular frequency ($\omega = 2\pi f$) notation, the previous equation can be rewritten as
$$v_p = \frac{\omega}{k}$$
On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram:
$$v_g = \frac{\partial \omega}{\partial k}$$
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
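As an illustration of reading phase and group velocity off such a diagram, the sketch below assumes deep-water surface gravity waves, whose dispersion relation is $\omega = \sqrt{gk}$; the wavenumber range and the use of a numerical derivative are arbitrary choices for the example.

```python
# Phase and group velocity from an assumed dispersion relation
# (deep-water surface gravity waves: omega = sqrt(g * k)).
import numpy as np

g = 9.81
k = np.linspace(0.01, 1.0, 200)           # wavenumber, rad/m
omega = np.sqrt(g * k)                    # angular frequency, rad/s

phase_velocity = omega / k                # v_p = omega / k
group_velocity = np.gradient(omega, k)    # v_g = d(omega)/dk, slope of the diagram

# For this dispersion relation the group velocity is half the phase velocity.
print(phase_velocity[100], group_velocity[100])
```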
See also
Dispersion relation
References
Wave mechanics
Diagrams | Wavenumber–frequency diagram | [
"Physics"
] | 237 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
164,633 | https://en.wikipedia.org/wiki/Atmospheric%20science | Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric changes (both long- and short-term) that define average climates and their change over time, including climate variability. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System.
Experimental instruments used in atmospheric science include satellites, rocketsondes, radiosondes, weather balloons, radars, and lasers.
The term aerology (from Greek ἀήρ, aēr, "air"; and -λογία, -logia) is sometimes used as an alternative term for the study of Earth's atmosphere; in other definitions, aerology is restricted to the free atmosphere, the region above the planetary boundary layer.
Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann.
Atmospheric chemistry
Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines. Research is increasingly connected with other areas of study such as climatology.
The composition and chemistry of the atmosphere is of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere has been changed by human activity and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, photochemical smog and global warming. Atmospheric chemistry seeks to understand the causes of these problems, and by obtaining a theoretical understanding of them, allow possible solutions to be tested and the effects of changes in government policy evaluated.
Atmospheric dynamics
Atmospheric dynamics is the study of motion systems of meteorological importance, integrating observations at multiple locations and times and theories. Common topics studied include diverse phenomena such as thunderstorms, tornadoes, gravity waves, tropical cyclones, extratropical cyclones, jet streams, and global-scale circulations. The goal of dynamical studies is to explain the observed circulations on the basis of fundamental principles from physics. The objectives of such studies incorporate improving weather forecasting, developing methods for predicting seasonal and interannual climate fluctuations, and understanding the implications of human-induced perturbations (e.g., increased carbon dioxide concentrations or depletion of the ozone layer) on the global climate.
Atmospheric physics
Atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, chemical models, radiation balancing, and energy transfer processes in the atmosphere and underlying oceans and land. In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics, each of which incorporate high levels of mathematics and physics. Atmospheric physics has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments.
In the United Kingdom, atmospheric studies are underpinned by the Meteorological Office. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The U.S. National Astronomy and Ionosphere Center also carries out studies of the high atmosphere.
The Earth's magnetic field and the solar wind interact with the atmosphere, creating the ionosphere, Van Allen radiation belts, telluric currents, and radiant energy.
Climatology
Climatology is a science that draws its more general knowledge from the more specialized disciplines of meteorology, oceanography, geology, and astronomy, which in turn are based on the basic sciences of physics, chemistry, and mathematics. In contrast to meteorology, which studies short-term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns, in relation to atmospheric conditions. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and tries to predict future climate change.
Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere. Related disciplines include astrophysics, atmospheric physics, chemistry, ecology, physical geography, geology, geophysics, glaciology, hydrology, oceanography, and volcanology.
Aeronomy
Aeronomy is the scientific study of the upper atmosphere of the Earth — the atmospheric layers above the stratopause — and corresponding regions of the atmospheres of other planets, where the entire atmosphere may correspond to the Earth's upper atmosphere or a portion of it. A branch of both atmospheric chemistry and atmospheric physics, aeronomy contrasts with meteorology, which focuses on the layers of the atmosphere below the stratopause. In atmospheric regions studied by aeronomers, chemical dissociation and ionization are important phenomena.
Atmospheres on other celestial bodies
All of the Solar System's planets have atmospheres. This is because their gravity is strong enough to keep gaseous particles close to the surface. Larger gas giants are massive enough to keep large amounts of the light gases hydrogen and helium close by, while the smaller planets lose these gases into space. The composition of the Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Much of Mercury's atmosphere has been blasted away by the solar wind. The only moon that has retained a dense atmosphere is Titan. There is a thin atmosphere on Triton, and a trace of an atmosphere on the Moon.
Planetary atmospheres are affected by the varying degrees of energy received from either the Sun or their interiors, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), an Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to possess such a weather system, similar to the Great Red Spot but twice as large.
Hot Jupiters have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides which produce supersonic winds, although the day and night sides of HD 189733b appear to have very similar temperatures, indicating that planet's atmosphere effectively redistributes the star's energy around the planet.
See also
Air pollution
References
External links
Atmospheric fluid dynamics applied to weather maps – Principles such as Advection, Deformation and Vorticity
National Center for Atmospheric Research (NCAR) Archives, documents the history of the atmospheric sciences
Fluid dynamics | Atmospheric science | [
"Chemistry",
"Engineering"
] | 1,549 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
164,901 | https://en.wikipedia.org/wiki/Post-translational%20modification | In molecular biology, post-translational modification (PTM) is the covalent process of changing proteins following protein biosynthesis. PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes, which translate mRNA into polypeptide chains, which may then change to form the mature protein product. PTMs are important components in cell signalling, as for example when prohormones are converted to hormones.
Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N- termini. They can expand the chemical set of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling the enzyme activity and is the most common change after translation. Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation, which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation, often targets a protein or part of a protein attached to the cell membrane.
Other forms of post-translational modification consist of cleaving peptide bonds, as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds.
Some types of post-translational modification are consequences of oxidative stress. Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. Specific amino acid modifications can be used as biomarkers indicating oxidative damage.
Sites that often undergo post-translational modification are those that have a functional group that can serve as a nucleophile in the reaction: the hydroxyl groups of serine, threonine, and tyrosine; the amine forms of lysine, arginine, and histidine; the thiolate anion of cysteine; the carboxylates of aspartate and glutamate; and the N- and C-termini. In addition, although the amide of asparagine is a weak nucleophile, it can serve as an attachment point for glycans. Rarer modifications can occur at oxidized methionines and at some methylene groups in side chains.
Post-translational modification of proteins can be experimentally detected by a variety of techniques, including mass spectrometry, Eastern blotting, and Western blotting.
PTMs involving addition of functional groups
Addition by an enzyme in vivo
Hydrophobic groups for membrane localization
myristoylation (a type of acylation), attachment of myristate, a C14 saturated acid
palmitoylation (a type of acylation), attachment of palmitate, a C16 saturated acid
isoprenylation or prenylation, the addition of an isoprenoid group (e.g. farnesol and geranylgeraniol)
farnesylation
geranylgeranylation
glypiation, glycosylphosphatidylinositol (GPI) anchor formation via an amide bond to C-terminal tail
Cofactors for enhanced enzymatic activity
lipoylation (a type of acylation), attachment of a lipoate (C8) functional group
flavin moiety (FMN or FAD) may be covalently attached
heme C attachment via thioether bonds with cysteines
phosphopantetheinylation, the addition of a 4'-phosphopantetheinyl moiety from coenzyme A, as in fatty acid, polyketide, non-ribosomal peptide and leucine biosynthesis
retinylidene Schiff base formation
Modifications of translation factors
diphthamide formation (on a histidine found in eEF2)
ethanolamine phosphoglycerol attachment (on glutamate found in eEF1α)
hypusine formation (on conserved lysine of eIF5A (eukaryotic) and aIF5A (archaeal))
beta-Lysine addition on a conserved lysine of the elongation factor P (EFP) in most bacteria. EFP is a homolog to eIF5A (eukaryotic) and aIF5A (archaeal) (see above).
Smaller chemical groups
acylation, e.g. O-acylation (esters), N-acylation (amides), S-acylation (thioesters)
acetylation, the addition of an acetyl group, either at the N-terminus of the protein or at lysine residues. The reverse is called deacetylation.
formylation
alkylation, the addition of an alkyl group, e.g. methyl, ethyl
methylation the addition of a methyl group, usually at lysine or arginine residues. The reverse is called demethylation.
amidation at C-terminus. Formed by oxidative dissociation of a C-terminal Gly residue.
amide bond formation
amino acid addition
arginylation, a tRNA-mediation addition
polyglutamylation, covalent linkage of glutamic acid residues to the N-terminus of tubulin and some other proteins. (See tubulin polyglutamylase)
polyglycylation, covalent linkage of one to more than 40 glycine residues to the tubulin C-terminal tail
butyrylation
gamma-carboxylation dependent on Vitamin K
glycosylation, the addition of a glycosyl group to either arginine, asparagine, cysteine, hydroxylysine, serine, threonine, tyrosine, or tryptophan resulting in a glycoprotein. Distinct from glycation, which is regarded as a nonenzymatic attachment of sugars.
O-GlcNAc, addition of N-acetylglucosamine to serine or threonine residues in a β-glycosidic linkage
polysialylation, addition of polysialic acid, PSA, to NCAM
malonylation
hydroxylation: addition of an oxygen atom to the side-chain of a Pro or Lys residue
iodination: addition of an iodine atom to the aromatic ring of a tyrosine residue (e.g. in thyroglobulin)
nucleotide addition such as ADP-ribosylation
phosphate ester (O-linked) or phosphoramidate (N-linked) formation
phosphorylation, the addition of a phosphate group, usually to serine, threonine, and tyrosine (O-linked), or histidine (N-linked)
adenylylation, the addition of an adenylyl moiety, usually to tyrosine (O-linked), or histidine and lysine (N-linked)
uridylylation, the addition of an uridylyl-group (i.e. uridine monophosphate, UMP), usually to tyrosine
propionylation
pyroglutamate formation
S-glutathionylation
S-nitrosylation
S-sulfenylation (aka S-sulphenylation), reversible covalent addition of one oxygen atom to the thiol group of a cysteine residue
S-sulfinylation, normally irreversible covalent addition of two oxygen atoms to the thiol group of a cysteine residue
S-sulfonylation, normally irreversible covalent addition of three oxygen atoms to the thiol group of a cysteine residue, resulting in the formation of a cysteic acid residue
succinylation addition of a succinyl group to lysine
sulfation, the addition of a sulfate group to a tyrosine.
Non-enzymatic modifications in vivo
Examples of non-enzymatic PTMs are glycation, glycoxidation, nitrosylation, oxidation, succination, and lipoxidation.
glycation, the addition of a sugar molecule to a protein without the controlling action of an enzyme.
carbamylation the addition of Isocyanic acid to a protein's N-terminus or the side-chain of Lys.
carbonylation the addition of carbon monoxide to other organic/inorganic compounds.
spontaneous isopeptide bond formation, as found in many surface proteins of Gram-positive bacteria.
Non-enzymatic additions in vitro
biotinylation: covalent attachment of a biotin moiety using a biotinylation reagent, typically for the purpose of labeling a protein.
carbamylation: the addition of Isocyanic acid to a protein's N-terminus or the side-chain of Lys or Cys residues, typically resulting from exposure to urea solutions.
oxidation: addition of one or more Oxygen atoms to a susceptible side-chain, principally of Met, Trp, His or Cys residues. Formation of disulfide bonds between Cys residues.
pegylation: covalent attachment of polyethylene glycol (PEG) using a pegylation reagent, typically to the N-terminus or the side-chains of Lys residues. Pegylation is used to improve the efficacy of protein pharmaceuticals.
Conjugation with other proteins or peptides
ubiquitination, the covalent linkage to the protein ubiquitin.
SUMOylation, the covalent linkage to the SUMO protein (Small Ubiquitin-related MOdifier)
neddylation, the covalent linkage to the Nedd protein
ISGylation, the covalent linkage to the ISG15 protein (Interferon-Stimulated Gene 15)
pupylation, the covalent linkage to the prokaryotic ubiquitin-like protein
Chemical modification of amino acids
citrullination, or deimination, the conversion of arginine to citrulline
deamidation, the conversion of glutamine to glutamic acid or asparagine to aspartic acid
eliminylation, the conversion to an alkene by beta-elimination of phosphothreonine and phosphoserine, or dehydration of threonine and serine
Structural changes
disulfide bridges, the covalent linkage of two cysteine amino acids
lysine-cysteine bridges, the covalent linkage of 1 lysine and 1 or 2 cystine residues via an oxygen atom (NOS and SONOS bridges)
proteolytic cleavage, cleavage of a protein at a peptide bond
isoaspartate formation, via the cyclisation of asparagine or aspartic acid amino-acid residues
racemization
of serine by protein-serine epimerase
of alanine in dermorphin, a frog opioid peptide
of methionine in deltorphin, also a frog opioid peptide
protein splicing, self-catalytic removal of inteins analogous to mRNA processing
Statistics
Common PTMs by frequency
In 2011, statistics of each post-translational modification experimentally and putatively detected have been compiled using proteome-wide information from the Swiss-Prot database. The 10 most common experimentally found modifications were as follows:
Common PTMs by residue
Some common post-translational modifications to specific amino-acid residues are shown below. Modifications occur on the side-chain unless indicated otherwise.
Databases and tools
Protein sequences contain sequence motifs that are recognized by modifying enzymes, and which can be documented or predicted in PTM databases. With the large number of different modifications being discovered, there is a need to document this sort of information in databases. PTM information can be collected through experimental means or predicted from high-quality, manually curated data. Numerous databases have been created, often with a focus on certain taxonomic groups (e.g. human proteins) or other features.
List of resources
PhosphoSitePlus – A database of comprehensive information and tools for the study of mammalian protein post-translational modification
ProteomeScout – A database of proteins and post-translational modifications experimentally
Human Protein Reference Database – A database for different modifications and understand different proteins, their class, and function/process related to disease causing proteins
PROSITE – A database of Consensus patterns for many types of PTM's including sites
RESID – A database consisting of a collection of annotations and structures for PTMs.
iPTMnet – A database that integrates PTM information from several knowledgebases and text mining results.
dbPTM – A database that shows different PTMs and information regarding their chemical components/structures, together with the frequency of each modified amino-acid site
Uniprot has PTM information although that may be less comprehensive than in more specialized databases.
The O-GlcNAc Database - A curated database for protein O-GlcNAcylation and referencing more than 14 000 protein entries and 10 000 O-GlcNAc sites.
Tools
List of software for visualization of proteins and their PTMs
PyMOL – introduce a set of common PTM's into protein models
AWESOME – Interactive tool to see the role of single nucleotide polymorphisms to PTM's
Chimera – Interactive Database to visualize molecules
Case examples
Cleavage and formation of disulfide bridges during the production of insulin
PTM of histones as regulation of transcription: RNA polymerase control by chromatin structure
PTM of RNA polymerase II as regulation of transcription
Cleavage of polypeptide chains as crucial for lectin specificity
See also
Protein targeting
Post-translational regulation
References
External links
Controlled vocabulary of post-translational modifications in Uniprot
List of posttranslational modifications in ExPASy
Browse SCOP domains by PTM — from the dcGO database
Overview and description of commonly used post-translational modification detection techniques
Gene expression
Protein structure
Protein biosynthesis
Cell biology | Post-translational modification | [
"Chemistry",
"Biology"
] | 3,062 | [
"Protein biosynthesis",
"Cell biology",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Structural biology",
"Molecular biology",
"Biochemistry",
"Protein structure"
] |
164,974 | https://en.wikipedia.org/wiki/Hydrography | Hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their change over time, for the primary purpose of safety of navigation and in support of all other marine activities, including economic development, security and defense, scientific research, and environmental protection.
History
The origins of hydrography lay in the making of charts to aid navigation, by individual mariners as they navigated into new waters. These were usually the private property, even closely held secrets, of individuals who used them for commercial or military advantage. As transoceanic trade and exploration increased, hydrographic surveys started to be carried out as an exercise in their own right, and the commissioning of surveys was increasingly done by governments and special hydrographic offices. National organizations, particularly navies, realized that the collection, systematization and distribution of this knowledge gave it great organizational and military advantages. Thus were born dedicated national hydrographic organizations for the collection, organization, publication and distribution of hydrography incorporated into charts and sailing directions.
Prior to the establishment of the United Kingdom Hydrographic Office, Royal Navy captains were responsible for the provision of their own charts. In practice this meant that ships often sailed with inadequate information for safe navigation, and that when new areas were surveyed, the data rarely reached all those who needed it. The Admiralty appointed Alexander Dalrymple as Hydrographer in 1795, with a remit to gather and distribute charts to HM Ships. Within a year existing charts from the previous two centuries had been collated, and the first catalog published. The first chart produced under the direction of the Admiralty, was a chart of Quiberon Bay in Brittany, and it appeared in 1800.
Under Captain Thomas Hurd the department received its first professional guidelines, and the first catalogs were published and made available to the public and to other nations as well. In 1829, Rear-Admiral Sir Francis Beaufort, as Hydrographer, developed the eponymous Scale, and introduced the first official tide tables in 1833 and the first "Notices to Mariners" in 1834. The Hydrographic Office underwent steady expansion throughout the 19th century; by 1855, the Chart Catalogue listed 1,981 charts giving a definitive coverage over the entire world, and produced over 130,000 charts annually, of which about half were sold.
The word hydrography comes from the Ancient Greek ὕδωρ (hydor), "water" and γράφω (graphō), "to write".
Overview
Large-scale hydrography is usually undertaken by national or international organizations which sponsor data collection through precise surveys and publish charts and descriptive material for navigational purposes. The science of oceanography is, in part, an outgrowth of classical hydrography. In many respects the data are interchangeable, but marine hydrographic data will be particularly directed toward marine navigation and safety of that navigation. Marine resource exploration and exploitation is a significant application of hydrography, principally focused on the search for hydrocarbons.
Hydrographical measurements include the tidal, current and wave information of physical oceanography. They include bottom measurements, with particular emphasis on those marine geographical features that pose a hazard to navigation such as rocks, shoals, reefs and other features that obstruct ship passage. Bottom measurements also include collection of the nature of the bottom as it pertains to effective anchoring. Unlike oceanography, hydrography will include shore features, natural and manmade, that aid in navigation. Therefore, a hydrographic survey may include the accurate positions and representations of hills, mountains and even lights and towers that will aid in fixing a ship's position, as well as the physical aspects of the sea and seabed.
Hydrography, mostly for reasons of safety, adopted a number of conventions that have affected its portrayal of the data on nautical charts. For example, hydrographic charts are designed to portray what is safe for navigation, and therefore will usually tend to maintain least depths and occasionally de-emphasize the actual submarine topography that would be portrayed on bathymetric charts. The former are the mariner's tools to avoid accident. The latter are best representations of the actual seabed, as in a topographic map, for scientific and other purposes. Trends in hydrographic practice since c. 2003–2005 have led to a narrowing of this difference, with many more hydrographic offices maintaining "best observed" databases, and then making navigationally "safe" products as required. This has been coupled with a preference for multi-use surveys, so that the same data collected for nautical charting purposes can also be used for bathymetric portrayal.
Even though hydrographic survey data may, in places, be collected in sufficient detail to portray bottom topography, hydrographic charts only show depth information relevant for safe navigation and should not be considered a product that accurately portrays the actual shape of the bottom. The soundings selected from the raw source depth data for placement on the nautical chart are chosen for safe navigation and are biased to show predominantly the shallowest depths that relate to safe navigation. For instance, if there is a deep area that cannot be reached because it is surrounded by shallow water, the deep area may not be shown. The color-filled areas that show different ranges of shallow water are not the equivalent of contours on a topographic map, since they are often drawn seaward of the actual shallowest depth portrayed. A bathymetric chart, by contrast, does show the marine topography accurately. Details covering the above limitations can be found in Part 1 of Bowditch's American Practical Navigator. Another concept that affects safe navigation is the sparsity of detailed depth data from high-resolution sonar systems. In more remote areas, the only available depth information has been collected with lead lines. This collection method drops a weighted line to the bottom at intervals and records the depth, often from a rowboat or sailboat. There is no data between soundings or between sounding lines to guarantee that there is not a hazard, such as a wreck or a coral head, waiting there to ruin a sailor's day. Often, the navigation of the collecting boat does not match today's GPS navigational accuracies. The hydrographic chart will use the best data available and will caveat its nature in a caution note or in the legend of the chart.
A hydrographic survey is quite different from a bathymetric survey in some important respects, particularly in a bias toward least depths due to the safety requirements of the former and geomorphologic descriptive requirements of the latter. Historically, this could include echosoundings being conducted under settings biased toward least depths, but in modern practice hydrographic surveys typically attempt to best measure the depths observed, with the adjustments for navigational safety being applied after the fact.
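The difference between the two products can be illustrated with a toy gridding step. The Python sketch below (made-up sounding data and a hypothetical cell size) keeps the least depth per grid cell, as a navigational product would, alongside the mean depth per cell, which is closer to a bathymetric "best observed" estimate; it is an illustration of the bias described above, not any hydrographic office's actual algorithm.

```python
from collections import defaultdict

def grid_soundings(soundings, cell_size=100.0):
    """Bin (x, y, depth) soundings into square cells of side `cell_size` metres.

    Returns two dictionaries keyed by cell index:
      - 'chart': the shallowest (least) depth per cell, the shoal-biased value
        a navigationally safe product would favour;
      - 'bathy': the mean depth per cell, closer to a best estimate of the
        sea floor for bathymetric portrayal.
    """
    cells = defaultdict(list)
    for x, y, depth in soundings:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(depth)
    chart = {k: min(v) for k, v in cells.items()}            # least depth per cell
    bathy = {k: sum(v) / len(v) for k, v in cells.items()}   # mean depth per cell
    return chart, bathy

# Three made-up soundings that fall in the same 100 m cell.
soundings = [(10.0, 20.0, 18.2), (35.0, 60.0, 17.4), (80.0, 90.0, 19.1)]
chart, bathy = grid_soundings(soundings)
print(chart)  # {(0, 0): 17.4}    -> shoal value kept for the chart
print(bathy)  # {(0, 0): 18.23...} -> average kept for bathymetric work
```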
Hydrography of streams will include information on the stream bed, flows, water quality and surrounding land. Basin or interior hydrography pays special attention to rivers and potable water; if the collected data is not for ship navigation but is intended for scientific use, the work is more commonly called hydrometry or hydrology.
Hydrography of rivers and streams is also an integral part of water management. Most reservoirs in the United States use dedicated stream gauging and rating tables to determine inflows into the reservoir and outflows to irrigation districts, water municipalities and other users of captured water. River and stream hydrographers use handheld and bank-mounted devices to capture the flow rate of moving water through a measured cross-section of the channel.
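As a concrete illustration of the stage-to-discharge conversion mentioned above, the following Python sketch interpolates a discharge value from a small, entirely hypothetical rating table. Real rating curves are usually fitted power laws that are periodically re-calibrated by the gauging authority; linear interpolation between tabulated points is used here only to show the idea.

```python
import bisect

# Hypothetical rating table: stage (m) -> discharge (m^3/s), sorted by stage.
RATING_TABLE = [(0.5, 2.0), (1.0, 9.0), (1.5, 24.0), (2.0, 48.0), (2.5, 82.0)]

def discharge_from_stage(stage_m):
    """Linearly interpolate discharge from a stage reading using the table above."""
    stages = [s for s, _ in RATING_TABLE]
    flows = [q for _, q in RATING_TABLE]
    if not stages[0] <= stage_m <= stages[-1]:
        raise ValueError("stage outside the calibrated range of the rating table")
    i = bisect.bisect_left(stages, stage_m)
    if stages[i] == stage_m:
        return flows[i]
    s0, s1 = stages[i - 1], stages[i]
    q0, q1 = flows[i - 1], flows[i]
    return q0 + (q1 - q0) * (stage_m - s0) / (s1 - s0)

print(discharge_from_stage(1.2))  # 15.0 m^3/s for this made-up table
```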
Equipment
Uncrewed surface vessels (USVs) are commonly used for hydrographic surveys and are often equipped with some form of sonar. Single-beam echosounders, multibeam echosounders, and side-scan sonars are all frequently used in hydrographic applications. The knowledge gained from these surveys aids in disaster planning, port and harbor maintenance, and various other coastal planning activities.
Organizations
Hydrographic services in most countries are carried out by specialized hydrographic offices. The international coordination of hydrographic efforts lies with the International Hydrographic Organization.
The United Kingdom Hydrographic Office is one of the oldest, supplying a wide range of charts covering the globe to other countries, allied military organizations and the public.
In the United States, the hydrographic charting function has been carried out since 1807 by the Office of Coast Survey of the National Oceanic and Atmospheric Administration within the U.S. Department of Commerce and the U.S. Army Corps of Engineers.
See also
Associations focussing on ocean hydrography
International Federation of Hydrographic Societies (formerly The Hydrographic Society)
State Hydrography Service of Georgia
The Hydrographic Society of America
Australasian Hydrographic Society
Associations focussing on river stream and lake hydrography
Australian Hydrographic Association
New Zealand Hydrological Society
American Institute of Hydrology
References
External links
Hydro International (Lemmer, the Netherlands): Hydrographic Information
What is hydrography? National Ocean Service
Hydrography
Hydrology
Physical geography | Hydrography | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,810 | [
"Hydrography",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Environmental engineering"
] |
165,067 | https://en.wikipedia.org/wiki/Watt%20steam%20engine | The Watt steam engine design was an invention of James Watt that became synonymous with steam engines during the Industrial Revolution, and it was many years before significantly new designs began to replace the basic Watt design.
The first steam engines, introduced by Thomas Newcomen in 1712, were of the "atmospheric" design. At the end of the power stroke, the weight of the object being moved by the engine pulled the piston to the top of the cylinder as steam was introduced. Then the cylinder was cooled by a spray of water, which caused the steam to condense, forming a partial vacuum in the cylinder. Atmospheric pressure on the top of the piston pushed it down, lifting the work object. James Watt noticed that it required significant amounts of heat to warm the cylinder back up to the point where steam could enter the cylinder without immediately condensing. When the cylinder was warm enough that it became filled with steam the next power stroke could commence.
Watt realised that the heat needed to warm the cylinder could be saved by adding a separate condensing cylinder. After the power cylinder was filled with steam, a valve was opened to the secondary cylinder, allowing the steam to flow into it and be condensed, which drew the steam from the main cylinder causing the power stroke. The condensing cylinder was water cooled to keep the steam condensing. At the end of the power stroke, the valve was closed so the power cylinder could be filled with steam as the piston moved to the top. The result was the same cycle as Newcomen's design, but without any cooling of the power cylinder which was immediately ready for another stroke.
Watt worked on the design over a period of several years, introducing the condenser, and introducing improvements to practically every part of the design. Notably, Watt performed a lengthy series of trials on ways to seal the piston in the cylinder, which considerably reduced leakage during the power stroke, preventing power loss. All of these changes produced a more reliable design which used half as much coal to produce the same amount of power.
The new design was introduced commercially in 1776, with the first example sold to the Carron Company ironworks. Watt continued working to improve the engine, and in 1781 introduced a system using a sun and planet gear to turn the linear motion of the engines into rotary motion. This made it useful not only in the original pumping role, but also as a direct replacement in roles where a water wheel would have been used previously. This was a key moment in the industrial revolution, since power sources could now be located anywhere instead of, as previously, needing a suitable water source and topography. Watt's partner Matthew Boulton began developing a multitude of machines that made use of this rotary power, developing the first modern industrialized factory, the Soho Foundry, which in turn produced new steam engine designs. Watt's early engines were like the original Newcomen designs in that they used low-pressure steam, and all of the power was produced by atmospheric pressure. When, in the early 1800s, other companies introduced high-pressure steam engines, Watt was reluctant to follow suit due to safety concerns. Wanting to improve on the performance of his engines, Watt began considering the use of higher-pressure steam, as well as designs using multiple cylinders in both the double-acting concept and the multiple-expansion concept. These double-acting engines required the invention of the parallel motion, which allowed the piston rods of the individual cylinders to move in straight lines, keeping the piston true in the cylinder, while the walking beam end moved through an arc, somewhat analogous to a crosshead in later steam engines.
Introduction
In 1698, the English mechanical designer Thomas Savery invented a pumping appliance that used steam to draw water directly from a well by means of a vacuum created by condensing steam. The appliance was also proposed for draining mines, but it could only draw fluid up approximately 25 feet, meaning it had to be located within this distance of the mine floor being drained. As mines became deeper, this was often impractical. It also consumed a large amount of fuel compared with later engines.
The solution to draining deep mines was found by Thomas Newcomen who developed an "atmospheric" engine that also worked on the vacuum principle. It employed a cylinder containing a movable piston connected by a chain to one end of a rocking beam that worked a mechanical lift pump from its opposite end. At the bottom of each stroke, steam was allowed to enter the cylinder below the piston. As the piston rose within the cylinder, drawn upward by a counterbalance, it drew in steam at atmospheric pressure. At the top of the stroke the steam valve was closed, and cold water was briefly injected into the cylinder as a means of cooling the steam. This water condensed the steam and created a partial vacuum below the piston. The atmospheric pressure outside the engine was then greater than the pressure within the cylinder, thereby pushing the piston into the cylinder. The piston, attached to a chain and in turn attached to one end of the "rocking beam", pulled down that end of the beam, lifting the opposite end. Hence the pump deep in the mine, attached to the opposite end of the beam via ropes and chains, was driven. The pump pushed, rather than pulled, the column of water upward, hence it could lift water any distance. Once the piston was at the bottom, the cycle repeated.
The Newcomen engine was more powerful than the Savery engine. For the first time water could be raised from a depth of over 300 feet. The first example from 1712 was able to replace a team of 500 horses that had been used to pump out the mine. Seventy-five Newcomen pumping engines were installed at mines in Britain, France, Holland, Sweden and Russia. In the next fifty years only a few small changes were made to the engine design.
While Newcomen engines brought practical benefits, they were inefficient in terms of the use of energy to power them. The system of alternately sending jets of steam, then cold water into the cylinder meant that the walls of the cylinder were alternately heated, then cooled with each stroke. Each charge of steam introduced would continue condensing until the cylinder approached working temperature once again. So at each stroke part of the potential of the steam was lost.
Separate condenser
In 1763, James Watt was working as an instrument maker at the University of Glasgow when he was assigned the job of repairing a model Newcomen engine and noted how inefficient it was.
In 1765, Watt conceived the idea of equipping the engine with a separate condensation chamber, which he called a "condenser". Because the condenser and the working cylinder were separate, condensation occurred without significant loss of heat from the cylinder. The condenser remained cold and below atmospheric pressure at all times, while the cylinder remained hot at all times.
Steam was drawn from the boiler to the cylinder under the piston. When the piston reached the top of the cylinder, the steam inlet valve closed and the valve controlling the passage to the condenser opened. The condenser being at a lower pressure, drew the steam from the cylinder into the condenser where it cooled and condensed from water vapour to liquid water, maintaining a partial vacuum in the condenser that was communicated to the space of the cylinder by the connecting passage. External atmospheric pressure then pushed the piston down the cylinder.
The separation of the cylinder and condenser eliminated the loss of heat that occurred when steam was condensed in the working cylinder of a Newcomen engine. This gave the Watt engine greater efficiency than the Newcomen engine, reducing the amount of coal consumed while doing the same amount of work as a Newcomen engine.
In Watt's design, the cold water was injected only into the condensation chamber. This type of condenser is known as a jet condenser. The condenser is located in a cold water bath below the cylinder. The volume of water entering the condenser as spray absorbed the latent heat of the steam, and was determined to be seven times the volume of the condensed steam. The condensate and the injected water were then removed by the air pump, and the surrounding cold water served to absorb the remaining thermal energy to retain a condenser temperature of 30 °C to 45 °C and the equivalent pressure of 0.04 to 0.1 bar.
At each stroke the warm condensate was drawn off from the condenser and sent to a hot well by a vacuum pump, which also helped to evacuate the steam from under the power cylinder. The still-warm condensate was recycled as feedwater for the boiler.
Watt's next improvement to the Newcomen design was to seal the top of the cylinder and surround the cylinder with a jacket. Steam was passed through the jacket before being admitted below the piston, keeping the piston and cylinder warm to prevent condensation within it. The second improvement was the utilisation of steam expansion against the vacuum on the other side of the piston. The steam supply was cut during the stroke, and the steam expanded against the vacuum on the other side. This increased the efficiency of the engine, but also created a variable torque on the shaft which was undesirable for many applications, in particular pumping. Watt therefore limited the expansion to a ratio of 1:2 (i.e. the steam supply was cut at half stroke). This increased the theoretical efficiency from 6.4% to 10.6%, with only a small variation in piston pressure. Watt did not use high pressure steam because of safety concerns.
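The effect of cutting off the steam supply part-way through the stroke can be shown with a standard idealised calculation (not Watt's own working). Assuming the steam expands hyperbolically (pV roughly constant) after cut-off at a fraction 1/r of the stroke, and works against a condenser back-pressure, the work per stroke is

```latex
% p_b : admission (boiler) pressure,  p_c : condenser back-pressure,
% V   : full cylinder volume,         r   : expansion ratio (cut-off at V/r)
W \;=\; \underbrace{p_b\,\frac{V}{r}}_{\text{admission}}
    \;+\; \underbrace{p_b\,\frac{V}{r}\,\ln r}_{\text{expansion}}
    \;-\; \underbrace{p_c\,V}_{\text{back-pressure}}
  \;=\; p_b\,\frac{V}{r}\,\bigl(1 + \ln r\bigr) \;-\; p_c\,V .
```

Since the steam admitted per stroke also scales with V/r, the work obtained per unit of steam grows by roughly the factor 1 + ln r, about 1.69 for r = 2, broadly consistent with the quoted rise in theoretical efficiency from 6.4% to 10.6%.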
These improvements led to the fully developed version of 1776 that actually went into production.
The partnership of Matthew Boulton and James Watt
The separate condenser showed dramatic potential for improvements on the Newcomen engine but Watt was still discouraged by seemingly insurmountable problems before a marketable engine could be perfected. It was only after entering into partnership with Matthew Boulton that such became reality. Watt told Boulton about his ideas on improving the engine, and Boulton, an avid entrepreneur, agreed to fund development of a test engine at Soho, near Birmingham. At last Watt had access to facilities and the practical experience of craftsmen who were soon able to get the first engine working. As fully developed, it used about 75% less fuel than a similar Newcomen one.
In 1775, Watt designed two large engines: one for the Bloomfield Colliery at Tipton, completed in March 1776, and one for John Wilkinson's ironworks at Broseley in Shropshire, which was at work the following month. A third engine, at Stratford-le-Bow in east London, was also working that summer.
Watt had tried unsuccessfully for several years to obtain an accurately bored cylinder for his steam engines, and was forced to use hammered iron, which was out of round and caused leakage past the piston. Joseph Wickham Roe stated in 1916: "When [John] Smeaton saw the first engine he reported to the Society of Engineers that 'Neither the tools nor the workmen existed who could manufacture such a complex machine with sufficient precision.'"
In 1774, John Wilkinson invented a boring machine in which the shaft that held the cutting tool was supported on both ends and extended through the cylinder, unlike the cantilevered borers then in use. Boulton wrote in 1776 that "Mr. Wilkinson has bored us several cylinders almost without error; that of 50 inches diameter, which we have put up at Tipton, does not err on the thickness of an old shilling in any part".
Boulton and Watt's practice was to help mine-owners and other customers to build engines, supplying men to erect them and some specialised parts. However, their main profit from their patent was derived from charging a licence fee to the engine owners, based on the cost of the fuel they saved. The greater fuel efficiency of their engines meant that they were most attractive in areas where fuel was expensive, particularly Cornwall, for which three engines were ordered in 1777, for the Wheal Busy, Ting Tang, and Chacewater mines.
Later improvements
The first Watt engines were atmospheric pressure engines, like the Newcomen engine but with the condensation taking place separate from the cylinder. Driving the engines using both low pressure steam and a partial vacuum raised the possibility of reciprocating engine development. An arrangement of valves could alternately admit low pressure steam to the cylinder and then connect with the condenser. Consequently, the direction of the power stroke might be reversed, making it easier to obtain rotary motion. Additional benefits of the double acting engine were increased efficiency, higher speed (greater power) and more regular motion.
Before the development of the double acting piston, the linkage to the beam and the piston rod had been by means of a chain, which meant that power could only be applied in one direction, by pulling. This was effective in engines that were used for pumping water, but the double action of the piston meant that it could push as well as pull. This was not possible as long as the beam and the rod were connected by a chain. Furthermore, it was not possible to connect the piston rod of the sealed cylinder directly to the beam, because while the rod moved vertically in a straight line, the beam was pivoted at its centre, with each side inscribing an arc. To bridge the conflicting actions of the beam and the piston, Watt developed his parallel motion. This device used a four bar linkage coupled with a pantograph to produce the required straight line motion much more cheaply than if he had used a slider type of linkage. He was very proud of his solution.
Having the beam connected to the piston shaft by a means that applied force alternately in both directions also meant that it was possible to use the motion of the beam to turn a wheel. The simplest solution to transforming the action of the beam into a rotating motion was to connect the beam to a wheel by a crank, but because another party had patent rights on the use of the crank, Watt was obliged to come up with another solution. He adopted the epicyclic sun and planet gear system suggested by an employee William Murdoch, only later reverting, once the patent rights had expired, to the more familiar crank seen on most engines today. The main wheel attached to the crank was large and heavy, serving as a flywheel which, once set in motion, by its momentum maintained a constant power and smoothed the action of the alternating strokes. To its rotating central shaft, belts and gears could be attached to drive a great variety of machinery.
Because factory machinery needed to operate at a constant speed, Watt linked a steam regulator valve to a centrifugal governor which he adapted from those used to automatically control the speed of windmills. The centrifugal governor was not a true speed controller, because it could not hold a set speed in response to a change in load.
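A small numerical sketch (hypothetical numbers, not a model of Watt's actual governor) shows why a purely proportional centrifugal governor behaves this way: at equilibrium the valve opening, and hence the speed error, must grow with the load, so the engine settles below the set speed by a load-dependent amount.

```python
def governor_steady_speed(load_torque, set_speed=100.0, gain=0.8, torque_per_valve=1.0):
    """Crude proportional-governor model.

    Valve opening = gain * (set_speed - speed); engine torque = torque_per_valve
    * valve opening. At steady state the engine torque equals the load torque,
    so the speed settles below the set point by load_torque / (gain * torque_per_valve)
    -- the "droop" that keeps a proportional-only governor from holding an exact
    speed as the load changes.
    """
    return set_speed - load_torque / (gain * torque_per_valve)

for load in (10.0, 20.0, 40.0):
    print(load, governor_steady_speed(load))  # heavier load -> lower steady speed
```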
These improvements allowed the steam engine to replace the water wheel and horses as the main sources of power for British industry, thereby freeing it from geographical constraints and becoming one of the main drivers in the Industrial Revolution.
Watt was also concerned with fundamental research on the functioning of the steam engine. His most notable measuring device, still in use today, is the Watt indicator incorporating a manometer to measure steam pressure within the cylinder according to the position of the piston, enabling a diagram to be produced representing the pressure of the steam as a function of its volume throughout the cycle.
Preserved Watt engines
The oldest surviving Watt engine is Old Bess of 1777, now in the Science Museum, London.
The oldest working engine in the world is the Smethwick Engine, brought into service in May 1779 and now at Thinktank in Birmingham (formerly at the now defunct Museum of Science and Industry, Birmingham).
The oldest still in its original engine house and still capable of doing the job for which it was installed is the 1812 Boulton and Watt engine at the Crofton Pumping Station in Wiltshire. This was used to pump water for the Kennet and Avon Canal; on certain weekends throughout the year the modern pumps are switched off and the two steam engines at Crofton still perform this function.
The oldest extant rotative steam engine, the Whitbread Engine (from 1785, the third rotative engine ever built), is located in the Powerhouse Museum in Sydney, Australia.
A Boulton-Watt engine of 1788 may be found in the Science Museum, London, while an 1817 blowing engine, formerly used at the Netherton ironworks of M W Grazebrook now decorates Dartmouth Circus, a traffic island at the start of the A38(M) motorway in Birmingham.
The Henry Ford Museum in Dearborn, Michigan houses a replica of a 1788 Watt rotative engine. It is a full-scale working model of a Boulton-Watt engine. The American industrialist Henry Ford commissioned the replica engine from the English manufacturer Charles Summerfield in 1932. The museum also holds an original Boulton and Watt atmospheric pump engine, originally used for canal pumping in Birmingham; it was in use in situ at the Bowyer Street pumping station from 1796 until 1854, and was removed to Dearborn in 1929.
Another is preserved at the Fumel factory in France.
Watt engine produced by Hathorn, Davey and Co
In the 1880s, Hathorn Davey and Co of Leeds produced a 1 hp / 125 rpm atmospheric engine with an external condenser but without steam expansion. It has been argued that this was probably the last commercial atmospheric engine to be manufactured. As an atmospheric engine, it did not have a pressurized boiler. It was intended for small businesses.
Recent developments
Watt's Expansion Engine is generally considered to be of historic interest only. There are, however, some recent developments which may lead to a renaissance of the technology. Today, there is an enormous amount of waste steam and waste heat with temperatures between 100 and 150 °C generated by industry. In addition, solar thermal collectors, geothermal energy sources and biomass reactors produce heat in this temperature range. There are technologies to utilise this energy, in particular the Organic Rankine Cycle (ORC). In principle, these are steam turbines which do not use water but a fluid (a refrigerant) which evaporates at temperatures below 100 °C. Such systems are however fairly complex. They work with pressures of 6 to 20 bar, so the whole system has to be completely sealed.
The Expansion Engine can offer significant advantages here, in particular for lower power ratings of 2 to 100 kW: with expansion ratios of 1:5, the theoretical efficiency reaches 15%, which is in the range of ORC systems. The Expansion Engine uses water as the working fluid, which is simple, cheap, non-toxic, non-flammable and non-corrosive. It works at pressures near and below atmospheric, so that sealing is not a problem. And it is a simple machine, implying cost effectiveness. Researchers from the University of Southampton, UK, are currently developing a modern version of Watt's engine in order to generate energy from waste steam and waste heat. They improved the theory, demonstrating that theoretical efficiencies of up to 17.4% (and actual efficiencies of 11%) are possible.
In order to demonstrate the principle, a 25 watt experimental model engine was built and tested. The engine incorporates steam expansion as well as new features such as electronic control. The picture shows the model built and tested in 2016. Currently, a project to build and test a scaled-up 2 kW engine is under preparation.
See also
Carnot cycle
Corliss steam engine
Heat engine
Thermodynamics
Preserved beam engines
Ivan Polzunov made a dual-piston steam engine in 1766, but died before he could mass-produce it
References
External links
Watt atmospheric engine – Michigan State University, Chemical Engineering
Watt's 'perfect engine' – excerpts from Transactions of the Newcomen Society.
Boulton & Watt engine at the National Museum of Scotland
Boulton and Watt Steam Engine at the Powerhouse Museum, Sydney
James Watt Steam Engine Act on the UK Parliament website
Industrial Revolution
Scottish inventions
Steam Engine
History of the steam engine
Beam engines
Stationary steam engines
Thermodynamics | Watt steam engine | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,107 | [
"Thermodynamics",
"Dynamical systems"
] |
165,180 | https://en.wikipedia.org/wiki/Software%20configuration%20management | Software configuration management (SCM), a.k.a.
software change and configuration management (SCCM), is the software engineering practice of tracking and controlling changes to a software system; part of the larger cross-disciplinary field of configuration management (CM). SCM includes version control and the establishment of baselines.
Goals
The goals of SCM include:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitating team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as virtual machines and saved with state and version. The tools can model and manage cloud-based virtual resources, including virtual appliances, storage units, and software bundles. The roles and responsibilities of the actors have become merged as well with developers now being able to dynamically instantiate virtual servers and related resources.
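As a toy illustration of configuration identification and configuration auditing (using invented item names and version strings, not any particular SCM tool's data model), a baseline can be modelled as a set of configuration items with recorded versions, and an audit as a comparison of that baseline against what is currently in use:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigurationItem:
    name: str      # e.g. a source file, document, or build script
    version: str   # identifier recorded by the version-control tool

def audit(baseline, current):
    """Compare a recorded baseline against the items currently in use and
    report anything added, removed, or changed since the baseline was cut."""
    base = {ci.name: ci.version for ci in baseline}
    curr = {ci.name: ci.version for ci in current}
    return {
        "added":   sorted(set(curr) - set(base)),
        "removed": sorted(set(base) - set(curr)),
        "changed": sorted(n for n in set(base) & set(curr) if base[n] != curr[n]),
    }

baseline_1_0 = [ConfigurationItem("parser.c", "r101"), ConfigurationItem("build.sh", "r40")]
workspace    = [ConfigurationItem("parser.c", "r105"), ConfigurationItem("lexer.c", "r7")]
print(audit(baseline_1_0, workspace))
# {'added': ['lexer.c'], 'removed': ['build.sh'], 'changed': ['parser.c']}
```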
History
Examples
See also
References
Further reading
Aiello, R. (2010). Configuration Management Best Practices: Practical Methods that Work in the Real World (1st ed.). Addison-Wesley.
Babich, W.A. (1986). Software Configuration Management, Coordination for Team Productivity. 1st edition. Boston: Addison-Wesley
Berczuk, Appleton (2003). Software Configuration Management Patterns: Effective TeamWork, Practical Integration (1st ed.). Addison-Wesley.
Bersoff, E.H. (1997). Elements of Software Configuration Management. IEEE Computer Society Press, Los Alamitos, CA, 1-32
Dennis, A., Wixom, B.H. & Tegarden, D. (2002). System Analysis & Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc.
Department of Defense, USA (2001). Military Handbook: Configuration management guidance (rev. A) (MIL-HDBK-61A). Retrieved January 5, 2010, from http://www.everyspec.com/MIL-HDBK/MIL-HDBK-0001-0099/MIL-HDBK-61_11531/
Futrell, R.T. et al. (2002). Quality Software Project Management. 1st edition. Prentice-Hall.
International Organization for Standardization (2003). ISO 10007: Quality management systems – Guidelines for configuration management.
Saeki M. (2003). Embedding Metrics into Information Systems Development Methods: An Application of Method Engineering Technique. CAiSE 2003, 374–389.
Scott, J.A. & Nisse, D. (2001). Software configuration management. In: Guide to Software Engineering Body of Knowledge. Retrieved January 5, 2010, from http://www.computer.org/portal/web/swebok/htmlformat
Paul M. Duvall, Steve Matyas, and Andrew Glover (2007). Continuous Integration: Improving Software Quality and Reducing Risk (1st ed.). Addison-Wesley Professional.
External links
SCM and ISO 9001 by Robert Bamford and William Deibler, SSQC
Use Cases and Implementing Application Lifecycle Management
Parallel Development Strategies for Software Configuration Management
Configuration management
Software engineering
IEEE standards
Types of tools used in software development | Software configuration management | [
"Technology",
"Engineering"
] | 835 | [
"Systems engineering",
"Computer engineering",
"Computer standards",
"Configuration management",
"Software engineering",
"Information technology",
"IEEE standards"
] |
165,423 | https://en.wikipedia.org/wiki/Digestion | Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion starts with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food. The saliva also contains mucus, which lubricates the food; the electrolyte hydrogencarbonate (HCO3−), which provides the ideal conditions of pH for amylase to work; and other electrolytes such as sodium, potassium and chloride ions. About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damaging effects of chemicals like concentrated hydrochloric acid while also aiding lubrication. Hydrochloric acid provides the acidic pH for pepsin. While protein digestion is occurring, mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Pepsin breaks down proteins into peptides or proteoses, which are further broken down into dipeptides and amino acids by enzymes in the small intestine. Studies suggest that increasing the number of chews per bite increases relevant gut hormones and may decrease self-reported hunger and food intake.
When the pyloric sphincter valve opens, partially digested food (chyme) enters the duodenum where it mixes with digestive enzymes from the pancreas and bile from the liver and then passes through the small intestine, in which digestion continues. When the chyme is fully digested, it is absorbed into the blood. 95% of nutrient absorption occurs in the small intestine. Water and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic (about 5.6 to 6.9). Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon, are also absorbed into the blood there. Absorption of water, simple sugars and alcohol also takes place in the stomach. Waste material (feces) is eliminated from the rectum during defecation.
Digestive system
Digestive systems take many forms. There is a fundamental distinction between internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be captured, and the internal chemical environment can be more efficiently controlled.
Some organisms, including nearly all spiders, secrete biotoxins and digestive chemicals (e.g., enzymes) into the extracellular environment prior to ingestion of the consequent "soup". In others, once potential nutrients or food is inside the organism, digestion can be conducted to a vesicle or a sac-like structure, through a tube, or through several specialized organs aimed at making the absorption of nutrients more efficient.
Secretion systems
Bacteria use several systems to obtain nutrients from other organisms in the environments.
Channel transport system
In a channel transport system, several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer membrane protein. This secretion system transports various chemical species, from ions and drugs to proteins of various sizes (20–900 kDa). The chemical species secreted vary in size from the small Escherichia coli peptide colicin V (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa.
Molecular syringe
A type III secretion system means that a molecular syringe is used through which a bacterium (e.g. certain types of Salmonella, Shigella, Yersinia) can inject nutrients into protist cells. One such mechanism was first discovered in Y. pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than be secreted into the extracellular medium.
Conjugation machinery
The conjugation machinery of some bacteria (and archaeal flagella) is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens, which uses this system to introduce the Ti plasmid and proteins into the host, which develops the crown gall (tumor). The VirB complex of Agrobacterium tumefaciens is the prototypic system.
In the nitrogen-fixing Rhizobia, conjugative elements naturally engage in inter-kingdom conjugation. Such elements as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
The Ti and Ri plasmids are themselves conjugative. Ti and Ri transfer between bacteria uses an independent system (the tra, or transfer, operon) from that for inter-kingdom transfer (the vir, or virulence, operon). Such transfer creates virulent strains from previously avirulent Agrobacteria.
Release of outer membrane vesicles
In addition to the use of the multiprotein complexes listed above, gram-negative bacteria possess another method for release of material: the formation of outer membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective.
Gastrovascular cavity
The gastrovascular cavity functions as a stomach in both digestion and the distribution of nutrients to all parts of the body. Extracellular digestion takes place within this central cavity, which is lined with the gastrodermis, the internal layer of epithelium. This cavity has only one opening to the outside that functions as both a mouth and an anus: waste and undigested matter is excreted through the mouth/anus, which can be described as an incomplete gut.
In a plant such as the Venus flytrap that can make its own food through photosynthesis, it does not eat and digest its prey for the traditional objectives of harvesting energy and carbon, but mines prey primarily for essential nutrients (nitrogen and phosphorus in particular) that are in short supply in its boggy, acidic habitat.
Phagosome
A phagosome is a vacuole formed around a particle absorbed by phagocytosis. The vacuole is formed by the fusion of the cell membrane around the particle. A phagosome is a cellular compartment in which pathogenic microorganisms can be killed and digested. Phagosomes fuse with lysosomes in their maturation process, forming phagolysosomes. In humans, Entamoeba histolytica can phagocytose red blood cells.
Specialised organs and behaviours
To aid in the digestion of their food, animals evolved organs such as beaks, tongues, radulae, teeth, crops, gizzards, and others.
Beaks
Birds have bony beaks that are specialised according to the bird's ecological niche. For example, macaws primarily eat seeds, nuts, and fruit, using their beaks to open even the toughest seed. First they scratch a thin line with the sharp point of the beak, then they shear the seed open with the sides of the beak.
The mouth of the squid is equipped with a sharp horny beak mainly made of cross-linked proteins. It is used to kill and tear prey into manageable pieces. The beak is very robust, but does not contain any minerals, unlike the teeth and jaws of many other organisms, including marine species. The beak is the only indigestible part of the squid.
Tongue
The tongue is skeletal muscle on the floor of the mouth of most vertebrates, that manipulates food for chewing (mastication) and swallowing (deglutition). It is sensitive and kept moist by saliva. The underside of the tongue is covered with a smooth mucous membrane. The tongue also has a touch sense for locating and positioning food particles that require further chewing. The tongue is used to roll food particles into a bolus before being transported down the esophagus through peristalsis.
The sublingual region underneath the front of the tongue is a location where the oral mucosa is very thin, and underlain by a plexus of veins. This is an ideal location for introducing certain medications to the body. The sublingual route takes advantage of the highly vascular quality of the oral cavity, and allows for the speedy application of medication into the cardiovascular system, bypassing the gastrointestinal tract.
Teeth
Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used to tear, scrape and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness, such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception, the ability of sensation when chewing: for example, biting into something too hard for the teeth, such as a chip of plate mixed into food, sends a signal to the brain that the item cannot be chewed, and chewing stops.
The shapes, sizes and numbers of types of animals' teeth are related to their diets. For example, herbivores have a number of molars which are used to grind plant matter, which is difficult to digest. Carnivores have canine teeth which are used to kill and tear meat.
Crop
A crop, or croup, is a thin-walled expanded portion of the alimentary tract used for the storage of food prior to digestion. In some birds it is an expanded, muscular pouch near the gullet or throat. In adult doves and pigeons, the crop can produce crop milk to feed newly hatched birds.
Certain insects may have a crop or enlarged esophagus.
Abomasum
Herbivores have evolved cecums (or an abomasum in the case of ruminants). Ruminants have a fore-stomach with four chambers. These are the rumen, reticulum, omasum, and abomasum. In the first two chambers, the rumen and the reticulum, the food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud (or bolus). The cud is then regurgitated, chewed slowly to completely mix it with saliva and to break down the particle size.
Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.
The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the small intestine.
Specialised behaviours
Regurgitation has been mentioned above under abomasum and crop, referring to crop milk, a secretion from the lining of the crop of pigeons and doves with which the parents feed their young by regurgitation.
Many sharks have the ability to turn their stomachs inside out and evert it out of their mouths in order to get rid of unwanted contents (perhaps developed as a way to reduce exposure to toxins).
Other animals, such as rabbits and rodents, practise coprophagia behaviours – eating specialised faeces in order to re-digest food, especially in the case of roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce normal droppings, which are not eaten.
Young elephants, pandas, koalas, and hippos eat the faeces of their mother, probably to obtain the bacteria required to properly digest vegetation. When they are born, their intestines do not contain these bacteria (they are completely sterile). Without them, they would be unable to get any nutritional value from many plant components.
In earthworms
An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine. The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of the gizzard add enzymes to the thick paste, which helps chemically breakdown the organic matter. By peristalsis, the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates, protein, fat, and various vitamins and minerals for absorption into the body.
Overview of vertebrate digestion
In most vertebrates, digestion is a multistage process in the digestive system, starting from ingestion of raw materials, most often other organisms. Ingestion usually involves some type of mechanical and chemical processing. Digestion is separated into four steps:
Ingestion: placing food into the mouth (entry of food in the digestive system),
Mechanical and chemical breakdown: mastication and the mixing of the resulting bolus with water, acids, bile and enzymes in the stomach and intestine to break down complex chemical species into simple structures,
Absorption: of nutrients from the digestive system to the circulatory and lymphatic capillaries through osmosis, active transport, and diffusion, and
Egestion (Excretion): Removal of undigested materials from the digestive tract through defecation.
Underlying the process is muscle movement throughout the system through swallowing and peristalsis. Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws and musculature, different dentition, length of intestines, cooking, etc.).
The major part of digestion takes place in the small intestine. The large intestine primarily serves as a site for fermentation of indigestible matter by gut bacteria and for resorption of water from digests before excretion.
In mammals, preparation for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and the process finishes with defecation.
Human digestion process
The human gastrointestinal tract is several metres long. Food digestion physiology varies between individuals and with other factors such as the characteristics of the food and the size of the meal, and the process of digestion normally takes between 24 and 72 hours.
Digestion begins in the mouth with the secretion of saliva and its digestive enzymes. Food is formed into a bolus by the mechanical mastication and swallowed into the esophagus from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric acid and pepsin which could damage the stomach lining, but mucus and bicarbonates are secreted for protection. In the stomach further release of enzymes break down the food further and this is combined with the churning action of the stomach. Mainly proteins are digested in stomach. The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger part of digestion takes place and this is helped by the secretions of bile, pancreatic juice and intestinal juice. The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve the absorption of nutrients by increasing the surface area of the intestine. Bile helps in emulsification of fats and also activates lipases.
In the large intestine, the passage of food is slower to enable fermentation by the gut flora to take place. Here, water is absorbed and waste material stored as feces to be removed by defecation via the anal canal and anus.
Neural and biochemical control mechanisms
Different phases of digestion take place including: the cephalic phase, gastric phase, and intestinal phase.
The cephalic phase occurs at the sight, thought and smell of food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata. After this it is routed through the vagus nerve and release of acetylcholine. Gastric secretion at this phase rises to 40% of maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal (secretes acid) and G cell (secretes gastrin) activity via D cell secretion of somatostatin.
The gastric phase takes 3 to 4 hours. It is stimulated by distension of the stomach, presence of food in stomach and decrease in pH. Distention activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach. Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid, which lowers the pH to the desired pH of 1–3. Acid release is also triggered by acetylcholine and histamine.
The intestinal phase has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric sphincter to tighten to prevent more food from entering, and inhibits local reflexes.
Breakdown into nutrients
Protein digestion
Protein digestion occurs in the stomach and duodenum in which 3 main enzymes, pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive enzymes however are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted by pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin then cleaves proteins to smaller polypeptides.
Fat digestion
Digestion of some fats can begin in the mouth where lingual lipase breaks down some short chain lipids into diglycerides. However, fats are mainly digested in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of pancreatic lipase from the pancreas and bile from the liver, which helps in the emulsification of fats for absorption of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids, mono- and di-glycerides, but no glycerol.
Carbohydrate digestion
In humans, dietary starches are composed of glucose units arranged in long chains called amylose, a polysaccharide. During digestion, bonds between glucose molecules are broken by salivary and pancreatic amylase, resulting in progressively smaller chains of glucose. This results in simple sugars glucose and maltose (2 glucose molecules) that can be absorbed by the small intestine.
Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.
Sucrase is an enzyme that breaks down the disaccharide sucrose, commonly known as table sugar, cane sugar, or beet sugar. Sucrose digestion yields the sugars fructose and glucose which are readily absorbed by the small intestine.
DNA and RNA digestion
DNA and RNA are broken down into mononucleotides by the nucleases deoxyribonuclease and ribonuclease (DNase and RNase) from the pancreas.
Non-destructive digestion
Some nutrients are complex molecules (for example vitamin B12) which would be destroyed if they were broken down into their functional groups. To digest vitamin B12 non-destructively, haptocorrin in saliva strongly binds and protects the B12 molecules from stomach acid as they enter the stomach and are cleaved from their protein complexes.
After the B12-haptocorrin complexes pass from the stomach via the pylorus to the duodenum, pancreatic proteases cleave haptocorrin from the B12 molecules which rebind to intrinsic factor (IF). These B12-IF complexes travel to the ileum portion of the small intestine where cubilin receptors enable assimilation and circulation of B12-IF complexes in the blood.
Digestive hormones
There are at least five hormones that aid and regulate the digestive system in mammals. There are variations across the vertebrates, as for instance in birds. Arrangements are complex and additional details are regularly discovered. Connections to metabolic control (largely the glucose-insulin system) have been uncovered.
Gastrin – is in the stomach and stimulates the gastric glands to secrete pepsinogen (an inactive form of the enzyme pepsin) and hydrochloric acid. Secretion of gastrin is stimulated by food arriving in stomach. The secretion is inhibited by low pH.
Secretin – is in the duodenum and signals the secretion of sodium bicarbonate in the pancreas and it stimulates the bile secretion in the liver. This hormone responds to the acidity of the chyme.
Cholecystokinin (CCK) – is in the duodenum and stimulates the release of digestive enzymes in the pancreas and stimulates the emptying of bile in the gall bladder. This hormone is secreted in response to fat in chyme.
Gastric inhibitory peptide (GIP) – is in the duodenum and decreases the stomach churning in turn slowing the emptying in the stomach. Another function is to induce insulin secretion.
Motilin – is in the duodenum and increases the migrating myoelectric complex component of gastrointestinal motility and stimulates the production of pepsin.
Significance of pH
Digestion is a complex process controlled by several factors. pH plays a crucial role in a normally functioning digestive tract. In the mouth, pharynx and esophagus, pH is typically about 6.8, very weakly acidic. Saliva controls pH in this region of the digestive tract. Salivary amylase is contained in saliva and starts the breakdown of carbohydrates into monosaccharides. Most digestive enzymes are sensitive to pH and will denature in a high or low pH environment.
The stomach's high acidity inhibits the breakdown of carbohydrates within it. This acidity confers two benefits: it denatures proteins for further digestion in the small intestines, and provides non-specific immunity, damaging or eliminating various pathogens.
In the small intestines, the duodenum provides critical pH balancing to activate digestive enzymes. The liver secretes bile into the duodenum to neutralize the acidic conditions from the stomach, and the pancreatic duct empties into the duodenum, adding bicarbonate to neutralize the acidic chyme, thus creating a neutral environment. The mucosal tissue of the small intestines is alkaline with a pH of about 8.5.
See also
Digestive system of gastropods
Digestive system of humpback whales
Evolution of the mammalian digestive system
Discovery and development of proton pump inhibitors
Erepsin
Gastroesophageal reflux disease
References
External links
Human Physiology – Digestion
NIH guide to digestive system
The Digestive System
How does the Digestive System Work?
Digestive system
Metabolism | Digestion | [
"Chemistry",
"Biology"
] | 6,046 | [
"Digestive system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
166,010 | https://en.wikipedia.org/wiki/Vorticity | In continuum mechanics, vorticity is a pseudovector (or axial vector) field that describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. It is an important quantity in the dynamical theory of fluids and provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion of vortex rings.
Mathematically, the vorticity $\vec{\omega}$ is the curl of the flow velocity $\vec{v}$:
$$\vec{\omega} = \nabla \times \vec{v},$$
where $\nabla$ is the nabla (del) operator. Conceptually, $\vec{\omega}$ could be determined by marking parts of a continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. By its own definition, the vorticity vector is a solenoidal field, since $\nabla \cdot \vec{\omega} = \nabla \cdot (\nabla \times \vec{v}) = 0$.
In a two-dimensional flow, $\vec{\omega}$ is always perpendicular to the plane of the flow, and can therefore be considered a scalar field.
Mathematical definition and properties
Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by $\vec{\omega}$, defined as the curl of the velocity field $\vec{v}$ describing the continuum motion. In Cartesian coordinates:
$$\vec{\omega} = \nabla \times \vec{v} = \left( \frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z},\ \frac{\partial v_x}{\partial z} - \frac{\partial v_z}{\partial x},\ \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \right).$$
In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it.
In a two-dimensional flow where the velocity is independent of the $z$-coordinate and has no $z$-component, the vorticity vector is always parallel to the $z$-axis, and therefore can be expressed as a scalar field $\omega$ multiplied by a constant unit vector $\hat{z}$:
$$\vec{\omega} = \omega\,\hat{z} = \left( \frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \right) \hat{z}.$$
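As a quick numerical illustration of the 2D scalar formula above, the following Python sketch estimates the vorticity of a sampled velocity field with centered finite differences. The grid, the rigid-rotation test field, and the use of NumPy are illustrative assumptions, not anything prescribed by the article.

```python
import numpy as np

# Sample a 2D velocity field on a uniform grid. As a test case we use
# rigid-body rotation v = (-Omega*y, Omega*x), whose vorticity should be
# the constant 2*Omega everywhere (see the Examples section below).
Omega = 1.5
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="xy")
vx = -Omega * Y
vy = Omega * X

# Scalar vorticity in 2D: omega = dvy/dx - dvx/dy, via centered differences.
dvy_dx = np.gradient(vy, x, axis=1)
dvx_dy = np.gradient(vx, y, axis=0)
omega = dvy_dx - dvx_dy

print(omega.mean())  # ~3.0, i.e. 2 * Omega
```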
The vorticity is also related to the flow's circulation $\Gamma$ (line integral of the velocity) along a closed path by the (classical) Stokes' theorem. Namely, for any infinitesimal surface element $C$ with normal direction $\vec{n}$ and area $dA$, the circulation $d\Gamma$ along the perimeter of $C$ is the dot product $d\Gamma = \vec{\omega} \cdot (\vec{n}\, dA)$, where $\vec{\omega}$ is the vorticity at the center of $C$.
Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensor (the so-called vorticity or rotation tensor), which is said to be the dual of $\vec{\omega}$. The relation between the two quantities, in index notation, is given by
$$\omega_i = \epsilon_{ijk}\,\Omega_{jk}, \qquad \Omega_{jk} = \tfrac{1}{2}\left( \frac{\partial v_k}{\partial x_j} - \frac{\partial v_j}{\partial x_k} \right),$$
where $\epsilon_{ijk}$ is the three-dimensional Levi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the velocity-gradient tensor $\nabla\vec{v}$, i.e.,
$$\Omega = \tfrac{1}{2}\left( \nabla\vec{v} - (\nabla\vec{v})^{\mathrm{T}} \right).$$
Examples
In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, in the central core of a Rankine vortex.
The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear (that is, if the flow speed varies across streamlines). For example, in the laminar flow within a pipe with constant cross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest.
Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that their mean angular velocity about their center of mass is zero.
{| border="0"
|-
| style="text-align:center;" colspan=3 | Example flows:
|-
| style="text-align:center;" | Rigid-body-like vortex
| style="text-align:center;" | Parallel flow with shear
| style="text-align:center;" | Irrotational vortex
|-
| style="text-align:center;" colspan=3 | where v is the velocity of the flow, r is the distance to the center of the vortex and ∝ indicates proportionality. (The original figures of absolute and relative velocities around a highlighted point are omitted here.)
|-
| style="text-align:center;" | Vorticity ≠ 0
| style="text-align:center;" | Vorticity ≠ 0
| style="text-align:center;" | Vorticity = 0
|}
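The contrast summarized in the table can also be checked numerically. The short Python sketch below (an illustration, with an assumed circulation value and grid) evaluates the vorticity of the ideal irrotational vortex away from its axis and finds it is essentially zero, even though the pathlines are circles.

```python
import numpy as np

# Irrotational (free) vortex: azimuthal speed v_theta = Gamma / (2*pi*r).
# Away from the axis its vorticity should be ~0, even though fluid parcels
# travel on circles around the origin. A small epsilon keeps the sampled
# field finite at the singular axis.
Gamma = 2.0
x = np.linspace(-2.0, 2.0, 401)
y = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(x, y, indexing="xy")
r2 = X**2 + Y**2 + 1e-12
vx = -Gamma / (2 * np.pi) * Y / r2
vy = Gamma / (2 * np.pi) * X / r2

omega = np.gradient(vy, x, axis=1) - np.gradient(vx, y, axis=0)

# Sample the vorticity at a point well away from the axis, near (x, y) = (1, 1).
print(omega[300, 300])   # close to 0: curved pathlines, yet no local spin
```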
Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow.
Evolution
The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations.
In many real flows where the viscosity can be neglected (more precisely, in flows with high Reynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensional potential flow (i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as a complex-valued field on the complex plane.
Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation.
Vortex lines and vortex tubes
A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation
where is the vorticity vector in Cartesian coordinates.
A vortex tube is the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also called vortex flux) is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence of Helmholtz's theorems (or equivalently, of Kelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. Viscous effects introduce frictional losses and time dependence.
In a three-dimensional flow, vorticity (as measured by the volume integral of the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known as vortex stretching. This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents.
Vorticity meters
Rotating-vane vorticity meter
A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity, and demonstrated motion-picture footage of the float's motion on the water surface in a model of a river bend.
Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity" and "Fundamental Principles of Flow" by Iowa Institute of Hydraulic Research).
Specific sciences
Aeronautics
In aerodynamics, the lift distribution over a finite wing may be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method of computational fluid dynamics. The strengths of the vortices are then summed to find the total approximate circulation about the wing. According to the Kutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density.
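As a small worked illustration of the Kutta–Joukowski relation mentioned above, the snippet below multiplies out circulation, airspeed, and density; the numerical values are arbitrary assumptions chosen only to show the magnitudes and units involved.

```python
# Kutta-Joukowski theorem: lift per unit span L' = rho * V * Gamma.
# All numbers are illustrative assumptions.
rho = 1.225      # air density, kg/m^3 (sea level)
V = 60.0         # freestream airspeed, m/s
Gamma = 12.0     # circulation about the wing section, m^2/s

lift_per_unit_span = rho * V * Gamma
print(lift_per_unit_span, "N per metre of span")   # 882.0
```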
Atmospheric sciences
The relative vorticity is the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally treated as a scalar rotation quantity perpendicular to the ground. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is called cyclonic rotation, and negative vorticity is anticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere.
The absolute vorticity is computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, the Coriolis parameter.
The potential vorticity is absolute vorticity divided by the vertical spacing between levels of constant (potential) temperature (or entropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity is conserved in an adiabatic flow. As adiabatic flow predominates in the atmosphere, the potential vorticity is useful as an approximate tracer of air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy.
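A hedged numerical sketch of these atmospheric quantities follows. The latitude, relative vorticity, and layer spacing are assumed illustrative values, and the layer form of potential vorticity used here (absolute vorticity divided by layer thickness) is only the simplest of the conventions in use.

```python
import numpy as np

# Relative, absolute, and a simple layer-form potential vorticity.
omega_earth = 7.2921e-5              # Earth's rotation rate, rad/s
lat = np.deg2rad(45.0)               # latitude (assumed)
f = 2 * omega_earth * np.sin(lat)    # Coriolis parameter, ~1.03e-4 1/s

zeta = 5.0e-5                        # relative vorticity of the flow, 1/s (assumed)
eta = zeta + f                       # absolute vorticity

dz = 800.0                           # m between two isentropic surfaces (assumed)
pv_layer = eta / dz                  # approximately conserved in adiabatic flow
print(f, eta, pv_layer)
```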
The barotropic vorticity equation is the simplest way for forecasting the movement of Rossby waves (that is, the troughs and ridges of 500 hPa geopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs for numerical weather forecasting utilized that equation.
In modern numerical weather forecasting models and general circulation models (GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is a prognostic equation.
Related to the concept of vorticity is the helicity $H$, defined as
$$H = \int_V \vec{v} \cdot \vec{\omega}\; dV,$$
where the integral is over a given volume $V$. In atmospheric science, helicity of the air motion is important in forecasting supercells and the potential for tornadic activity.
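For concreteness, the helicity integral can be approximated on a grid. The sketch below uses an ABC-type test field (whose curl equals the field itself) and periodic centered differences; both choices are illustrative assumptions rather than anything specified by the article.

```python
import numpy as np

# Helicity H = volume integral of v . omega, estimated on a periodic grid.
n = 48
L = 2 * np.pi
dx = L / n
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# ABC-type flow: its vorticity equals the velocity, so v . omega = |v|^2.
vx = np.sin(Z) + np.cos(Y)
vy = np.sin(X) + np.cos(Z)
vz = np.sin(Y) + np.cos(X)

def ddx(f, axis):
    # periodic centered difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * dx)

# curl components: axis 0 = x, axis 1 = y, axis 2 = z
wx = ddx(vz, 1) - ddx(vy, 2)
wy = ddx(vx, 2) - ddx(vz, 0)
wz = ddx(vy, 0) - ddx(vx, 1)

H = np.sum(vx * wx + vy * wy + vz * wz) * dx**3
print(H)   # close to 3 * (2*pi)**3 ~ 744 for this test field
```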
See also
Barotropic vorticity equation
D'Alembert's paradox
Enstrophy
Palinstrophy
Velocity potential
Vortex
Vortex tube
Vortex stretching
Horseshoe vortex
Wingtip vortices
Fluid dynamics
Biot–Savart law
Circulation
Vorticity equations
Kutta–Joukowski theorem
Atmospheric sciences
Prognostic equation
Carl-Gustaf Rossby
Hans Ertel
References
Bibliography
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
"Weather Glossary"' The Weather Channel Interactive, Inc.. 2004.
"Vorticity". Integrated Publishing.
Further reading
Ohkitani, K., "Elementary Account Of Vorticity And Related Equations". Cambridge University Press. January 30, 2005.
Chorin, Alexandre J., "Vorticity and Turbulence". Applied Mathematical Sciences, Vol 103, Springer-Verlag. March 1, 1994.
Majda, Andrew J., Andrea L. Bertozzi, "Vorticity and Incompressible Flow". Cambridge University Press; 2002.
Tritton, D. J., "Physical Fluid Dynamics". Van Nostrand Reinhold, New York. 1977.
Arfken, G., "Mathematical Methods for Physicists", 3rd ed. Academic Press, Orlando, Florida. 1985.
External links
Weisstein, Eric W., "Vorticity". Scienceworld.wolfram.com.
Doswell III, Charles A., "A Primer on Vorticity for Application in Supercells and Tornadoes". Cooperative Institute for Mesoscale Meteorological Studies, Norman, Oklahoma.
Cramer, M. S., "Navier–Stokes Equations -- Vorticity Transport Theorems: Introduction". Foundations of Fluid Mechanics.
Parker, Douglas, "ENVI 2210 – Atmosphere and Ocean Dynamics, 9: Vorticity". School of the Environment, University of Leeds. September 2001.
Graham, James R., "Astronomy 202: Astrophysical Gas Dynamics". Astronomy Department, UC Berkeley.
"The vorticity equation: incompressible and barotropic fluids".
"Interpretation of the vorticity equation".
"Kelvin's vorticity theorem for incompressible or barotropic flow".
"Spherepack 3.1 ". (includes a collection of FORTRAN vorticity program)
"Mesoscale Compressible Community (MC2) Real-Time Model Predictions". (Potential vorticity analysis)
Continuum mechanics
Fluid dynamics
Meteorological quantities
Rotation
| Vorticity | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,859 | [
"Physical phenomena",
"Physical quantities",
"Continuum mechanics",
"Chemical engineering",
"Quantity",
"Classical mechanics",
"Rotation",
"Meteorological quantities",
"Motion (physics)",
"Piping",
"Fluid dynamics"
] |
166,084 | https://en.wikipedia.org/wiki/Compressibility | In thermodynamics and fluid mechanics, the compressibility (also known as the coefficient of compressibility or, if the temperature is held constant, the isothermal compressibility) is a measure of the instantaneous relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. In its simple form, the compressibility (denoted in some fields) may be expressed as
$$\beta = -\frac{1}{V}\frac{\partial V}{\partial p},$$
where $V$ is volume and $p$ is pressure. The choice to define compressibility as the negative of the fraction makes compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume. The reciprocal of compressibility at fixed temperature is called the isothermal bulk modulus.
Definition
The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is isentropic or isothermal. Accordingly, isothermal compressibility is defined:
$$\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T,$$
where the subscript $T$ indicates that the partial differential is to be taken at constant temperature.
Isentropic compressibility is defined:
$$\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S,$$
where $S$ is entropy. For a solid, the distinction between the two is usually negligible.
Since the density $\rho$ of a material is inversely proportional to its volume, it can be shown that in both cases
$$\beta = \frac{1}{\rho}\left(\frac{\partial \rho}{\partial p}\right).$$
For instance, for an ideal gas, $pV = nRT$, hence $V = nRT/p$ and $\left(\partial V/\partial p\right)_T = -nRT/p^2$. Consequently, the isothermal compressibility of an ideal gas is $\beta_T = 1/p$.
The ideal gas (where the particles do not interact with each other) is an abstraction. The particles in real materials interact with each other. Then the relation between the pressure, density and temperature is known as the equation of state, denoted by some function $f(p, \rho, T) = 0$. The Van der Waals equation,
$$\left(p + \frac{a n^2}{V^2}\right)\left(V - n b\right) = n R T,$$
is an example of an equation of state for a realistic gas.
Knowing the equation of state, the compressibility can be determined for any substance.
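As an illustration of that last remark, the sketch below evaluates the isothermal compressibility numerically from a van der Waals equation of state; the constants are CO2-like values and the state point is an arbitrary assumption.

```python
# Isothermal compressibility from an equation of state (van der Waals sketch).
R = 8.314        # J / (mol K)
a = 0.3640       # Pa m^6 / mol^2  (roughly CO2-like, assumed)
b = 4.267e-5     # m^3 / mol       (roughly CO2-like, assumed)
n = 1.0          # mol
T = 300.0        # K

def p_vdw(V):
    """Pressure from the van der Waals equation of state."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def beta_T(V, dV=1e-9):
    # beta_T = -(1/V) (dV/dp)_T = -1 / (V * (dp/dV)_T), via a numeric derivative
    dp_dV = (p_vdw(V + dV) - p_vdw(V - dV)) / (2 * dV)
    return -1.0 / (V * dp_dV)

V = 1.0e-3   # m^3 per mole (about one litre), an assumed state point
print(p_vdw(V), beta_T(V))
print(1.0 / p_vdw(V))   # ideal-gas value 1/p at the same pressure, for comparison
```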
Relation to speed of sound
The speed of sound $c$ is defined in classical mechanics as:
$$c^2 = \left(\frac{\partial p}{\partial \rho}\right)_S.$$
It follows, by replacing the partial derivatives, that the isentropic compressibility can be expressed as:
$$\beta_S = \frac{1}{\rho c^2}.$$
Relation to bulk modulus
The inverse of the compressibility is called the bulk modulus, often denoted $K$ (sometimes $B$).
The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid.
Thermodynamics
The isothermal compressibility is generally related to the isentropic (or adiabatic) compressibility by a few relations:
where is the heat capacity ratio, is the volumetric coefficient of thermal expansion, is the particle density, and is the thermal pressure coefficient.
In an extensive thermodynamic system, the application of statistical mechanics shows that the isothermal compressibility is also related to the relative size of fluctuations in particle density:
where is the chemical potential.
The term "compressibility" is also used in thermodynamics to describe deviations of the thermodynamic properties of a real gas from those expected from an ideal gas.
The compressibility factor is defined as
$$Z = \frac{p V_m}{R T},$$
where $p$ is the pressure of the gas, $T$ is its temperature, and $V_m$ is its molar volume, all measured independently of one another. In the case of an ideal gas, the compressibility factor is equal to unity, and the familiar ideal gas law is recovered:
$$p V_m = R T.$$
$Z$ can, in general, be either greater or less than unity for a real gas.
The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results.
Earth science
The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (the void fraction being the porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, expelling the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.
It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques.
Fluid dynamics
The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium.
Aerodynamics
Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond .
Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular, wave drag and critical Mach.
One complication occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notional" molar volume because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen and N2 similarly dissociates to 2 N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter the compressibility factor , defined for an initial 30 gram moles of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure dependent transition occurs for atmospheric oxygen in the 2,500–4,000 K temperature range, and in the 5,000–10,000 K range for nitrogen.
In transition regions, where this pressure dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant pressure heat capacity greatly increase. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. The compressibility factor $Z$ for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process.
Negative compressibility
For ordinary materials, the bulk compressibility (sum of the linear compressibilities on the three axes) is positive, that is, an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability. However, under very specific conditions, materials can exhibit a compressibility that can be negative.
See also
Mach number
Mach tuck
Poisson ratio
Prandtl–Glauert singularity, associated with supersonic flight
Shear strength
References
Thermodynamic properties
Fluid dynamics
Mechanical quantities | Compressibility | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,584 | [
"Thermodynamic properties",
"Mechanical quantities",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mechanics",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
166,365 | https://en.wikipedia.org/wiki/Vorticity%20equation | The vorticity equation of fluid dynamics describes the evolution of the vorticity $\vec{\omega}$ of a particle of a fluid as it moves with its flow; that is, the local rotation of the fluid (in terms of vector calculus this is the curl of the flow velocity). The governing equation is:
$$\frac{D\vec{\omega}}{Dt} = (\vec{\omega} \cdot \nabla)\vec{u} - \vec{\omega}(\nabla \cdot \vec{u}) + \frac{1}{\rho^2}\nabla\rho \times \nabla p + \nabla \times \left(\frac{\nabla \cdot \tau}{\rho}\right) + \nabla \times \left(\frac{\vec{B}}{\rho}\right),$$
where $D/Dt$ is the material derivative operator, $\vec{u}$ is the flow velocity, $\rho$ is the local fluid density, $p$ is the local pressure, $\tau$ is the viscous stress tensor and $\vec{B}$ represents the sum of the external body forces. The first source term on the right-hand side represents vortex stretching.
The equation is valid in the absence of any concentrated torques and line forces for a compressible, Newtonian fluid. In the case of incompressible flow (i.e., low Mach number) and isotropic fluids, with conservative body forces, the equation simplifies to the vorticity transport equation:
$$\frac{D\vec{\omega}}{Dt} = (\vec{\omega} \cdot \nabla)\vec{u} + \nu \nabla^2 \vec{\omega},$$
where $\nu$ is the kinematic viscosity and $\nabla^2$ is the Laplace operator. Under the further assumption of two-dimensional flow, the equation simplifies to:
$$\frac{D\omega}{Dt} = \nu \nabla^2 \omega.$$
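A minimal numerical sketch of the two-dimensional equation in its simplest setting (pure viscous diffusion of a vorticity blob, i.e. with the advection by the flow neglected) is given below; the grid size, viscosity, time step, and initial condition are illustrative assumptions.

```python
import numpy as np

# Explicit step for d(omega)/dt = nu * Laplacian(omega) on a periodic grid,
# i.e. the 2D vorticity transport equation with the advection term dropped.
nu = 1e-3                       # kinematic viscosity (assumed)
n, L = 128, 1.0
dx = L / n
dt = 0.2 * dx**2 / nu           # below the explicit diffusion stability limit

x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.01)   # Gaussian vortex blob

def laplacian(f):
    # 5-point stencil with periodic boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(200):
    omega = omega + dt * nu * laplacian(omega)

print(omega.max())   # peak vorticity decays as the blob diffuses outward
```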
Physical interpretation
The term $D\vec{\omega}/Dt$ on the left-hand side is the material derivative of the vorticity vector $\vec{\omega}$. It describes the rate of change of vorticity of the moving fluid particle. This change can be attributed to unsteadiness in the flow ($\partial\vec{\omega}/\partial t$, the unsteady term) or to the motion of the fluid particle as it moves from one point to another ($(\vec{u} \cdot \nabla)\vec{\omega}$, the convection term).
The term $(\vec{\omega} \cdot \nabla)\vec{u}$ on the right-hand side describes the stretching or tilting of vorticity due to the flow velocity gradients. Note that $(\vec{\omega} \cdot \nabla)\vec{u}$ is a vector quantity, since $\vec{\omega} \cdot \nabla$ is a scalar differential operator, while $\nabla\vec{u}$ is a nine-element tensor quantity.
The term $\vec{\omega}(\nabla \cdot \vec{u})$ describes stretching of vorticity due to flow compressibility. It follows from the continuity equation of the Navier–Stokes system, $\partial\rho/\partial t + \nabla \cdot (\rho\vec{u}) = 0$, which can be rewritten as $\nabla \cdot \vec{u} = \frac{1}{v}\frac{Dv}{Dt}$, where $v = 1/\rho$ is the specific volume of the fluid element. One can think of $\nabla \cdot \vec{u}$ as a measure of flow compressibility. Sometimes the negative sign is included in the term.
The term $\frac{1}{\rho^2}\nabla\rho \times \nabla p$ is the baroclinic term. It accounts for the changes in the vorticity due to the intersection of density and pressure surfaces.
The term $\nabla \times \left(\frac{\nabla \cdot \tau}{\rho}\right)$ accounts for the diffusion of vorticity due to viscous effects.
The term $\nabla \times \left(\frac{\vec{B}}{\rho}\right)$ provides for changes due to external body forces. These are forces that are spread over a three-dimensional region of the fluid, such as gravity or electromagnetic forces (as opposed to forces that act only over a surface, like drag on a wall, or along a line, like surface tension around a meniscus).
Simplifications
In the case of conservative body forces, the body-force term vanishes, $\nabla \times \left(\frac{\vec{B}}{\rho}\right) = 0$.
For a barotropic fluid, $\nabla\rho \times \nabla p = 0$, so the baroclinic term vanishes. This is also true for a constant-density fluid (including an incompressible fluid), where $\nabla\rho = 0$. Note that this is not the same as an incompressible flow, for which the barotropic term cannot be neglected.
This distinction reflects the fact that conservation of mass gives $\frac{D\rho}{Dt} + \rho\,\nabla \cdot \vec{u} = 0$, and that there is a difference between assuming $\rho$ = constant (the 'incompressible fluid' option above) and assuming $\nabla \cdot \vec{u} = 0$ (the 'incompressible flow' option above). With the first assumption, conservation of mass implies (for non-zero density) that $\nabla \cdot \vec{u} = 0$; the second assumption does not necessarily imply that $\rho$ is constant. It only requires that the local time rate of change of the density be compensated by advection of the density, as in $\frac{\partial\rho}{\partial t} + \vec{u} \cdot \nabla\rho = 0$. One can make sense of this by considering the ideal gas law: even for an adiabatic, chemically homogeneous fluid, the density can vary when the pressure changes, e.g. with Bernoulli.
For inviscid fluids, the viscous stress tensor $\tau$ is zero.
Thus for an inviscid, barotropic fluid with conservative body forces, the vorticity equation simplifies to
$$\frac{D\vec{\omega}}{Dt} = (\vec{\omega} \cdot \nabla)\vec{u} - \vec{\omega}(\nabla \cdot \vec{u}).$$
Alternatively, in the case of an incompressible, inviscid fluid with conservative body forces,
$$\frac{D\vec{\omega}}{Dt} = (\vec{\omega} \cdot \nabla)\vec{u}.$$
For a brief review of additional cases and simplifications, see the references. For the vorticity equation in turbulence theory, in the context of flows in oceans and the atmosphere, see the further reading.
Derivation
The vorticity equation can be derived from the Navier–Stokes equation for the conservation of momentum. In the absence of any concentrated torques and line forces, one obtains:
$$\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} = -\frac{1}{\rho}\nabla p + \frac{\nabla \cdot \tau}{\rho} + \frac{\vec{B}}{\rho}.$$
Now, vorticity is defined as the curl of the flow velocity vector; taking the curl of the momentum equation yields the desired equation. The following identities are useful in the derivation of the equation:
$$(\vec{u} \cdot \nabla)\vec{u} = \nabla\left(\tfrac{1}{2}|\vec{u}|^2\right) - \vec{u} \times \vec{\omega}, \qquad \nabla \times (\nabla\phi) = 0,$$
where $\phi$ is any scalar field.
Tensor notation
The vorticity equation can be expressed in tensor notation using Einstein's summation convention and the Levi-Civita symbol $\epsilon_{ijk}$:
$$\frac{D\omega_i}{Dt} = \omega_j \frac{\partial u_i}{\partial x_j} - \omega_i \frac{\partial u_j}{\partial x_j} + \epsilon_{ijk}\frac{1}{\rho^2}\frac{\partial \rho}{\partial x_j}\frac{\partial p}{\partial x_k} + \epsilon_{ijk}\frac{\partial}{\partial x_j}\left(\frac{1}{\rho}\frac{\partial \tau_{km}}{\partial x_m}\right) + \epsilon_{ijk}\frac{\partial}{\partial x_j}\left(\frac{B_k}{\rho}\right).$$
In specific sciences
Atmospheric sciences
In the atmospheric sciences, the vorticity equation can be stated in terms of the absolute vorticity of air with respect to an inertial frame, or of the vorticity with respect to the rotation of the Earth. The absolute version is
Here, $\zeta$ is the polar ($z$) component of the vorticity, $\rho$ is the atmospheric density, $u$, $v$, and $w$ are the components of wind velocity, and $\nabla_h$ is the 2-dimensional (i.e. horizontal-component-only) del.
See also
Vorticity
Barotropic vorticity equation
Vortex stretching
Burgers vortex
References
Further reading
Equations of fluid dynamics
Transport phenomena | Vorticity equation | [
"Physics",
"Chemistry",
"Engineering"
] | 1,115 | [
"Transport phenomena",
"Physical phenomena",
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Fluid dynamics"
] |
166,404 | https://en.wikipedia.org/wiki/First%20law%20of%20thermodynamics | The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. The law distinguishes two principal forms of energy transfer, heat and thermodynamic work, that modify a thermodynamic system containing a constant amount of matter. The law also defines the internal energy of a system, an extensive property for taking account of the balance of heat and work in the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an isolated system the sum of all forms of energy is constant.
An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system to sustain the work of the system continuously.
The ideal isolated system, of which the entire universe is an example, is often only used as a model. Many systems in practical applications require the consideration of internal chemical or nuclear reactions, as well as transfers of matter into or out of the system. For such considerations, thermodynamics also defines the concept of open systems, closed systems, and other types.
Definition
For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed by the algebraic sum of contributions to the internal energy, $\Delta U$, from all work, $W$, done on or by the system, and the quantity of heat, $Q$, supplied or withdrawn from the system. The historical sign convention for the terms has been that heat supplied to the system is positive, but work done by the system is subtracted. This was the convention of Rudolf Clausius, so that a change in the internal energy, $\Delta U$, is written
$$\Delta U = Q - W.$$
Modern formulations, such as by Max Planck, and by IUPAC, often replace the subtraction with addition, and consider all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of the use of the system, for example as an engine.
When a system expands in an isobaric process, the thermodynamic work, $W$, done by the system on the surroundings is the product, $p\,\Delta V$, of system pressure, $p$, and system volume change, $\Delta V$, whereas $-p\,\Delta V$ is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is:
$$\Delta U = Q - p\,\Delta V,$$
where $Q$ denotes the quantity of heat supplied to the system from its surroundings.
Work and heat express physical processes of supply or removal of energy, while the internal energy is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term $Q$ is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, $W$ denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, $\Delta U$, can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while change in internal energy depends only on the initial and final states of the system, not on the path between. Thermodynamic work is measured by change in the system, and is not necessarily the same as work measured by forces and distances in the surroundings, though, ideally, such can sometimes be arranged; this distinction is noted in the term 'isochoric work', at constant system volume, with $\Delta V = 0$, which is not a form of thermodynamic work.
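To make the sign bookkeeping concrete, here is a small worked example in Python using the Clausius convention just described (heat supplied positive, work done by the system subtracted); the pressure, volumes, and heat input are assumed numbers.

```python
# First-law bookkeeping for an isobaric expansion (Clausius sign convention).
p = 101325.0            # Pa, constant system pressure (assumed)
V1, V2 = 0.010, 0.012   # m^3, initial and final volume (assumed)
Q = 700.0               # J of heat supplied to the system (assumed)

W_by_system = p * (V2 - V1)   # thermodynamic work done on the surroundings
dU = Q - W_by_system          # change in internal energy

print(W_by_system)   # 202.65 J
print(dU)            # 497.35 J
```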
History
In the first half of the eighteenth century, French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Leibniz's concept of 'vis viva', $mv^2$, as distinct from Newton's momentum, $mv$.
Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat.
In the final years of his life, after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot (1796–1832) came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes. He wrote:
At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes.
In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.
In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, measuring the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water.
The first full statements of the law came in 1850 from Rudolf Clausius, and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius.
Original statements: the "thermodynamic approach"
The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows:
Reflecting the experimental work of Mayer and of Joule, Clausius wrote:
Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.
The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, which were revealed by Bohr's energy relation $h\nu = E_n - E_m$. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).
Conceptual revision: the "mechanical approach"
In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. This reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach".
Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.
The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.
This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others.
Conceptually revised statement, according to the mechanical approach
The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.
The revised statement is then
For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.
This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.
Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.
Description
Cyclic processes
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.
In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units.
The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.
Various statements of the law for closed systems
The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.
For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.
There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.
An example of a physical statement is that of Planck (1897/1903):
It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.
This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.
An example of a mathematical statement is that of Crawford (1963):
For a given system we let $E^{\mathrm{kin}}$ denote the large-scale mechanical (kinetic) energy, $E^{\mathrm{pot}}$ the large-scale potential energy, and $E^{\mathrm{tot}}$ the total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition
$$E^{\mathrm{tot}} = E^{\mathrm{kin}} + E^{\mathrm{pot}} + U.$$
For any finite process, whether reversible or irreversible,
$$\Delta E^{\mathrm{tot}} = \Delta E^{\mathrm{kin}} + \Delta E^{\mathrm{pot}} + \Delta U.$$
The first law in a form that involves the principle of conservation of energy more generally is
$$\Delta E^{\mathrm{tot}} = Q + W.$$
Here $Q$ and $W$ are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible. [Warner, Am. J. Phys., 29, 124 (1961)]
This statement by Crawford, for $W$, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state.
The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.
Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.
Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.
The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.
According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.
Sometimes the concept of internal energy is not made explicit in the statement.
Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.
A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.
A respected text disregards the Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).
Evidence for the first law of thermodynamics for closed systems
The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.
The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).
Adiabatic processes
In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.
For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.
Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.
A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted".
This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
That important state variable was first recognized and denoted by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it ; and in 1851 by Kelvin who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.
In an adiabatic process, adiabatic work takes the system either from a reference state $O$ with internal energy $U(O)$ to an arbitrary one $A$ with internal energy $U(A)$, or from the state $A$ to the state $O$:
$$U(A) = U(O) - W^{\mathrm{adiabatic}}_{O \to A} \quad\text{or}\quad U(O) = U(A) - W^{\mathrm{adiabatic}}_{A \to O}.$$
Except under the special, and, strictly speaking, fictional condition of reversibility, only one of the processes $O \to A$ or $A \to O$ is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.
The fact of such irreversibility may be dealt with in two main ways, according to different points of view:
Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes (Planck, M. (1897/1903), Section 71, p. 52), as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, which transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula
Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula () above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.
The formula () above allows that to go by processes of quasi-static adiabatic work from the state $A$ to the state $B$ we can take a path that goes through the reference state $O$, since the quasi-static adiabatic work is independent of the path.
This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:
For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called the internal energy.
Adynamic processes
A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry...".
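As an illustration of the electrical calibration just described, the following sketch works out the heat delivered by a resistive heater and the resulting calibrated heat capacity; all numerical values are assumptions chosen purely for the example, not measured data.

```python
# Illustrative electrical calibration of a calorimeter (all values assumed).
# Energy dissipated by a resistive heater: Q = V * I * t (joules), which is
# then compared with the observed temperature rise.

voltage = 12.0        # volts across the heater (assumed)
current = 1.5         # amperes through the heater (assumed)
duration = 120.0      # seconds of heating (assumed)

q_electrical = voltage * current * duration   # energy delivered, in joules
print(f"Electrical energy delivered: {q_electrical:.0f} J")   # 2160 J

# If the calorimeter and its contents warm by delta_T, the effective heat
# capacity follows from C = Q / delta_T.
delta_T = 1.8         # kelvin temperature rise observed (assumed)
heat_capacity = q_electrical / delta_T
print(f"Calibrated heat capacity: {heat_capacity:.0f} J/K")    # 1200 J/K
```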
When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:
General case for reversible processes
Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system, , and the heat transferred reversibly to the system, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path, , through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.
Putting the two complementary aspects together, the first law for a particular reversible process can be written
This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems.
In particular, if no work is done on a thermally isolated closed system we have
.
This is one aspect of the law of conservation of energy and can be stated:
The internal energy of an isolated system remains constant.
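A minimal numeric sketch of the statements above, assuming the sign convention in which both heat transferred to the system and work done on the system count positively toward the internal energy; the convention is an assumption made for the example, since other texts count work done by the system instead.

```python
# Numeric illustration of the first law for a closed system, assuming the
# convention delta_U = Q + W with W the work done ON the system and Q the
# heat transferred TO the system (texts that count work done BY the system
# write delta_U = Q - W instead).

Q = 500.0    # J of heat transferred to the system (assumed value)
W = -200.0   # J; negative means the system does 200 J of work on the surroundings

delta_U = Q + W
print(f"Change in internal energy: {delta_U} J")    # 300.0 J

# For a thermally isolated system with no work done on it, Q = 0 and W = 0,
# so delta_U = 0: the internal energy of an isolated system remains constant.
```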
General case for irreversible processes
If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, , and the heat transferred irreversibly to the system, , which belong to the same particular process defined by its particular irreversible path, , through the space of thermodynamic states.
This means that the internal energy is a function of state and that the internal energy change between two states is a function only of the two states.
Overview of the weight of evidence for the law
The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.
State functional formulation for infinitesimal processes
When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by , rather than exact differentials denoted by , as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
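The path dependence of work and heat, versus the path independence of internal energy, can be made concrete with a monatomic ideal gas taken between the same two states along two different paths; the states and the gas model below are illustrative assumptions.

```python
# Path dependence of work vs path independence of internal energy, for a
# monatomic ideal gas (U = 1.5 * P * V).  Values below are assumptions.

P1, V1 = 1.0e5, 1.0e-3      # initial state: 100 kPa, 1.0 litre
P2, V2 = 2.0e5, 2.0e-3      # final state:   200 kPa, 2.0 litres

U1 = 1.5 * P1 * V1
U2 = 1.5 * P2 * V2
dU = U2 - U1                # depends only on the end states

# Path A: expand at constant P1, then raise the pressure at constant V2.
W_by_gas_A = P1 * (V2 - V1)           # work done BY the gas on path A
# Path B: raise the pressure at constant V1, then expand at constant P2.
W_by_gas_B = P2 * (V2 - V1)           # work done BY the gas on path B

Q_A = dU + W_by_gas_A       # heat needed on path A (first law, Q = dU + W_by)
Q_B = dU + W_by_gas_B

print(dU)                        # 450.0 J on both paths
print(W_by_gas_A, W_by_gas_B)    # 100.0 J vs 200.0 J -- path dependent
print(Q_A, Q_B)                  # 550.0 J vs 650.0 J -- path dependent
```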
The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy may then be expressed as a function of the system's defining state variables , entropy, and , volume: . In these terms, , the system's temperature, and , its pressure, are partial derivatives of with respect to and . These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium.
The first law requires that:
Then, for the fictive case of a reversible process, can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by and the quantity of heat added can be expressed as . For these conditions
While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as can be considered as a thermodynamic state function of the defining state variables and :
Equation () is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are and , with respect to which and are partial derivatives of . It is only in the reversible case or for a quasistatic process without composition change that the work done and heat transferred are given by and .
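A short symbolic sketch of the partial-derivative statements above, checking that T = (dU/dS) at constant V and P = -(dU/dV) at constant S; it uses SymPy (assumed available) and a monatomic-ideal-gas form of U(S, V) chosen purely for illustration.

```python
# Symbolic check that T = (dU/dS)_V and P = -(dU/dV)_S for a model U(S, V).
# The monatomic-ideal-gas form below is an assumption made for illustration;
# any differentiable U(S, V) could be used instead.
import sympy as sp

S, V, n, R, A = sp.symbols('S V n R A', positive=True)

# Monatomic ideal gas: U(S, V) = A * V**(-2/3) * exp(2*S/(3*n*R))
U = A * V**sp.Rational(-2, 3) * sp.exp(2*S/(3*n*R))

T = sp.diff(U, S)          # temperature
P = -sp.diff(U, V)         # pressure

# Consistency checks: U = (3/2) n R T  and  P V = n R T
print(sp.simplify(U - sp.Rational(3, 2)*n*R*T))   # 0
print(sp.simplify(P*V - n*R*T))                   # 0
```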
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes dU = T dS − P dV + Σi μi dNi,
where dNi is the (small) increase in number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to dU = T dS − Σi Xi dxi + Σj μj dNj,
Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters and the xi are proportional to the size and called extensive parameters.
For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.
A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.
It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement.
Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.
Fluid dynamics
In fluid dynamics, the first law of thermodynamics reads .
Spatially inhomogeneous systems
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if denotes the total energy of that component system, one may write
where and denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and denotes its internal energy.
Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.
A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction between the subsystems. Thus, in an obvious notation, one may write
The quantity in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.
The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.
First law of thermodynamics for open systems
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.
There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
Internal energy for an open system
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.
In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that
where and denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.
For the thermodynamic operation of adding two systems with internal energies U1 and U2, to produce a new system with internal energy U, one may write U = U1 + U2; the reference states for U, U1 and U2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.
There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.
Also of course
where and denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
Process of transfer of matter between an open system and its surroundings
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.
An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.
A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
Open system with multiple contacts
An open system can be in contact equilibrium with several other systems at once.
This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:
where ΔU0 denotes the change of internal energy of the system, and denotes the change of internal energy of the surrounding subsystems that are in open contact with the system, due to transfer between the system and that surrounding subsystem, and denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
Combination of first and second laws
If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula
where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj, are defined as above.
For a general natural process, there is no immediate term-wise correspondence between equations () and (), because they describe the process in different conceptual frames.
Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.
For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write
where is the added amount of species and is the corresponding molar entropy.
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield
where is the molar enthalpy of species .
Non-equilibrium transfers
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.
The first law of thermodynamics for any process on the specification of equation (3) can be defined as
where ΔU denotes the change of internal energy of the system, denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, denotes the work of the system and is the molar enthalpy of species , coming into the system from the surrounding that is in contact with the system.
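A sketch of this balance applied to the textbook case of filling a rigid, thermally insulated tank from a supply line; the form of the balance used, the sign convention, and the ideal-gas property values are assumptions made for the example.

```python
# Sketch of an open-system energy balance of the kind described above:
#     delta_U = Q - W + sum_k h_k * delta_N_k
# applied to filling a rigid, adiabatic tank from a supply line (Q = 0, W = 0).
# Ideal-gas property values are assumptions.

R = 8.314          # J/(mol K)
cv = 1.5 * R       # molar heat capacity at constant volume, monatomic gas
cp = 2.5 * R       # molar heat capacity at constant pressure

T_line = 300.0     # K, temperature of gas in the supply line (assumed)
h_line = cp * T_line          # molar enthalpy carried in with the matter
dN = 2.0                      # mol admitted to the initially evacuated tank

delta_U = h_line * dN         # Q = 0 and W = 0 in this example
# The tank ends up holding dN moles with internal energy delta_U, so its
# temperature follows from U = N * cv * T:
T_tank = delta_U / (dN * cv)
print(T_tank)                 # 500.0 K  (= (cp/cv) * T_line for an ideal gas)
```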
Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process is considered in the previous section, which in our terms defines
To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as was described above, a set of variables called internal variables has been introduced, which allows one to formulate for the general case
Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.
Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.
The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous-flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and because allowance must be made for non-conservation of internal energy through the local conversion of kinetic energy of bulk flow to internal energy by viscosity.
Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow and a conduction flow. This conduction flow is by definition the heat flow . Therefore: where denotes the [internal] energy per unit mass. [These authors actually use the symbols and to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. This is not the ad hoc definition of "reduced heat flux" of Rolf Haase.
In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
See also
Laws of thermodynamics
Perpetual motion
Microstate (statistical mechanics) – includes microscopic definitions of internal energy, heat and work
Entropy production
Relativistic heat conduction
References
Cited sources
Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, .
Aston, J. G., Fritz, J. J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York.
Balian, R. (1991/2007). From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J.F. Gregg, Springer, Berlin, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B. G. Teubner, Leipzig.
Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London, .
Buchdahl, H. A. (1966), The Concepts of Classical Thermodynamics, Cambridge University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, .
Clausius, R. (1850). See English translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom. Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Also available on Google Books. A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S. R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, .
Denbigh, K. G. (1951). The Thermodynamics of the Steady State, Methuen, London, Wiley, New York.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, .
Eckart, C. (1940). The thermodynamics of irreversible processes. The simple fluid, Phys. Rev. 58: 267–269.
Fitts, D. D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
Glansdorff, P., Prigogine, I., (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London, .
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W. F. Heinz, Springer-Verlag, New York.
Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G. Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S. G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, , pp. 89–110.
Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA.
Kirkwood, J. G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, .
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, .
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, .
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A. B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London.
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I., (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Truesdell, C. A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, .
Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a branch of Rational Mechanics, Academic Press, New York, .
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, .
External links
MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
First law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky
Equations of physics
de:Thermodynamik#Erster Hauptsatz | First law of thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 14,298 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"Thermodynamics",
"Laws of thermodynamics"
] |
13,404,205 | https://en.wikipedia.org/wiki/Hofstadter%20sequence | In mathematics, a Hofstadter sequence is a member of a family of related integer sequences defined by non-linear recurrence relations.
Sequences presented in Gödel, Escher, Bach: an Eternal Golden Braid
The first Hofstadter sequences were described by Douglas Richard Hofstadter in his book Gödel, Escher, Bach. In order of their presentation in chapter III on figures and background (Figure-Figure sequence) and chapter V on recursive structures and processes (remaining sequences), these sequences are:
Hofstadter Figure-Figure sequences
The Hofstadter Figure-Figure (R and S) sequences are a pair of complementary integer sequences defined as follows:
with the sequence defined as a strictly increasing series of positive integers not present in . The first few terms of these sequences are
R: 1, 3, 7, 12, 18, 26, 35, 45, 56, 69, 83, 98, 114, 131, 150, 170, 191, 213, 236, 260, ...
S: 2, 4, 5, 6, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, ...
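A small Python sketch of the construction, assuming the standard defining form R(1) = 1, R(n+1) = R(n) + S(n), with S the increasing sequence of positive integers that never appear in R; the printed terms match those listed above.

```python
# Figure-Figure sequences, assuming the standard recurrence
# R(1) = 1, R(n+1) = R(n) + S(n), with S the complement of R.

def figure_figure(count):
    R = [1]
    S = []
    candidate = 2
    while len(R) < count:
        # Extend S with the next positive integers not used by R so far.
        while len(S) < len(R):
            if candidate not in R:
                S.append(candidate)
            candidate += 1
        R.append(R[-1] + S[len(R) - 1])
    return R, S

R, S = figure_figure(10)
print(R)   # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69]
print(S)   # [2, 4, 5, 6, 8, 9, 10, 11, 13]  (the first nine terms of S)
```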
Hofstadter G sequence
The Hofstadter G sequence is defined as follows:
The first few terms of this sequence are
0, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 12, ...
Hofstadter H sequence
The Hofstadter H sequence is defined as follows:
The first few terms of this sequence are
0, 1, 1, 2, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, ...
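A memoised sketch covering both of the sequences above, assuming the standard definitions G(0) = 0, G(n) = n − G(G(n−1)) and H(0) = 0, H(n) = n − H(H(H(n−1))); the printed values reproduce the terms listed.

```python
# Hofstadter G and H sequences, assuming the standard nested recurrences.
from functools import lru_cache

@lru_cache(maxsize=None)
def G(n):
    return 0 if n == 0 else n - G(G(n - 1))

@lru_cache(maxsize=None)
def H(n):
    return 0 if n == 0 else n - H(H(H(n - 1)))

print([G(n) for n in range(21)])
# [0, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 12]
print([H(n) for n in range(21)])
# [0, 1, 1, 2, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14]
```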
Hofstadter Female and Male sequences
The Hofstadter Female (F) and Male (M) sequences are defined as follows:
The first few terms of these sequences are
F: 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 13, ...
M: 0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 8, 9, 9, 10, 11, 11, 12, 12, ...
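A sketch of the mutually recursive pair, assuming the standard definitions F(0) = 1, M(0) = 0, F(n) = n − M(F(n−1)) and M(n) = n − F(M(n−1)); the output reproduces the terms above.

```python
# Hofstadter Female/Male sequences, assuming the standard mutual recursion.
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    return 1 if n == 0 else n - M(F(n - 1))

@lru_cache(maxsize=None)
def M(n):
    return 0 if n == 0 else n - F(M(n - 1))

print([F(n) for n in range(21)])
# [1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 13]
print([M(n) for n in range(21)])
# [0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 8, 9, 9, 10, 11, 11, 12, 12]
```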
Hofstadter Q sequence
The Hofstadter Q sequence is defined as follows:
The first few terms of the sequence are
1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12, ...
Hofstadter named the terms of the sequence "Q numbers"; thus the Q number of 6 is 4. The presentation of the Q sequence in Hofstadter's book is actually the first known mention of a meta-Fibonacci sequence in literature.
While the terms of the Fibonacci sequence are determined by summing the two preceding terms, the two preceding terms of a Q number determine how far to go back in the Q sequence to find the two terms to be summed. The indices of the summation terms thus depend on the Q sequence itself.
Q(1), the first element of the sequence, is never one of the two terms being added to produce a later element; it is involved only within an index in the calculation of Q(3).
Although the terms of the Q sequence seem to flow chaotically, like many meta-Fibonacci sequences, its terms can be grouped into blocks of successive generations. In case of the Q sequence, the k-th generation has 2k members. Furthermore, with g being the generation that a Q number belongs to, the two terms to be summed to calculate the Q number, called its parents, reside by far mostly in generation g − 1 and only a few in generation g − 2, but never in an even older generation.
Most of these findings are empirical observations, since virtually nothing has been proved about the Q sequence so far. It is specifically unknown whether the sequence is well-defined for all n; that is, whether the sequence "dies" at some point because its generation rule tries to refer to terms which would conceptually sit left of the first term Q(1).
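The sketch below assumes the standard recurrence Q(1) = Q(2) = 1, Q(n) = Q(n − Q(n−1)) + Q(n − Q(n−2)), and it checks at every step whether the rule tries to refer to a term left of Q(1), i.e. whether the sequence "dies".

```python
# Hofstadter Q sequence, assuming the standard recurrence
#   Q(1) = Q(2) = 1,  Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)).

def q_sequence(count):
    Q = [None, 1, 1]                       # 1-based indexing; Q[0] unused
    for n in range(3, count + 1):
        i, j = n - Q[n - 1], n - Q[n - 2]
        if i < 1 or j < 1:
            raise ValueError(f"Q sequence dies at n = {n}")
        Q.append(Q[i] + Q[j])
    return Q[1:]

print(q_sequence(20))
# [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12]
```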
Generalizations of the Q sequence
Hofstadter–Huber Qr,s(n) family
20 years after Hofstadter first described the Q sequence, he and Greg Huber used the character Q to name the generalization of the Q sequence toward a family of sequences, and renamed the original Q sequence of his book to U sequence.
The original Q sequence is generalized by replacing n − 1 and n − 2 by n − r and n − s, respectively.
This leads to the sequence family
where s ≥ 2 and r < s.
With (r,s) = (1,2), the original Q sequence is a member of this family. So far, only three sequences of the family Qr,s are known, namely the U sequence with (r,s) = (1,2) (which is the original Q sequence); the V sequence with (r,s) = (1,4); and the W sequence with (r,s) = (2,4). Only the V sequence, which does not behave as chaotically as the others, is proven not to "die". Similar to the original Q sequence, virtually nothing has been proved rigorously about the W sequence to date.
The first few terms of the V sequence are
1, 1, 1, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 11, ...
The first few terms of the W sequence are
1, 1, 1, 1, 2, 4, 6, 7, 7, 5, 3, 8, 9, 11, 12, 9, 9, 13, 11, 9, ...
For other values of (r,s) the sequences sooner or later "die", i.e. there exists an n for which Qr,s(n) is undefined because n − Qr,s(n − r) < 1.
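A generalized sketch covering the whole Qr,s family, assuming the standard initial condition Qr,s(n) = 1 for 1 ≤ n ≤ s and the recurrence Qr,s(n) = Qr,s(n − Qr,s(n − r)) + Qr,s(n − Qr,s(n − s)); it returns None when a sequence dies.

```python
# Hofstadter-Huber Q_{r,s} family, assuming the standard generalized recurrence.

def q_family(r, s, count):
    Q = [None] + [1] * s                   # 1-based indexing; Q(n) = 1 for n <= s
    for n in range(s + 1, count + 1):
        i, j = n - Q[n - r], n - Q[n - s]
        if i < 1 or j < 1:
            return None                    # the sequence dies at index n
        Q.append(Q[i] + Q[j])
    return Q[1:]

print(q_family(1, 2, 20))   # the original Q (U) sequence
print(q_family(1, 4, 20))   # V: [1, 1, 1, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8, 8, ...]
print(q_family(2, 4, 20))   # W: [1, 1, 1, 1, 2, 4, 6, 7, 7, 5, 3, 8, 9, 11, ...]
```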
Pinn Fi,j(n) family
In 1998, Klaus Pinn, a scientist at the University of Münster (Germany) who was in close communication with Hofstadter, suggested another generalization of Hofstadter's Q sequence, which Pinn called F sequences.
The family of Pinn Fi,j sequences is defined as follows:
Thus Pinn introduced additional constants i and j which shift the index of the terms of the summation conceptually to the left (that is, closer to start of the sequence).
Only F sequences with (i,j) = (0,0), (0,1), (1,0), and (1,1), the first of which represents the original Q sequence, appear to be well-defined. Unlike Q(1), the first elements of the Pinn Fi,j(n) sequences are terms of summations in calculating later elements of the sequences when any of the additional constants is 1.
The first few terms of the Pinn F0,1 sequence are
1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 6, 7, 8, 8, 8, 8, 9, 10, 10, 11, ...
Hofstadter–Conway $10,000 sequence
The Hofstadter–Conway $10,000 sequence is defined as follows
The first few terms of this sequence are
1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12, ...
The values of a(n)/n converge to 1/2, and this sequence acquired its name because John Horton Conway offered a prize of $10,000 to anyone who could determine its rate of convergence. The prize, since reduced to $1,000, was claimed by Collin Mallows, who proved that
In private communication with Klaus Pinn, Hofstadter later claimed that he had found the sequence and its structure about 10–15 years before Conway posed his challenge.
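A sketch assuming the standard recurrence a(1) = a(2) = 1, a(n) = a(a(n−1)) + a(n − a(n−1)); it also shows the ratio a(n)/n approaching 1/2.

```python
# Hofstadter-Conway sequence, assuming the standard recurrence
#   a(1) = a(2) = 1,  a(n) = a(a(n-1)) + a(n - a(n-1)).

def conway(count):
    a = [None, 1, 1]                       # 1-based indexing; a[0] unused
    for n in range(3, count + 1):
        a.append(a[a[n - 1]] + a[n - a[n - 1]])
    return a[1:]

seq = conway(2 ** 12)
print(seq[:20])
# [1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12]
print(seq[-1] / 2 ** 12)                   # approximately 0.5
```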
References
Integer sequences | Hofstadter sequence | [
"Mathematics"
] | 1,787 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
13,406,872 | https://en.wikipedia.org/wiki/Acousto-optical%20spectrometer | An acousto-optical spectrometer (AOS) is based on the diffraction of light by ultrasonic waves. A piezoelectric transducer, driven by the RF signal (from the receiver), generates an acoustic wave in a crystal (the so-called Bragg-cell). This acoustic wave modulates the refractive index and induces a phase grating. The Bragg-cell is illuminated by a collimated laser beam. The angular dispersion of the diffracted light represents a true image of the IF-spectrum according to the amplitude and wavelengths of the acoustic waves in the crystal. The spectrum is detected by using a single linear diode array (CCD), which is placed in the focal plane of an imaging optics. Depending on the crystal and the focal length of the imaging optics, the resolution of this type of spectrometer can be varied.
See also
Acousto-optics
Acousto-optic deflector
Acousto-optic modulator
Nonlinear optics
References
Spectrometers | Acousto-optical spectrometer | [
"Physics",
"Chemistry"
] | 216 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
13,408,015 | https://en.wikipedia.org/wiki/Heavy%20baryon%20chiral%20perturbation%20theory | Heavy baryon chiral perturbation theory (HBChPT) is an effective quantum field theory used to describe the interactions of pions and nucleons/baryons. It is somewhat an extension of chiral perturbation theory (ChPT) which just describes the low-energy interactions of pions. In a richer theory one would also like to describe the interactions of baryons with pions. A fully relativistic Lagrangian of nucleons is non-predictive as the quantum corrections, or loop diagrams can count as quantities and therefore do not describe higher-order corrections.
Because the baryons are much heavier than the pions, HBChPT rests on the use of a nonrelativistic description of baryons compared to that of the pions. Therefore, higher order terms in the HBChPT Lagrangian come in at higher orders of 1/mB, where mB is the baryon mass.
Quantum chromodynamics | Heavy baryon chiral perturbation theory | [
"Physics"
] | 204 | [
"Particle physics stubs",
"Particle physics"
] |
13,409,547 | https://en.wikipedia.org/wiki/Asymmetric%20carbon | In stereochemistry, an asymmetric carbon is a carbon atom that is bonded to four different types of atoms or groups of atoms. The four atoms and/or groups attached to the carbon atom can be arranged in space in two different ways that are mirror images of each other, and which lead to so-called left-handed and right-handed versions (stereoisomers) of the same molecule. Molecules that cannot be superimposed on their own mirror image are said to be chiral; as the asymmetric carbon is the center of this chirality, it is also known as a chiral carbon.
As an example, malic acid (HOOC-CH(OH)-CH2-COOH) has 4 carbon atoms but just one of them is asymmetric. The asymmetric carbon atom, the CH(OH) carbon in the formula, is the one attached to two carbon atoms, an oxygen atom, and a hydrogen atom. One may initially be inclined to think this atom is not asymmetric because it is attached to two carbon atoms, but because those two carbon atoms are not attached to exactly the same things, there are two different groups of atoms that the carbon atom in question is attached to, therefore making it an asymmetric carbon atom.
Knowing the number of asymmetric carbon atoms, one can calculate the maximum possible number of stereoisomers for any given molecule as follows:
If n is the number of asymmetric carbon atoms, then the maximum number of stereoisomers is 2^n (Le Bel-van't Hoff rule).
This is a corollary of Le Bel and van't Hoff's simultaneously announced conclusions, in 1874, that the most probable orientation of the bonds of a carbon atom linked to four groups or atoms is toward the apexes of a tetrahedron, and that this accounted for all then-known phenomena of molecular asymmetry (which involved a carbon atom bearing four different atoms or groups).
A tetrose with 2 asymmetric carbon atoms has 2^2 = 4 stereoisomers.
An aldopentose with 3 asymmetric carbon atoms has 2^3 = 8 stereoisomers.
An aldohexose with 4 asymmetric carbon atoms has 2^4 = 16 stereoisomers.
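A tiny sketch of the 2^n counting rule applied to the three examples above.

```python
# Le Bel-van 't Hoff upper bound: at most 2**n stereoisomers for a molecule
# with n asymmetric carbon atoms.

def max_stereoisomers(n_asymmetric_carbons):
    return 2 ** n_asymmetric_carbons

for name, n in [("tetrose", 2), ("aldopentose", 3), ("aldohexose", 4)]:
    print(f"{name}: {n} asymmetric carbons -> up to {max_stereoisomers(n)} stereoisomers")
# tetrose: 2 -> 4, aldopentose: 3 -> 8, aldohexose: 4 -> 16
```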
References
Stereochemistry
Jacobus Henricus van 't Hoff
de:Asymmetrisches Kohlenstoffatom | Asymmetric carbon | [
"Physics",
"Chemistry"
] | 470 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
13,409,671 | https://en.wikipedia.org/wiki/Turpan%20water%20system | The Turpan water system, also called the Turfan kārēz system, is used for water supply via a vertical tunnel in the Turpan Depression of Xinjiang, China. "Karez" () is a word in the local Uyghur language that is derived from the word in the Persian language for the system from which it is derived: the 3000-year-old qanāt. Turpan has the Turpan Karez Paradise (a Protected Area of the People's Republic of China), which is dedicated to demonstrating its karez water system, as well as exhibiting other historical artifacts.
Turpan's karez well system was crucial in Turpan's development as an important oasis stopover on the Silk Road, which skirted the barren and hostile Taklamakan Desert.
Description
Turpan's karez water system is made up of a horizontal series of vertically dug wells that are then linked by underground water canals to collect water from the watershed surface runoff from the base of the Tian Shan Mountains and the nearby Flaming Mountains. The canals channel the water to the surface, taking advantage of the current provided by the gravity of the downward slope of the Turpan Depression. The canals are mostly underground to reduce water evaporation and to make the slope long enough to reach far distances being only gravity fed.
The system has wells, dams and underground canals built to store the water and control the amount of water flow. Vertical wells are dug at various points to tap into the groundwater flowing down sloping land from the source, the mountain runoff. The water is then channeled through underground canals dug from the bottom of one well to the next well and then to the desired destination. Turpan's karez irrigation system of special connected wells is believed to be of indigenous origin in China, perhaps combined with technology arriving from more western regions.
In Xinjiang, the greatest number of karez wells are in the Turpan Depression, where today there remain over 1100 karez wells and channels having a total length of over . The local geography makes karez wells practical for agricultural irrigation and other uses. Turpan is located in the second deepest geographical depression in the world, with over of land below sea level and with soil that forms a sturdy basin. Water naturally flows down from the nearby mountains during the rainy season in an underground current to the low depression basin under the desert. The Turpan summer is very hot and dry with periods of wind and blowing sand.
Importance
Ample water was crucial to Turpan, so that the oasis city could service the many caravans on the Silk Route resting there near a route skirting the Taklamakan Desert. The caravans included merchant traders and missionaries with their armed escorts, animals including camels, sometimes numbering into the thousands, along with camel drivers, agents and other personnel, all of whom might stay for a week or more. The caravans needed pastures for their animals, resting facilities, trading bazaars for conducting business and replenishment of food and water.
Potential UNESCO World Heritage Site
Karez wells in the Turfan area are on the UNESCO World Heritage Sites Tentative List for China.
Threatened by global warming
There are 20,000 glaciers in Xinjiang – nearly half of all the glaciers in China. The water from the glaciers via the underground channels has provided a stable water source year round, independent of season, for thousands of years. But since the 1950s, Xinjiang's glaciers have retreated by between 21 percent to 27 percent due to global warming, threatening the agricultural productivity of the region.
See also
References
External links
Satellite map showing deep basin from Google
Link to Silk Road map
Turpan – Ancient Stop on the Silk Road
Karez close to Turfan
Turpan
Water supply
Water wells
Chinese architectural history
Sites along the Silk Road
Major National Historical and Cultural Sites in Xinjiang
Irrigation projects
Irrigation in China | Turpan water system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 797 | [
"Hydrology",
"Water wells",
"Irrigation projects",
"Environmental engineering",
"Water supply"
] |
1,000,609 | https://en.wikipedia.org/wiki/Dallasite | Dallasite is a breccia made of subequant to rectangular or distinctly elongate, curvilinear shards that represent the spalled rims of pillow basalt (see: Hyaloclastite). This material is commonly partly altered to chlorite, epidote, quartz and carbonate, for which the local term 'dallasite' has been coined. The stone dallasite is named after Dallas Road, Victoria, British Columbia. It is considered the unofficial stone of British Columbia's capital city. Dallasite is found in Triassic volcanic rocks of Vancouver Island and is considered the third most important gem material in British Columbia.
References
Rocks
Gemstones
Geology of British Columbia
Breccias | Dallasite | [
"Physics",
"Materials_science"
] | 146 | [
"Breccias",
"Fracture mechanics",
"Materials",
"Physical objects",
"Gemstones",
"Rocks",
"Matter"
] |
1,001,628 | https://en.wikipedia.org/wiki/Power%20electronics | Power electronics is the application of electronics to the control and conversion of electric power.
The first high-power electronic devices were made using mercury-arc valves. In modern systems, the conversion is performed with semiconductor switching devices such as diodes, thyristors, and power transistors such as the power MOSFET and IGBT. In contrast to electronic systems concerned with the transmission and processing of signals and data, substantial amounts of electrical energy are processed in power electronics. An AC/DC converter (rectifier) is the most typical power electronics device found in many consumer electronic devices, e.g. television sets, personal computers, battery chargers, etc. The power range is typically from tens of watts to several hundred watts. In industry, a common application is the variable speed drive (VSD) that is used to control an induction motor. The power range of VSDs starts from a few hundred watts and ends at tens of megawatts.
The power conversion systems can be classified according to the type of the input and output power:
AC to DC (rectifier)
DC to AC (inverter)
DC to DC (DC-to-DC converter)
AC to AC (AC-to-AC converter)
History
Power electronics started with the development of the mercury arc rectifier. Invented by Peter Cooper Hewitt in 1902, it was used to convert alternating current (AC) into direct current (DC). From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. Uno Lamm developed a mercury valve with grading electrodes making them suitable for high voltage direct current power transmission. In 1933 selenium rectifiers were invented.
Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time. In 1947, the bipolar point-contact transistor was invented by Walter H. Brattain and John Bardeen under the direction of William Shockley at Bell Labs. In 1948 Shockley's invention of the bipolar junction transistor (BJT) improved the stability and performance of transistors, and reduced costs. By the 1950s, higher power semiconductor diodes became available and started replacing vacuum tubes. In 1956, the silicon controlled rectifier (SCR) was introduced by General Electric, greatly increasing the range of power electronics applications. By the 1960s, the improved switching speed of bipolar junction transistors had allowed for high frequency DC/DC converters.
R. D. Middlebrook made important contributions to power electronics. In 1970, he founded the Power Electronics Group at Caltech. He developed the state-space averaging method of analysis and other tools crucial to modern power electronics design.
Power MOSFET
In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, in 1960, Dawon Kahng led a Bell Labs team that demonstrated a working MOSFET. The team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D’Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device.
In 1969, Hitachi introduced the first vertical power MOSFET, which would later be known as the VMOS (V-groove MOSFET). From 1974, Yamaha, JVC, Pioneer Corporation, Sony and Toshiba began manufacturing audio amplifiers with power MOSFETs. International Rectifier introduced a 25 A, 400 V power MOSFET in 1978. This device allows operation at higher frequencies than a bipolar transistor, but is limited to low voltage applications.
The power MOSFET is the most common power device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair. It has a wide range of power electronic applications, such as portable information appliances, power integrated circuits, cell phones, notebook computers, and the communications infrastructure that enables the Internet.
In 1982, the insulated-gate bipolar transistor (IGBT) was introduced. It became widely available in the 1990s. This component has the power handling capability of the bipolar transistor and the advantages of the isolated gate drive of the power MOSFET.
Devices
The capabilities and economy of a power electronics system are determined by the active devices that are available. Their characteristics and limitations are a key element in the design of power electronics systems. Formerly, the mercury arc valve, the high-vacuum and gas-filled diode thermionic rectifiers, and triggered devices such as the thyratron and ignitron were widely used in power electronics. As the ratings of solid-state devices improved in both voltage and current-handling capacity, vacuum devices have been nearly entirely replaced by solid-state devices.
Power electronic devices may be used as switches, or as amplifiers. An ideal switch is either open or closed and so dissipates no power; it withstands an applied voltage and passes no current or passes any amount of current with no voltage drop. Semiconductor devices used as switches can approximate this ideal property and so most power electronic applications rely on switching devices on and off, which makes systems very efficient as very little power is wasted in the switch. By contrast, in the case of the amplifier, the current through the device varies continuously according to a controlled input. The voltage and current at the device terminals follow a load line, and the power dissipation inside the device is large compared with the power delivered to the load.
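To make the switch-versus-amplifier contrast concrete, the following rough Python sketch compares the power dissipated in a device operated as a near-ideal on/off switch with the power dissipated when the same position is filled by a linear, amplifier-style series element delivering the same average output voltage; all component values are invented for illustration.

V_in = 48.0     # supply voltage [V]
R_load = 4.8    # load resistance [ohm]
R_on = 0.01     # on-resistance of the real switch [ohm]
duty = 0.5      # fraction of time the switch is closed

I_on = V_in / (R_load + R_on)        # current while the switch conducts
P_switch = duty * I_on**2 * R_on     # conduction loss in switching mode

V_out = duty * V_in                  # linear element set for the same average output
I_lin = V_out / R_load
P_linear = (V_in - V_out) * I_lin    # power burned in the pass device

print(f"switch-mode device loss: {P_switch:.2f} W")   # roughly 0.5 W here
print(f"linear-mode device loss: {P_linear:.1f} W")   # 120 W here

With these assumed numbers the linear element dissipates more than two orders of magnitude more power than the switch, which is why most power electronic applications rely on switching operation.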
Several attributes dictate how devices are used. Devices such as diodes conduct when a forward voltage is applied and have no external control of the start of conduction. Power devices such as silicon controlled rectifiers and thyristors (as well as the mercury valve and thyratron) allow control of the start of conduction but rely on periodic reversal of current flow to turn them off. Devices such as gate turn-off thyristors, BJT and MOSFET transistors provide full switching control and can be turned on or off without regard to the current flow through them. Transistor devices also allow proportional amplification, but this is rarely used for systems rated more than a few hundred watts. The control input characteristics of a device also significantly affect design; sometimes, the control input is at a very high voltage with respect to ground and must be driven by an isolated source.
As efficiency is at a premium in a power electronic converter, the losses generated by a power electronic device should be as low as possible.
Devices vary in switching speed. Some diodes and thyristors are suited for relatively slow speed and are useful for power frequency switching and control; certain thyristors are useful at a few kilohertz. Devices such as MOSFETS and BJTs can switch at tens of kilohertz up to a few megahertz in power applications, but with decreasing power levels. Vacuum tube devices dominate high power (hundreds of kilowatts) at very high frequency (hundreds or thousands of megahertz) applications. Faster switching devices minimize energy lost in the transitions from on to off and back but may create problems with radiated electromagnetic interference. Gate drive (or equivalent) circuits must be designed to supply sufficient drive current to achieve the full switching speed possible with a device. A device without sufficient drive to switch rapidly may be destroyed by excess heating.
Practical devices have a non-zero voltage drop and dissipate power when on, and take some time to pass through an active region until they reach the "on" or "off" state. These losses are a significant part of the total lost power in a converter.
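A first-order estimate of these two loss contributions, using the common triangular voltage–current overlap approximation for the switching transitions, might look like the Python sketch below; the device and circuit values are invented, and a real design would use datasheet switching-energy curves instead.

V_bus = 400.0    # blocked voltage [V]
I_load = 20.0    # conducted current [A]
V_on = 1.8       # on-state voltage drop [V] (e.g. an IGBT)
duty = 0.5       # fraction of the period spent conducting
f_sw = 20e3      # switching frequency [Hz]
t_rise, t_fall = 100e-9, 200e-9   # switching transition times [s]

P_conduction = V_on * I_load * duty
# triangular voltage/current overlap approximation for each transition
P_switching = 0.5 * V_bus * I_load * (t_rise + t_fall) * f_sw

print(f"conduction loss: {P_conduction:.1f} W")
print(f"switching loss:  {P_switching:.1f} W")
print(f"total device loss: {P_conduction + P_switching:.1f} W")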
Power handling and dissipation of devices is also a critical factor in design. Power electronic devices may have to dissipate tens or hundreds of watts of waste heat, even switching as efficiently as possible between conducting and non-conducting states. In the switching mode, the power controlled is much larger than the power dissipated in the switch. The forward voltage drop in the conducting state translates into heat that must be dissipated. High power semiconductors require specialized heat sinks or active cooling systems to manage their junction temperature; exotic semiconductors such as silicon carbide have an advantage over straight silicon in this respect, and germanium, once the mainstay of solid-state electronics, is now little used due to its unfavorable high-temperature properties.
Semiconductor devices exist with ratings up to a few kilovolts in a single device. Where very high voltage must be controlled, multiple devices must be used in series, with networks to equalize voltage across all devices. Again, switching speed is a critical factor since the slowest-switching device will have to withstand a disproportionate share of the overall voltage. Mercury valves were once available with ratings to 100 kV in a single unit, simplifying their application in HVDC systems.
The current rating of a semiconductor device is limited by the heat generated within the dies and the heat developed in the resistance of the interconnecting leads. Semiconductor devices must be designed so that current is evenly distributed within the device across its internal junctions (or channels); once a "hot spot" develops, breakdown effects can rapidly destroy the device. Certain SCRs are available with current ratings to 3000 amperes in a single unit.
DC/AC converters (inverters)
DC to AC converters produce an AC output waveform from a DC source. Applications include adjustable speed drives (ASD), uninterruptible power supplies (UPS), Flexible AC transmission systems (FACTS), voltage compensators, and photovoltaic inverters. Topologies for these converters can be separated into two distinct categories: voltage source inverters and current source inverters. Voltage source inverters (VSIs) are named so because the independently controlled output is a voltage waveform. Similarly, current source inverters (CSIs) are distinct in that the controlled AC output is a current waveform.
DC to AC power conversion is the result of power switching devices, which are commonly fully controllable semiconductor power switches. The output waveforms are therefore made up of discrete values, producing fast transitions rather than smooth ones. For some applications, even a rough approximation of the sinusoidal waveform of AC power is adequate. Where a near sinusoidal waveform is required, the switching devices are operated much faster than the desired output frequency, and the time they spend in either state is controlled so the averaged output is nearly sinusoidal. Common modulation techniques include the carrier-based (pulse-width modulation) technique, the space-vector technique, and the selective-harmonic technique.
Voltage source inverters have practical uses in both single-phase and three-phase applications. Single-phase VSIs utilize half-bridge and full-bridge configurations, and are widely used for power supplies, single-phase UPSs, and elaborate high-power topologies when used in multicell configurations. Three-phase VSIs are used in applications that require sinusoidal voltage waveforms, such as ASDs, UPSs, and some types of FACTS devices such as the STATCOM. They are also used in applications where arbitrary voltages are required, as in the case of active power filters and voltage compensators.
Current source inverters are used to produce an AC output current from a DC current supply. This type of inverter is practical for three-phase applications in which high-quality voltage waveforms are required.
A relatively new class of inverters, called multilevel inverters, has gained widespread interest. The normal operation of CSIs and VSIs can be classified as two-level inverters, due to the fact that power switches connect to either the positive or to the negative DC bus. If more than two voltage levels were available to the inverter output terminals, the AC output could better approximate a sine wave. It is for this reason that multilevel inverters, although more complex and costly, offer higher performance.
Each inverter type differs in the DC links used, and in whether or not they require freewheeling diodes. Either can be made to operate in square-wave or pulse-width modulation (PWM) mode, depending on its intended usage. Square-wave mode offers simplicity, while PWM can be implemented in several different ways and produces higher quality waveforms.
Voltage Source Inverters (VSI) feed the output inverter section from an approximately constant-voltage source.
The desired quality of the current output waveform determines which modulation technique needs to be selected for a given application. The output of a VSI is composed of discrete values. In order to obtain a smooth current waveform, the loads need to be inductive at the selected harmonic frequencies. Without some sort of inductive filtering between the source and load, a capacitive load will cause the load to receive a choppy current waveform, with large and frequent current spikes.
There are three main types of VSIs:
Single-phase half-bridge inverter
Single-phase full-bridge inverter
Three-phase voltage source inverter
Single-phase half-bridge inverter
The single-phase voltage source half-bridge inverters are meant for lower voltage applications and are commonly used in power supplies. Figure 9 shows the circuit schematic of this inverter.
Low-order current harmonics get injected back to the source voltage by the operation of the inverter. This means that two large capacitors are needed for filtering purposes in this design. As Figure 9 illustrates, only one switch can be on at a time in each leg of the inverter. If both switches in a leg were on at the same time, the DC source would be shorted out.
Inverters can use several modulation techniques to control their switching schemes. The carrier-based PWM technique compares the AC output waveform, vc, to a carrier voltage signal, vΔ. When vc is greater than vΔ, S+ is on, and when vc is less than vΔ, S− is on. When the AC output is at frequency fc with its amplitude at vc, and the triangular carrier signal is at frequency fΔ with its amplitude at vΔ, the PWM becomes a special sinusoidal case of the carrier-based PWM. This case is dubbed sinusoidal pulse-width modulation (SPWM). For this, the modulation index, or amplitude-modulation ratio, is defined as ma = vc/vΔ.
The normalized carrier frequency, or frequency-modulation ratio, is calculated as mf = fΔ/fc.
If ma exceeds one, entering the over-modulation region, a higher fundamental AC output voltage is obtained, but at the cost of saturation. For SPWM, the harmonics of the output waveform are at well-defined frequencies and amplitudes. This simplifies the design of the filtering components needed for the low-order current harmonic injection from the operation of the inverter. The maximum output amplitude in this mode of operation is half of the source voltage. If ma exceeds 3.24, the output waveform of the inverter becomes a square wave.
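The following Python sketch is a minimal numerical illustration of carrier-based SPWM for one leg of a half-bridge inverter: a sinusoidal control signal is compared with a triangular carrier, and the fundamental of the resulting pole voltage is checked against ma·Vi/2. The names follow the text above; the specific values of Vi, ma, mf and fc are assumptions chosen only for illustration.

import numpy as np

Vi = 400.0        # DC bus voltage [V]
ma = 0.8          # amplitude-modulation ratio, ma = vc / vΔ
mf = 21           # frequency-modulation ratio, mf = fΔ / fc
fc = 50.0         # desired fundamental output frequency [Hz]

t = np.linspace(0.0, 1.0 / fc, 20000, endpoint=False)
v_control = ma * np.sin(2 * np.pi * fc * t)                      # modulating wave
# triangular carrier of unit amplitude at frequency mf * fc
carrier = 2.0 / np.pi * np.arcsin(np.sin(2 * np.pi * mf * fc * t))

# S+ conducts when the control signal exceeds the carrier, S− otherwise,
# so the pole voltage switches between +Vi/2 and −Vi/2.
v_out = np.where(v_control > carrier, +Vi / 2, -Vi / 2)

# Fundamental amplitude by Fourier projection; in the linear region it
# should approach ma * Vi / 2.
b1 = 2.0 * np.mean(v_out * np.sin(2 * np.pi * fc * t))
print(f"fundamental amplitude ~= {b1:.1f} V (ma*Vi/2 = {ma * Vi / 2:.1f} V)")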
As was true for pulse-width modulation (PWM), both switches in a leg cannot be turned on at the same time in square-wave modulation, as this would cause a short across the voltage source. The switching scheme requires that S+ and S− each be on for a half cycle of the AC output period. The fundamental AC output amplitude is equal to vo1 = (4/π)(Vi/2).
Its harmonics have an amplitude of voh = vo1/h, for odd h.
Therefore, the AC output voltage is not controlled by the inverter, but rather by the magnitude of the DC input voltage of the inverter.
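A quick numerical check of the square-wave spectrum described above can be made in Python by building the ±Vi/2 pole voltage and projecting out each harmonic; the bus voltage and frequency are illustrative assumptions.

import numpy as np

Vi = 400.0
f1 = 50.0
t = np.linspace(0.0, 1.0 / f1, 20000, endpoint=False)
v_out = np.where(np.sin(2 * np.pi * f1 * t) >= 0, +Vi / 2, -Vi / 2)

for h in (1, 2, 3, 5, 7):
    bh = 2.0 * np.mean(v_out * np.sin(2 * np.pi * h * f1 * t))   # numeric Fourier amplitude
    ideal = (4 / np.pi) * (Vi / 2) / h if h % 2 else 0.0          # (4/pi)(Vi/2)/h for odd h
    print(f"h={h}: numeric {bh:7.2f} V, analytic {ideal:7.2f} V")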
Using selective harmonic elimination (SHE) as a modulation technique allows the switching of the inverter to selectively eliminate intrinsic harmonics. The fundamental component of the AC output voltage can also be adjusted within a desirable range. Since the AC output voltage obtained from this modulation technique has odd half and odd quarter-wave symmetry, even harmonics do not exist. With N switching angles per quarter cycle, N − 1 undesirable odd intrinsic harmonics can be eliminated from the output waveform.
Single-phase full-bridge inverter
The full-bridge inverter is similar to the half bridge-inverter, but it has an additional leg to connect the neutral point to the load. Figure 3 shows the circuit schematic of the single-phase voltage source full-bridge inverter.
To avoid shorting out the voltage source, S1+, and S1− cannot be on at the same time, and S2+ and S2− also cannot be on at the same time. Any modulating technique used for the full-bridge configuration should have either the top or the bottom switch of each leg on at any given time. Due to the extra leg, the maximum amplitude of the output waveform is Vi, and is twice as large as the maximum achievable output amplitude for the half-bridge configuration.
States 1 and 2 from Table 2 are used to generate the AC output voltage with bipolar SPWM. The AC output voltage can take on only two values, either Vi or −Vi. To generate these same states using a half-bridge configuration, a carrier based technique can be used. S+ being on for the half-bridge corresponds to S1+ and S2− being on for the full-bridge. Similarly, S− being on for the half-bridge corresponds to S1− and S2+ being on for the full bridge. The output voltage for this modulation technique is more or less sinusoidal, with a fundamental component whose amplitude in the linear region (ma less than or equal to one) is vo1 = ma Vi.
Unlike the bipolar PWM technique, the unipolar approach uses states 1, 2, 3, and 4 from Table 2 to generate its AC output voltage. Therefore, the AC output voltage can take on the values Vi, 0 or −Vi. To generate these states, two sinusoidal modulating signals, Vc and −Vc, are needed, as seen in Figure 4.
Vc is used to generate VaN, while −Vc is used to generate VbN. The following relationship is called unipolar carrier-based SPWM: vo = VaN − VbN.
The phase voltages VaN and VbN are identical, but 180 degrees out of phase with each other. The output voltage is equal to the difference of the two phase voltages and does not contain any even harmonics. Therefore, if mf is taken even, the AC output voltage harmonics appear at normalized odd frequencies, fh, centered on double the value of the normalized carrier frequency. This particular feature allows for smaller filtering components when trying to obtain a higher quality output waveform.
As was the case for the half-bridge SHE, the AC output voltage contains no even harmonics due to its odd half and odd quarter-wave symmetry.
Three-phase voltage source inverter
Single-phase VSIs are used primarily for low power range applications, while three-phase VSIs cover both medium and high power range applications. Figure 5 shows the circuit schematic for a three-phase VSI.
The two switches in any leg of the inverter cannot be switched off simultaneously, since the resulting output voltage would then depend on the polarity of the respective line current. States 7 and 8 produce zero AC line voltages, which result in AC line currents freewheeling through either the upper or the lower components. However, the line voltages for states 1 through 6 produce an AC line voltage consisting of the discrete values of Vi, 0 or −Vi.
For three-phase SPWM, three modulating signals that are 120 degrees out of phase with one another are used in order to produce out-of-phase load voltages. In order to preserve the PWM features with a single carrier signal, the normalized carrier frequency, mf, needs to be a multiple of three. This keeps the magnitude of the phase voltages identical, but out of phase with each other by 120 degrees. The maximum achievable phase voltage amplitude in the linear region (ma less than or equal to one) is ma Vi/2. The maximum achievable line voltage amplitude is √3 ma Vi/2.
The only way to control the load voltage is by changing the input DC voltage.
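As a small worked example of the linear-region limits quoted above, with an assumed DC bus voltage, the maximum phase and line voltage amplitudes of a three-phase VSI under SPWM can be computed as follows.

import math

Vi = 600.0   # DC bus voltage [V] (illustrative assumption)
ma = 1.0     # upper edge of the linear region

v_phase_max = ma * Vi / 2                # peak phase (leg-to-midpoint) voltage
v_line_max = math.sqrt(3) * v_phase_max  # peak line-to-line voltage

print(f"max phase amplitude: {v_phase_max:.1f} V")
print(f"max line amplitude:  {v_line_max:.1f} V")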
Current source inverters
Current source inverters convert DC current into an AC current waveform. In applications requiring sinusoidal AC waveforms, magnitude, frequency, and phase should all be controlled. CSIs are subject to high rates of change of current over time, so capacitors are commonly employed on the AC side, while inductors are commonly employed on the DC side. Due to the absence of freewheeling diodes, the power circuit is reduced in size and weight, and tends to be more reliable than VSIs. Although single-phase topologies are possible, three-phase CSIs are more practical.
In its most generalized form, a three-phase CSI employs the same conduction sequence as a six-pulse rectifier. At any time, only one common-cathode switch and one common-anode switch are on.
As a result, line currents take discrete values of –ii, 0 and ii. States are chosen such that a desired waveform is output and only valid states are used. This selection is based on modulating techniques, which include carrier-based PWM, selective harmonic elimination, and space-vector techniques.
Carrier-based techniques used for VSIs can also be implemented for CSIs, resulting in CSI line currents that behave in the same way as VSI line voltages. The digital circuit utilized for modulating signals contains a switching pulse generator, a shorting pulse generator, a shorting pulse distributor, and a switching and shorting pulse combiner. A gating signal is produced based on a carrier current and three modulating signals.
A shorting pulse is added to this signal when no top switches and no bottom switches are gated, causing the RMS currents to be equal in all legs. The same methods are utilized for each phase, however, switching variables are 120 degrees out of phase relative to one another, and the current pulses are shifted by a half-cycle with respect to output currents. If a triangular carrier is used with sinusoidal modulating signals, the CSI is said to be utilizing synchronized-pulse-width-modulation (SPWM). If full over-modulation is used in conjunction with SPWM the inverter is said to be in square-wave operation.
The second CSI modulation category, SHE, is also similar to its VSI counterpart. Utilizing the gating signals developed for a VSI and a set of synchronizing sinusoidal current signals results in symmetrically distributed shorting pulses and, therefore, symmetrical gating patterns. This allows any arbitrary number of harmonics to be eliminated. It also allows control of the fundamental line current through the proper selection of primary switching angles. Optimal switching patterns must have quarter-wave and half-wave symmetry, as well as symmetry about 30 degrees and 150 degrees. Switching patterns are never allowed between 60 degrees and 120 degrees. The current ripple can be further reduced with the use of larger output capacitors, or by increasing the number of switching pulses.
The third category, space-vector-based modulation, generates PWM load line currents that, on average, equal the reference load line currents. Valid switching states and time selections are made digitally based on space vector transformation. Modulating signals are represented as a complex vector using a transformation equation. For balanced three-phase sinusoidal signals, this vector has a fixed magnitude and rotates at a frequency ω. These space vectors are then used to approximate the modulating signal. If the signal lies between two arbitrary vectors, those vectors are combined with the zero vectors I7, I8, or I9. The following equations are used to ensure that the generated currents and the current vectors are on the average equivalent.
Multilevel inverters
A relatively new class called multilevel inverters has gained widespread interest. Normal operation of CSIs and VSIs can be classified as two-level inverters because the power switches connect to either the positive or the negative DC bus. If more than two voltage levels were available to the inverter output terminals, the AC output could better approximate a sine wave. For this reason multilevel inverters, although more complex and costly, offer higher performance. A three-level neutral-clamped inverter is shown in Figure 10.
Control methods for a three-level inverter only allow two switches of the four switches in each leg to simultaneously change conduction states. This allows smooth commutation and avoids shoot through by only selecting valid states. It may also be noted that since the DC bus voltage is shared by at least two power valves, their voltage ratings can be less than a two-level counterpart.
Carrier-based and space-vector modulation techniques are used for multilevel topologies. The methods for these techniques follow those of classic inverters, but with added complexity. Space-vector modulation offers a greater number of fixed voltage vectors to be used in approximating the modulation signal, and therefore allows more effective space vector PWM strategies to be accomplished at the cost of more elaborate algorithms. Due to added complexity and the number of semiconductor devices, multilevel inverters are currently more suitable for high-power high-voltage applications.
This technology reduces the harmonics and hence improves the overall efficiency of the scheme.
AC/AC converters
Converting AC power to AC power allows control of the voltage, frequency, and phase of the waveform applied to a load from a supplied AC system. The types of converters can be separated into two main categories according to whether the frequency of the waveform is changed. AC/AC converters that do not allow the user to modify the frequency are known as AC voltage controllers, or AC regulators. AC converters that allow the user to change the frequency are simply referred to as frequency converters for AC to AC conversion. Among frequency converters there are three different types that are typically used: the cycloconverter, the matrix converter, and the DC link converter (also known as an AC/DC/AC converter).
AC voltage controller: The purpose of an AC Voltage Controller, or AC Regulator, is to vary the RMS voltage across the load while at a constant frequency. Three control methods that are generally accepted are ON/OFF Control, Phase-Angle Control, and Pulse-Width Modulation AC Chopper Control (PWM AC Chopper Control). All three of these methods can be implemented not only in single-phase circuits, but three-phase circuits as well.
ON/OFF Control: Typically used for heating loads or speed control of motors, this control method involves turning the switch on for n integral cycles and turning the switch off for m integral cycles. Because turning the switches on and off causes undesirable harmonics to be created, the switches are turned on and off during zero-voltage and zero-current conditions (zero-crossing), effectively reducing the distortion.
Phase-Angle Control: Various circuits exist to implement phase-angle control on different waveforms, such as half-wave or full-wave voltage control. The power electronic components that are typically used are diodes, SCRs, and triacs. With these components, the user can delay the firing angle in a wave, so that only part of the wave appears at the output.
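For a resistive load under full-wave phase-angle control, the RMS output voltage can be evaluated as in the Python sketch below, which blanks each half-cycle of the supply up to the firing angle and compares the numerically computed RMS with the standard closed-form expression; the supply voltage and firing angle are illustrative.

import numpy as np

Vs = 230.0                  # supply RMS voltage [V]
alpha = np.deg2rad(60.0)    # firing (delay) angle

theta = np.linspace(0.0, 2 * np.pi, 200000, endpoint=False)
v_in = np.sqrt(2) * Vs * np.sin(theta)
conducting = (theta % np.pi) >= alpha        # device fired in each half-cycle
v_out = np.where(conducting, v_in, 0.0)

vrms_numeric = np.sqrt(np.mean(v_out**2))
vrms_analytic = Vs * np.sqrt(1 - alpha / np.pi + np.sin(2 * alpha) / (2 * np.pi))
print(f"RMS output: numeric {vrms_numeric:.1f} V, analytic {vrms_analytic:.1f} V")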
PWM AC Chopper Control: The other two control methods often suffer from poor harmonic content, output current quality, and input power factor. To improve these figures, PWM can be used instead. A PWM AC chopper turns its switches on and off several times within alternate half-cycles of the input voltage.
Matrix converters and cycloconverters: Cycloconverters are widely used in industry for AC to AC conversion because they can be used in high-power applications. They are commutated direct frequency converters that are synchronised by a supply line. The cycloconverter's output voltage waveforms have complex harmonics, with the higher-order harmonics being filtered by the machine inductance, so the machine current has fewer harmonics, while the remaining harmonics cause losses and torque pulsations. Note that in a cycloconverter, unlike other converters, there are no inductors or capacitors, i.e. no storage devices. For this reason, the instantaneous input power and the output power are equal.
Single-Phase to Single-Phase Cycloconverters: Single-phase to single-phase cycloconverters have started drawing more interest recently because of the decrease in both size and price of power electronic switches. The single-phase high-frequency AC voltage can be either sinusoidal or trapezoidal. There may be zero-voltage intervals for control purposes or for zero-voltage commutation.
Three-Phase to Single-Phase Cycloconverters: There are two kinds of three-phase to single-phase cycloconverters: 3φ to 1φ half wave cycloconverters and 3φ to 1φ bridge cycloconverters. Both positive and negative converters can generate voltage at either polarity, resulting in the positive converter only supplying positive current, and the negative converter only supplying negative current.
With recent device advances, newer forms of cycloconverters are being developed, such as matrix converters. The first noticeable change is that matrix converters utilize bi-directional, bipolar switches. A three-phase to three-phase matrix converter consists of a matrix of nine switches connecting the three input phases to the three output phases. Any input phase and output phase can be connected together at any time without connecting any two switches from the same phase at the same time; otherwise, this would cause a short circuit of the input phases. Matrix converters are lighter, more compact and versatile than other converter solutions. As a result, they are able to achieve higher levels of integration, higher temperature operation, broad output frequency and natural bi-directional power flow suitable for regenerating energy back to the utility.
The matrix converters are subdivided into two types: direct and indirect converters. In a direct matrix converter with three-phase input and three-phase output, the switches must be bi-directional, that is, they must be able to block voltages of either polarity and to conduct current in either direction. This switching strategy permits the highest possible output voltage and reduces the reactive line-side current. Therefore, the power flow through the converter is reversible. However, its commutation problems and complex control keep it from being broadly utilized in industry.
Unlike the direct matrix converter, the indirect matrix converter has the same functionality but uses separate input and output sections that are connected through a DC link without storage elements. The design includes a four-quadrant current source rectifier and a voltage source inverter. The input section consists of bi-directional bipolar switches. The commutation strategy can be applied by changing the switching state of the input section while the output section is in a freewheeling mode. This commutation algorithm is significantly less complex and has higher reliability as compared to a conventional direct matrix converter.
DC link converters: DC link converters, also referred to as AC/DC/AC converters, convert an AC input to an AC output with the use of a DC link in the middle. This means that the power is first converted from AC to DC with a rectifier, and then converted back from DC to AC with an inverter. The end result is an output with a lower voltage and variable (higher or lower) frequency. Due to their wide area of application, AC/DC/AC converters are the most common contemporary solution. Other advantages of AC/DC/AC converters are that they are stable in overload and no-load conditions, and that they can be disengaged from a load without damage.
Hybrid matrix converter: Hybrid matrix converters are relatively new for AC/AC converters. These converters combine the AC/DC/AC design with the matrix converter design. Multiple types of hybrid converters have been developed in this new category, an example being a converter that uses uni-directional switches and two converter stages without the dc-link; without the capacitors or inductors needed for a dc-link, the weight and size of the converter are reduced. Two sub-categories exist from the hybrid converters, named the hybrid direct matrix converter (HDMC) and the hybrid indirect matrix converter (HIMC). The HDMC converts the voltage and current in one stage, while the HIMC utilizes separate stages, like the AC/DC/AC converter, but without the use of an intermediate storage element.
Applications: Below is a list of common applications that each converter is used in.
AC voltage controller: Lighting control; domestic and industrial heating; speed control of fan, pump or hoist drives, soft starting of induction motors, static AC switches (temperature control, transformer tap changing, etc.)
Cycloconverter: High-power low-speed reversible AC motor drives; constant frequency power supply with variable input frequency; controllable VAR generators for power factor correction; AC system interties linking two independent power systems.
Matrix converter: Currently the application of matrix converters is limited due to the non-availability of bilateral monolithic switches capable of operating at high frequency, complex control law implementation, commutation problems, and other reasons. With further developments, matrix converters could replace cycloconverters in many areas.
DC link: Can be used for individual or multiple load applications of machine building and construction.
Simulations of power electronic systems
Power electronic circuits are simulated using computer simulation programs such as SIMBA, PLECS, PSIM, SPICE, MATLAB/Simulink, and OpenModelica. Circuits are simulated before they are produced to test how they respond under certain conditions. Creating a simulation is also both cheaper and faster than creating a prototype for testing.
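The dedicated simulators listed above are the practical choice, but the idea of time-domain switching simulation can be illustrated with a toy fixed-step model of an ideal synchronous buck converter, as in the Python sketch below; every value is an assumption chosen only to make the example run.

Vin, L, C, R = 12.0, 47e-6, 100e-6, 5.0   # input voltage, inductor, capacitor, load
fsw, duty = 100e3, 0.5                    # switching frequency and duty cycle
dt = 1.0 / fsw / 200                      # 200 time steps per switching period
steps = int(5e-3 / dt)                    # simulate 5 ms

i_L, v_out = 0.0, 0.0
for n in range(steps):
    t = n * dt
    v_sw = Vin if (t * fsw) % 1.0 < duty else 0.0   # ideal switch-node voltage
    di = (v_sw - v_out) / L                          # inductor current slope
    dv = (i_L - v_out / R) / C                       # capacitor voltage slope
    i_L += di * dt
    v_out += dv * dt

print(f"output voltage after 5 ms: {v_out:.2f} V (ideal D*Vin = {duty * Vin:.2f} V)")

After the start-up transient dies out the output settles near duty × Vin, as the averaged analysis predicts.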
Applications
Applications of power electronics range in size from a switched mode power supply in an AC adapter, battery chargers, audio amplifiers, fluorescent lamp ballasts, through variable frequency drives and DC motor drives used to operate pumps, fans, and manufacturing machinery, up to gigawatt-scale high voltage direct current power transmission systems used to interconnect electrical grids. Power electronic systems are found in virtually every electronic device. For example:
DC/DC converters are used in most mobile devices (mobile phones, PDA etc.) to maintain the voltage at a fixed value whatever the voltage level of the battery is. These converters are also used for electronic isolation and power factor correction. A power optimizer is a type of DC/DC converter developed to maximize the energy harvest from solar photovoltaic or wind turbine systems.
AC/DC converters (rectifiers) are used every time an electronic device is connected to the mains (computer, television etc.). These may simply change AC to DC or can also change the voltage level as part of their operation.
AC/AC converters are used to change either the voltage level or the frequency (international power adapters, light dimmer). In power distribution networks, AC/AC converters may be used to exchange power between utility frequency 50 Hz and 60 Hz power grids.
DC/AC converters (inverters) are used primarily in UPS or renewable energy systems or emergency lighting systems. Mains power charges the DC battery. If the mains fails, an inverter produces AC electricity at mains voltage from the DC battery. Solar inverter, both smaller string and larger central inverters, as well as solar micro-inverter are used in photovoltaics as a component of a PV system.
Motor drives are found in pumps, blowers, and mill drives for textile, paper, cement and other such facilities. Drives may be used for power conversion and for motion control. For AC motors, applications include variable-frequency drives, motor soft starters and excitation systems.
In hybrid electric vehicles (HEVs), power electronics are used in two formats: series hybrid and parallel hybrid. The difference between a series hybrid and a parallel hybrid is the relationship of the electric motor to the internal combustion engine (ICE). Devices used in electric vehicles consist mostly of dc/dc converters for battery charging and dc/ac converters to power the propulsion motor. Electric trains use power electronic devices to obtain power, as well as for vector control using pulse-width modulation (PWM) rectifiers. The trains obtain their power from power lines. Another new usage for power electronics is in elevator systems. These systems may use thyristors, inverters, permanent magnet motors, or various hybrid systems that incorporate PWM systems and standard motors.
Inverters
In general, inverters are utilized in applications requiring direct conversion of electrical energy from DC to AC or indirect conversion from AC to AC. DC to AC conversion is useful for many fields, including power conditioning, harmonic compensation, motor drives, renewable energy grid integration, and spacecraft solar power systems.
In power systems it is often desired to eliminate harmonic content found in line currents. VSIs can be used as active power filters to provide this compensation. Based on measured line currents and voltages, a control system determines reference current signals for each phase. This is fed back through an outer loop and subtracted from actual current signals to create current signals for an inner loop to the inverter. These signals then cause the inverter to generate output currents that compensate for the harmonic content. This configuration requires no real power consumption, as it is fully fed by the line; the DC link is simply a capacitor that is kept at a constant voltage by the control system. In this configuration, output currents are in phase with line voltages to produce a unity power factor. Conversely, VAR compensation is possible in a similar configuration where output currents lead line voltages to improve the overall power factor.
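The compensation idea can be sketched numerically: measure a distorted load current, extract its fundamental (assumed here to be in phase with the line voltage), and command the inverter to inject the remainder so that the source supplies only the fundamental. The waveform content below is an invented example, not data from the article.

import numpy as np

f1 = 50.0
t = np.linspace(0.0, 1.0 / f1, 10000, endpoint=False)
w = 2 * np.pi * f1

# distorted load current: fundamental plus 5th and 7th harmonics
i_load = 10 * np.sin(w * t) + 2.0 * np.sin(5 * w * t) + 1.0 * np.sin(7 * w * t)

# fundamental component by Fourier projection (assumed in phase with the line)
a1 = 2 * np.mean(i_load * np.sin(w * t))
i_fund = a1 * np.sin(w * t)

i_comp = i_load - i_fund          # reference current for the inverter to inject
i_source = i_load - i_comp        # what the grid ends up supplying

residual = np.max(np.abs(i_source - i_fund))
print(f"fundamental amplitude: {a1:.2f} A, residual after compensation: {residual:.1e} A")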
In facilities that require energy at all times, such as hospitals and airports, UPS systems are utilized. In a standby system, an inverter is brought online when the normally supplying grid is interrupted. Power is instantaneously drawn from onsite batteries and converted into usable AC voltage by the VSI, until grid power is restored, or until backup generators are brought online. In an online UPS system, a rectifier-DC-link-inverter is used to protect the load from transients and harmonic content. A battery in parallel with the DC-link is kept fully charged by the output in case the grid power is interrupted, while the output of the inverter is fed through a low pass filter to the load. High power quality and independence from disturbances is achieved.
Various AC motor drives have been developed for speed, torque, and position control of AC motors. These drives can be categorized as low-performance or as high-performance, based on whether they are scalar-controlled or vector-controlled, respectively. In scalar-controlled drives, fundamental stator current, or voltage frequency and amplitude, are the only controllable quantities. Therefore, these drives are employed in applications where high quality control is not required, such as fans and compressors. On the other hand, vector-controlled drives allow for instantaneous current and voltage values to be controlled continuously. This high performance is necessary for applications such as elevators and electric cars.
Inverters are also vital to many renewable energy applications. In photovoltaic purposes, the inverter, which is usually a PWM VSI, gets fed by the DC electrical energy output of a photovoltaic module or array. The inverter then converts this into an AC voltage to be interfaced with either a load or the utility grid. Inverters may also be employed in other renewable systems, such as wind turbines. In these applications, the turbine speed usually varies, causing changes in voltage frequency and sometimes in the magnitude. In this case, the generated voltage can be rectified and then inverted to stabilize frequency and magnitude.
Smart grid
A smart grid is a modernized electrical grid that uses information and communications technology to gather and act on information, such as information about the behaviors of suppliers and consumers, in an automated fashion to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity.
Electric power generated by wind turbines and hydroelectric turbines using induction generators can cause variances in the frequency at which power is generated. Power electronic devices are used in these systems to convert the generated AC voltages into high-voltage direct current (HVDC). The HVDC power can then be more easily converted into three-phase power that is coherent with the power of the existing grid. Through these devices, the power delivered by these systems is cleaner and has a higher associated power factor. In wind power systems, the optimum torque is obtained either through a gearbox or direct-drive technologies, which can reduce the size of the power electronics device.
Electric power can be generated through photovoltaic cells by using power electronic devices. The produced power is usually then transformed by solar inverters. Inverters are divided into three different types: central, module-integrated, and string. Central converters can be connected either in parallel or in series on the DC side of the system. For photovoltaic "farms", a single central converter is used for the entire system. Module-integrated converters are connected in series on either the DC or AC side. Normally several modules are used within a photovoltaic system, since the system requires these converters on both DC and AC terminals. A string converter is used in a system that utilizes photovoltaic cells facing different directions. It is used to convert the power generated by each string, or line, in which the photovoltaic cells are interacting.
Power electronics can be used to help utilities adapt to the rapid increase in distributed residential/commercial solar power generation. Germany and parts of Hawaii, California, and New Jersey require costly studies to be conducted before approving new solar installations. Relatively small-scale ground- or pole-mounted devices create the potential for a distributed control infrastructure to monitor and manage the flow of power. Traditional electromechanical systems, such as capacitor banks or voltage regulators at substations, can take minutes to adjust voltage and can be distant from the solar installations where the problems originate. If voltage on a neighborhood circuit goes too high, it can endanger utility crews and cause damage to both utility and customer equipment. Further, a grid fault causes photovoltaic generators to shut down immediately, spiking the demand for grid power. Smart grid-based regulators are more controllable than far more numerous consumer devices.
In another approach, a group of 16 western utilities called the Western Electric Industry Leaders called for the mandatory use of "smart inverters." These devices convert DC to household AC and can also help with power quality. Such devices could eliminate the need for expensive utility equipment upgrades at a much lower total cost.
See also
Multi-port power electronic interface
FET amplifier
Power management integrated circuit
RF power amplifier
Notes
References
External links
Electronics industry | Power electronics | [
"Technology",
"Engineering"
] | 9,088 | [
"Information and communications technology",
"Electronic engineering",
"Power electronics",
"Electronics industry"
] |
1,001,846 | https://en.wikipedia.org/wiki/Trehalose | Trehalose (from Turkish tıgala – a sugar derived from insect cocoons + -ose) is a sugar consisting of two molecules of glucose. It is also known as mycose or tremalose. Some bacteria, fungi, plants and invertebrate animals synthesize it as a source of energy, and to survive freezing and lack of water.
Extracting trehalose was once a difficult and costly process, but around 2000, the Hayashibara company (Okayama, Japan) discovered an inexpensive extraction technology from starch. Trehalose has high water retention capabilities, and is used in food, cosmetics and as a drug. A procedure developed in 2017 using trehalose allows sperm storage at room temperatures.
Structure
Trehalose is a disaccharide formed by a bond between two α-glucose units. It is found in nature as a disaccharide and also as a monomer in some polymers. Two other stereoisomers exist: α,β-trehalose, also called neotrehalose, and β,β-trehalose, also called isotrehalose. Neither of these alternate isomers has been isolated from living organisms, but isotrehalose has been found in starch hydrolysates.
Synthesis
At least three biological pathways support trehalose biosynthesis. An industrial process can derive trehalose from corn starch.
Properties
Chemical
Trehalose is a nonreducing sugar formed from two glucose units joined by a 1–1 alpha bond, giving it the systematic name α-D-glucopyranosyl-(1→1)-α-D-glucopyranoside. The bonding makes trehalose very resistant to acid hydrolysis, and it is therefore stable in solution at high temperatures, even under acidic conditions. The bonding keeps nonreducing sugars in closed-ring form, such that the aldehyde or ketone end groups do not bind to the lysine or arginine residues of proteins (a process called glycation). Trehalose is less soluble than sucrose, except at high temperatures (>80 °C). Trehalose forms a rhomboid crystal as the dihydrate, and has 90% of the calorific content of sucrose in that form. Anhydrous forms of trehalose readily regain moisture to form the dihydrate. Anhydrous forms of trehalose can show interesting physical properties when heat-treated.
Trehalose aqueous solutions show a concentration-dependent clustering tendency. Owing to their ability to form hydrogen bonds, they self-associate in water to form clusters of various sizes. All-atom molecular dynamics simulations showed that concentrations of 1.5–2.2 molar allow trehalose molecular clusters to percolate and form large and continuous aggregates.
Trehalose directly interacts with nucleic acids, facilitates melting of double stranded DNA and stabilizes single-stranded nucleic acids.
Biological
Organisms ranging from bacteria, yeast, fungi, insects, invertebrates, and lower and higher plants have enzymes that can make trehalose.
In nature, trehalose can be found in plants, and microorganisms. In animals, trehalose is prevalent in shrimp, and also in insects, including grasshoppers, locusts, butterflies, and bees, in which trehalose serves as blood-sugar. Trehalase genes are found in tardigrades, the microscopic ecdysozoans found worldwide in diverse extreme environments.
Trehalose is the major carbohydrate energy storage molecule used by insects for flight. One possible reason for this is that the glycosidic linkage of trehalose, when acted upon by an insect trehalase, releases two molecules of glucose, which is required for the rapid energy requirements of flight. This is double the efficiency of glucose release from the storage polymer starch, for which cleavage of one glycosidic linkage releases only one glucose molecule.
The concentrations of both trehalose and glucose in the insect hemolymph are tightly controlled by multiple enzymes and hormones, including trehalase, insulin-like peptides (ILPs and DILPs), adipokinetic hormone (AKH), leucokinin (LK), octopamine and other mediators, thereby maintaining carbohydrate homeostasis by endocrine and metabolic feedback mechanisms.
In plants, trehalose is seen in sunflower seeds, moonwort, Selaginella plants, and sea algae. Within the fungi, it is prevalent in some mushrooms, such as shiitake (Lentinula edodes), oyster, king oyster, and golden needle.
Even within the plant kingdom, Selaginella (sometimes called the resurrection plant), which grows in desert and mountainous areas, may be cracked and dried out, but will turn green again and revive after rain because of the function of trehalose.
The two prevalent theories as to how trehalose works within the organism in the state of cryptobiosis are the vitrification theory, a state that prevents ice formation, or the water displacement theory, whereby water is replaced by trehalose.
In the bacterial cell wall, trehalose has a structural role in adaptive responses to stress such as osmotic differences and extreme temperature. Yeast uses trehalose as a carbon source in response to abiotic stresses. In humans, the only known function of trehalose is as a neuroprotective agent, which it accomplishes by inducing autophagy and thereby clearing protein aggregates.
Trehalose has also been reported for anti-bacterial, anti-biofilm, and anti-inflammatory (in vitro and in vivo) activities, upon its esterification with fatty acids of varying chain lengths.
Nutritional and dietary properties
Trehalose is rapidly broken down into glucose by the enzyme trehalase, which is present in the brush border of the intestinal mucosa of omnivores (including humans) and herbivores. It causes less of a spike in blood sugar than glucose. Trehalose has about 45% the sweetness of sucrose at concentrations above 22%, but when the concentration is reduced, its sweetness decreases more quickly than that of sucrose, so that a 2.3% solution tastes 6.5 times less sweet than the equivalent sugar solution.
It is commonly used in prepared frozen foods, like ice cream, because it lowers the freezing point of foods.
Deficiency of trehalase enzyme is unusual in humans, except in the Greenlandic Inuit, where it is present in only 10–15% of the population.
Metabolism
Five biosynthesis pathways have been reported for trehalose. The most common is the TPS/TPP pathway, which is used by organisms that synthesize trehalose using the enzyme trehalose-6-phosphate (T6P) synthase (TPS). Second, trehalose synthase (TS) in certain types of bacteria can produce trehalose by using maltose and another disaccharide with two glucose units as substrates. Third, the TreY-TreZ pathway in some bacteria converts starch that contains maltooligosaccharide or glycogen directly into trehalose. Fourth, in primitive bacteria, trehalose glycosyltransferring synthase (TreT) produces trehalose from ADP-glucose and glucose. Fifth, trehalose phosphorylase (TreP) either hydrolyses trehalose into glucose-1-phosphate and glucose or may act reversibly in certain species. Vertebrates do not have the ability to synthesize or store trehalose. Trehalase in humans is found only in specific locations such as the intestinal mucosa, renal brush border, liver and blood. Expression of this enzyme in vertebrates is initially found during the gestation period and is highest after weaning. Thereafter, the level of trehalase remains constant in the intestine throughout life. Meanwhile, diets consisting of plants and fungi contain trehalose. A moderate amount of trehalose in the diet is essential, and having a low amount of trehalose could result in diarrhea or other intestinal symptoms.
Medical use
Trehalose is an ingredient, along with hyaluronic acid, in an artificial tears product used to treat dry eye. Outbreaks of Clostridioides difficile were initially associated with trehalose, but this finding was disputed in 2019.
In 2021, the FDA accepted an Investigational New Drug (IND) application and granted fast track status for an injectable form of trehalose (SLS-005) as a potential treatment for spinocerebellar ataxia type 3 (SCA3).
History
In 1832, H.A.L. Wiggers discovered trehalose in an ergot of rye, and in 1859 Marcellin Berthelot isolated it from Trehala manna, a substance made by weevils, and named it trehalose.
Trehalose has long been known as an autophagy inducer that acts independently of mTOR. In 2017, research was published showing that trehalose induces autophagy by activating TFEB, a protein that acts as a master regulator of the autophagy-lysosome pathway.
See also
Biostasis
Cryoprotectant
Cryptobiosis
Freeze drying
Lentztrehalose
Trehalosamine
References
External links
Trehalose in sperm preservation
Carbohydrates
Disaccharides
Types of sugar
Orphan drugs | Trehalose | [
"Chemistry"
] | 1,979 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Carbohydrates",
"Carbohydrate chemistry"
] |
1,002,128 | https://en.wikipedia.org/wiki/Giant%20magnetoresistance | Giant magnetoresistance (GMR) is a quantum mechanical magnetoresistance effect observed in multilayers composed of alternating ferromagnetic and non-magnetic conductive layers. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of GMR, which also sets the foundation for the study of spintronics.
The effect is observed as a significant change in the electrical resistance depending on whether the magnetizations of adjacent ferromagnetic layers are in a parallel or an antiparallel alignment. The overall resistance is relatively low for parallel alignment and relatively high for antiparallel alignment. The magnetization direction can be controlled, for example, by applying an external magnetic field. The effect is based on the dependence of electron scattering on spin orientation.
The main application of GMR is in magnetic field sensors, which are used to read data in hard disk drives, biosensors, microelectromechanical systems (MEMS) and other devices. GMR multilayer structures are also used in magnetoresistive random-access memory (MRAM) as cells that store one bit of information.
In literature, the term giant magnetoresistance is sometimes confused with colossal magnetoresistance of ferromagnetic and antiferromagnetic semiconductors, which is not related to a multilayer structure.
Formulation
Magnetoresistance is the dependence of the electrical resistance of a sample on the strength of an external magnetic field. Numerically, it is characterized by the value δH = [R(H) − R(0)] / R(0),
where R(H) is the resistance of the sample in a magnetic field H, and R(0) corresponds to H = 0. Alternative forms of this expression may use electrical resistivity instead of resistance, a different sign for δH, and are sometimes normalized by R(H) rather than R(0).
The term "giant magnetoresistance" indicates that the value δH for multilayer structures significantly exceeds the anisotropic magnetoresistance, which has a typical value within a few percent.
History
GMR was discovered in 1988 independently by the groups of Albert Fert of the University of Paris-Sud, France, and Peter Grünberg of Forschungszentrum Jülich, Germany. The practical significance of this experimental discovery was recognized by the Nobel Prize in Physics awarded to Fert and Grünberg in 2007.
Early steps
The first mathematical model describing the effect of magnetization on the mobility of charge carriers in solids, related to the spin of those carriers, was reported in 1936. Experimental evidence of the potential enhancement of δH has been known since the 1960s. By the late 1980s, the anisotropic magnetoresistance had been well explored, but the corresponding value of δH did not exceed a few percent. The enhancement of δH became possible with the advent of sample preparation techniques such as molecular beam epitaxy, which allows manufacturing multilayer thin films with a thickness of several nanometers.
Experiment and its interpretation
Fert and Grünberg studied electrical resistance of structures incorporating ferromagnetic and non-ferromagnetic materials. In particular, Fert worked on multilayer films, and Grünberg in 1986 discovered the antiferromagnetic exchange interaction in Fe/Cr films.
The GMR discovery work was carried out by the two groups on slightly different samples. The Fert group used (001)Fe/(001) Cr superlattices wherein the Fe and Cr layers were deposited in a high vacuum on a (001) GaAs substrate kept at 20 °C and the magnetoresistance measurements were taken at low temperature (typically 4.2 K). The Grünberg work was performed on multilayers of Fe and Cr on (110) GaAs at room temperature.
In Fe/Cr multilayers with 3-nm-thick iron layers, increasing the thickness of the non-magnetic Cr layers from 0.9 to 3 nm weakened the antiferromagnetic coupling between the Fe layers and reduced the demagnetization field, which also decreased when the sample was heated from 4.2 K to room temperature. Changing the thickness of the non-magnetic layers led to a significant reduction of the residual magnetization in the hysteresis loop. Electrical resistance changed by up to 50% with the external magnetic field at 4.2 K. Fert named the new effect giant magnetoresistance, to highlight its difference with the anisotropic magnetoresistance. The Grünberg experiment made the same discovery but the effect was less pronounced (3% compared to 50%) due to the samples being at room temperature rather than low temperature.
The discoverers suggested that the effect is based on spin-dependent scattering of electrons in the superlattice, particularly on the dependence of resistance of the layers on the relative orientations of magnetization and electron spins. The theory of GMR for different directions of the current was developed in the next few years. In 1989, Camley and Barnaś calculated the "current in plane" (CIP) geometry, where the current flows along the layers, in the classical approximation, whereas Levy et al. used the quantum formalism. The theory of the GMR for the current perpendicular to the layers (current perpendicular to the plane or CPP geometry), known as the Valet-Fert theory, was reported in 1993. Applications favor the CPP geometry because it provides a greater magnetoresistance ratio (δH), thus resulting in a greater device sensitivity.
Theory
Fundamentals
Spin-dependent scattering
In magnetically ordered materials, the electrical resistance is crucially affected by scattering of electrons on the magnetic sublattice of the crystal, which is formed by crystallographically equivalent atoms with nonzero magnetic moments. Scattering depends on the relative orientations of the electron spins and those magnetic moments: it is weakest when they are parallel and strongest when they are antiparallel; it is relatively strong in the paramagnetic state, in which the magnetic moments of the atoms have random orientations.
For good conductors such as gold or copper, the Fermi level lies within the sp band, and the d band is completely filled. In ferromagnets, the dependence of electron-atom scattering on the orientation of their magnetic moments is related to the filling of the band responsible for the magnetic properties of the metal, e.g., 3d band for iron, nickel or cobalt. The d band of ferromagnets is split, as it contains a different number of electrons with spins directed up and down. Therefore, the density of electronic states at the Fermi level is also different for spins pointing in opposite directions. The Fermi level for majority-spin electrons is located within the sp band, and their transport is similar in ferromagnets and non-magnetic metals. For minority-spin electrons the sp and d bands are hybridized, and the Fermi level lies within the d band. The hybridized spd band has a high density of states, which results in stronger scattering and thus shorter mean free path λ for minority-spin than majority-spin electrons. In cobalt-doped nickel, the ratio λ↑/λ↓ can reach 20.
According to the Drude theory, the conductivity is proportional to λ, which ranges from several to several tens of nanometers in thin metal films. Electrons "remember" the direction of spin within the so-called spin relaxation length (or spin diffusion length), which can significantly exceed the mean free path. Spin-dependent transport refers to the dependence of electrical conductivity on the spin direction of the charge carriers. In ferromagnets, it occurs due to electron transitions between the unsplit 4s and split 3d bands.
In some materials, the interaction between electrons and atoms is the weakest when their magnetic moments are antiparallel rather than parallel. A combination of both types of materials can result in a so-called inverse GMR effect.
CIP and CPP geometries
Electric current can be passed through magnetic superlattices in two ways. In the current in plane (CIP) geometry, the current flows along the layers, and the electrodes are located on one side of the structure. In the current perpendicular to plane (CPP) configuration, the current is passed perpendicular to the layers, and the electrodes are located on different sides of the superlattice. The CPP geometry yields more than twice the GMR, but is more difficult to realize in practice than the CIP configuration.
Carrier transport through a magnetic superlattice
Magnetic ordering differs in superlattices with ferromagnetic and antiferromagnetic interaction between the layers. In the former case, the magnetization directions are the same in the different ferromagnetic layers in the absence of an applied magnetic field, whereas in the latter case, opposite directions alternate in the multilayer. Electrons traveling through the ferromagnetic superlattice interact with it much more weakly when their spin directions are opposite to the magnetization of the lattice than when they are parallel to it. No such anisotropy is observed for the antiferromagnetic superlattice; as a result, it scatters electrons more strongly than the ferromagnetic superlattice and exhibits a higher electrical resistance.
Applications of the GMR effect require dynamic switching between the parallel and antiparallel magnetization of the layers in a superlattice. To a first approximation, the energy density of the interaction between two ferromagnetic layers separated by a non-magnetic layer is proportional to the scalar product of their magnetizations:
The coefficient J is an oscillatory function of the thickness of the non-magnetic layer ds; therefore J can change its magnitude and sign. If the ds value corresponds to the antiparallel state, an external field can switch the superlattice from the antiparallel state (high resistance) to the parallel state (low resistance). The total resistance of the structure can be written as
where R0 is the resistance of the ferromagnetic superlattice, ΔR is the GMR increment, and θ is the angle between the magnetizations of adjacent layers.
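A commonly used parametrization consistent with these definitions is R(θ) = R0 + ΔR·sin²(θ/2), interpolating between R0 at parallel alignment (θ = 0) and R0 + ΔR at antiparallel alignment (θ = π). The short Python sketch below assumes that form; both the functional form and the numbers are illustrative assumptions rather than values from the text.
import numpy as np

def superlattice_resistance(theta, r0=10.0, delta_r=2.0):
    # Assumed angular dependence: R(theta) = R0 + dR * sin^2(theta / 2).
    return r0 + delta_r * np.sin(theta / 2.0) ** 2

print(superlattice_resistance(0.0))    # parallel magnetizations: low resistance (10.0)
print(superlattice_resistance(np.pi))  # antiparallel magnetizations: high resistance (12.0)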
Mathematical description
The GMR phenomenon can be described using two spin-related conductivity channels corresponding to the conduction of electrons, for which the resistance is minimum or maximum. The relation between them is often defined in terms of the coefficient of the spin anisotropy β. This coefficient can be defined using the minimum and maximum of the specific electrical resistivity ρF± for the spin-polarized current in the form
where ρF is the average resistivity of the ferromagnet.
Resistor model for CIP and CPP structures
If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers.
In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels. In this case, the GMR can be expressed as
Here the subscripts of R denote collinear and oppositely oriented magnetizations in the layers, χ = b/a is the thickness ratio of the magnetic and non-magnetic layers, and ρN is the resistivity of the non-magnetic metal. This expression is applicable to both CIP and CPP structures. Under the appropriate condition this relationship can be simplified using the coefficient of the spin asymmetry
Such a device, with resistance depending on the orientation of electron spin, is called a spin valve. It is "open", if the magnetizations of its layers are parallel, and "closed" otherwise.
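To make the two-channel (resistor-network) picture concrete, here is a minimal Python sketch for the CPP geometry: each spin channel sees a series of layer resistances, the two channels conduct in parallel, and the GMR ratio follows from the parallel and antiparallel configurations. The resistivities and thicknesses are made-up illustrative numbers, and the model ignores interface scattering, as assumed above.
def channel_resistance(spin, mags, rho_low, rho_high, rho_n, d_f, d_n):
    # Series resistance seen by one spin channel (spin = +1 or -1) in the CPP geometry.
    r = 0.0
    for m in mags:                     # one ferromagnetic layer per entry
        r += (rho_low if spin == m else rho_high) * d_f
        r += rho_n * d_n               # non-magnetic spacer after each magnetic layer
    return r

def total_resistance(mags, **kw):
    # The two spin channels conduct in parallel.
    r_up = channel_resistance(+1, mags, **kw)
    r_down = channel_resistance(-1, mags, **kw)
    return r_up * r_down / (r_up + r_down)

params = dict(rho_low=1.0, rho_high=4.0, rho_n=1.0, d_f=1.0, d_n=1.0)  # illustrative units
r_p = total_resistance([+1, +1], **params)   # parallel magnetizations
r_ap = total_resistance([+1, -1], **params)  # antiparallel magnetizations
print((r_ap - r_p) / r_p)                    # GMR ratio (about 0.23 here)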
Valet-Fert model
In 1993, Thierry Valet and Albert Fert presented a model for the giant magnetoresistance in the CPP geometry, based on the Boltzmann equations. In this model the chemical potential inside the magnetic layer is split into two functions, corresponding to electrons with spins parallel and antiparallel to the magnetization of the layer. If the non-magnetic layer is sufficiently thin then, in the external field E0, the corrections to the electrochemical potential and the field inside the sample take the form
where ℓs is the average length of spin relaxation, and the z coordinate is measured from the boundary between the magnetic and non-magnetic layers (z < 0 corresponds to the ferromagnetic). Thus electrons with a larger chemical potential will accumulate at the boundary of the ferromagnet. This can be represented by the potential of spin accumulation VAS or by the so-called interface resistance (inherent to the boundary between a ferromagnet and non-magnetic material)
where j is the current density in the sample, and ℓsN and ℓsF are the spin relaxation lengths in the non-magnetic and magnetic materials, respectively.
Device preparation
Materials and experimental data
Many combinations of materials exhibit GMR; the most common are the following:
Fe/Cr
Co10Cu90: δH = 40% at room temperature
[110]Co95Fe5/Cu: δH = 110% at room temperature.
The magnetoresistance depends on many parameters such as the geometry of the device (CIP or CPP), its temperature, and the thicknesses of ferromagnetic and non-magnetic layers. At a temperature of 4.2 K and a thickness of cobalt layers of 1.5 nm, increasing the thickness of copper layers dCu from 1 to 10 nm decreased δH from 80 to 10% in the CIP geometry. Meanwhile, in the CPP geometry the maximum of δH (125%) was observed for dCu = 2.5 nm, and increasing dCu to 10 nm reduced δH to 60% in an oscillating manner.
When a Co(1.2 nm)/Cu(1.1 nm) superlattice was heated from near zero to 300 K, its δH decreased from 40 to 20% in the CIP geometry, and from 100 to 55% in the CPP geometry.
The non-magnetic layers can be non-metallic. For example, δH up to 40% was demonstrated for organic layers at 11 K. Graphene spin valves of various designs exhibited δH of about 12% at 7 K and 10% at 300 K, far below the theoretical limit of 10⁹%.
The GMR effect can be enhanced by spin filters that select electrons with a certain spin orientation; they are made of metals such as cobalt. For a filter of thickness t the change in conductivity ΔG can be expressed as
where ΔGSV is change in the conductivity of the spin valve without the filter, ΔGf is the maximum increase in conductivity with the filter, and β is a parameter of the filter material.
Types of GMR
GMR is often classed by the type of devices which exhibit the effect.
Films
Antiferromagnetic superlattices
GMR in films was first observed by Fert and Grünberg in a study of superlattices composed of ferromagnetic and non-magnetic layers. The thickness of the non-magnetic layers was chosen such that the interaction between the layers was antiferromagnetic and the magnetization in adjacent magnetic layers was antiparallel. Then an external magnetic field could make the magnetization vectors parallel thereby affecting the electrical resistance of the structure.
Magnetic layers in such structures interact through antiferromagnetic coupling, which results in the oscillating dependence of the GMR on the thickness of the non-magnetic layer. In the first magnetic field sensors using antiferromagnetic superlattices, the saturation field was very large, up to tens of thousands of oersteds, due to the strong antiferromagnetic interaction between their layers (made of chromium, iron or cobalt) and the strong anisotropy fields in them. Therefore, the sensitivity of the devices was very low. The use of permalloy for the magnetic and silver for the non-magnetic layers lowered the saturation field to tens of oersteds.
Spin valves using exchange bias
In the most successful spin valves the GMR effect originates from exchange bias. They comprise a sensitive layer, a "fixed" layer and an antiferromagnetic layer. The last layer freezes the magnetization direction in the "fixed" layer. The sensitive and antiferromagnetic layers are made thin to reduce the resistance of the structure. The valve reacts to the external magnetic field by changing the magnetization direction in the sensitive layer relative to the "fixed" layer.
The main difference of these spin valves from other multilayer GMR devices is the monotonic dependence of the amplitude of the effect on the thickness dN of the non-magnetic layers:
where δH0 is a normalization constant, λN is the mean free path of electrons in the non-magnetic material, and d0 is an effective thickness that accounts for the interaction between layers. The dependence on the thickness of the ferromagnetic layer can be given as:
The parameters have the same meaning as in the previous equation, but they now refer to the ferromagnetic layer.
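As an illustration only, the sketch below assumes simple closed forms with the parameters just named: an exponential decay divided by (1 + dN/d0) for the dependence on the non-magnetic thickness, and a saturating exponential divided by the same factor for the ferromagnetic thickness. These functional forms and numbers are assumptions for illustration, not expressions taken from the text.
import numpy as np

def gmr_vs_spacer_thickness(d_n, delta_h0=10.0, lambda_n=5.0, d0=2.0):
    # Assumed form: monotonic decrease with non-magnetic thickness d_n (nm).
    return delta_h0 * np.exp(-d_n / lambda_n) / (1.0 + d_n / d0)

def gmr_vs_ferromagnet_thickness(d_f, delta_h0=10.0, lambda_f=5.0, d0=2.0):
    # Assumed form: rises at small d_f, then falls as 1 / (1 + d_f / d0) dominates.
    return delta_h0 * (1.0 - np.exp(-d_f / lambda_f)) / (1.0 + d_f / d0)

for d in (1.0, 3.0, 10.0):
    print(d, gmr_vs_spacer_thickness(d), gmr_vs_ferromagnet_thickness(d))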
Non-interacting multilayers (pseudospin valves)
GMR can also be observed in the absence of antiferromagnetic coupling layers. In this case, the magnetoresistance results from the differences in the coercive forces (for example, it is smaller for permalloy than cobalt). In multilayers such as permalloy/Cu/Co/Cu the external magnetic field switches the direction of saturation magnetization to parallel in strong fields and to antiparallel in weak fields. Such systems exhibit a lower saturation field and a larger δH than superlattices with antiferromagnetic coupling. A similar effect is observed in Co/Cu structures. The existence of these structures means that GMR does not require interlayer coupling, and can originate from a distribution of the magnetic moments that can be controlled by an external field.
Inverse GMR effect
In the inverse GMR, the resistance is minimum for the antiparallel orientation of the magnetization in the layers. Inverse GMR is observed when the magnetic layers are composed of different materials, such as NiCr/Cu/Co/Cu. The resistivity for electrons with opposite spins takes different values within each magnetic layer, i.e. it is characterized by different coefficients β for spin-up and spin-down electrons. If the NiCr layer is not too thin, its contribution may exceed that of the Co layer, resulting in inverse GMR. Note that the GMR inversion depends on the sign of the product of the coefficients β in adjacent ferromagnetic layers, but not on the signs of the individual coefficients.
Inverse GMR is also observed if NiCr alloy is replaced by vanadium-doped nickel, but not for doping of nickel with iron, cobalt, manganese, gold or copper.
GMR in granular structures
GMR in granular alloys of ferromagnetic and non-magnetic metals was discovered in 1992 and subsequently explained by the spin-dependent scattering of charge carriers at the surface and in the bulk of the grains. The grains form ferromagnetic clusters about 10 nm in diameter embedded in a non-magnetic metal, forming a kind of superlattice. A necessary condition for the GMR effect in such structures is poor mutual solubility of its components (e.g., cobalt and copper). Their properties strongly depend on the measurement and annealing temperatures. They can also exhibit inverse GMR.
Applications
Spin-valve sensors
General principle
One of the main applications of GMR materials is in magnetic field sensors, e.g., in hard disk drives and biosensors, as well as detectors of oscillations in MEMS. A typical GMR-based sensor consists of seven layers:
Silicon substrate,
Binder layer,
Sensing (non-fixed) layer,
Non-magnetic layer,
Fixed layer,
Antiferromagnetic (pinning) layer,
Protective layer.
The binder and protective layers are often made of tantalum, and a typical non-magnetic material is copper. In the sensing layer, magnetization can be reoriented by the external magnetic field; it is typically made of NiFe or cobalt alloys. FeMn or NiMn can be used for the antiferromagnetic layer. The fixed layer is made of a magnetic material such as cobalt. Such a sensor has an asymmetric hysteresis loop owing to the presence of the magnetically hard, fixed layer.
Spin valves may exhibit anisotropic magnetoresistance, which leads to an asymmetry in the sensitivity curve.
Hard disk drives
In hard disk drives (HDDs), information is encoded using magnetic domains, and a change in the direction of their magnetization is associated with the logical level 1 while no change represents a logical 0. There are two recording methods: longitudinal and perpendicular.
In the longitudinal method, the magnetization lies in the plane of the surface. A transition region (domain wall) is formed between domains, in which the magnetic field exits the material. If the domain wall is located at the interface of two north-pole domains then the field is directed outward, and for two south-pole domains it is directed inward. To read the direction of the magnetic field above the domain wall, the magnetization direction is fixed normal to the surface in the antiferromagnetic layer and parallel to the surface in the sensing layer. Changing the direction of the external magnetic field deflects the magnetization in the sensing layer. When the field tends to align the magnetizations in the sensing and fixed layers, the electrical resistance of the sensor decreases, and vice versa.
Magnetic RAM
A cell of magnetoresistive random-access memory (MRAM) has a structure similar to the spin-valve sensor. The value of the stored bits can be encoded via the magnetization direction in the sensor layer; it is read by measuring the resistance of the structure. The advantages of this technology are independence of power supply (the information is preserved when the power is switched off owing to the potential barrier for reorienting the magnetization), low power consumption and high speed.
In a typical GMR-based storage unit, a CIP structure is located between two wires oriented perpendicular to each other. These conductors are called the row and column lines. Pulses of electric current passing through the lines generate a circulating magnetic field, which affects the GMR structure. The field lines are roughly elliptical, and the field direction (clockwise or counterclockwise) is determined by the direction of the current in the line. In the GMR structure, the magnetization is oriented along the line.
The field produced by the column line is almost parallel to the magnetic moments and cannot reorient them. The field of the row line is perpendicular to them and, regardless of its magnitude, can rotate the magnetization by only 90°. When pulses pass simultaneously along the row and column lines, the total magnetic field at the location of the GMR structure is directed at an acute angle with respect to some moments and at an obtuse angle with respect to others. If the field exceeds some critical value, the latter change their direction.
There are several storage and reading methods for the described cell. In one method, the information is stored in the sensing layer; it is read via resistance measurement and is erased upon reading. In another scheme, the information is kept in the fixed layer, which requires higher recording currents compared to reading currents.
Tunnel magnetoresistance (TMR) is an extension of spin-valve GMR, in which the electrons travel with their spins oriented perpendicularly to the layers across a thin insulating tunnel barrier (replacing the non-ferromagnetic spacer). This makes it possible to achieve a larger impedance, a larger magnetoresistance value (~10× at room temperature) and a negligible temperature dependence. TMR has now replaced GMR in MRAMs and disk drives, in particular for high area densities and perpendicular recording.
Other applications
Magnetoresistive insulators for contactless signal transmission between two electrically isolated parts of electrical circuits were first demonstrated in 1997 as an alternative to opto-isolators. A Wheatstone bridge of four identical GMR devices is insensitive to a uniform magnetic field and reacts only when the field directions are antiparallel in the neighboring arms of the bridge. Such devices were reported in 2003 and may be used as rectifiers with a linear frequency response.
Notes
Citations
Bibliography
External links
Giant Magnetoresistance: The Really Big Idea Behind a Very Tiny Tool National High Magnetic Field Laboratory
Presentation of GMR-technique (IBM Research)
Computer storage technologies
Magnetoresistance
Spintronics | Giant magnetoresistance | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,179 | [
"Magnetoresistance",
"Physical quantities",
"Spintronics",
"Magnetic ordering",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
1,003,410 | https://en.wikipedia.org/wiki/S%20transform | The S transform as a time–frequency distribution was developed in 1994 for analyzing geophysics data. The S transform is a generalization of the short-time Fourier transform (STFT), extending the continuous wavelet transform and overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations in the S transform. Moreover, the S transform doesn't have a cross-term problem and yields better signal clarity than the Gabor transform. However, the S transform has its own disadvantages: its clarity is worse than that of the Wigner distribution function and Cohen's class distribution functions.
A fast S transform algorithm was invented in 2010. It reduces the computational complexity from O[N²·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to a storage complexity of N² for the original formulation. An implementation is available to the research community under an open source license.
A general formulation of the S transform makes clear the relationship to other time frequency transforms such as the Fourier, short time Fourier, and wavelet transforms.
Definition
There are several ways to represent the idea of the S transform. Here, the S transform is derived as the phase correction of the continuous wavelet transform with a Gaussian window.
S-Transform
Inverse S-Transform
Modified form
Spectrum Form
The above definition implies that the S-transform can be expressed as a convolution of two functions.
Applying the Fourier transform to both of these functions gives the spectrum form of the S-transform.
Discrete-time S-transform
From the spectrum form of S-transform, we can derive the discrete-time S-transform.
Let the signal be sampled uniformly, with a given sampling interval and the corresponding sampling frequency.
The Discrete time S-transform can then be expressed as:
Implementation of discrete-time S-transform
Below is pseudocode of the implementation.
Step 1. Compute the DFT of the input signal (its spectrum).
Loop over m (voices):
Step 2. Compute the frequency-domain Gaussian localizing window for the current voice.
Step 3. Shift the spectrum to the current voice frequency.
Step 4. Multiply the results of Step 2 and Step 3.
Step 5. Take the IDFT of the product to obtain one voice (one row) of the S-transform.
Repeat for the next voice.
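A minimal Python/NumPy sketch of the FFT-based algorithm above. It assumes the commonly used frequency-domain Gaussian window exp(−2π²m²/n²) and sets the zero-frequency voice to the signal mean; normalization conventions vary between implementations, so treat this as illustrative rather than definitive.
import numpy as np

def stockwell_transform(h):
    # Rows of the result are voices (frequencies 0..N/2), columns are time samples.
    h = np.asarray(h, dtype=float)
    N = len(h)
    H = np.fft.fft(h)                       # Step 1: spectrum of the signal
    voices = N // 2 + 1
    S = np.zeros((voices, N), dtype=complex)
    S[0, :] = np.mean(h)                    # zero-frequency voice: mean of the signal
    for n in range(1, voices):              # loop over voices
        m = np.arange(N)
        G = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)   # Step 2: Gaussian window
        H_shift = np.roll(H, -n)            # Step 3: shifted spectrum H[m + n]
        S[n, :] = np.fft.ifft(H_shift * G)  # Steps 4-5: multiply and inverse DFT
    return S

S = stockwell_transform(np.sin(2 * np.pi * 5 * np.arange(128) / 128))
print(S.shape)  # (65, 128)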
Comparison with other time–frequency analysis tools
Comparison with Gabor transform
The only difference between the Gabor transform (GT) and the S transform is the window. For the GT, the window is a Gaussian of fixed size, whereas the window of the S-transform is a function of the frequency f. Because the window scales with frequency, the S-transform performs well in frequency-domain analysis when the input frequency is low, and has better clarity in the time domain when the input frequency is high.
This property makes the S-transform a powerful tool for analyzing sound, because human hearing is sensitive to the low-frequency part of a sound signal.
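A small sketch of this frequency-dependent window, assuming the commonly used S-transform Gaussian w(t, f) = (|f|/√(2π))·exp(−t²f²/2), whose standard deviation in time is 1/|f|; the exact normalization here is an assumption.
import numpy as np

def s_transform_window(t, f):
    # Gaussian window whose time width scales as 1 / |f| (assumed standard form).
    return (abs(f) / np.sqrt(2.0 * np.pi)) * np.exp(-(t ** 2) * (f ** 2) / 2.0)

t = np.linspace(-2.0, 2.0, 5)
print(s_transform_window(t, 1.0))   # wide window: good frequency resolution
print(s_transform_window(t, 10.0))  # narrow window: good time resolution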
Comparison with Wigner transform
The main problem with the Wigner Transform is the cross term, which stems from the auto-correlation function in the Wigner Transform function. This cross term may cause noise and distortions in signal analyses. S-transform analyses avoid this issue.
Comparison with the short-time Fourier transform
We can compare the S transform and the short-time Fourier transform (STFT). First, a high-frequency signal, a low-frequency signal, and a high-frequency burst signal are used in an experiment to compare performance. The frequency-dependent resolution of the S transform allows detection of the high-frequency burst. In contrast, because the STFT uses a constant window width, its result has poorer definition. In a second experiment, two more high-frequency bursts are added to crossed chirps. In the result, all four frequencies are detected by the S transform, whereas the two high-frequency bursts are not detected by the STFT; their cross term causes the STFT to show a single component at a lower frequency.
Applications
Signal filterings
Magnetic resonance imaging (MRI)
Power system disturbance recognition
The S transform has been shown to identify several types of disturbances, such as voltage sag, voltage swell, momentary interruption, and oscillatory transients.
The S transform can also be applied to other types of disturbances, such as notches and harmonics combined with sags and swells.
The S transform generates contours that are suitable for simple visual inspection, whereas the wavelet transform requires specific tools such as standard multiresolution analysis.
Geophysical signal analysis
Reflection seismology
Global seismology
See also
Laplace transform
Wavelet transform
Short-time Fourier transform
References
Integral transforms
Fourier analysis
Time–frequency analysis | S transform | [
"Physics"
] | 1,515 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)",
"Time–frequency analysis"
] |
1,004,486 | https://en.wikipedia.org/wiki/Pharmacogenomics | Pharmacogenomics, often abbreviated "PGx," is the study of the role of the genome in drug response. Its name (pharmaco- + genomics) reflects its combining of pharmacology and genomics. Pharmacogenomics analyzes how the genetic makeup of a patient affects their response to drugs. It deals with the influence of acquired and inherited genetic variation on drug response, by correlating DNA mutations (including point mutations, copy number variations, and structural variations) with pharmacokinetic (drug absorption, distribution, metabolism, and elimination), pharmacodynamic (effects mediated through a drug's biological targets), and/or immunogenic endpoints.
Pharmacogenomics aims to develop rational means to optimize drug therapy, with regard to the patients' genotype, to achieve maximum efficacy with minimal adverse effects. It is hoped that by using pharmacogenomics, pharmaceutical drug treatments can deviate from what is dubbed the "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error in prescribing, allowing physicians to take into consideration their patient's genes, the functionality of these genes, and how this may affect the effectiveness of the patient's current or future treatments (and where applicable, provide an explanation for the failure of past treatments). Such approaches promise the advent of precision medicine and even personalized medicine, in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup.
Whether used to explain a patient's response (or lack of it) to a treatment, or to act as a predictive tool, it hopes to achieve better treatment outcomes and greater efficacy, and reduce drug toxicities and adverse drug reactions (ADRs). For patients who do not respond to a treatment, alternative therapies can be prescribed that would best suit their requirements. In order to provide pharmacogenomic recommendations for a given drug, two possible types of input can be used: genotyping, or exome or whole genome sequencing. Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (early stop codon).
Pharmacogenetics vs. pharmacogenomics
The term pharmacogenomics is often used interchangeably with pharmacogenetics. Although both terms relate to drug response based on genetic influences, there are differences between the two. Pharmacogenetics is limited to monogenic phenotypes (i.e., single gene-drug interactions). Pharmacogenomics refers to polygenic drug response phenotypes and encompasses transcriptomics, proteomics, and metabolomics.
Mechanisms of pharmacogenetic interactions
Pharmacokinetics
Pharmacokinetics involves the absorption, distribution, metabolism, and elimination of pharmaceutics. These processes are often facilitated by enzymes such as drug transporters or drug metabolizing enzymes (discussed in-depth below). Variation in DNA loci responsible for producing these enzymes can alter their expression or activity so that their functional status changes. An increase, decrease, or loss of function for transporters or metabolizing enzymes can ultimately alter the amount of medication in the body and at the site of action. This may result in deviation from the medication's therapeutic window and result in either toxicity or loss of effectiveness.
Drug-metabolizing enzymes
The majority of clinically actionable pharmacogenetic variation occurs in genes that code for drug-metabolizing enzymes, including those involved in both phase I and phase II metabolism. The cytochrome P450 enzyme family is responsible for metabolism of 70-80% of all medications used clinically. CYP3A4, CYP2C9, CYP2C19, and CYP2D6 are major CYP enzymes involved in drug metabolism and are all known to be highly polymorphic. Additional drug-metabolizing enzymes that have been implicated in pharmacogenetic interactions include UGT1A1 (a UDP-glucuronosyltransferase), DPYD, and TPMT.
Drug transporters
Many medications rely on transporters to cross cellular membranes in order to move between body fluid compartments such as the blood, gut lumen, bile, urine, brain, and cerebrospinal fluid. The major transporters include the solute carrier, ATP-binding cassette, and organic anion transporters. Transporters that have been shown to influence response to medications include OATP1B1 (SLCO1B1) and breast cancer resistance protein (BCRP) (ABCG2).
Pharmacodynamics
Pharmacodynamics refers to the impact a medication has on the body, or its mechanism of action.
Drug targets
Drug targets are the specific sites where a medication carries out its pharmacological activity. The interaction between the drug and this site results in a modification of the target that may include inhibition or potentiation. Most of the pharmacogenetic interactions that involve drug targets are within the field of oncology and include targeted therapeutics designed to address somatic mutations (see also Cancer Pharmacogenomics). For example, EGFR inhibitors like gefitinib (Iressa) or erlotinib (Tarceva) are only indicated in patients carrying specific mutations to EGFR.
Germline mutations in drug targets can also influence response to medications, though this is an emerging subfield within pharmacogenomics. One well-established gene-drug interaction involving a germline mutation to a drug target is warfarin (Coumadin) and VKORC1, which codes for vitamin K epoxide reductase (VKOR). Warfarin binds to and inhibits VKOR, which is an important enzyme in the vitamin K cycle. Inhibition of VKOR prevents reduction of vitamin K, which is a cofactor required in the formation of coagulation factors II, VII, IX and X, and inhibitors protein C and S.
Off-target sites
Medications can have off-target effects (typically unfavorable) that arise from an interaction between the medication and/or its metabolites and a site other than the intended target. Genetic variation in the off-target sites can influence this interaction. The main example of this type of pharmacogenomic interaction is glucose-6-phosphate-dehydrogenase (G6PD). G6PD is the enzyme involved in the first step of the pentose phosphate pathway which generates NADPH (from NADP). NADPH is required for the production of reduced glutathione in erythrocytes and it is essential for the function of catalase. Glutathione and catalase protect cells from oxidative stress that would otherwise result in cell lysis. Certain variants in G6PD result in G6PD deficiency, in which cells are more susceptible to oxidative stress. When medications that have a significant oxidative effect are administered to individuals who are G6PD deficient, they are at an increased risk of erythrocyte lysis that presents as hemolytic anemia.
Immunologic
The human leukocyte antigen (HLA) system, also referred to as the major histocompatibility complex (MHC), is a complex of genes important for the adaptive immune system. Mutations in the HLA complex have been associated with an increased risk of developing hypersensitivity reactions in response to certain medications.
Clinical pharmacogenomics resources
Clinical Pharmacogenetics Implementation Consortium (CPIC)
The Clinical Pharmacogenetics Implementation Consortium (CPIC) is "an international consortium of individual volunteers and a small dedicated staff who are interested in facilitating use of pharmacogenetic tests for patient care. CPIC’s goal is to address barriers to clinical implementation of pharmacogenetic tests by creating, curating, and posting freely available, peer-reviewed, evidence-based, updatable, and detailed gene/drug clinical practice guidelines. CPIC guidelines follow standardized formats, include systematic grading of evidence and clinical recommendations, use standardized terminology, are peer-reviewed, and are published in a journal (in partnership with Clinical Pharmacology and Therapeutics) with simultaneous posting to cpicpgx.org, where they are regularly updated."
The CPIC guidelines are "designed to help clinicians understand HOW available genetic test results should be used to optimize drug therapy, rather than WHETHER tests should be ordered. A key assumption underlying the CPIC guidelines is that clinical high-throughput and pre-emptive (pre-prescription) genotyping will become more widespread, and that clinicians will be faced with having patients’ genotypes available even if they have not explicitly ordered a test with a specific drug in mind. CPIC's guidelines, processes and projects have been endorsed by several professional societies."
U.S. Food and Drug Administration
Table of Pharmacogenetic Associations
In February 2020 the FDA published the Table of Pharmacogenetic Associations. For the gene-drug pairs included in the table, "the FDA has evaluated and believes there is sufficient scientific evidence to suggest that subgroups of patients with certain genetic variants, or genetic variant-inferred phenotypes (such as affected subgroup in the table below), are likely to have altered drug metabolism, and in certain cases, differential therapeutic effects, including differences in risks of adverse events."
"The information in this Table is intended primarily for prescribers, and patients should not adjust their medications without consulting their prescriber. This version of the table is limited to pharmacogenetic associations that are related to drug metabolizing enzyme gene variants, drug transporter gene variants, and gene variants that have been related to a predisposition for certain adverse events. The FDA recognizes that various other pharmacogenetic associations exist that are not listed here, and this table will be updated periodically with additional pharmacogenetic associations supported by sufficient scientific evidence."
Table of Pharmacogenomic Biomarkers in Drug Labeling
The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling lists FDA-approved drugs with pharmacogenomic information found in the drug labeling. "Biomarkers in the table include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; selected protein biomarkers that are used to select treatments for patients are also included."
PharmGKB
The Pharmacogenomics Knowledgebase (PharmGKB) is an "NIH-funded resource that provides information about how human genetic variation affects response to medications. PharmGKB collects, curates and disseminates knowledge about clinically actionable gene-drug associations and genotype-phenotype relationships."
Commercial Pharmacogenetic Testing Laboratories
There are many commercial laboratories around the world that offer pharmacogenomic testing as laboratory-developed tests (LDTs). The tests offered can vary significantly from one lab to another, including the genes and alleles tested for, phenotype assignment, and any clinical annotations provided. With the exception of a few direct-to-consumer tests, all pharmacogenetic testing requires an order from an authorized healthcare professional. In order for the results to be used in a clinical setting in the United States, the laboratory performing the test must be CLIA-certified. Other regulations may vary by country and state.
Direct-to-Consumer Pharmacogenetic Testing
Direct-to-consumer (DTC) pharmacogenetic tests allow consumers to obtain pharmacogenetic testing without an order from a prescriber. DTC pharmacogenetic tests are generally reviewed by the FDA to determine the validity of test claims. The FDA maintains a list of DTC genetic tests that have been approved.
Common Pharmacogenomic-Specific Nomenclature
Genotype
There are multiple ways to represent a pharmacogenomic genotype. A commonly used nomenclature system is to report haplotypes using a star (*) allele (e.g., CYP2C19 *1/*2). Single-nucleotide polymorphisms (SNPs) may be described using their assigned reference SNP cluster ID (rsID) or based on the location of the base pair or amino acid impacted.
Phenotype
In 2017 CPIC published results of an expert survey to standardize terms related to clinical pharmacogenetic test results. Consensus for terms to describe allele functional status, phenotype for drug metabolizing enzymes, phenotype for drug transporters, and phenotype for high-risk genotype status was reached.
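As an illustration of this standardized terminology, the sketch below maps a few CYP2C19 star-allele diplotypes to predicted metabolizer phenotypes. The abbreviated allele functions and phenotype assignments follow commonly cited CPIC conventions, but the table and logic here are illustrative, not a clinical reference.
# Illustrative (abbreviated) allele function assignments for CYP2C19.
CYP2C19_ALLELE_FUNCTION = {
    "*1": "normal",
    "*2": "no function",
    "*17": "increased",
}

def predicted_phenotype(diplotype):
    # diplotype given as e.g. "*1/*2"; a simplified lookup, not a clinical tool.
    functions = sorted(CYP2C19_ALLELE_FUNCTION[a] for a in diplotype.split("/"))
    if functions == ["no function", "no function"]:
        return "poor metabolizer"
    if "no function" in functions:
        return "intermediate metabolizer"
    if functions == ["increased", "increased"]:
        return "ultrarapid metabolizer"
    if "increased" in functions:
        return "rapid metabolizer"
    return "normal metabolizer"

print(predicted_phenotype("*1/*2"))    # -> intermediate metabolizer
print(predicted_phenotype("*17/*17"))  # -> ultrarapid metabolizer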
Applications
The list below provides a few more commonly known applications of pharmacogenomics:
Improve drug safety, and reduce ADRs;
Tailor treatments to meet patients' unique genetic pre-disposition, identifying optimal dosing;
Improve drug discovery targeted to human disease; and
Improve proof of principle for efficacy trials.
Pharmacogenomics may be applied to several areas of medicine, including pain management, cardiology, oncology, and psychiatry. A place may also exist in forensic pathology, in which pharmacogenomics can be used to determine the cause of death in drug-related deaths where no findings emerge using autopsy.
In cancer treatment, pharmacogenomics tests are used to identify which patients are most likely to respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and care givers to better manage medication selection and side effect amelioration. Pharmacogenomics is also known as companion diagnostics, meaning tests being bundled with drugs. Examples include the KRAS test with cetuximab and the EGFR test with gefitinib. Besides efficacy, germline pharmacogenetics can help to identify patients likely to undergo severe toxicities when given cytotoxic drugs whose detoxification is impaired by genetic polymorphisms, as with canonical 5-FU. In particular, genetic deregulations affecting genes coding for DPD, UGT1A1, TPMT, CDA and CYP2D6 are now considered critical issues for patients treated with 5-FU/capecitabine, irinotecan, mercaptopurine/azathioprine, gemcitabine/capecitabine/AraC and tamoxifen, respectively.
In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins. In patients with reduced-function CYP2C19 variants who take clopidogrel, cardiovascular risk is elevated, which has led regulators to update the medication's package insert. In patients with type 2 diabetes, haptoglobin (Hp) genotyping shows an effect on cardiovascular disease, with Hp2-2 at higher risk and supplemental vitamin E reducing risk by affecting HDL.
In psychiatry, as of 2010, research has focused particularly on 5-HTTLPR and DRD2.
Clinical implementation
Initiatives to spur adoption by clinicians include the Ubiquitous Pharmacogenomics (U-PGx) program in Europe and the Clinical Pharmacogenetics Implementation Consortium (CPIC) in the United States. In a 2017 survey of European clinicians, in the prior year two-thirds had not ordered a pharmacogenetic test.
In 2010, Vanderbilt University Medical Center launched the Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT); in a 2015 survey, two-thirds of its clinicians had ordered a pharmacogenetic test.
In 2019, the largest private health insurer, UnitedHealthcare, announced that it would pay for genetic testing to predict response to psychiatric drugs.
In 2020, Canada's 4th largest health and dental insurer, Green Shield Canada, announced that it would pay for pharmacogenetic testing and its associated clinical decision support software to optimize and personalize mental health prescriptions.
Reduction of polypharmacy
A potential role for pharmacogenomics is to reduce the occurrence of polypharmacy: it is theorized that with tailored drug treatments, patients will not need to take several medications to treat the same condition. Thus they could potentially reduce the occurrence of adverse drug reactions, improve treatment outcomes, and save costs by avoiding the purchase of some medications. For example, possibly due to inappropriate prescribing, psychiatric patients tend to receive more medications than age-matched non-psychiatric patients.
The need for pharmacogenomically tailored drug therapies may be most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study elucidated that an average of 82% of adults in the United States are taking at least one medication (prescription or nonprescription drug, vitamin/mineral, herbal/natural supplement), and 29% are taking five or more. The study suggested that those aged 65 years or older continue to be the biggest consumers of medications, with 17-19% in this age group taking at least ten medications in a given week. Polypharmacy has also shown to have increased since 2000 from 23% to 29%.
Example case studies
Case A – Antipsychotic adverse reaction
Patient A has schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine, and transitioned to quetiapine. Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued, with ziprasidone re-introduced into their treatment, due to the excessive weight gain. Although the patient lost the excessive weight they had gained, they then developed muscle stiffness, cogwheeling, tremors and night sweats. When benztropine was added they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation, and developed difficulty with their gait, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later revealed that the patient had a CYP2D6 *1/*41 genotype, with a predicted intermediate metabolizer (IM) phenotype, and a CYP2C19 *1/*2 genotype, also predicted to be IM.
Case B – Pain Management
Patient B is a woman who gave birth by caesarian section. Her physician prescribed codeine for post-caesarian pain. She took the standard prescribed dose, but she experienced nausea and dizziness while she was taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms were no longer present. It is assumed that if the patient had undergone a pharmacogenomic test, it would have revealed she may have had a duplication of the gene CYP2D6, placing her in the Ultra-rapid metabolizer (UM) category, explaining her reactions to codeine use.
Case C – FDA Warning on Codeine Overdose for Infants
On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are known as CYP2D6 UM, and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). They released their strongest Boxed Warning to elucidate the dangers of CYP2D6 UMs consuming codeine. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes are in danger of producing large amounts of morphine due to the increased function of the gene. The morphine can elevate to life-threatening or fatal amounts, as became evident with the death of three children in August 2012.
Challenges
Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include:
Limitation on how to apply the test into clinical practices and treatment;
A general feeling of lack of availability of the test;
The understanding and interpretation of evidence-based research;
Combining test results with other patient data for prescription optimization; and
Ethical, legal and social issues.
Issues surrounding the availability of the test include:
The lack of availability of scientific data: Although there are a considerable number of drug-metabolizing enzymes involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and
Demonstrating the cost-effectiveness of pharmacogenomics: Publications for the pharmacoeconomics of pharmacogenomics are scarce, therefore sufficient evidence does not at this time exist to validate the cost-effectiveness and cost-consequences of the test.
Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the above factors appear to be the most prevalent. Increasingly substantial evidence and industry body guidelines for clinical use of pharmacogenetics have made it a population wide approach to precision medicine. Cost, reimbursement, education, and easy use at the point of care remain significant barriers to widescale adoption.
Controversies
Race-based medicine
There have been calls to move away from race and ethnicity in medicine and instead use genetic ancestry as a way to categorize patients. Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. As a result, some disease-specific guidelines only recommend pharmacogenetic testing for populations where high-risk alleles are more common and, similarly, certain insurance companies will only pay for pharmacogenetic testing for beneficiaries of high-risk populations.
Genetic exceptionalism
In the early 2000s, handling genetic information as exceptional, including legal or regulatory protections, garnered strong support. It was argued that genomic information may need special policy and practice protections within the context of electronic health records (EHRs). In 2008, the Genetic Information Nondiscrimination Act (GINA) was enacted to protect patients from health insurance companies discriminating against an individual based on genetic information.
More recently it has been argued that genetic exceptionalism is past its expiration date as we move into a blended genomic/big data era of medicine, yet exceptionalism practices continue to permeate clinical healthcare today. Garrison et al. recently relayed a call to action to update verbiage from genetic exceptionalism to genomic contextualism in that we recognize a fundamental duality of genetic information. This allows room in the argument for different types of genetic information to be handled differently while acknowledging that genomic information is similar and yet distinct from other health-related information. Genomic contextualism would allow for a case-by-case analysis of the technology and the context of its use (e.g., clinical practice, research, secondary findings).
Others argue that genetic information is indeed distinct from other health-related information but not to the extent of requiring legal/regulatory protections, similar to other sensitive health-related data such as HIV status. Additionally, Evans et al. argue that the EHR has sufficient privacy standards to hold other sensitive information such as social security numbers and that the fundamental nature of an EHR is to house highly personal information. Similarly, a systematic review reported that the public had concern over privacy of genetic information, with 60% agreeing that maintaining privacy was not possible; however, 96% agreed that a direct-to-consumer testing company had protected their privacy, with 74% saying their information would be similarly or better protected in an EHR. With increasing technological capabilities in EHRs, it is possible to mask or hide genetic data from subsets of providers and there is not consensus on how, when, or from whom genetic information should be masked. Rigorous protection and masking of genetic information is argued to impede further scientific progress and clinical translation into routine clinical practices.
History
Pharmacogenomics was first recognized by Pythagoras around 510 BC when he made a connection between the dangers of fava bean ingestion with hemolytic anemia and oxidative stress. In the 1950s, this identification was validated and attributed to deficiency of G6PD and is called favism. Although the first official publication was not until 1961, the unofficial beginnings of this science were around the 1950s. Reports of prolonged paralysis and fatal reactions linked to genetic variants in patients who lacked butyrylcholinesterase ('pseudocholinesterase') following succinylcholine injection during anesthesia were first reported in 1956. The term pharmacogenetics was first coined in 1959 by Friedrich Vogel of Heidelberg, Germany (although some papers suggest it was 1957 or 1958). In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins sharing remarkable similarities in drug response compared to fraternal twins. The term pharmacogenomics first began appearing around the 1990s.
The first FDA approval of a pharmacogenetic test was in 2005 (for alleles in CYP2D6 and CYP2C19).
Future
Computational advances have enabled cheaper and faster sequencing. Research has focused on combinatorial chemistry, genomic mining, omic technologies, and high throughput screening.
As the cost per genetic test decreases, the development of personalized drug therapies will increase. Technology now allows for genetic analysis of hundreds of target genes involved in medication metabolism and response in less than 24 hours for under $1,000. This is a huge step towards bringing pharmacogenetic technology into everyday medical decisions. Likewise, companies like deCODE genetics, MD Labs Pharmacogenetics, Navigenics and 23andMe offer genome scans. The companies use the same genotyping chips that are used in GWAS studies and provide customers with a write-up of individual risk for various traits and diseases and testing for 500,000 known SNPs. Costs range from $995 to $2500 and include updates with new data from studies as they become available. The more expensive packages even included a telephone session with a genetics counselor to discuss the results.
Ethics
Pharmacogenetics has become a controversial issue in the area of bioethics. Privacy and confidentiality are major concerns. The evidence of benefit or risk from a genetic test may only be suggestive, which could cause dilemmas for providers. Drug development may be affected, with rare genetic variants possibly receiving less research. Access and patient autonomy are also open to discussion.
Web-based resources
See also
Genomics
Chemogenomics
Clinomics
Genetic engineering
Toxicogenomics
Cancer pharmacogenomics
Metabolomics
Pharmacovigilance
Population groups in biomedicine
Toxgnostics
Medical terminology
LOINC
SNOMED CT
HPO
HGVS
HL7
FHIR
Genetic testing
References
Further reading
External links
Journals:
Genomics
Pharmacology
Pharmacy | Pharmacogenomics | [
"Chemistry"
] | 5,756 | [
"Pharmacology",
"Pharmacogenomics",
"Medicinal chemistry",
"Pharmacy"
] |
1,004,679 | https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch%20algorithm | The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score.
Introduction
This algorithm can be used for any two strings. This guide will use two small DNA sequences as examples as shown in Figure 1:
GCATGCG
GATTACA
Constructing the grid
First construct a grid such as one shown in Figure 1 above. Start the first string in the top of the third column and start the other string at the start of the third row. Fill out the rest of the column and row headers as in Figure 1. There should be no numbers in the grid yet.
Choosing a scoring system
Next, decide how to score each individual pair of letters. Using the example above, one possible alignment candidate might be:
12345678
GCATG-CG
G-ATTACA
The letters may match, mismatch, or be matched to a gap (a deletion or insertion (indel)):
Match: The two letters at the current index are the same.
Mismatch: The two letters at the current index are different.
Indel (Insertion or Deletion): The best alignment involves one letter aligning to a gap in the other string.
Each of these scenarios is assigned a score and the sum of the scores of all the pairings is the score of the whole alignment candidate. Different systems exist for assigning scores; some have been outlined in the Scoring systems section below. For now, the system used by Needleman and Wunsch will be used:
Match: +1
Mismatch or Indel: −1
For the Example above, the score of the alignment would be 0:
+−++−−+− −> 1*4 + (−1)*4 = 0
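As a check, the score of a given alignment can be computed directly from the two aligned strings. The Python sketch below uses the match/mismatch/indel values above; the function name is illustrative.
def alignment_score(a, b, match=1, mismatch=-1, indel=-1):
    # a and b are aligned strings of equal length; '-' marks a gap.
    score = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            score += indel
        elif x == y:
            score += match
        else:
            score += mismatch
    return score

print(alignment_score("GCATG-CG", "G-ATTACA"))  # -> 0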
Filling in the table
Start with a zero in the first row, first column (not including the cells containing nucleotides). Move through the cells row by row, calculating the score for each cell. The score is calculated by comparing the scores of the cells neighboring to the left, top or top-left (diagonal) of the cell and adding the appropriate score for match, mismatch or indel. Take the maximum of the candidate scores for each of the three possibilities:
The path from the top or left cell represents an indel pairing, so take the scores of the left and the top cell, and add the score for indel to each of them.
The diagonal path represents a match/mismatch, so take the score of the top-left diagonal cell and add the score for match if the corresponding bases (letters) in the row and column are matching or the score for mismatch if they do not.
The resulting score for the cell is the highest of the three candidate scores.
Given that there are no 'top' or 'top-left' cells for the first row, only the existing cell to the left can be used to calculate the score of each cell. Hence −1 is added for each shift to the right, as this represents an indel from the previous score. This results in the first row being 0, −1, −2, −3, −4, −5, −6, −7. The same applies to the first column, as only the existing score above each cell can be used. Thus the resulting table is:
The first case with existing scores in all 3 directions is the intersection of our first letters (in this case G and G). The surrounding cells are below:
This cell has three possible candidate sums:
The diagonal top-left neighbor has score 0. The pairing of G and G is a match, so add the score for match: 0+1 = 1
The top neighbor has score −1 and moving from there represents an indel, so add the score for indel: (−1) + (−1) = (−2)
The left neighbor also has score −1, represents an indel and also produces (−2).
The highest candidate is 1 and is entered into the cell:
The cell which gave the highest candidate score must also be recorded. In the completed diagram in figure 1 above, this is represented as an arrow from the cell in row and column 2 to the cell in row and column 1.
In the next example, the diagonal step for both X and Y represents a mismatch:
X:
Top: (−2)+(−1) = (−3)
Left: (+1)+(−1) = (0)
Top-Left: (−1)+(−1) = (−2)
Y:
Top: (1)+(−1) = (0)
Left: (−2)+(−1) = (−3)
Top-Left: (−1)+(−1) = (−2)
For both X and Y, the highest score is zero:
The highest candidate score may be reached by two of the neighboring cells:
Top: (1)+(−1) = (0)
Top-Left: (1)+(−1) = (0)
Left: (0)+(−1) = (−1)
In this case, all directions reaching the highest candidate score must be noted as possible origin cells in the finished diagram in figure 1, e.g. in the cell in row and column 6.
Filling in the table in this manner gives the scores of all possible alignment candidates, the score in the cell on the bottom right represents the alignment score for the best alignment.
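A minimal Python sketch of the matrix-filling step just described; names are illustrative. The first row and column are initialized with indel penalties, and each interior cell takes the maximum of the three candidate scores.
def needleman_wunsch_matrix(seq1, seq2, match=1, mismatch=-1, indel=-1):
    # seq1 labels the columns (the "top" sequence), seq2 labels the rows (the "side" sequence).
    rows, cols = len(seq2) + 1, len(seq1) + 1
    F = [[0] * cols for _ in range(rows)]
    for j in range(1, cols):
        F[0][j] = F[0][j - 1] + indel      # first row: indels only
    for i in range(1, rows):
        F[i][0] = F[i - 1][0] + indel      # first column: indels only
    for i in range(1, rows):
        for j in range(1, cols):
            diag = F[i - 1][j - 1] + (match if seq1[j - 1] == seq2[i - 1] else mismatch)
            up = F[i - 1][j] + indel
            left = F[i][j - 1] + indel
            F[i][j] = max(diag, up, left)
    return F

F = needleman_wunsch_matrix("GCATGCG", "GATTACA")
print(F[-1][-1])  # score of the best global alignment (0 for this example)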
Tracing arrows back to origin
Mark a path from the cell on the bottom right back to the cell on the top left by following the direction of the arrows. From this path, the sequence is constructed by these rules:
A diagonal arrow represents a match or mismatch, so the letter of the column and the letter of the row of the origin cell will align.
A horizontal or vertical arrow represents an indel. Vertical arrows will align a gap ("-") to the letter of the row (the "side" sequence), horizontal arrows will align a gap to the letter of the column (the "top" sequence).
If there are multiple arrows to choose from, they represent a branching of the alignments. If two or more branches all belong to paths from the bottom right to the top left cell, they are equally viable alignments. In this case, note the paths as separate alignment candidates.
Following these rules, the steps for one possible alignment candidate in figure 1 are:
G → CG → GCG → -GCG → T-GCG → AT-GCG → CAT-GCG → GCAT-GCG
A → CA → ACA → TACA → TTACA → ATTACA → -ATTACA → G-ATTACA
↓
(branch) → TGCG → -TGCG → ...
→ TACA → TTACA → ...
Scoring systems
Basic scoring schemes
The simplest scoring schemes simply give a value for each match, mismatch and indel. The step-by-step guide above uses match = 1, mismatch = −1, indel = −1. Thus the lower the alignment score, the larger the edit distance; for this scoring system one wants a high score. Another scoring system might be:
Match = 0
Indel = -1
Mismatch = -1
For this system the alignment score will represent the edit distance between the two strings.
Different scoring systems can be devised for different situations, for example if gaps are considered very bad for your alignment you may use a scoring system that penalises gaps heavily, such as:
Match = 1
Indel = -10
Mismatch = -1
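As a minimal illustration of these simple schemes (a hedged sketch added here, not part of the original article), the following Python function scores an already-aligned pair of strings; the aligned pair is one of those produced in the traceback example above.

```python
def score_alignment(top, side, match=1, mismatch=-1, indel=-1):
    """Score two already-aligned strings of equal length ('-' marks a gap)."""
    assert len(top) == len(side)
    total = 0
    for a, b in zip(top, side):
        if a == "-" or b == "-":
            total += indel
        elif a == b:
            total += match
        else:
            total += mismatch
    return total

# Alignment taken from the traceback example above (assumed sequences):
print(score_alignment("GCAT-GCG", "G-ATTACA"))           # 0 under the Needleman-Wunsch scheme
print(score_alignment("GCAT-GCG", "G-ATTACA", match=0))  # -4 with the edit-distance-style scheme (match = 0)
```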
Similarity matrix
More complicated scoring systems attribute values not only for the type of alteration, but also for the letters that are involved. For example, a match between A and A may be given 1, but a match between T and T may be given 4. Here (assuming the first scoring system) more importance is given to the Ts matching than the As, i.e. the Ts matching is assumed to be more significant to the alignment. This weighting based on letters also applies to mismatches.
In order to represent all the possible combinations of letters and their resulting scores a similarity matrix is used. The similarity matrix for the most basic system (match = 1, mismatch = −1) is represented as:
    A   G   C   T
A   1  −1  −1  −1
G  −1   1  −1  −1
C  −1  −1   1  −1
T  −1  −1  −1   1
Each score represents a switch from one of the letters the cell matches to the other. Hence this represents all possible matches and mismatches (for an alphabet of ACGT). Note that all the matches lie along the diagonal; also, not all of the table needs to be filled in, only one triangle, because the scores are symmetric (the score for A → C equals the score for C → A). If the T-T = 4 rule from above is implemented, the following similarity matrix is produced:
    A   G   C   T
A   1  −1  −1  −1
G  −1   1  −1  −1
C  −1  −1   1  −1
T  −1  −1  −1   4
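In code, such a matrix is naturally a nested lookup table. The sketch below is hypothetical (names are my own) and mirrors the basic scheme with the T-T = 4 rule; it corresponds to the S(a, b) lookup used in the pseudocode further down.

```python
# Basic similarity matrix for the alphabet ACGT (match = 1, mismatch = -1),
# with the T-T = 4 rule applied; the table is symmetric: S[a][b] == S[b][a].
S = {a: {b: (1 if a == b else -1) for b in "ACGT"} for a in "ACGT"}
S["T"]["T"] = 4

print(S["A"]["A"], S["A"]["C"], S["T"]["T"])  # 1 -1 4
```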
Different scoring matrices have been statistically constructed which give weight to different actions appropriate to a particular scenario. Having weighted scoring matrices is particularly important in protein sequence alignment due to the varying frequency of the different amino acids. There are two broad families of scoring matrices, each with further alterations for specific scenarios:
PAM
BLOSUM
Gap penalty
When aligning sequences there are often gaps (i.e. indels), sometimes large ones. Biologically, a large gap is more likely to occur as one large deletion as opposed to multiple single deletions. Hence two small indels should have a worse score than one large one. The simple and common way to do this is via a large gap-start score for a new indel and a smaller gap-extension score for every letter which extends the indel. For example, new-indel may cost -5 and extend-indel may cost -1. In this way an alignment such as:
GAAAAAAT
G--A-A-T
which has multiple equally scored alignments under a simple scheme, some of them containing several small gaps, will now align as:
GAAAAAAT
GAA----T
or any alignment with a single gap of length 4 in preference to multiple small gaps. A short code sketch of such an affine gap scheme follows.
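The sketch below is a hedged illustration (assuming the match = 1, mismatch = −1 values from the step-by-step guide above, with gap-open −5 and gap-extend −1); it shows that the single long gap scores better than the scattered gaps:

```python
def affine_gap_score(top, side, match=1, mismatch=-1, gap_open=-5, gap_extend=-1):
    """Score an aligned pair where opening a gap costs more than extending it."""
    total, in_gap = 0, False
    for a, b in zip(top, side):
        if a == "-" or b == "-":
            total += gap_extend if in_gap else gap_open
            in_gap = True
        else:
            total += match if a == b else mismatch
            in_gap = False
    return total

print(affine_gap_score("GAAAAAAT", "G--A-A-T"))  # -12: three separate gaps
print(affine_gap_score("GAAAAAAT", "GAA----T"))  # -4: one long gap is preferred
```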
Advanced presentation of algorithm
Scores for aligned characters are specified by a similarity matrix. Here, S(a, b) is the similarity of characters a and b. It uses a linear gap penalty, here called d.
For example, if the similarity matrix was
then the alignment:
AGACTAGTTAC
CGA---GACGT
with a gap penalty of −5, would have the following score:
S(A,C) + S(G,G) + S(A,A) + (3 × d) + S(G,G) + S(T,A) + S(T,C) + S(A,G) + S(C,T)
= −3 + 7 + 10 − (3 × 5) + 7 + (−4) + 0 + (−1) + 0 = 1
To find the alignment with the highest score, a two-dimensional array (or matrix) F is allocated. The entry in row i and column j is denoted here by F(i, j). There is one row for each character in sequence A, and one column for each character in sequence B. Thus, if aligning sequences of sizes n and m, the amount of memory used is in O(nm). Hirschberg's algorithm only holds a subset of the array in memory and uses O(min{n, m}) space, but is otherwise similar to Needleman-Wunsch (and still requires O(nm) time).
As the algorithm progresses, the F(i, j) will be assigned to be the optimal score for the alignment of the first i characters in A and the first j characters in B. The principle of optimality is then applied as follows:
Basis:
F(0, j) = d * j
F(i, 0) = d * i
Recursion, based on the principle of optimality:
F(i, j) = max( F(i−1, j−1) + S(Ai, Bj), F(i−1, j) + d, F(i, j−1) + d )
The pseudo-code for the algorithm to compute the F matrix therefore looks like this:
d ← Gap penalty score
for i = 0 to length(A)
F(i,0) ← d * i
for j = 0 to length(B)
F(0,j) ← d * j
for i = 1 to length(A)
for j = 1 to length(B)
{
Match ← F(i−1, j−1) + S(Ai, Bj)
Delete ← F(i−1, j) + d
Insert ← F(i, j−1) + d
F(i,j) ← max(Match, Insert, Delete)
}
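The pseudocode above translates almost line for line into Python. The sketch below is a minimal, hedged implementation (the function and variable names are my own); S is a similarity lookup such as the dictionary sketched earlier, and d is the linear gap penalty.

```python
def build_score_matrix(A, B, S, d):
    """Fill the Needleman-Wunsch score matrix F for sequences A and B."""
    n, m = len(A), len(B)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        F[i][0] = d * i            # first column: indels only
    for j in range(m + 1):
        F[0][j] = d * j            # first row: indels only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = F[i - 1][j - 1] + S[A[i - 1]][B[j - 1]]
            delete = F[i - 1][j] + d
            insert = F[i][j - 1] + d
            F[i][j] = max(match, delete, insert)
    return F
```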
Once the F matrix is computed, the entry F(n, m) gives the maximum score among all possible alignments. To compute an alignment that actually gives this score, you start from the bottom right cell, and compare the value with the three possible sources (Match, Insert, and Delete above) to see which it came from. If Match, then Ai and Bj are aligned, if Delete, then Ai is aligned with a gap, and if Insert, then Bj is aligned with a gap. (In general, more than one choice may have the same value, leading to alternative optimal alignments.)
AlignmentA ← ""
AlignmentB ← ""
i ← length(A)
j ← length(B)
while (i > 0 or j > 0)
{
if (i > 0 and j > 0 and F(i, j) == F(i−1, j−1) + S(Ai, Bj))
{
AlignmentA ← Ai + AlignmentA
AlignmentB ← Bj + AlignmentB
i ← i − 1
j ← j − 1
}
else if (i > 0 and F(i, j) == F(i−1, j) + d)
{
AlignmentA ← Ai + AlignmentA
AlignmentB ← "−" + AlignmentB
i ← i − 1
}
else
{
AlignmentA ← "−" + AlignmentA
AlignmentB ← Bj + AlignmentB
j ← j − 1
}
}
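Putting the fill and the traceback together gives the following self-contained, hedged Python sketch (the sequences in the demo are assumed from the step-by-step guide above; when several arrows tie, the sketch follows the diagonal first, so it returns only one of the equally good alignments):

```python
def needleman_wunsch(A, B, S, d):
    """Global alignment: fill the F matrix (as in the pseudocode above) and trace back."""
    n, m = len(A), len(B)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        F[i][0] = d * i
    for j in range(m + 1):
        F[0][j] = d * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + S[A[i - 1]][B[j - 1]],
                          F[i - 1][j] + d,
                          F[i][j - 1] + d)
    # Traceback, checking the diagonal arrow first when candidates tie.
    out_a, out_b = "", ""
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + S[A[i - 1]][B[j - 1]]:
            out_a, out_b = A[i - 1] + out_a, B[j - 1] + out_b
            i, j = i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + d:
            out_a, out_b = A[i - 1] + out_a, "-" + out_b
            i -= 1
        else:
            out_a, out_b = "-" + out_a, B[j - 1] + out_b
            j -= 1
    return F[n][m], out_a, out_b


# Assumed example sequences from the step-by-step guide, basic scoring scheme:
S = {a: {b: (1 if a == b else -1) for b in "ACGT"} for a in "ACGT"}
score, top, side = needleman_wunsch("GCATGCG", "GATTACA", S, d=-1)
print(score)   # 0
print(top)
print(side)
```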
Complexity
Computing the score for each cell in the table is an O(1) operation. Thus the time complexity of the algorithm for two sequences of length n and m is O(mn). It has been shown that it is possible to improve the running time to O(mn / log n) using the Method of Four Russians. Since the algorithm fills an n × m table, the space complexity is O(mn).
Historical notes and algorithm development
The original purpose of the algorithm described by Needleman and Wunsch was to find similarities in the amino acid sequences of two proteins.
Needleman and Wunsch describe their algorithm explicitly for the case when the alignment is penalized solely by the matches and mismatches, and gaps have no penalty (d=0). The original publication from 1970 suggests the recursion
F(i, j) = H(i, j) + max( max{ F(k, j−1) : k < i }, max{ F(i−1, l) : l < j } ),
where H(i, j) is the score for pairing the i-th character of the first sequence with the j-th character of the second.
The corresponding dynamic programming algorithm takes cubic time. The paper also points out that the recursion can accommodate arbitrary gap penalization formulas:
A penalty factor, a number subtracted for every gap made, may be assessed as a barrier to allowing the gap. The penalty factor could be a function of the size and/or direction of the gap. [page 444]
A better dynamic programming algorithm with quadratic running time for the same problem (no gap penalty) was introduced later by David Sankoff in 1972.
Similar quadratic-time algorithms were discovered independently
by T. K. Vintsyuk in 1968 for speech processing
("time warping"), and by Robert A. Wagner and Michael J. Fischer in 1974 for string matching.
Needleman and Wunsch formulated their problem in terms of maximizing similarity. Another possibility is to minimize the edit distance between sequences, introduced by Vladimir Levenshtein. Peter H. Sellers showed in 1974 that the two problems are equivalent.
The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. However, the algorithm is expensive with respect to time and space, both proportional to the product of the lengths of the two sequences, and hence is not suitable for long sequences.
Recent development has focused on improving the time and space cost of the algorithm while maintaining quality. For example, in 2013, a Fast Optimal Global Sequence Alignment Algorithm (FOGSAA), suggested alignment of nucleotide/protein sequences faster than other optimal global alignment methods, including the Needleman–Wunsch algorithm. The paper claims that when compared to the Needleman–Wunsch algorithm, FOGSAA achieves a time gain of 70–90% for highly similar nucleotide sequences (with > 80% similarity), and 54–70% for sequences having 30–80% similarity.
Applications outside bioinformatics
Computer stereo vision
Stereo matching is an essential step in the process of 3D reconstruction from a pair of stereo images. When images have been rectified, an analogy can be drawn between aligning nucleotide and protein sequences and matching pixels belonging to scan lines, since both tasks aim at establishing optimal correspondence between two strings of characters.
Although in many applications image rectification can be performed, e.g. by camera resectioning or calibration, it is sometimes impossible or impractical, since the computational cost of accurate rectification models prohibits their usage in real-time applications. Moreover, none of these models is suitable when a camera lens displays unexpected distortions, such as those generated by raindrops, weatherproof covers or dust. By extending the Needleman–Wunsch algorithm, a line in the 'left' image can be associated to a curve in the 'right' image by finding the alignment with the highest score in a three-dimensional array (or matrix). Experiments demonstrated that such an extension allows dense pixel matching between unrectified or distorted images.
See also
Wagner–Fischer algorithm
Smith–Waterman algorithm
Sequence mining
Levenshtein distance
Dynamic time warping
Sequence alignment
References
External links
NW-align: A protein sequence-to-sequence alignment program by Needleman-Wunsch algorithm (online server and source code)
A live Javascript-based demo of Needleman–Wunsch
An interactive Javascript-based visual explanation of Needleman-Wunsch Algorithm
Sequence Alignment Techniques at Technology Blog
Biostrings R package implementing Needleman–Wunsch algorithm among others
Bioinformatics algorithms
Sequence alignment algorithms
Computational phylogenetics
Dynamic programming
Articles with example pseudocode | Needleman–Wunsch algorithm | [
"Biology"
] | 3,779 | [
"Genetics techniques",
"Computational phylogenetics",
"Bioinformatics algorithms",
"Bioinformatics",
"Phylogenetics"
] |
9,402,865 | https://en.wikipedia.org/wiki/List%20of%20thermal%20conductivities | In heat transfer, the thermal conductivity of a substance, k, is an intensive property that indicates its ability to conduct heat. For most materials, the amount of heat conducted varies (usually non-linearly) with temperature.
Thermal conductivity is often measured with laser flash analysis. Alternative measurements are also established.
Mixtures may have variable thermal conductivities due to composition. Note that for gases in usual conditions, heat transfer by advection (caused by convection or turbulence for instance) is the dominant mechanism compared to conduction.
This table shows thermal conductivity in SI units of watts per metre-kelvin (W·m−1·K−1). Some measurements use the imperial unit BTUs per foot per hour per degree Fahrenheit (1 BTU·h−1·ft−1·°F−1 ≈ 1.73 W·m−1·K−1).
Sortable list
This concerns materials at atmospheric pressure and around .
Analytical list
Thermal conductivities have been measured with longitudinal heat flow methods, where the experimental arrangement is designed to accommodate heat flow only in the axial direction, temperatures are held constant, and radial heat loss is prevented or minimized. For the sake of simplicity, the conductivities found by that method in all of its variations are noted as L conductivities, those found by radial heat flow measurements are noted as R conductivities, and those found from periodic or transient heat flow are distinguished as P conductivities. Numerous variations of all of the above and various other methods have been discussed by G. K. White, M. J. Laubits, D. R. Flynn, B. O. Peirce, R. W. Wilson and various other investigators who are noted in an international Data Series from Purdue University, Volume I pages 14a–38a.
This concerns materials at various temperatures and pressures.
See also
Laser flash analysis
List of insulation materials
R-value (insulation)
Thermal transmittance
Specific heat capacity
Thermal conductivity
Thermal conductivities of the elements (data page)
Thermal diffusivity
Thermodynamics
References
Bibliography
External links
Heat Conduction Calculator
Thermal Conductivity Online Converter - An online thermal conductivity calculator
Thermal Conductivities of Solders
Thermal conductivity of air as a function of temperature can be found at James Ierardi's Fire Protection Engineering Site
Non-Metallic Solids: The thermal conductivities of non-metallic solids are found in about 1286 pages in the TPRC Data Series volume 2 at the PDF link here (Identifier ADA951936): http://www.dtic.mil/docs/citations/ADA951936 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951936.pdf retrieved February 2, 2019 at 10:15 PM EST.
Gases and Liquids: The thermal conductivities of gases and liquids are found in the TPRC Data Series volume 3 at the PDF link here (Identifier ADA951937): http://www.dtic.mil/docs/citations/ADA951937 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951937.pdf retrieved February 2, 2019 at 10:19 PM EST.
Metals and Alloys: The thermal conductivities of metals are found in about 1595 pages in the TPRC Data Series volume 1 at the PDF link here: http://www.dtic.mil/docs/citations/ADA951935 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951935.pdf retrieved February 2, 2019 at 10:20 PM EST.
Specific Heat and Thermal Radiation: Primary sources are found in the TPRC data series volumes 4 — 9, links: https://apps.dtic.mil/dtic/tr/fulltext/u2/a951938.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951939.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951940.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951941.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951942.pdf and https://apps.dtic.mil/dtic/tr/fulltext/u2/a951943.pdf retrieved at various times February 2 and 3, 2019.
Vacuums: Vacuums and various levels of vacuums and the thermal conductivities of air at reduced pressures are known at http://www.electronics-cooling.com/2002/11/the-thermal-conductivity-of-air-at-reduced-pressures-and-length-scales/ retrieved February 2, 2019 at 10:44 PM EST.
Chemical properties
Physical quantities
Heat conduction
Technology-related lists
Heat transfer
Thermodynamics | List of thermal conductivities | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,079 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Quantity",
"Thermodynamics",
"nan",
"Heat conduction",
"Physical properties",
"Dynamical systems"
] |
9,403,552 | https://en.wikipedia.org/wiki/Situated%20robotics | In artificial intelligence and cognitive science, the term situated refers to an agent which is embedded in an environment. In this usage, the term most often refers to robots, but some researchers argue that software agents can also be situated if:
they exist in a dynamic (rapidly changing) environment, which
they can manipulate or change through their actions, and which
they can sense or perceive.
Being situated is generally considered to be part of being embodied, but it is useful to take both perspectives. The situated perspective emphasizes the environment and the agent's interactions with it. These interactions define an agent's embodiment.
See also
Robot general heading
Cognitive agents
Scruffies - people who tend to worry about whether their agent is situated.
References
Hendriks-Jansen, Horst (1996) Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought. Cambridge, Mass.: MIT Press.
Robotics | Situated robotics | [
"Engineering"
] | 187 | [
"Robotics",
"Automation"
] |
9,407,202 | https://en.wikipedia.org/wiki/Dockominium | A dockominium is the water-based version of a condominium; rather than owning an apartment in a building, one owns a boat slip on the water. The term is a portmanteau of "dock" and "condominium." In addition to the exclusive right to use the boat slip, ownership also provides one with the right to use the common elements of the marina, much the same as one would have the right to use the common areas in a residential condominium development. Also, unit owners may use, rent, or sell their unit at any time, subject to association approval.
Dockominium
Similar to a condominium, a management company manages the common areas and provides all required services such as maintenance, security, insurance, bookkeeping, legal, and overall management and supervision of the dockominium facility. A monthly fee is charged to cover these expenses. Typically, water is included, while electricity and cable, etc. are billed separately via the management association. Real estate taxes are separately assessed by the municipality and are the responsibility of the unit owner.
Purpose
A dockominium is created when a marina converts or sells individual slips to individual owners. Traditionally, marinas are in the business of renting or leasing space. A comparison would be the conversion of a rental apartment to a condominium. An association is created that monitors the maintenance and operation of the marina. Individual owners are responsible for paying their monthly, quarterly, or annual association dues and for paying their own property taxes assessed on the slip. Dockominium conversions are a popular trend taking place in the marina industry in high demand areas focusing on the luxury markets.
Limits
However, despite the advantages, whether or not dockominium sales are legal varies according to the laws of each area. Few marina owners also own the land under the water; most have only an easement to the property. Individual unit sales may therefore conflict with the public trust doctrine, the legal concept that public trust lands, waters, and living resources in a state are held by the State in trust for the benefit of all of the people.
See also
Condominium
Marina
Real estate
Coastal construction
Condominium | Dockominium | [
"Engineering"
] | 439 | [
"Construction",
"Coastal construction"
] |
9,409,080 | https://en.wikipedia.org/wiki/G%20protein-coupled%20receptor%20kinase%202 | G-protein-coupled receptor kinase 2 (GRK2) is an enzyme that in humans is encoded by the ADRBK1 gene. GRK2 was initially called Beta-adrenergic receptor kinase (βARK or βARK1), and is a member of the G protein-coupled receptor kinase subfamily of the Ser/Thr protein kinases that is most highly similar to GRK3(βARK2).
Functions
G protein-coupled receptor kinases phosphorylate activated G protein-coupled receptors, which promotes the binding of an arrestin protein to the receptor. Arrestin binding to phosphorylated, active receptor prevents receptor stimulation of heterotrimeric G protein transducer proteins, blocking their cellular signaling and resulting in receptor desensitization. Arrestin binding also directs receptors to specific cellular internalization pathways, removing the receptors from the cell surface and also preventing additional activation. Arrestin binding to phosphorylated, active receptor also enables receptor signaling through arrestin partner proteins. Thus the GRK/arrestin system serves as a complex signaling switch for G protein-coupled receptors.
GRK2 and the closely related GRK3 phosphorylate receptors at sites that encourage arrestin-mediated receptor desensitization, internalization and trafficking rather than arrestin-mediated signaling (in contrast to GRK5 and GRK6, which have the opposite effect). This difference is one basis for pharmacological biased agonism (also called functional selectivity), where a drug binding to a receptor may bias that receptor’s signaling toward a particular subset of the actions stimulated by that receptor.
GRK2 is expressed broadly in tissues, but generally at higher levels than the related GRK3. GRK2 was originally identified as a protein kinase that phosphorylated the β2-adrenergic receptor, and has been most extensively studied as a regulator of adrenergic receptors (and other GPCRs) in the heart, where it has been proposed as a drug target to treat heart failure. Strategies to inhibit GRK2 include using small molecules (including Paroxetine and Compound-101) and using gene therapy approaches utilizing regulatory domains of GRK2 (particularly overexpressing the carboxy terminal pleckstrin-homology (PH) domain that binds the G protein βγ-subunit complex and inhibits GRK2 activation (often called the “βARKct”), or just a peptide from this PH domain).
GRK2 and the related GRK3 can interact with heterotrimeric G protein subunits resulting from GPCR activation, both to be activated and to regulate G protein signaling pathways. GRK2 and GRK3 share a carboxyl terminal pleckstrin homology (PH) domain that binds to G protein βγ subunits, and GPCR activation of heterotrimeric G proteins releases this free βγ complex that binds to GRK2/3 to recruit these kinases to the cell membrane precisely at the location of the activated receptor, augmenting GRK activity to regulate the activated receptor. The amino terminal RGS-homology (RH) domain of GRK2 and GRK3 binds to heterotrimeric G protein subunits of the Gq family to reduce Gq signaling by sequestering active G proteins away from their effector proteins such as phospholipase C-beta; but the GRK2 and GRK3 RH domains are unable to function as GTPase-activating proteins (as do traditional RGS proteins) to turn off G protein signaling.
Interactions
GRK2 has been shown to interact with numerous protein partners, including:
G protein βγ complex
G protein GNAQ family members
GIT1 and GIT2
PDE6G
PRKCB1
Src
See also
G protein-coupled receptor kinases
G protein
desensitization (medicine)
arrestin
Kinase
References
External links
Proteins
EC 2.7.11
Transferases
Protein kinases | G protein-coupled receptor kinase 2 | [
"Chemistry"
] | 832 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,411,116 | https://en.wikipedia.org/wiki/Complement%20control%20protein | Complement control proteins are proteins that interact with components of the complement system.
The complement system is tightly regulated by a network of proteins known as "regulators of complement activation (RCA)" that help distinguish target cells as "self" or "non-self." A subset of this family of proteins, complement control proteins (CCP), are characterized by domains of conserved repeats that direct interaction with components of the complement system. These "Sushi" domains have been used to identify other putative members of the CCP family. There are many other RCA proteins that do not fall into this family.
Most CCPs prevent activation of the complement system on the surface of host cells and protect host tissues against damage caused by autoimmunity. Because of this, these proteins play important roles in autoimmune disorders and cancers.
Members
Most of the well-studied proteins within this family can be categorized in two classes:
Membrane-bound complement regulators
Membrane Cofactor Protein, MCP (CD46)
Decay Accelerating Factor, DAF (CD55)
Protectin (CD59)
Complement C3b/C4b Receptor 1, CR1 (CD35)
Complement Regulator of the Immunoglobulin Superfamily, CRIg
Soluble complement regulators
Factor H
C4-Binding Protein (C4bp)
Other proteins with characteristic CCP domains have been identified including members of the sushi domain containing (SUSD) protein family and Human CUB and sushi multiple domains family (CSMD).
Mechanisms of protection
Every cell in the human body is protected by one or more of the membrane-associated RCA proteins, CR1, DAF or MCP. Factor H and C4BP circulate in the plasma and are recruited to self-surfaces through binding to host-specific polysaccharides such as the glycosaminoglycans.
Most CCPs function by preventing convertase activity. Convertases, specifically the C3 convertases C3b.Bb and C4b.2a, are the enzymes that drive complement activation by activating C3b, a central component of the complement system. Some CCPs, such as CD46, recruit other RCAs to proteolytically inactivate developing convertases. CD55 and other CCPs promote the rapid dissociation of active enzymes. Other CCPs prevent the activity of terminal effectors of the complement system, CD59 for example blocks oligomerization of the complement peptide C9 stalling the formation of the Membrane Attack Complex (MAC).
For example, C3b.Bb is an important convertase that is part of the alternative pathway, and it is formed when factor B binds C3b and is subsequently cleaved. To prevent this from happening, factor H competes with factor B to bind C3b; if it manages to bind, then the convertase is not formed. Factor H can bind C3b much more easily in the presence of sialic acid, which is a component of most cells in the human body; conversely, in the absence of sialic acid, factor B can bind C3b more easily. This means that if C3b is bound to a "self" cell, the presence of sialic acid and the binding of factor H will prevent the complement cascade from activating; if C3b is bound to a bacterium, factor B will bind and the cascade will be set off as normal. This mechanism of immune regulation using Factor H has been exploited by several bacterial pathogens.
Structure
RCA proteins typically possess CCP domains, also termed Sushi domains or Short Consensus Repeats (SCR). Such beta-sandwich domains contain about 60 amino acid residues, each with 4 conserved cysteines arranged in two conserved disulfide bonds (oxidized in 'abab' manner), and a conserved tryptophan, but otherwise can vary greatly in sequence. Recently, it has been demonstrated that the order, spatial relationship, and structure of these domains is essential for determining function.
The first CCP structure determined was a solution structure of the 16th module of factor H (pdb:1hcc). Since then, other CCP domains have been solved either by NMR-spectroscopy (also relaxation studies, e.g. module 2 and 3 from CD55 (pdb:1nwv)) or by X-ray diffraction (also with co-crystallized partner, e.g. CR2 CCP modules complexed with C3d (pdb:1ghq)).
Clinical significance
Complement has been implicated in many diseases associated with inflammation and autoimmunity. Efforts to develop therapeutics that target the interactions between the RCA network, CCPs, and components of the complement system have led to the development of successful drugs including Eculizumab.
There are two primary mechanisms by which dysfunction of complement can contribute to tissue damage:
Decreased protection of host tissues from complement activation due to the absence or lack of function of CCPs
Exhaustion of RCAs due to exposure of host cells that activate complement (either through direct damage or dysfunction) or prolonged attack by a potential pathogen such as during sepsis
The importance of complement regulation for good health is highlighted by recent work that seems to imply that individuals carrying point mutations or single nucleotide polymorphisms in their genes for factor H may be more susceptible to diseases including atypical hemolytic uremic syndrome, dense deposit diseases (or membranoproliferative glomerulonephritis type 2) and - most notably because of its prevalence in the elderly - age-related macular degeneration. Transgenic pigs that express human complement regulation factors were some of the first transgenic pigs used for xenotransplantation.
Complement control proteins also play a role in malignancy. Complement proteins protect against malignant cells- both by direct complement attack and through initiation of Complement-dependent cytotoxicity, which synergises with specific monoclonal antibody therapies. However, some malignant cells have been shown to have increased expression of membrane-bound complement control proteins, especially CD46, DAF and CD59. This mechanism allows some tumours to evade complement action.
CCPs have been exploited extensively by pathogenic microbes. Neisseria gonorhoeae and Neisseria meningitidis, the bacteria responsible for gonorrhea and meningitis have many well-studied evasion strategies involving CCPs, including binding soluble regulators like Factor H and C4bp. Many viruses, such as Vaccinia incorporate mimics of CCPs into their envelope for the purposes of evading the complement system. Still other microbes such as the measles virus use CCPs as receptors to gain entry to cells during infection. Each of these strategies may provide targets for the development of vaccines, as with the case of N. meningitidis.
Certain forms of schizophrenia are characterised by an underlying biological mechanism of excessive synaptic pruning, mediated by a dysregulated complement system in the brain. Accordingly, genetic variants of a brain-specific complement inhibitor, CSMD1, are associated with the risk of developing schizophrenia.
Sources
Further reading
External links
Complement system
Proteins | Complement control protein | [
"Chemistry"
] | 1,481 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,412,979 | https://en.wikipedia.org/wiki/Pushforward%20measure | In measure theory, a pushforward measure (also known as push forward, push-forward or image measure) is obtained by transferring ("pushing forward") a measure from one measurable space to another using a measurable function.
Definition
Given measurable spaces (X1, Σ1) and (X2, Σ2), a measurable mapping f : X1 → X2 and a measure μ : Σ1 → [0, +∞], the pushforward of μ by f is defined to be the measure f∗(μ) : Σ2 → [0, +∞] given by
f∗(μ)(B) = μ(f−1(B)) for B ∈ Σ2.
This definition applies mutatis mutandis for a signed or complex measure.
The pushforward measure is also denoted as , , , or .
Properties
Change of variable formula
Theorem: A measurable function g on X2 is integrable with respect to the pushforward measure f∗(μ) if and only if the composition g ∘ f is integrable with respect to the measure μ. In that case, the integrals coincide, i.e.,
∫X2 g d(f∗μ) = ∫X1 (g ∘ f) dμ.
Note that in the previous formula the first integral is over X2 and the second over X1 = f−1(X2).
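As a concrete illustration (an added worked example, not part of the original article), take X1 = [0, 1] with Lebesgue measure λ and f(t) = t²; the pushforward f∗(λ) then has density 1/(2√x) on [0, 1]:

```latex
% Change of variables for f(t) = t^2 on [0,1] with Lebesgue measure \lambda:
% substituting x = t^2 (so dx = 2t\,dt) shows that, for any integrable g,
\int_{[0,1]} g \,\mathrm{d}\bigl(f_{*}\lambda\bigr)
  \;=\; \int_{0}^{1} g(x)\,\frac{\mathrm{d}x}{2\sqrt{x}}
  \;=\; \int_{0}^{1} g\bigl(t^{2}\bigr)\,\mathrm{d}t
  \;=\; \int_{[0,1]} (g \circ f)\,\mathrm{d}\lambda .
```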
Functoriality
Pushforwards of measures make it possible to induce, from a measurable function f : X → Y between measurable spaces, a map f∗ : M(X) → M(Y) between their spaces of measures.
As with many induced mappings, this construction has the structure of a functor, on the category of measurable spaces.
For the special case of probability measures, this property amounts to functoriality of the Giry monad.
Examples and applications
If is a probability space, is a measurable space, and is a -valued random variable, then the probability distribution of is the pushforward measure of by onto .
A natural "Lebesgue measure" on the unit circle S1 (here thought of as a subset of the complex plane C) may be defined using a push-forward construction and Lebesgue measure λ on the real line R. Let λ also denote the restriction of Lebesgue measure to the interval [0, 2π) and let f : [0, 2π) → S1 be the natural bijection defined by f(t) = exp(i t). The natural "Lebesgue measure" on S1 is then the push-forward measure f∗(λ). The measure f∗(λ) might also be called "arc length measure" or "angle measure", since the f∗(λ)-measure of an arc in S1 is precisely its arc length (or, equivalently, the angle that it subtends at the centre of the circle.)
The previous example extends nicely to give a natural "Lebesgue measure" on the n-dimensional torus Tn. The previous example is a special case, since S1 = T1. This Lebesgue measure on Tn is, up to normalization, the Haar measure for the compact, connected Lie group Tn.
Gaussian measures on infinite-dimensional vector spaces are defined using the push-forward and the standard Gaussian measure on the real line: a Borel measure γ on a separable Banach space X is called Gaussian if the push-forward of γ by any non-zero linear functional in the continuous dual space to X is a Gaussian measure on R.
Consider a measurable function f : X → X and the composition of f with itself n times:
f^(n) = f ∘ f ∘ ... ∘ f : X → X.
This iterated function forms a dynamical system. It is often of interest in the study of such systems to find a measure μ on X that the map f leaves unchanged, a so-called invariant measure, i.e. one for which f∗(μ) = μ.
One can also consider quasi-invariant measures for such a dynamical system: a measure μ on X is called quasi-invariant under f if the push-forward of μ by f is merely equivalent to the original measure μ, not necessarily equal to it. A pair of measures μ and ν on the same space are equivalent if and only if each is absolutely continuous with respect to the other (μ ≪ ν and ν ≪ μ), so μ is quasi-invariant under f if f∗(μ) and μ are mutually absolutely continuous.
Many natural probability distributions, such as the chi distribution, can be obtained via this construction.
Random variables induce pushforward measures. They map a probability space into a codomain space and endow that space with a probability measure defined by the pushforward. Furthermore, because random variables are functions (and hence total functions), the inverse image of the whole codomain is the whole domain, and the measure of the whole domain is 1, so the measure of the whole codomain is 1. This means that random variables can be composed ad infinitum and they will always remain random variables and endow the codomain spaces with probability measures.
A generalization
In general, any measurable function can be pushed forward. The push-forward then becomes a linear operator, known as the transfer operator or Frobenius–Perron operator. In finite spaces this operator typically satisfies the requirements of the Frobenius–Perron theorem, and the maximal eigenvalue of the operator corresponds to the invariant measure.
The adjoint to the push-forward is the pullback; as an operator on spaces of functions on measurable spaces, it is the composition operator or Koopman operator.
See also
Measure-preserving dynamical system
Normalizing flow
Optimal transport
Notes
References
Measures (measure theory) | Pushforward measure | [
"Physics",
"Mathematics"
] | 1,039 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
4,153,106 | https://en.wikipedia.org/wiki/T%20arm | The T-arm or T-loop is a specialized region on the tRNA molecule which acts as a special recognition site for the ribosome to form a tRNA-ribosome complex during protein biosynthesis or translation (biology).
The T-arm has two components to it; the T-stem and the T-loop.
The T-stem consists of a series of paired nucleotides, typically 5 pairs, but sometimes as few as 1 or as many as 6.
The T-loop is also often known as the TΨC arm due to the presence of ribothymidine (T/m5U), pseudouridine and cytidine residues. It folds into a unique structural element consisting of stacked bases in a U-turn, now termed the "T-loop motif".
In archaea, the m5U is replaced with N1-methylpseudouridine (m1Ψ). The m5U/m1Ψ modification at position 54 is thought to increase structural stability.
Organisms with T-loop lacking tRNA exhibit a much lower level of aminoacylation and EF-Tu-binding than in organisms which have the native tRNA.
The T-loop motif has been identified as a ubiquitous structural element in a number of noncoding RNAs. At least one other instance of the T-loop, found in rRNA, also carries the m5U modification.
References
RNA
Protein biosynthesis | T arm | [
"Chemistry"
] | 303 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
4,153,112 | https://en.wikipedia.org/wiki/Micromagnetics | Micromagnetics is a field of physics dealing with the prediction of magnetic behaviors at sub-micrometer length scales. The length scales considered are large enough for the atomic structure of the material to be ignored (the continuum approximation), yet small enough to resolve magnetic structures such as domain walls or vortices.
Micromagnetics can deal with static equilibria, by minimizing the magnetic energy, and with dynamic behavior, by solving the time-dependent dynamical equation.
History
Micromagnetics originated from a 1935 paper
by Lev Landau and Evgeny Lifshitz on antidomain walls.
Micromagnetics was then expanded upon by William Fuller Brown Jr. in several works in 1940-1941 using energy expressions taken from a 1938 paper by William Cronk Elmore.
According to D. Wei, Brown introduced the name "micromagnetics" in 1958.
The field prior to 1960 was summarised in Brown's book Micromagnetics.
In the 1970s, computational methods were developed for the analysis of recording media due to the introduction of personal computers.
Static micromagnetics
The purpose of static micromagnetics is to solve for the spatial distribution of the magnetization at equilibrium. In most cases, as the temperature is much lower than the Curie temperature of the material considered, the modulus of the magnetization is assumed to be everywhere equal to the saturation magnetization Ms. The problem then consists in finding the spatial orientation of the magnetization, which is given by the magnetization direction vector m = M / Ms, also called reduced magnetization.
The static equilibria are found by minimizing the magnetic energy,
subject to the constraint |M| = Ms or, equivalently, |m| = 1.
The contributions to this energy are the following:
Exchange energy
The exchange energy is a phenomenological continuum description of the quantum-mechanical exchange interaction. It is written as:
where is the exchange constant; , and are the components of ;
and the integral is performed over the volume of the sample.
The exchange energy tends to favor configurations where the magnetization varies slowly across the sample. This energy is minimized when the magnetization is perfectly uniform.
The exchange term is isotropic,
so any direction is equally acceptable.
Anisotropy energy
Magnetic anisotropy arises due to a combination of crystal structure and spin-orbit interaction. It can be generally written as:
E_anis = ∫_V F_anis(m) dV
where F_anis, the anisotropy energy density, is a function of the orientation of the magnetization. Minimum-energy directions for F_anis are called easy axes.
Time-reversal symmetry ensures that F_anis is an even function of m. The simplest such function is
F_anis = −K1 m_z²
where K1 is called the anisotropy constant. In this approximation, called uniaxial anisotropy, the easy axis is the z axis.
The anisotropy energy favors magnetic configurations where the magnetization is everywhere aligned along an easy axis.
Zeeman energy
The Zeeman energy is the interaction energy between the magnetization and any externally applied field. It is written as:
E_Zeeman = −μ0 Ms ∫_V m · H_a dV
where H_a is the applied field and μ0 is the vacuum permeability.
The Zeeman energy favors alignment of the magnetization parallel to the applied field.
Energy of the demagnetizing field
The demagnetizing field is the magnetic field created by the magnetic sample upon itself. The associated energy is:
E_demag = −(μ0 Ms / 2) ∫_V m · H_d dV
where H_d is the demagnetizing field. The field satisfies
∇ × H_d = 0
and hence can be written as the gradient of a potential, H_d = −∇U. This field depends on the magnetic configuration itself, and it can be found by solving
∇²U = Ms ∇ · m
inside of the body and
∇²U = 0
outside of the body.
These are supplemented with the boundary conditions on the surface of the body: U is continuous and
∂U_in/∂n − ∂U_out/∂n = Ms m · n
where n is the unit normal to the surface. Furthermore, the potential satisfies the condition that U and ∇U remain bounded as r → ∞. The solution of these equations (c.f. magnetostatics) is:
U(r) = (Ms / 4π) [ −∫_V ∇′ · m(r′) / |r − r′| dV′ + ∮_S m(r′) · n′ / |r − r′| dS′ ]
The quantity −∇ · m is often called the volume charge density, and m · n is called the surface charge density.
The energy of the demagnetizing field favors magnetic configurations that minimize magnetic charges. In particular, on the edges of the sample, the magnetization tends to run parallel to the surface. In most cases it is not possible to minimize this energy term at the same time as the others. The static equilibrium then is a compromise that minimizes the total magnetic energy, although it may not minimize individually any particular term.
Dzyaloshinskii–Moriya Interaction Energy
This interaction arises when a crystal lacks inversion symmetry, encouraging the magnetization to be perpendicular to its neighbours. It directly competes with the exchange energy. It is modelled with the energy contribution
where is the spiralization tensor,
that depends upon the crystal class. For bulk DMI,
and for a thin film in the plane
interfacial DMI takes the form
and for materials with symmetry class the energy contribution is
This term is important for the formation of magnetic skyrmions.
Magnetoelastic Energy
The magnetoelastic energy describes the energy storage due to elastic lattice distortions. It may be neglected if magnetoelastic coupled effects are neglected.
There exists a preferred local distortion of the crystalline solid associated with the magnetization director .
For a simple small-strain model, one can assume this strain to be isochoric and fully
isotropic in the lateral direction, yielding the deviatoric ansatz
where the material parameter is the isotropic magnetostrictive
constant. The elastic
energy density is assumed to be a function of the elastic, stress-producing
strains . A quadratic form for the magnetoelastic energy is
where
is the fourth-order elasticity tensor. Here the elastic response is assumed to be isotropic (based on
the two Lamé constants and ).
Taking into account the constant length of , we obtain the invariant-based representation
This energy term contributes to magnetostriction.
Dynamic micromagnetics
The purpose of dynamic micromagnetics is to predict the time evolution of the magnetic configuration. This is especially important if the sample is subject to some non-steady conditions such as the application of a field pulse or an AC field. This is done by solving the Landau-Lifshitz-Gilbert equation, which is a partial differential equation describing the evolution of the magnetization in terms of the local effective field acting on it.
Effective field
The effective field is the local field felt by the magnetization. The only real fields however are the magnetostatic field and the applied field. It can be described informally as the derivative of the magnetic energy density with respect to the orientation of the magnetization, as in:
H_eff = −(1 / (μ0 Ms)) ∂(dE/dV)/∂m
where dE/dV is the energy density. In variational terms, a change dm of the magnetization and the associated change dE of the magnetic energy are related by:
dE = −μ0 Ms ∫_V (dm · H_eff) dV
Since m is a unit vector, dm is always perpendicular to m. Then the above definition leaves unspecified the component of Heff that is parallel to m. This is usually not a problem, as this component has no effect on the magnetization dynamics.
From the expression of the different contributions to the magnetic energy, the effective field can be found to be (excluding the DMI and magnetoelastic contributions):
H_eff = (2A / (μ0 Ms)) ∇²m − (1 / (μ0 Ms)) ∂F_anis/∂m + H_a + H_d
Landau-Lifshitz-Gilbert equation
This is the equation of motion of the magnetization. It describes a Larmor precession of the magnetization around the effective field, with an additional damping term arising from the coupling of the magnetic system to the environment. The equation can be written in the so-called Gilbert form (or implicit form) as:
∂m/∂t = −γ m × H_eff + α m × ∂m/∂t
where γ is the electron gyromagnetic ratio and α the Gilbert damping constant.
It can be shown that this is mathematically equivalent to the following Landau-Lifshitz (or explicit) form:
∂m/∂t = −(γ / (1 + α²)) m × H_eff − (αγ / (1 + α²)) m × (m × H_eff)
where α is the Gilbert damping constant, characterizing how quickly the damping term takes away energy from the system (α = 0: no damping, permanent precession).
These equations preserve the constraint |m| = 1, since both forms give m · ∂m/∂t = 0, so the modulus of m is conserved in time.
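As a minimal numerical illustration (a sketch under simplifying assumptions, not a micromagnetic solver), the explicit Landau-Lifshitz form above can be integrated for a single macrospin in a constant effective field; the magnetization spirals toward the field direction while its length stays fixed. Field strength and time are in reduced units and the gyromagnetic ratio is set to 1.

```python
import numpy as np

def llg_macrospin(m0, h, alpha=0.1, dt=0.01, steps=5000):
    """Damped precession of a single unit spin m in a constant reduced field h.

    Integrates dm/dt = -g m x h - g*alpha m x (m x h), with g = 1/(1 + alpha**2)
    (gyromagnetic ratio set to 1), re-normalizing m after each Euler step.
    """
    m = np.asarray(m0, dtype=float)
    m /= np.linalg.norm(m)
    h = np.asarray(h, dtype=float)
    g = 1.0 / (1.0 + alpha ** 2)
    for _ in range(steps):
        mxh = np.cross(m, h)
        m = m + dt * (-g * mxh - g * alpha * np.cross(m, mxh))
        m /= np.linalg.norm(m)     # enforce the constraint |m| = 1
    return m

# A spin tilted away from a field along +z relaxes towards +z:
print(llg_macrospin(m0=[1.0, 0.0, 0.2], h=[0.0, 0.0, 1.0]))
```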
Applications
The interaction of micromagnetics with mechanics is also of interest in the context of industrial applications that deal with magneto-acoustic resonance such as in hypersound speakers, high frequency magnetostrictive transducers etc.
FEM simulations taking into account the effect of magnetostriction into micromagnetics are of importance. Such simulations use models described above within a finite element framework.
Apart from conventional magnetic domains and domain-walls, the theory also treats the statics and dynamics of topological line and point configurations, e.g. magnetic vortex and antivortex states; or even 3d-Bloch points, where, for example, the magnetization leads radially into all directions from the origin, or into topologically equivalent configurations. Thus in space, and also in time, nano- (and even pico-)scales are used.
The corresponding topological quantum numbers are thought to be used as information carriers, to apply the most recent, and already studied, propositions in information technology.
Another application that has emerged in the last decade is the application of micromagnetics towards neuronal stimulation. In this discipline, numerical methods such as finite-element analysis are used to analyze the electric/magnetic fields generated by the stimulation apparatus; then the results are validated or explored further using in-vivo or in-vitro neuronal stimulation. Several distinct set of neurons have been studied using this methodology including retinal neurons, cochlear neurons, vestibular neurons, and cortical neurons of embryonic rats.
See also
Magnetism
Magnetic nanoparticles
Footnotes and references
Further reading
External links
μMAG -- Micromagnetic Modeling Activity Group.
OOMMF -- Micromagnetic Modeling Tool.
MuMax -- GPU-accelerated Micromagnetic Modeling Tool.
Dynamical systems
Magnetic ordering
Magnetostatics | Micromagnetics | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,977 | [
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Mechanics",
"Condensed matter physics",
"Dynamical systems"
] |
4,153,139 | https://en.wikipedia.org/wiki/Cryptoregiochemistry | Cryptoregiochemistry refers to the site of initial oxidative attack in double bond formation by enzymes such as fatty acid desaturases. This is a mechanistic parameter that is usually determined through the use of kinetic isotope effect experiments, based on the premise that the initial C-H bond cleavage step should be energetically more difficult and therefore more sensitive to isotopic substitution than the second C-H bond breaking step.
References
Chemical kinetics
Stereochemistry | Cryptoregiochemistry | [
"Physics",
"Chemistry"
] | 97 | [
"Chemical reaction engineering",
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Spacetime",
"Chemical kinetics"
] |
4,154,187 | https://en.wikipedia.org/wiki/Cosmogenic%20nuclide | Cosmogenic nuclides (or cosmogenic isotopes) are rare nuclides (isotopes) created when a high-energy cosmic ray interacts with the nucleus of an in situ Solar System atom, causing nucleons (protons and neutrons) to be expelled from the atom (see cosmic ray spallation). These nuclides are produced within Earth materials such as rocks or soil, in Earth's atmosphere, and in extraterrestrial items such as meteoroids. By measuring cosmogenic nuclides, scientists are able to gain insight into a range of geological and astronomical processes. There are both radioactive and stable cosmogenic nuclides. Some of these radionuclides are tritium, carbon-14 and phosphorus-32.
Certain light (low atomic number) primordial nuclides (isotopes of lithium, beryllium and boron) are thought to have been created not only during the Big Bang, but also (and perhaps primarily) to have been made after the Big Bang, but before the condensation of the Solar System, by the process of cosmic ray spallation on interstellar gas and dust. This explains their higher abundance in cosmic dust as compared with their abundances on Earth. This also explains the overabundance of the early transition metals just before iron in the periodic table – the cosmic-ray spallation of iron produces scandium through chromium on the one hand and helium through boron on the other. However, the arbitrary defining qualification for cosmogenic nuclides of being formed "in situ in the Solar System" (meaning inside an already aggregated piece of the Solar System) prevents primordial nuclides formed by cosmic ray spallation before the formation of the Solar System from being termed "cosmogenic nuclides"—even though the mechanism for their formation is exactly the same. These same nuclides still arrive on Earth in small amounts in cosmic rays, and are formed in meteoroids, in the atmosphere, on Earth, "cosmogenically". However, beryllium (all of it stable beryllium-9) is present primordially in the Solar System in much larger amounts, having existed prior to the condensation of the Solar System, and thus present in the materials from which the Solar System formed.
To make the distinction in another fashion, the timing of their formation determines which subset of cosmic ray spallation-produced nuclides are termed primordial or cosmogenic (a nuclide cannot belong to both classes). By convention, certain stable nuclides of lithium, beryllium, and boron are thought to have been produced by cosmic ray spallation in the period of time between the Big Bang and the Solar System's formation (thus making these primordial nuclides, by definition) are not termed "cosmogenic", even though they were formed by the same process as the cosmogenic nuclides (although at an earlier time). The primordial nuclide beryllium-9, the only stable beryllium isotope, is an example of this type of nuclide.
In contrast, even though the radioactive isotopes beryllium-7 and beryllium-10 fall into this series of three light elements (lithium, beryllium, boron) formed mostly by cosmic ray spallation nucleosynthesis, both of these nuclides have half lives too short (53 days and ca. 1.4 million years, resp.) for them to have been formed before the formation of the Solar System, and thus they cannot be primordial nuclides. Since the cosmic ray spallation route is the only possible source of beryllium-7 and beryllium-10 occurrence naturally in the environment, they are therefore cosmogenic.
Cosmogenic nuclides
Here is a list of radioisotopes formed by the action of cosmic rays; the list also contains the production mode of the isotope. Most cosmogenic nuclides are formed in the atmosphere, but some are formed in situ in soil and rock exposed to cosmic rays, notably calcium-41 in the table below.
Applications in geology listed by isotope
Use in geochronology
As seen in the table above, there are a wide variety of useful cosmogenic nuclides which can be measured in soil, rocks, groundwater, and the atmosphere. These nuclides all share the common feature of being absent in the host material at the time of formation. These nuclides are chemically distinct and fall into two categories. The nuclides of interest are either noble gases which due to their inert behavior are inherently not trapped in a crystallized mineral or has a short enough half-life such that it has decayed since nucleosynthesis, but a long enough half-life such that it has built up measurable concentrations. The former includes measuring abundances of 81Kr and 39Ar whereas the latter includes measuring abundances of 10Be, 14C, and 26Al.
Three types of cosmic-ray reactions can occur once a cosmic ray strikes matter which in turn produce the measured cosmogenic nuclides.
cosmic ray spallation, which is the most common reaction on the near-surface (typically 0 to 60 cm below) the Earth and can create secondary particles which can cause additional reaction upon interaction with another nuclei called a collision cascade.
muon capture, which pervades at depths a few meters below the subsurface because muons are inherently less reactive; in some cases, high-energy muons can reach greater depths
neutron capture, which due to the neutron's low energy are captured into a nucleus, most commonly by water, but this process is highly dependent on snow, soil moisture and trace element concentrations.
Corrections for cosmic-ray fluxes
Since the Earth bulges at the equator and mountains and deep oceanic trenches allow for deviations of several kilometers relative to a uniformly smooth spheroid, cosmic rays bombard the Earth's surface unevenly based on the latitude and altitude. Thus, many geographic and geologic considerations must be understood in order for cosmic-ray flux to be accurately determined. Atmospheric pressure, for example, which varies with altitude, can change the production rate of nuclides within minerals by a factor of 30 between sea level and the top of a 5 km high mountain. Even variations in the slope of the ground can affect how far high-energy muons can penetrate the subsurface. Geomagnetic field strength which varies over time affects the production rate of cosmogenic nuclides though some models assume variations of the field strength are averaged out over geologic time and are not always considered.
See also
Environmental radioactivity
References
Concepts in astrophysics
Environmental isotopes
Geochemistry
Nuclear technology
Nuclear chemistry
Nuclear physics
Radioactivity
Radiometric dating | Cosmogenic nuclide | [
"Physics",
"Chemistry"
] | 1,407 | [
"Concepts in astrophysics",
"Nuclear chemistry",
"Environmental isotopes",
"Astrophysics",
"Nuclear technology",
"Isotopes",
"Radiometric dating",
"nan",
"Nuclear physics",
"Radioactivity"
] |
4,154,507 | https://en.wikipedia.org/wiki/Polymer%20brush | In materials science, a polymer brush is the name given to a surface coating consisting of polymers tethered to a surface. The brush may be either in a solvated state, where the tethered polymer layer consists of polymer and solvent, or in a melt state, where the tethered chains completely fill up the space available. These polymer layers can be tethered to flat substrates such as silicon wafers, or highly curved substrates such as nanoparticles. Also, polymers can be tethered in high density to another single polymer chain, although this arrangement is normally named a bottle brush. Additionally, there is a separate class of polyelectrolyte brushes, when the polymer chains themselves carry an electrostatic charge.
The brushes are often characterized by the high density of grafted chains. The limited space then leads to a strong extension of the chains. Brushes can be used to stabilize colloids, reduce friction between surfaces, and to provide lubrication in artificial joints.
Polymer brushes have been modeled with molecular dynamics, Monte Carlo methods, Brownian dynamics simulations, and molecular theories.
Structure
Polymer molecules within a brush are stretched away from the attachment surface as a result of the fact that they repel each other (steric repulsion or osmotic pressure). More precisely, they are more elongated near the attachment point and unstretched at the free end, as depicted on the drawing.
More precisely, within the approximation derived by Milner, Witten, Cates, the average density of all monomers in a given chain is always the same up to a prefactor:
where is the altitude of the end monomer and the number of monomers per chain.
The averaged density profile of the end monomers of all attached chains, convoluted with the above density profile for one chain, determines the density profile of the brush as a whole:
A dry brush has a uniform monomer density up to some altitude . One can show that the corresponding end monomer density profile is given by:
where is the monomer size.
The above monomer density profile for one single chain minimizes the total elastic energy of the brush,
regardless of the end monomer density profile , as shown in.
From a dry brush to any brush
As a consequence, the structure of any brush can be derived from the brush density profile . Indeed, the free end distribution is simply a convolution of the density profile with the free end distribution of a dry brush:
.
Correspondingly, the brush elastic free energy is given by:
.
This method has been used to derive wetting properties of polymer melts on polymer brushes of the same species and to understand fine interpenetration asymmetries between copolymer lamellae that may yield very unusual non-centrosymmetric lamellar structures.
Applications
Polymer brushes can be used in Area-selective deposition. Area-selective deposition is a promising technique for positional self-alignment of materials at a prepatterned surface.
See also
Dendronized polymer
References
Surface science
Soft matter
Polymer chemistry | Polymer brush | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 624 | [
"Soft matter",
"Materials science",
"Surface science",
"Condensed matter physics",
"Polymer chemistry"
] |
4,156,844 | https://en.wikipedia.org/wiki/Cardiovascular%20physiology | Cardiovascular physiology is the study of the cardiovascular system, specifically addressing the physiology of the heart ("cardio") and blood vessels ("vascular").
These subjects are sometimes addressed separately, under the names cardiac physiology and circulatory physiology.
Although the different aspects of cardiovascular physiology are closely interrelated, the subject is still usually divided into several subtopics.
Heart
Cardiac output (= heart rate × stroke volume; can also be calculated with the Fick principle or by palpation; see the worked sketch after this list)
Stroke volume (= end-diastolic volume − end-systolic volume)
Ejection fraction (= stroke volume / end-diastolic volume)
Cardiac output is mathematically proportional to systole
Inotropic, chronotropic, and dromotropic states
Cardiac input (= heart rate × suction volume; can be calculated by inverting terms in the Fick principle)
Suction volume (= end-systolic volume + end-diastolic volume)
Injection fraction (= suction volume / end-systolic volume)
Cardiac input is mathematically proportional to diastole
Electrical conduction system of the heart
Electrocardiogram
Cardiac marker
Cardiac action potential
Frank–Starling law of the heart
Wiggers diagram
Pressure volume diagram
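As a worked illustration of the formulas listed above, the following sketch computes stroke volume, ejection fraction and cardiac output from example values. The inputs are illustrative numbers, not reference physiological data.

```python
# Illustrative calculation of the quantities defined in the list above.
# The input values are made-up examples, not clinical reference values.

heart_rate = 70.0            # beats per minute
end_diastolic_volume = 120.0 # mL
end_systolic_volume = 50.0   # mL

stroke_volume = end_diastolic_volume - end_systolic_volume   # mL per beat
ejection_fraction = stroke_volume / end_diastolic_volume     # dimensionless
cardiac_output = heart_rate * stroke_volume / 1000.0         # L per minute

print(f"stroke volume:     {stroke_volume:.0f} mL")
print(f"ejection fraction: {ejection_fraction:.0%}")
print(f"cardiac output:    {cardiac_output:.1f} L/min")
```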
Regulation of blood pressure
Baroreceptor
Baroreflex
Renin–angiotensin system
Renin
Angiotensin
Juxtaglomerular apparatus
Aortic body and carotid body
Autoregulation
Cerebral Autoregulation
Hemodynamics
Under most circumstances, the body attempts to maintain a steady mean arterial pressure.
When there is a major and immediate decrease (such as that due to hemorrhage or standing up), the body can increase the following:
Heart rate
Total peripheral resistance (primarily due to vasoconstriction of arteries)
Inotropic state
In turn, this can have a significant impact upon several other variables:
Stroke volume
Cardiac output
Pressure
Pulse pressure (systolic pressure − diastolic pressure)
Mean arterial pressure (usually approximated with diastolic pressure + 1/3 pulse pressure; both formulas are illustrated in the sketch after this list)
Central venous pressure
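A companion sketch for the two pressure formulas above, again with illustrative readings rather than reference data:

```python
# Illustrative use of the pressure formulas listed above.
# 120/80 mmHg is used purely as an example input.

systolic_pressure = 120.0   # mmHg
diastolic_pressure = 80.0   # mmHg

pulse_pressure = systolic_pressure - diastolic_pressure
mean_arterial_pressure = diastolic_pressure + pulse_pressure / 3.0  # common approximation

print(f"pulse pressure:         {pulse_pressure:.0f} mmHg")
print(f"mean arterial pressure: {mean_arterial_pressure:.0f} mmHg")
```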
Regional circulation
See also
Cardiovascular System Dynamics Society
References
External links
Cardiovascular Physiology Concepts - Comprehensive explanation of basic cardiovascular concepts, based on a textbook of the same name.
The Gross Physiology of the Cardiovascular System - Mechanical overview of cardiovascular function. Free eBook and video resources.
Clinical Sciences - Cardiovascular An iPhone app covering detailed cardiovascular physiology and anatomy
Quantitative Cardiovascular Physiology and Clinical Applications for Engineers
Cardiology
Circulatory system
Heart
Cardiac anatomy | Cardiovascular physiology | [
"Biology"
] | 523 | [
"Organ systems",
"Circulatory system"
] |
4,157,248 | https://en.wikipedia.org/wiki/Succinyl%20coenzyme%20A%20synthetase | Succinyl coenzyme A synthetase (SCS, also known as succinyl-CoA synthetase or succinate thiokinase or succinate-CoA ligase) is an enzyme that catalyzes the reversible reaction of succinyl-CoA to succinate. The enzyme facilitates the coupling of this reaction to the formation of a nucleoside triphosphate molecule (either GTP or ATP) from an inorganic phosphate molecule and a nucleoside diphosphate molecule (either GDP or ADP). It plays a key role as one of the catalysts involved in the citric acid cycle, a central pathway in cellular metabolism, and it is located within the mitochondrial matrix of a cell.
Chemical reaction and enzyme mechanism
Succinyl CoA synthetase catalyzes the following reversible reaction:
Succinyl CoA + Pi + NDP ↔ Succinate + CoA + NTP
where Pi denotes inorganic phosphate, NDP denotes nucleotide diphosphate (either GDP or ADP), and NTP denotes nucleotide triphosphate (either GTP or ATP). As mentioned, the enzyme facilitates coupling of the conversion of succinyl CoA to succinate with the formation of NTP from NDP and Pi. The reaction has a biochemical standard state free energy change of -3.4 kJ/mol. The reaction takes place by a three-step mechanism which is depicted in the image below. The first step involves displacement of CoA from succinyl CoA by a nucleophilic inorganic phosphate molecule to form succinyl phosphate. The enzyme then utilizes a histidine residue to remove the phosphate group from succinyl phosphate and generate succinate. Finally, the phosphorylated histidine transfers the phosphate group to a nucleoside diphosphate, which generates the high-energy carrying nucleoside triphosphate.
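Because the quoted standard free energy change is small, the reaction is readily reversible, consistent with its description above. The following sketch estimates the implied equilibrium constant, assuming the standard relation ΔG°′ = −RT ln K′ and a temperature of 37 °C (both assumptions are made for illustration only):

```python
import math

# Rough illustration: equilibrium constant implied by the quoted
# standard free energy change, via deltaG = -R*T*ln(K).

delta_g = -3.4e3        # J/mol, biochemical standard state value quoted above
gas_constant = 8.314    # J/(mol*K)
temperature = 310.15    # K (37 degrees Celsius, an assumed physiological value)

equilibrium_constant = math.exp(-delta_g / (gas_constant * temperature))
print(f"K' = {equilibrium_constant:.2f}")   # about 3.7: mildly favours succinate + NTP
```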
Structure
Subunits
Bacterial and mammalian SCSs are made up of α and β subunits. In E. coli two αβ heterodimers link together to form an α2β2 heterotetrameric structure. However, mammalian mitochondrial SCSs are active as αβ dimers and do not form a heterotetramer.
The E. coli SCS heterotetramer has been crystallized and characterized in great detail. As can be seen in Image 2, the two α subunits (pink and green) reside on opposite sides of the structure and the two β subunits (yellow and blue) interact in the middle region of the protein. The two α subunits only interact with a single β unit, whereas the β units interact with a single α unit (to form the αβ dimer) and the β subunit of the other αβ dimer. A short amino acid chain links the two β subunits which gives rise to the tetrameric structure.
The crystal structure of the succinyl-CoA synthetase alpha subunit (succinyl-CoA-binding isoform) was determined by Joyce et al. to a resolution of 2.10 Å, with PDB code 1CQJ.
Catalytic residues
Crystal structures for the E. coli SCS provide evidence that the coenzyme A binds within each α-subunit (within a Rossmann fold) in close proximity to a histidine residue (His246α). This histidine residue becomes phosphorylated during the succinate-forming step in the reaction mechanism. The exact binding location of succinate is not well defined. The formation of the nucleotide triphosphate occurs in an ATP grasp domain, which is located near the N-terminus of each β subunit. However, this grasp domain is located about 35 Å away from the phosphorylated histidine residue. This leads researchers to believe that the enzyme must undergo a major change in conformation to bring the histidine to the grasp domain and facilitate the formation of the nucleoside triphosphate. Mutagenesis experiments have determined that two glutamate residues (one near the catalytic histidine, Glu208α, and one near the ATP grasp domain, Glu197β) play a role in the phosphorylation and dephosphorylation of the histidine, but the exact mechanism by which the enzyme changes conformation is not fully understood.
Isoforms
Johnson et al. describe two isoforms of succinyl-CoA synthetase in amniotes, one that specifies synthesis of ATP, and one that synthesises GTP.
ATP-forming: SUCLA2
GTP-forming: SUCLG2
In amniotes, the enzyme is a heterodimer of an α- and a β-subunit. The specificity for either adenosine or guanosine phosphates is defined by the β-subunit, which is encoded by two genes: SUCLG2 is GTP-specific and SUCLA2 is ATP-specific, while SUCLG1 encodes the common α-subunit. The β variants are produced in different amounts in different tissues, giving rise to different GTP or ATP substrate requirements.
Tissues that are mainly energy-consuming, such as heart and brain, have more of the ATP-specific succinyl-CoA synthetase (ATPSCS), while synthetic tissues such as kidney and liver have more of the GTP-specific form (GTPSCS). Kinetic analysis of ATPSCS from pigeon breast muscle and GTPSCS from pigeon liver showed that their apparent Michaelis constants were similar for CoA, but different for the nucleotides, phosphate, and succinate. The largest difference was for succinate: the apparent Km of ATPSCS was 5 mM versus 0.5 mM for GTPSCS.
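To see what the tenfold difference in the apparent Km for succinate implies for reaction velocity, the following sketch evaluates the Michaelis–Menten expression for both isoforms; equal Vmax and the chosen substrate concentration are assumptions made purely for illustration.

```python
# Michaelis-Menten comparison of the two isoforms using the apparent Km values
# for succinate quoted above. Equal Vmax is an assumption made only for illustration.

def michaelis_menten(substrate_mM, km_mM, vmax=1.0):
    """Reaction velocity v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate_mM / (km_mM + substrate_mM)

succinate_mM = 1.0  # an arbitrary example concentration

v_atp_scs = michaelis_menten(succinate_mM, km_mM=5.0)   # ATP-forming enzyme, Km ~ 5 mM
v_gtp_scs = michaelis_menten(succinate_mM, km_mM=0.5)   # GTP-forming enzyme, Km ~ 0.5 mM

print(f"ATPSCS: v/Vmax = {v_atp_scs:.2f}")  # ~0.17
print(f"GTPSCS: v/Vmax = {v_gtp_scs:.2f}")  # ~0.67
```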
Function
Generation of nucleotide triphosphates
SCS is the only enzyme in the citric acid cycle that catalyzes a reaction in which a nucleotide triphosphate (GTP or ATP) is formed by substrate-level phosphorylation. Research studies have shown that E. coli SCSs can catalyze either GTP or ATP formation. However, mammals possess different types of SCSs that are specific for either GTP (G-SCS) or ATP (A-SCS) and are native to different types of tissue within the organism. An interesting study using pigeon cells showed that GTP specific SCSs were located in pigeon liver cells, and ATP specific SCSs were located in the pigeon breast muscle cells. Further research revealed a similar phenomenon of GTP and ATP specific SCSs in rat, mouse, and human tissue. It appears that tissue typically involved in anabolic metabolism (like the liver and kidneys) express G-SCS, whereas tissue involved in catabolic metabolism (like the brain, the heart, and muscular tissue) express A-SCS.
Formation of metabolic intermediates
SCS facilitates the flux of molecules into other metabolic pathways by controlling the interconversion between succinyl CoA and succinate. This is important because succinyl CoA is an intermediate necessary for porphyrin, heme, and ketone body biosynthesis.
Regulation and inhibition
In some bacteria, the enzyme is regulated at the transcriptional level. It has been demonstrated that the gene for SCS (sucCD) is transcribed along with the gene for α-ketoglutarate dehydrogenase (sucAB) under the control of a promoter called sdhC, which is part of the succinate dehydrogenase operon. This operon is up-regulated by the presence of oxygen and responds to a variety of carbon sources. Antibacterial drugs that prevent phosphorylation of histidine, like the molecule LY26650, are potent inhibitors of bacterial SCSs.
Optimal activity
Measurements (performed using a soy bean SCS) indicate an optimal temperature of 37 °C and an optimal pH of 7.0-8.0.
Role in disease
Fatal infantile lactic acidosis: Defective SCS has been implicated as a cause of fatal infantile lactic acidosis, which is a disease in infants that is characterized by the build-up of toxic levels of lactic acid. The condition (when it is most severe) results in death usually within 2–4 days after birth. It has been determined that patients with the condition display a two base pair deletion within the gene known as SUCLG1 that encodes the α subunit of SCS. As a result, functional SCS is absent in metabolism causing a major imbalance in flux between glycolysis and the citric acid cycle. Since the cells do not have a functional citric acid cycle, acidosis results because cells are forced to choose lactic acid production as the primary means of producing ATP.
See also
Citric acid cycle
Succinate dehydrogenase
Succinate—CoA ligase (ADP-forming)
Succinate—CoA ligase (GDP-forming)
References
External links
Metabolism
EC 6.2.1 | Succinyl coenzyme A synthetase | [
"Chemistry",
"Biology"
] | 1,888 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
4,157,363 | https://en.wikipedia.org/wiki/Stellated%20truncated%20hexahedron | In geometry, the stellated truncated hexahedron (or quasitruncated hexahedron, and stellatruncated cube) is a uniform star polyhedron, indexed as U19. It has 14 faces (8 triangles and 6 octagrams), 36 edges, and 24 vertices. It is represented by Schläfli symbol t'{4,3} or t{4/3,3}, and Coxeter-Dynkin diagram, . It is sometimes called quasitruncated hexahedron because it is related to the truncated cube, , except that the square faces become inverted into {8/3} octagrams.
Even though the stellated truncated hexahedron is a stellation of the truncated hexahedron, its core is a regular octahedron.
Orthographic projections
Related polyhedra
It shares the vertex arrangement with three other uniform polyhedra: the convex rhombicuboctahedron, the small rhombihexahedron, and the small cubicuboctahedron.
See also
List of uniform polyhedra
References
External links
Uniform polyhedra | Stellated truncated hexahedron | [
"Physics"
] | 232 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
4,157,674 | https://en.wikipedia.org/wiki/Great%20snub%20dodecicosidodecahedron | In geometry, the great snub dodecicosidodecahedron (or great snub dodekicosidodecahedron) is a nonconvex uniform polyhedron, indexed as U64. It has 104 faces (80 triangles and 24 pentagrams), 180 edges, and 60 vertices. It has Coxeter diagram . It has the unusual feature that its 24 pentagram faces occur in 12 coplanar pairs.
Cartesian coordinates
The vertices can be constructed by applying a group of linear transformations to a seed point whose coordinates involve the golden ratio. The transformations are generated by a rotation about a coordinate axis together with the maps that send a point to the even permutations of its coordinates with an even number of minus signs. The latter maps constitute the group of rotational symmetries of a regular tetrahedron; together with the rotation they constitute the group of rotational symmetries of a regular icosahedron. The 60 images of the seed point under this group are the vertices of a great snub dodecicosidodecahedron, and the edge length, circumradius and midradius can be computed from these coordinates.
Related polyhedra
It shares its vertices and edges, as well as 20 of its triangular faces and all its pentagrammic faces, with the great dirhombicosidodecahedron (although the latter has 60 edges not contained in the great snub dodecicosidodecahedron). It shares its other 60 triangular faces (and its pentagrammic faces again) with the great disnub dirhombidodecahedron.
The edges and triangular faces also occur in the compound of twenty octahedra. In addition, 20 of the triangular faces occur in one enantiomer of the compound of twenty tetrahemihexahedra, and the other 60 triangular faces occur in the other enantiomer.
Gallery
See also
List of uniform polyhedra
References
External links
Uniform polyhedra | Great snub dodecicosidodecahedron | [
"Physics"
] | 440 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
4,159,367 | https://en.wikipedia.org/wiki/Briggs%E2%80%93Rauscher%20reaction | The Briggs–Rauscher oscillating reaction is one of a small number of known oscillating chemical reactions. It is especially well suited for demonstration purposes because of its visually striking colour changes: the freshly prepared colourless solution slowly turns an amber colour, then suddenly changes to a very dark blue. This slowly fades to colourless and the process repeats, about ten times in the most popular formulation, before ending as a dark blue liquid smelling strongly of iodine.
History
The first known homogeneous oscillating chemical reaction, reported by W. C. Bray in 1921, was between hydrogen peroxide (H2O2) and iodate (IO3−) in acidic solution. Because of experimental difficulty, it attracted little attention and was unsuitable as a demonstration. In 1958 Boris Pavlovich Belousov discovered the Belousov–Zhabotinsky reaction (BZ reaction). The BZ reaction is suitable as a demonstration, but it too met with skepticism, largely because such oscillatory behaviour was unheard of up to that time, until Anatol Zhabotinsky learned of it and in 1964 published his research. In May 1972 a pair of articles in the Journal of Chemical Education brought it to the attention of Thomas Briggs and Warren Rauscher, two science instructors at Galileo High School in San Francisco. They discovered the Briggs–Rauscher oscillating reaction by replacing bromate (BrO3−) in the BZ reaction with iodate and adding hydrogen peroxide. They produced the strikingly colorful demonstration by adding starch indicator. Since then, many other investigators have added to the knowledge and uses of this very unusual reaction.
Description
Initial conditions
The initial aqueous solution contains hydrogen peroxide, an iodate, divalent manganese (Mn2+) as catalyst, a strong chemically unreactive acid (sulphuric acid (H2SO4) or perchloric acid (HClO4) are good), and an organic compound with an active ("enolic") hydrogen atom attached to carbon which will slowly reduce free iodine (I2) to iodide (I−). (Malonic acid (CH2(COOH)2) is excellent for that purpose.) Starch is optionally added as an indicator to show the abrupt increase in iodide ion concentration as a sudden change from amber (free iodine) to dark blue (the "iodine-starch complex", which requires both iodine and iodide.)
Recently it has been shown, however, that the starch is not only an indicator for iodine in the reaction. In the presence of starch the number of oscillations is higher and the period times are longer compared to the starch-free mixtures. It was also found that the iodine consumption segment within one period of oscillation is also significantly longer in the starch-containing mixtures. This suggests that the starch probably acts as a reservoir for the iodine and iodide because of the starch-triiodide equilibrium, thereby modifying the kinetics of the steps in which iodine and iodide are involved.
The reaction is "poisoned" by chloride (Cl−) ion, which must therefore be avoided, and will oscillate under a fairly wide range of initial concentrations. For recipes suitable for demonstration purposes, see Shakhashiri or Preparations in the external links.
Terminal conditions
The residual mixture contains iodinated malonic acid, inorganic acid, manganous catalysts, unreacted iodate and hydrogen peroxide. After the oscillations cease, the iodomalonic acid decomposes and iodine is produced. The rate of decomposition depends on the conditions. All of the components present in the residual mixture are of environmental concern: Iodate, iodine and hydrogen peroxide are strong oxidants, the acid is corrosive and manganese has been suggested to cause neurological disorders. A simple method has been developed employing thiosulfate and carbonate – two inexpensive salts – to remove all oxidants, neutralize the acidity and recover the manganous ion in the form of manganese dioxide.
Behaviour in time
The reaction shows recurring periodic changes, both gradual and sudden, which are visible: slow changes in the intensity of colour, interrupted by abrupt changes in hue. This demonstrates that a complex combination of slow and fast reactions is taking place simultaneously. For example, following the iodide ion concentration with a silver/silver iodide electrode shows sudden dramatic swings of several orders of magnitude separated by slower variations. This is shown by the oscillogram above.
Oscillations persist over a wide range of temperatures. Higher temperatures make everything happen faster, with some qualitative change observable (see the temperature series under External links).
Stirring the solution throughout the reaction is helpful for sharp colour changes; otherwise spatial variations may develop (see the unstirred demonstrations under External links).
Bubbles of free oxygen are evolved throughout, and in most cases, the final state is rich in free iodine.
Variants
Changing the initial concentrations
As noted above, the reaction will oscillate in a fairly wide range of initial concentrations of the reactants. For oscillometric demonstrations, more cycles are obtained in dilute solutions, which produce weaker colour changes. See for example the graph, which shows more than 40 cycles in 8 minutes.
Changing the organic substrate
Malonic acid has been replaced by other suitable organic molecules, such as acetone (CH3COCH3) or acetylacetone (CH3COCH2COCH3, pentane-2,4-dione). More exotic substrates have been used. The resulting oscillographic records often show distinctive features, for example as reported by Szalai.
Continuous flow reactors
The reaction may be made to oscillate indefinitely by using a continuous flow stirred tank reactor (CSTR), in which the starting reagents are continuously introduced and excess fluid is drawn.
Two dimensional phase space plots
By omitting the starch and monitoring the concentration of I2 photometrically, (i.e., measuring the absorption of a suitable light beam through the solution) while simultaneously monitoring the concentration of iodide ion with an iodide-selective electrode, a distorted spiral XY-plot will result. In a continuous-flow reactor, this becomes a closed loop (limit cycle).
Fluorescent demonstration
By replacing the starch with a fluorescent dye, Weinberg and Muyskens (2007) produced a demonstration visible in darkness under UV illumination.
Use as a biological assay
The reaction has been proposed as an assay procedure for antioxidants in foodstuffs. The sample to be tested is added at the onset of oscillations, stopping the action for a period proportional to its antioxidant activity. Compared to existing assay methods, this procedure is quick and easy and operates at the pH of the human stomach. For a detailed description suitable for high school chemistry, see Preparations.
In contrast to the findings referring predominantly to polyphenolic compounds reported in the literature cited above, it was found that salicylic acid, a simple monophenolic compound, did not stop the oscillations immediately after it was added to the active Briggs–Rauscher mixture. In the low concentration range salicylic acid only damped the oscillations, while at higher concentrations the damping effect was much stronger and complete inhibition was also observed. Sulfosalicylic acid, a derivative of salicylic acid, had practically no effect on the oscillations.
Chemical mechanism
The detailed mechanism of this reaction is quite complex. Nevertheless, a good general explanation can be given.
For best results, and to prevent side reactions that may interfere with the main reaction, the solutions are best prepared shortly before the reaction. If left undisturbed, or exposed to ultraviolet radiation, the reactants can decompose or react with themselves, interfering with the process.
The essential features of the system depend on two key processes, each of which involves many reactions working together:
A ("non-radical process"): The slow consumption of free iodine by the malonic acid substrate in the presence of iodate. This process involves the intermediate production of iodide ion.
B ("radical process"): A fast auto-catalytic process involving manganese and free radical intermediates, which converts hydrogen peroxide and iodate to free iodine and oxygen. This process also can consume iodide up to a limiting rate.
But process B can operate only at low concentrations of iodide, creating a feedback loop as follows:
Initially, iodide is low and process B generates free iodine, which gradually accumulates. Meanwhile, process A slowly generates the intermediate iodide ion out of the free iodine at an increasing rate proportional to its (i.e. I2) concentration. At a certain point, this overwhelms process B, stopping the production of more free iodine, which is still being consumed by process A. Thus, eventually the concentration of free iodine (and thus iodide) falls low enough for process B to start up again and the cycle repeats as long as the original reactants hold out.
The overall result of both processes is (again, approximately):
IO3− + 2 H2O2 + CH2(COOH)2 + H+ → ICH(COOH)2 + 2 O2 + 3 H2O
The colour changes seen during the reaction correspond to the actions of the two processes: the slowly increasing amber colour is due to the production of free iodine by process B. When process B stops, the resulting increase in iodide ion enables the sudden blue starch colour. But since process A is still acting, this slowly fades back to clear. The eventual resumption of process B is invisible, but can be revealed by the use of a suitable electrode.
A negative feedback loop which includes a delay (mediated here by process A) is a general mechanism for producing oscillations in many physical systems, but is very rare in nonbiological homogeneous chemical systems. (The BZ oscillating reaction has a somewhat similar feedback loop.)
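The delayed negative feedback described above can be caricatured with a deliberately crude two-variable model: an "iodine-like" quantity is produced while a delayed "iodide-like" quantity is below a threshold, and the iodide-like quantity is generated from it by a slower process. The sketch below is only a qualitative toy, not a kinetic model of the Briggs–Rauscher chemistry; all rate constants and thresholds are arbitrary.

```python
# A deliberately crude "two-process" caricature of the feedback loop described above.
# x stands in for free iodine (built up by the fast radical process B),
# y stands in for iodide (generated from x by the slow process A).
# Production of x switches off when the *delayed* value of y exceeds a threshold,
# mimicking the lag that makes the system oscillate. All numbers are arbitrary.

dt, total_time = 0.01, 60.0
delay_steps = int(2.0 / dt)          # feedback delay of 2 time units
x, y = 0.0, 0.0
y_history = [0.0] * delay_steps      # circular buffer holding y(t - delay)

k_on, k_consume, k_decay, y_threshold = 1.0, 0.5, 0.5, 0.5
trace = []

for step in range(int(total_time / dt)):
    y_delayed = y_history[step % delay_steps]
    production_on = y_delayed < y_threshold          # "process B" runs only at low iodide
    dx = (k_on if production_on else 0.0) - k_consume * x
    dy = k_consume * x - k_decay * y                 # "process A": iodine -> iodide, then removal
    x += dx * dt
    y += dy * dt
    y_history[step % delay_steps] = y
    trace.append((step * dt, x, y))

# Print a coarse sample of the trajectory; x and y rise and fall periodically.
for t, xv, yv in trace[::500]:
    print(f"t={t:5.1f}  iodine-like={xv:5.2f}  iodide-like={yv:5.2f}")
```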
External links
Videos
Continuously stirred demo showing rapid and uniform colour changes
Continuously stirred demo showing 16 colourful oscillations gradually increasing in intensity
Unstirred demo showing minor spatial variations
Unstirred demo showing extreme spatial variations
This demo runs to completion in 19 cycles. Here the blue starch complex appears late, so the variations in free iodine are plainly visible
This demo completes in 13 cycles. An iodide-selective electrode is used to produce a graph of I− in real time
This demo is continuously stirred and has notably distinct transitions
Effect of temperature
This series of four videos vividly shows the effect of temperature on the oscillations: 10 °C 22 °C 40 °C 60 °C
Preparations
from NCSU (PDF)
from about.com, with a brief description of the chemical mechanism
from John A. Pojman (uses readily available 3% H2O2)
complete description of use as an antioxidant assay suitable for use in high school chemistry class
References
Name reactions
Non-equilibrium thermodynamics
Articles containing video clips
Clock reactions | Briggs–Rauscher reaction | [
"Chemistry",
"Mathematics"
] | 2,319 | [
"Clock reactions",
"Non-equilibrium thermodynamics",
"Name reactions",
"Chemical kinetics",
"Dynamical systems"
] |
4,160,503 | https://en.wikipedia.org/wiki/Float%20switch | A float switch is a type of level sensor, a device used to detect the level of liquid within a tank. The switch may be used to control a pump, as an indicator, an alarm, or to control other devices.
One type of float switch uses a mercury switch inside a hinged float. Another common type is a float that raises a rod to actuate a microswitch. One pattern uses a reed switch mounted in a tube; a float, containing a magnet, surrounds the tube and is guided by it. When the float raises the magnet to the reed switch, it closes. Several reeds can be mounted in the tube for different level indications by one assembly.
A very common application is in sump pumps and condensate pumps where the switch detects the rising level of liquid in the sump or tank and energizes an electrical pump which then pumps liquid out until the level of the liquid has been substantially reduced, at which point the pump is switched off again. Float switches are often adjustable and can include substantial hysteresis. That is, the switch's "turn on" point may be much higher than the "shut off" point. This minimizes the on-off cycling of the associated pump.
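The hysteresis described above amounts to a simple piece of control logic, sketched below with made-up level thresholds. This is a generic illustration, not the behaviour of any particular product.

```python
# Generic hysteresis logic for a pump controlled by a float switch.
# The turn-on level is set well above the shut-off level so the pump
# does not rapidly cycle around a single set point. Thresholds are examples.

TURN_ON_LEVEL = 80.0    # cm: float closes the switch, pump starts
SHUT_OFF_LEVEL = 20.0   # cm: float opens the switch, pump stops

def update_pump(level_cm: float, pump_running: bool) -> bool:
    """Return the new pump state given the current liquid level."""
    if not pump_running and level_cm >= TURN_ON_LEVEL:
        return True          # sump has filled: start pumping
    if pump_running and level_cm <= SHUT_OFF_LEVEL:
        return False         # sump has been drawn down: stop pumping
    return pump_running      # otherwise keep the current state

# Simple demonstration: level rises while the pump is off and falls while it runs.
level, pump_on = 10.0, False
for minute in range(60):
    level += -5.0 if pump_on else 3.0
    new_state = update_pump(level, pump_on)
    if new_state != pump_on:
        print(f"t={minute} min, level={level:.0f} cm: pump {'started' if new_state else 'stopped'}")
    pump_on = new_state
```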
Some float switches contain a two-stage switch. As liquid rises to the trigger point of the first stage, the associated pump is activated. If the liquid continues to rise (perhaps because the pump has failed or its discharge is blocked), the second stage will be triggered. This stage may switch off the source of the liquid being pumped, trigger an alarm, or both.
Where level must be sensed inside a pressurized vessel, often a magnet is used to couple the motion of the float to a switch located outside the pressurized volume. In some cases, a rod through a stuffing box can be used to operate a switch, but this creates high drag and has a potential for leakage. Successful float switch installations minimize the opportunity for accumulation of dirt on the float that would impede its motion. Float switch materials are selected to resist the deleterious effects of corrosive process liquids. In some systems, a properly selected and sized float can be used to sense the interface level between two liquids of different density.
See also
Float (liquid level)
Fuel gauge
Level sensor
Sight glass
References
Fluid dynamics
Heating, ventilation, and air conditioning
Sensors
Mechanisms (engineering)
Pumps
Switches | Float switch | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 488 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Measuring instruments",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Piping",
"Sensors",
"Mechanisms (engineering)",
"Fluid dynamics"
] |
4,160,674 | https://en.wikipedia.org/wiki/Sight%20glass | A sight glass or water gauge is a type of level sensor, a transparent tube through which the operator of a tank or boiler can observe the level of liquid contained within.
Liquid in tanks
Simple sight glasses may be just a plastic or glass tube connected to the bottom of the tank at one end and the top of the tank at the other. The level of liquid in the sight glass will be the same as the level of liquid in the tank. Today, however, sophisticated float switches have replaced sight glasses in many such applications.
Steam boilers
If the liquid is hazardous or under pressure, more sophisticated arrangements must be made. In the case of a boiler, the pressure of the water below and the steam above is equal, so any change in the water level will be seen in the gauge. The transparent tube (the “glass” itself) may be mostly enclosed within a metal or toughened glass shroud to prevent it from being damaged through scratching or impact and offering protection to the operators in the case of breakage. This usually has a patterned backplate to make the magnifying effect of the water in the tube more obvious and so allow for easier reading. In some locomotives where the boiler is operated at very high pressures, the tube itself would be made of metal-reinforced toughened glass. It is important to keep the water at the specified level, otherwise the top of the firebox will be exposed, creating an overheat hazard and causing damage and possibly catastrophic failure.
To check that the device is offering a correct reading and the connecting pipes to the boiler are not blocked by scale, the water level needs to be “bobbed” by quickly opening the taps in turn and allowing a brief spurt of water through the drain cock.
The National Board of Boiler and Pressure Vessel Inspectors recommends a daily testing procedure described by the American National Standards Institute, chapter 2 part I-204.3 water level gauge. While not strictly required, this procedure is designed to allow an operator to safely verify that all parts of the sight glass are operating correctly and have free flowing connections to the boiler necessary for proper operation.
Failure
The gauge glass on a boiler needs to be inspected periodically and replaced if it is seen to have worn thin in the vicinity of the gland nuts, but a failure in service can still occur. Drivers are expected to carry two or three glass tubes, pre-cut to the required length, together with hemp or rubber seals, to replace the tubes on the road. Familiarity with this disquieting occurrence was considered so important that a glass would often be smashed deliberately while a trainee driver was on the footplate, to give him practice in fitting a new tube. Although automatic ball valves are fitted in the mounts to limit the release of steam and scalding water, these can fail through accumulation of limescale. It was standard procedure to hold the coal scoop in front of the face while the other hand, holding the cap for protection, reached to turn off the valves at both ends of the glass.
Reflex gauges
A reflex gauge is more complex in construction but can give a clearer distinction between gas (steam) and liquid (water). Instead of containing the media in a glass tube, the gauge consists of a vertically oriented slotted metal body with a strong glass plate mounted on the open side of the slot facing the operator. The rear of the glass, in contact with the media, has grooves moulded into its surface, running vertically. The grooves form a zig-zag pattern with 90° angles. Incident light entering the glass is refracted at the rear surface in contact with the media. In the region that is contact with the gas, most of the light is reflected from the surface of one groove to the next and back towards the operator, appearing silvery white. In the region that is in contact with the liquid, most of the light is refracted into the liquid causing this region to appear almost black to the operator. Well-known makes of reflex gauge are Clark-Reliance, IGEMA, TGI Ilmadur, Penberthy, Jerguson, Klinger, Cesare-Bonetti and Kenco. Due to the caustic nature of boiler anti-scaling treatments ("water softeners"), reflex gauges tend to become relatively rapidly etched by the water and lose their effectiveness at displaying the liquid level. Therefore, bi-colour gauges are recommended for certain types of boiler, particularly those operating at pressure above 60 bar.
Bi-colour gauges
A bi-colour gauge is generally preferred for caustic media in order to afford protection to the glass. The gauge consists of a vertically oriented slotted metal body with a strong plain glass to the front and the rear. The front and rear body surfaces are in non-parallel vertical planes. Behind the gauge body are light sources with two quite different wavelengths, typically red and green. Due to the different refraction of the red and green light, the liquid region appears green to the operator, while the gas region appears red. Unlike the reflex gauge, the glass has a plane surface which does not need to be in direct contact with the media and can be protected with a layer of a caustic-resistant transparent material such as silica. Well-known manufacturers of bi-colour level gauges include Clark-Reliance, Klinger, FPS-Aquarian, IGEMA and Quest-Tec.
Magnetic indicator
In a magnetic indicator, a float on the surface of the liquid contains a permanent magnet. The liquid is contained in a chamber of strong, non-magnetic material, avoiding the use of glass. The level indicator consists of a number of pivoting magnetic vanes arranged one above the other and placed close to the chamber containing the float. The two faces of the vanes are differently coloured. As the magnet passes up and down behind the vanes it causes them to rotate, displaying one colour for the region containing the liquid and another for the region containing gas. Magnetic indicators are stated in various manufacturers' literature to be most suitable for very high pressure and/or temperature and for aggressive liquids.
History
The first locomotive to be fitted with the device was built in 1829 by John Rastrick at his Stourbridge works.
Modern industrial sight glass
Industrial observational instruments have changed with industry itself. More structurally sophisticated than the water gauge, the contemporary sight glass — also called the sight window or sight port — can be found on the media vessel at chemical plants and in other industrial settings, including pharmaceutical, food, beverage and bio gas plants. Sight glasses enable operators to visually observe processes inside tanks, pipes, reactors and vessels.
The modern industrial sight glass is a glass disk held between two metal frames, which are secured by bolts and gaskets, or the glass disc is fused to the metal frame during manufacture. The glass used for this purpose is either soda lime glass or borosilicate glass, and the metal, usually a type of stainless steel, is chosen for desired properties of strength. Borosilicate glass is superior to other formulations in terms of chemical corrosion resistance and temperature tolerance, as well as transparency.
Fused sight glasses are also called mechanically prestressed glass, because the glass is strengthened by compression from the metal ring. Heat is applied to a glass disc and its surrounding steel ring, causing a fusion of the materials. As the steel cools, it contracts, compressing the glass and making it resistant to tension. Because glass typically breaks under tension, mechanically prestressed glass is unlikely to break and endanger workers. The strongest sight glasses are made with borosilicate glass because of the greater difference between its coefficient of thermal expansion and that of the steel ring.
See also
Fuel gauge
Fusible plug
References
External links
Reflex Gauge, Flat Glass or Transparent Gauge, and Ported Gauge, FPS-Aquarian
Heating, ventilation, and air conditioning
Measuring instruments
Volumetric instruments
Mechanical engineering
Glass applications | Sight glass | [
"Physics",
"Technology",
"Engineering"
] | 1,602 | [
"Applied and interdisciplinary physics",
"Volumetric instruments",
"Mechanical engineering",
"Measuring instruments"
] |
16,079,692 | https://en.wikipedia.org/wiki/Sewage%20treatment | Sewage treatment (or domestic wastewater treatment, municipal wastewater treatment) is a type of wastewater treatment which aims to remove contaminants from sewage to produce an effluent that is suitable to discharge to the surrounding environment or an intended reuse application, thereby preventing water pollution from raw sewage discharges. Sewage contains wastewater from households and businesses and possibly pre-treated industrial wastewater. There are a high number of sewage treatment processes to choose from. These can range from decentralized systems (including on-site treatment systems) to large centralized systems involving a network of pipes and pump stations (called sewerage) which convey the sewage to a treatment plant. For cities that have a combined sewer, the sewers will also carry urban runoff (stormwater) to the sewage treatment plant. Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes and nutrient removal. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic biological processes. A so-called quarternary treatment step (sometimes referred to as advanced treatment) can also be added for the removal of organic micropollutants, such as pharmaceuticals. This has been implemented in full-scale for example in Sweden.
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes. Design engineers and decision makers need to take into account technical and economical criteria of each alternative when choosing a suitable technology. Often, the main criteria for selection are: desired effluent quality, expected construction and operating costs, availability of land, energy requirements and sustainability aspects. In developing countries and in rural areas with low population densities, sewage is often treated by various on-site sanitation systems and not conveyed in sewers. These systems include septic tanks connected to drain fields, on-site sewage systems (OSS), vermifilter systems and many more. On the other hand, advanced and relatively expensive sewage treatment plants may include tertiary treatment with disinfection and possibly even a fourth treatment stage to remove micropollutants.
At the global level, an estimated 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%.
The treatment of sewage is part of the field of sanitation. Sanitation also includes the management of human waste and solid waste as well as stormwater (drainage) management. The term sewage treatment plant is often used interchangeably with the term wastewater treatment plant.
Terminology
The term sewage treatment plant (STP) (or sewage treatment works) is nowadays often replaced with the term wastewater treatment plant (WWTP). Strictly speaking, the latter is a broader term that can also refer to industrial wastewater treatment.
The terms water recycling center or water reclamation plants are also in use as synonyms.
Purposes and overview
The overall aim of treating sewage is to produce an effluent that can be discharged to the environment while causing as little water pollution as possible, or to produce an effluent that can be reused in a useful manner. This is achieved by removing contaminants from the sewage. It is a form of waste management.
With regards to biological treatment of sewage, the treatment objectives can include various degrees of the following: to transform or remove organic matter, nutrients (nitrogen and phosphorus), pathogenic organisms, and specific trace organic constituents (micropollutants).
Some types of sewage treatment produce sewage sludge which can be treated before safe disposal or reuse. Under certain circumstances, the treated sewage sludge might be termed biosolids and can be used as a fertilizer.
Sewage characteristics
Collection
Types of treatment processes
Sewage can be treated close to where the sewage is created, which may be called a decentralized system or even an on-site system (on-site sewage facility, septic tanks, etc.). Alternatively, sewage can be collected and transported by a network of pipes and pump stations to a municipal treatment plant. This is called a centralized system (see also sewerage and pipes and infrastructure).
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes (see list of wastewater treatment technologies). Very broadly, they can be grouped into high tech (high cost) versus low tech (low cost) options, although some technologies might fall into either category. Other grouping classifications are intensive or mechanized systems (more compact, and frequently employing high tech options) versus extensive or natural or nature-based systems (usually using natural treatment processes and occupying larger areas) systems. This classification may be sometimes oversimplified, because a treatment plant may involve a combination of processes, and the interpretation of the concepts of high tech and low tech, intensive and extensive, mechanized and natural processes may vary from place to place.
Low tech, extensive or nature-based processes
Examples for more low-tech, often less expensive sewage treatment systems are shown below. They often use little or no energy. Some of these systems do not provide a high level of treatment, or only treat part of the sewage (for example only the toilet wastewater), or they only provide pre-treatment, like septic tanks. On the other hand, some systems are capable of providing a good performance, satisfactory for several applications. Many of these systems are based on natural treatment processes, requiring large areas, while others are more compact. In most cases, they are used in rural areas or in small to medium-sized communities.
For example, waste stabilization ponds are a low cost treatment option with practically no energy requirements but they require a lot of land. Due to their technical simplicity, most of the savings (compared with high tech systems) are in terms of operation and maintenance costs.
Anaerobic digester types and anaerobic digestion, for example:
Upflow anaerobic sludge blanket reactor
Septic tank
Imhoff tank
Constructed wetland (see also biofilters)
Decentralized wastewater system
Nature-based solutions
On-site sewage facility
Sand filter
Vermi filter
Waste stabilization pond with sub-types:
e.g. Facultative ponds, high rate ponds, maturation ponds
Examples for systems that can provide full or partial treatment for toilet wastewater only:
Composting toilet (see also dry toilets in general)
Urine-diverting dry toilet
Vermifilter toilet
High tech, intensive or mechanized processes
Examples for more high-tech, intensive or mechanized, often relatively expensive sewage treatment systems are listed below. Some of them are energy intensive as well. Many of them provide a very high level of treatment. For example, broadly speaking, the activated sludge process achieves a high effluent quality but is relatively expensive and energy intensive.
Activated sludge systems
Aerobic treatment system
Enhanced biological phosphorus removal
Expanded granular sludge bed digestion
Filtration
Membrane bioreactor
Moving bed biofilm reactor
Rotating biological contactor
Trickling filter
Ultraviolet disinfection
Disposal or treatment options
There are other process options which may be classified as disposal options, although they can also be understood as basic treatment options. These include: Application of sludge, irrigation, soak pit, leach field, fish pond, floating plant pond, water disposal/groundwater recharge, surface disposal and storage.
The application of sewage to land is both a type of treatment and a type of final disposal. It leads to groundwater recharge and/or to evapotranspiration. Land application includes slow-rate systems, rapid infiltration, subsurface infiltration and overland flow. It is carried out by flooding, furrows, sprinklers and dripping. It is a treatment/disposal system that requires a large amount of land per person.
Design aspects
Population equivalent
The per-person organic matter load is a parameter used in the design of sewage treatment plants. This concept is known as population equivalent (PE). The base value used for PE can vary from one country to another. Commonly used definitions worldwide are: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. This concept is also used as a comparison parameter to express the strength of industrial wastewater relative to sewage.
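As a worked example of the population equivalent concept, the following sketch converts an assumed industrial discharge into PE using the 60 g BOD per person per day base value quoted above; the flow and BOD figures are invented solely to show the arithmetic.

```python
# Worked example of the population equivalent (PE) concept defined above,
# using the common base value of 60 g BOD per person per day.
# The industrial discharge figures are invented for illustration.

BOD_PER_PERSON_PER_DAY_G = 60.0     # g BOD/(person*day)

industrial_flow_m3_per_day = 500.0  # example effluent flow
industrial_bod_mg_per_l = 900.0     # example BOD concentration

# 1 m3 multiplied by 1 mg/L gives 1 g of load.
bod_load_g_per_day = industrial_flow_m3_per_day * industrial_bod_mg_per_l
population_equivalent = bod_load_g_per_day / BOD_PER_PERSON_PER_DAY_G

print(f"organic load: {bod_load_g_per_day / 1000:.0f} kg BOD/day")
print(f"population equivalent: {population_equivalent:,.0f} PE")
```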
Process selection
When choosing a suitable sewage treatment process, decision makers need to take into account technical and economical criteria. Therefore, each analysis is site-specific. A life cycle assessment (LCA) can be used, and criteria or weightings are attributed to the various aspects. This makes the final decision subjective to some extent. A range of publications exist to help with technology selection.
In industrialized countries, the most important parameters in process selection are typically efficiency, reliability, and space requirements. In developing countries, they might be different and the focus might be more on construction and operating costs as well as process simplicity.
Choosing the most suitable treatment process is complicated and requires expert inputs, often in the form of feasibility studies. This is because the main important factors to be considered when evaluating and selecting sewage treatment processes are numerous. They include: process applicability, applicable flow, acceptable flow variation, influent characteristics, inhibiting or refractory compounds, climatic aspects, process kinetics and reactor hydraulics, performance, treatment residuals, sludge processing, environmental constraints, requirements for chemical products, energy and other resources; requirements for personnel, operating and maintenance; ancillary processes, reliability, complexity, compatibility, area availability.
With regards to environmental impacts of sewage treatment plants the following aspects are included in the selection process: Odors, vector attraction, sludge transportation, sanitary risks, air contamination, soil and subsoil contamination, surface water pollution or groundwater contamination, devaluation of nearby areas, inconvenience to the nearby population.
Odor control
Odors emitted by sewage treatment are typically an indication of an anaerobic or septic condition. Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odors with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the noxious gases. Other methods of odor control exist, including addition of iron salts, hydrogen peroxide, calcium nitrate, etc. to manage hydrogen sulfide levels.
Energy requirements
The energy requirements vary with type of treatment process as well as sewage strength. For example, constructed wetlands and stabilization ponds have low energy requirements. In comparison, the activated sludge process has a high energy consumption because it includes an aeration step. Some sewage treatment plants produce biogas from their sewage sludge treatment process by using a process called anaerobic digestion. This process can produce enough energy to meet most of the energy needs of the sewage treatment plant itself.
For activated sludge treatment plants in the United States, around 30 percent of the annual operating costs is usually required for energy. Most of this electricity is used for aeration, pumping systems and equipment for the dewatering and drying of sewage sludge. Advanced sewage treatment plants, e.g. for nutrient removal, require more energy than plants that only achieve primary or secondary treatment.
Small rural plants using trickling filters may operate with no net energy requirements, the whole process being driven by gravitational flow, including tipping bucket flow distribution and the desludging of settlement tanks to drying beds. This is usually only practical in hilly terrain and in areas where the treatment plant is relatively remote from housing because of the difficulty in managing odors.
Co-treatment of industrial effluent
In highly regulated developed countries, industrial wastewater usually receives at least pretreatment if not full treatment at the factories themselves to reduce the pollutant load, before discharge to the sewer. The pretreatment has the following two main aims: Firstly, to prevent toxic or inhibitory compounds entering the biological stage of the sewage treatment plant and reduce its efficiency. And secondly to avoid toxic compounds from accumulating in the produced sewage sludge which would reduce its beneficial reuse options. Some industrial wastewater may contain pollutants which cannot be removed by sewage treatment plants. Also, variable flow of industrial waste associated with production cycles may upset the population dynamics of biological treatment units.
Design aspects of secondary treatment processes
Non-sewered areas
Urban residents in many parts of the world rely on on-site sanitation systems without sewers, such as septic tanks and pit latrines, and fecal sludge management in these cities is an enormous challenge.
For sewage treatment the use of septic tanks and other on-site sewage facilities (OSSF) is widespread in some rural areas, for example serving up to 20 percent of the homes in the U.S.
Available process steps
Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes. Different types of sewage treatment may utilize some or all of the process steps listed below.
Preliminary treatment
Preliminary treatment (sometimes called pretreatment) removes coarse materials that can be easily collected from the raw sewage before they damage or clog the pumps and sewage lines of primary treatment clarifiers.
Screening
The influent in sewage water passes through a bar screen to remove all large objects like cans, rags, sticks, plastic packets, etc. carried in the sewage stream. This is most commonly done with an automated mechanically raked bar screen in modern plants serving large populations, while in smaller or less modern plants, a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or flow rate. The solids are collected and later disposed in a landfill, or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant, and can cause substantial damage and inefficiency in the process.
Grit removal
Grit consists of sand, gravel, rocks, and other heavy materials. Preliminary treatment may include a sand or grit removal channel or chamber, where the velocity of the incoming sewage is reduced to allow the settlement of grit. Grit removal is necessary to (1) reduce formation of deposits in primary sedimentation tanks, aeration tanks, anaerobic digesters, pipes, channels, etc. (2) reduce the frequency of tank cleaning caused by excessive accumulation of grit; and (3) protect moving mechanical equipment from abrasion and accompanying abnormal wear. The removal of grit is essential for equipment with closely machined metal surfaces such as comminutors, fine screens, centrifuges, heat exchangers, and high pressure diaphragm pumps.
Grit chambers come in three types: horizontal grit chambers, aerated grit chambers, and vortex grit chambers. Vortex grit chambers include mechanically induced vortex, hydraulically induced vortex, and multi-tray vortex separators. Given that traditionally, grit removal systems have been designed to remove clean inorganic particles larger than a specified size, most of the finer grit passes through the grit removal stage under normal conditions. During periods of high flow, deposited grit is resuspended and the quantity of grit reaching the treatment plant increases substantially.
Flow equalization
Equalization basins can be used to achieve flow equalization. This is especially useful for combined sewer systems which produce peak dry-weather flows or peak wet-weather flows that are much higher than the average flows. Such basins can improve the performance of the biological treatment processes and the secondary clarifiers.
Disadvantages include the basins' capital cost and space requirements. Basins can also provide a place to temporarily hold, dilute and distribute batch discharges of toxic or high-strength wastewater which might otherwise inhibit biological secondary treatment (such as wastewater from portable toilets or fecal sludge that is brought to the sewage treatment plant in vacuum trucks). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators and odor control.
Fat and grease removal
In some larger plants, fat and grease are removed by passing the sewage through a small tank where skimmers collect the fat floating on the surface. Air blowers in the base of the tank may also be used to help recover the fat as a froth. Many plants, however, use primary clarifiers with mechanical surface skimmers for fat and grease removal.
Primary treatment
Primary treatment is the "removal of a portion of the suspended solids and organic matter from the sewage". It consists of allowing sewage to pass slowly through a basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface and are skimmed off. These basins are called primary sedimentation tanks or primary clarifiers and typically have a hydraulic retention time (HRT) of 1.5 to 2.5 hours. The settled and floating materials are removed and the remaining liquid may be discharged or subjected to secondary treatment. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank, where it is pumped to sludge treatment facilities.
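The quoted hydraulic retention time fixes the required clarifier volume for a given design flow (volume = flow × HRT). A minimal sizing sketch with an assumed flow follows:

```python
# Minimal primary clarifier sizing sketch based on the hydraulic retention
# time (HRT) range quoted above. The design flow is an assumed example value.

design_flow_m3_per_h = 400.0          # example average flow to the clarifiers
hrt_hours_low, hrt_hours_high = 1.5, 2.5

volume_low = design_flow_m3_per_h * hrt_hours_low    # V = Q * HRT
volume_high = design_flow_m3_per_h * hrt_hours_high

print(f"required clarifier volume: {volume_low:.0f} to {volume_high:.0f} m3")
```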
Sewage treatment plants that are connected to a combined sewer system sometimes have a bypass arrangement after the primary treatment unit. This means that during very heavy rainfall events, the secondary and tertiary treatment systems can be bypassed to protect them from hydraulic overloading, and the mixture of sewage and storm-water receives primary treatment only.
Primary sedimentation tanks remove about 50–70% of the suspended solids, and 25–40% of the biological oxygen demand (BOD).
Secondary treatment
The main processes involved in secondary sewage treatment are designed to remove as much of the solid material as possible. They use biological processes to digest and remove the remaining soluble material, especially the organic fraction. This can be done with either suspended-growth or biofilm processes. The microorganisms that feed on the organic matter present in the sewage grow and multiply, constituting the biological solids, or biomass. These grow and group together in the form of flocs or biofilms and, in some specific processes, as granules. The biological floc or biofilm and remaining fine solids form a sludge which can be settled and separated. After separation, a liquid remains that is almost free of solids, and with a greatly reduced concentration of pollutants.
Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic processes. The organisms involved in these processes are sensitive to the presence of toxic materials, although these are not expected to be present at high concentrations in typical municipal sewage.
Tertiary treatment
Advanced sewage treatment generally involves three main stages, called primary, secondary and tertiary treatment but may also include intermediate stages and final polishing processes. The purpose of tertiary treatment (also called advanced treatment) is to provide a final treatment stage to further improve the effluent quality before it is discharged to the receiving water body or reused. More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called effluent polishing. Tertiary treatment may include biological nutrient removal (alternatively, this can be classified as secondary treatment), disinfection and partly removal of micropollutants, such as environmental persistent pharmaceutical pollutants.
Tertiary treatment is sometimes defined as anything more than primary and secondary treatment in order to allow discharge into a highly sensitive or fragile ecosystem such as estuaries, low-flow rivers or coral reefs. Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes.
Sand filtration removes much of the residual suspended matter. Filtration over activated carbon, also called carbon adsorption, removes residual toxins. Microfiltration or synthetic membranes are used in membrane bioreactors and can also remove pathogens.
Settlement and further biological improvement of treated sewage may be achieved through storage in large human-made ponds or lagoons. These lagoons are highly aerobic, and colonization by native macrophytes, especially reeds, is often encouraged.
Disinfection
Disinfection of treated sewage aims to kill pathogens (disease-causing microorganisms) prior to disposal. It is increasingly effective after more elements of the foregoing treatment sequence have been completed. The purpose of disinfection in the treatment of sewage is to substantially reduce the number of pathogens in the water to be discharged back into the environment or to be reused. The target level of reduction of biological contaminants like pathogens is often regulated by the presiding governmental authority. The effectiveness of disinfection depends on the quality of the water being treated (e.g. turbidity, pH, etc.), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Water with high turbidity will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light or if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite. Monochloramine, which is used for drinking water, is not used in the treatment of sewage because of its persistence.
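The dependence on disinfectant concentration and contact time noted above is often summarized by the empirical Chick–Watson relation. The sketch below is a minimal illustration; the rate constant k, the dilution coefficient n and the dose–time pairs are assumed example values, not design figures from this article.

```python
# Chick-Watson style estimate: log10 reduction ~ k * C^n * t.
# k, n and the dose-time pairs below are illustrative assumptions only.
def log_inactivation(C_mg_per_L, t_min, k=0.1, n=1.0):
    """Estimated log10 reduction for disinfectant dose C over contact time t."""
    return k * (C_mg_per_L ** n) * t_min

for C, t in [(1.0, 15), (2.0, 15), (1.0, 30)]:
    print(f"C={C} mg/L, t={t} min -> ~{log_inactivation(C, t):.1f} log10 reduction")
```

The product of concentration and time (the "CT" value) is what matters in this simple model, which is why short contact times, low doses and high flows all work against effective disinfection.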
Chlorination remains the most common form of treated sewage disinfection in many countries due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated-organic compounds that may be carcinogenic or harmful to the environment. Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment.
Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In many countries, UV light is becoming the most common means of disinfection because of the concerns about the impacts of chlorine in chlorinating residual organics in the treated sewage and in chlorinating organics in the receiving water.
As with UV treatment, heat sterilization also does not add chemicals to the water being treated. However, unlike UV, heat can penetrate liquids that are not transparent. Heat disinfection can also penetrate solid materials within wastewater, sterilizing their contents. Thermal effluent decontamination systems provide low resource, low maintenance effluent decontamination once installed.
Ozone (O3) is generated by passing oxygen (O2) through a high voltage potential resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated on-site as needed from the oxygen in the ambient air. Ozonation also produces fewer disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators. Ozone sewage treatment requires the use of an ozone generator, which decontaminates the water as ozone bubbles percolate through the tank.
Membranes can also be effective disinfectants, because they act as barriers, avoiding the passage of the microorganisms. As a result, the final effluent may be devoid of pathogenic organisms, depending on the type of membrane used. This principle is applied in membrane bioreactors.
Biological nutrient removal
Sewage may contain high levels of the nutrients nitrogen and phosphorus. Typical values for nutrient loads per person and nutrient concentrations in raw sewage in developing countries have been published as follows: 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L). The typical ranges for these values are: 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L).
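The per-capita loads and the concentrations quoted above are linked by the per-capita wastewater volume (load = concentration × flow). The sketch below simply back-calculates the flow implied by each pair of typical figures from the text.

```python
# Back-calculate the per-capita wastewater flow implied by the typical figures
# quoted above (load in g/person/day, concentration in mg/L).
loads_g_per_day = {"total N": 8.0, "ammonia-N": 4.5, "total P": 1.0}
conc_mg_per_L   = {"total N": 45.0, "ammonia-N": 25.0, "total P": 7.0}

for k in loads_g_per_day:
    flow_L = loads_g_per_day[k] * 1000.0 / conc_mg_per_L[k]
    print(f"{k}: implied wastewater generation ~{flow_L:.0f} L/person/day")
```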
Excessive release to the environment can lead to nutrient pollution, which can manifest itself in eutrophication. This process can lead to algal blooms, a rapid growth, and later decay, in the population of algae. In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies.
Ammonia nitrogen, in the form of free ammonia (NH3), is toxic to fish. Ammonia nitrogen, when converted to nitrite and further to nitrate in a water body, in the process of nitrification, is associated with the consumption of dissolved oxygen. Nitrite and nitrate may also have public health significance if concentrations are high in drinking water, because of a disease called methemoglobinemia.
Phosphorus removal is important as phosphorus is a limiting nutrient for algae growth in many fresh water systems. Therefore, an excess of phosphorus can lead to eutrophication. It is also particularly important for water reuse systems where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis.
A range of treatment processes are available to remove nitrogen and phosphorus. Biological nutrient removal (BNR) is regarded by some as a type of secondary treatment process, and by others as a tertiary (or advanced) treatment process.
Nitrogen removal
Nitrogen is removed through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water.
Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonia (NH4+) to nitrite (NO2−) is most often facilitated by bacteria such as Nitrosomonas spp. (nitroso refers to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. (nitro referring to the formation of a nitro functional group), is now known to be facilitated in the environment predominantly by Nitrospira spp.
Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. Anoxic conditions refer to a situation where oxygen is absent but nitrate is present. Denitrification is facilitated by a wide diversity of bacteria. The activated sludge process, sand filters, waste stabilization ponds, constructed wetlands and other processes can all be used to reduce nitrogen. Since denitrification is the reduction of nitrate to dinitrogen (molecular nitrogen) gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from the sewage itself), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (a mixture of recirculated mixed liquor, return activated sludge, and raw influent), e.g. by using submersible mixers, in order to achieve the desired denitrification.
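As a rough quantitative illustration of the two steps just described, the sketch below computes the mass ratios implied by the simplified textbook overall reactions, ignoring biomass synthesis. Methanol is used only as an example electron donor, as mentioned in the text; the reaction equations are the standard simplified forms, not figures taken from this article.

```python
# Stoichiometric sketch (biomass synthesis ignored):
#   nitrification:    NH4+ + 2 O2       -> NO3- + H2O + 2 H+
#   denitrification:  6 NO3- + 5 CH3OH  -> 3 N2 + 5 CO2 + 7 H2O + 6 OH-
M_N, M_O2, M_MeOH = 14.01, 32.00, 32.04   # molar masses, g/mol

o2_per_N   = 2 * M_O2 / M_N               # oxygen needed to nitrify 1 g of N
meoh_per_N = 5 * M_MeOH / (6 * M_N)       # methanol to denitrify 1 g of NO3-N

print(f"~{o2_per_N:.2f} g O2 per g NH4-N nitrified")
print(f"~{meoh_per_N:.2f} g CH3OH per g NO3-N denitrified (before cell synthesis)")
```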
Over time, different treatment configurations for activated sludge processes have evolved to achieve high levels of nitrogen removal. An initial scheme was called the Ludzack–Ettinger Process. It could not achieve a high level of denitrification. The Modified Ludzack–Ettinger Process (MLE) came later and was an improvement on the original concept. It recycles mixed liquor from the discharge end of the aeration tank to the head of the anoxic tank. This provides nitrate for the facultative bacteria.
There are other process configurations, such as variations of the Bardenpho process. They might differ in the placement of anoxic tanks, e.g. before and after the aeration tanks.
Phosphorus removal
Studies of United States sewage in the late 1960s estimated mean per capita phosphorus contributions from urine and feces and from synthetic detergents, along with lesser variable amounts used as corrosion and scale control chemicals in water supplies. Source control via alternative detergent formulations has subsequently reduced the largest contribution, but the phosphorus content of urine and feces naturally remained unchanged.
Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass).
Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride) or aluminum (e.g. alum), or lime. This may lead to a higher sludge production as hydroxides precipitate and the added chemicals can be expensive. Chemical phosphorus removal requires significantly smaller equipment footprint than biological removal, is easier to operate and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite or zeolite.
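As an illustration of the chemical precipitation route just described, the sketch below works out a ferric chloride dose from molar masses. The phosphorus load and the molar dose ratio are assumptions chosen only for illustration; real plants select the dose ratio from jar testing and typically use an excess over the 1:1 stoichiometric ratio.

```python
# Illustrative dosing arithmetic for chemical phosphorus removal with FeCl3.
M_Fe, M_P, M_FeCl3 = 55.85, 30.97, 162.2   # molar masses, g/mol

P_load_kg_per_day = 50.0    # phosphorus to be removed (assumed)
molar_ratio_Fe_P  = 1.5     # Fe:P dose ratio (assumed excess over 1:1)

fe_kg    = P_load_kg_per_day / M_P * molar_ratio_Fe_P * M_Fe
fecl3_kg = P_load_kg_per_day / M_P * molar_ratio_Fe_P * M_FeCl3
print(f"~{fe_kg:.0f} kg Fe/day, i.e. ~{fecl3_kg:.0f} kg FeCl3/day")
```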
Some systems use both biological phosphorus removal and chemical phosphorus removal. The chemical phosphorus removal in those systems may be used as a backup system, for use when the biological phosphorus removal is not removing enough phosphorus, or may be used continuously. In either case, using both biological and chemical phosphorus removal has the advantage of not increasing sludge production as much as chemical phosphorus removal on its own, with the disadvantage of the increased initial cost associated with installing two different systems.
Once removed, phosphorus, in the form of a phosphate-rich sewage sludge, may be sent to landfill or used as fertilizer in admixture with other digested sewage sludges. In the latter case, the treated sewage sludge is also sometimes referred to as biosolids. 22% of the world's phosphorus needs could be satisfied by recycling residential wastewater.
Fourth treatment stage
Micropollutants such as pharmaceuticals, ingredients of household chemicals, chemicals used in small businesses or industries, environmental persistent pharmaceutical pollutants (EPPP) or pesticides may not be eliminated in the commonly used sewage treatment processes (primary, secondary and tertiary treatment) and therefore lead to water pollution. Although concentrations of those substances and their decomposition products are quite low, there is still a chance of harming aquatic organisms. For pharmaceuticals, the following substances have been identified as toxicologically relevant: substances with endocrine disrupting effects, genotoxic substances and substances that enhance the development of bacterial resistances. They mainly belong to the group of EPPP.
Techniques for elimination of micropollutants via a fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland the requirement has been enshrined in law since 2016. In the European Union, the Urban Waste Water Treatment Directive has been recast: owing to the large number of amendments made over time, the directive was rewritten on November 27, 2024 as Directive (EU) 2024/3019, published in the EU Official Journal on December 12, 2024, and entered into force on January 1, 2025. The member states now have 31 months, i.e. until July 31, 2027, to adapt their national legislation to the new directive ("implementation of the directive").
The recast stipulates that, in addition to stricter discharge values for nitrogen and phosphorus, persistent trace substances must be at least partially removed. The target, similar to Switzerland's, is that at least 80% of six key substances (from a list of twelve) must be removed between the inflow to the sewage treatment plant and the discharge into the water body. At least 80% of the investment and operating costs for the fourth treatment stage will be passed on to the pharmaceutical and cosmetics industries according to the polluter pays principle, in order to relieve the population financially and provide an incentive for the development of more environmentally friendly products. In addition, the municipal wastewater treatment sector is to be energy neutral by 2045 and the emission of microplastics and PFAS is to be monitored.
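A removal target of this kind is assessed by comparing influent and effluent concentrations of the indicator substances. The sketch below illustrates the arithmetic with hypothetical substance names and concentrations; these are placeholders, not values from the directive.

```python
# Check an 80% mean-removal target for a set of indicator substances.
# Substance names and concentrations are hypothetical placeholders.
influent = {"substance A": 1.20, "substance B": 0.80, "substance C": 2.50}  # ug/L
effluent = {"substance A": 0.15, "substance B": 0.20, "substance C": 0.30}  # ug/L

removals = [1 - effluent[s] / influent[s] for s in influent]
mean_removal = sum(removals) / len(removals)
print(f"mean removal of indicator substances: {mean_removal:.0%} "
      f"({'meets' if mean_removal >= 0.80 else 'misses'} an 80% target)")
```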
The implementation of the framework guidelines is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them. The adjustments are staggered at national level in:
20% of the plants by 31 December 2033,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
Wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or sensitive waters are staggered at national level in:
10% of the plants by 31 December 2033,
30% of the plants by 31 December 2036,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained and those that are coastal waters, or those used as bathing waters or used for mussel farming. Member States will be given the option not to apply fourth treatment in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment.
Such process steps mainly consist of activated carbon filters that adsorb the micropollutants. The combination of advanced oxidation with ozone followed by granular activated carbon (GAC) has been suggested as a cost-effective treatment combination for pharmaceutical residues. For a full reduction of microplastics, the combination of ultrafiltration followed by GAC has been suggested. The use of enzymes such as laccase, secreted by fungi, is also under investigation. Microbial fuel cells are being investigated for their ability to treat organic matter in sewage.
To reduce pharmaceuticals in water bodies, source control measures are also under investigation, such as innovations in drug development or more responsible handling of drugs. In the US, the National Take Back Initiative is a voluntary program with the general public, encouraging people to return excess or expired drugs, and avoid flushing them to the sewage system.
Sludge treatment and disposal
Environmental impacts
Sewage treatment plants can have significant effects on the biotic status of receiving waters and can cause some water pollution, especially if the treatment process used is only basic. For example, for sewage treatment plants without nutrient removal, eutrophication of receiving water bodies can be a problem.
In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. It outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by increasing the water quality in the country's waterways, such as rivers and lakes.
After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be treated as a "public health priority". He compared it to the eradication of cholera in the country in the 19th century following improvements to the sewage network. The study also identified that sewage concentrations in rivers were high during periods of low flow, as well as at times of flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather.
Whitty's comments came after the study revealed that the UK was experiencing a growth in the number of people using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open water swimming and other water sports. Despite this growth in recreation, poor water quality meant some participants became unwell during events. Most notably, the 2024 Paris Olympics had to delay swimming-focused events such as the triathlon due to high levels of sewage in the River Seine.
Reuse
Irrigation
Increasingly, people use treated or even untreated sewage for irrigation to produce crops. Cities provide lucrative markets for fresh produce, so are attractive to farmers. Because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with sewage directly to water their crops. There can be significant health hazards related to using water loaded with pathogens in this way. The World Health Organization developed guidelines for safe use of wastewater in 2006. They advocate a 'multiple-barrier' approach to wastewater use, where farmers are encouraged to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight, applying water carefully so it does not contaminate leaves likely to be eaten raw, cleaning vegetables with disinfectant or allowing fecal sludge used in farming to dry before being used as a human manure.
Reclaimed water
Global situation
Before the 20th century in Europe, sewers usually discharged into a body of water such as a river, lake, or ocean. There was no treatment, so the breakdown of the human waste was left to the ecosystem. This could lead to satisfactory results if the assimilative capacity of the ecosystem was sufficient, which nowadays is often not the case owing to increasing population density.
Today, the situation in urban areas of industrialized countries is usually that sewers route their contents to a sewage treatment plant rather than directly to a body of water. In many developing countries, however, the bulk of municipal and industrial wastewater is discharged to rivers and the ocean without any treatment or after preliminary treatment or primary treatment only. Doing so can lead to water pollution. Few reliable figures exist on the share of the wastewater collected in sewers that is being treated worldwide. A global estimate by UNDP and UN-Habitat in 2010 was that 90% of all wastewater generated is released into the environment untreated. A more recent study in 2021 estimated that globally, about 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. As of 2022, without sufficient treatment, more than 80% of all wastewater generated globally is released into the environment. High-income nations treat, on average, 70% of the wastewater they produce, according to UN Water. Only 8% of wastewater produced in low-income nations receives any sort of treatment.
The Joint Monitoring Programme (JMP) for Water Supply and Sanitation by WHO and UNICEF reported in 2021 that 82% of people with sewer connections are connected to sewage treatment plants providing at least secondary treatment. However, this value varies widely between regions. For example, in Europe, North America, Northern Africa and Western Asia, a total of 31 countries had universal (>99%) wastewater treatment. However, in Albania, Bermuda, North Macedonia and Serbia "less than 50% of sewered wastewater received secondary or better treatment", and in Algeria, Lebanon and Libya less than 20% of sewered wastewater was being treated. The report also found that "globally, 594 million people have sewer connections that don't receive sufficient treatment. Many more are connected to wastewater treatment plants that do not provide effective treatment or comply with effluent requirements."
Global targets
Sustainable Development Goal 6 has a Target 6.3 which is formulated as follows: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally." The corresponding Indicator 6.3.1 is the "proportion of wastewater safely treated". It is anticipated that wastewater production will rise by 24% by 2030 and by 51% by 2050.
Data in 2020 showed that there is still too much uncollected household wastewater: Only 66% of all household wastewater flows were collected at treatment facilities in 2020 (this is determined from data from 128 countries). Based on data from 42 countries in 2015, the report stated that "32 per cent of all wastewater flows generated from point sources received at least some treatment". For sewage that has indeed been collected at centralized sewage treatment plants, about 79% went on to be safely treated in 2020.
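The two 2020 figures quoted above combine multiplicatively, which is consistent with the roughly 52% global treatment estimate cited earlier in this article. The sketch below simply makes that arithmetic explicit.

```python
# Combine the 2020 figures quoted above: share collected times share of the
# collected flow that is safely treated.
collected_share      = 0.66   # share of household wastewater collected (from text)
treated_of_collected = 0.79   # share of collected wastewater safely treated (from text)

print(f"~{collected_share * treated_of_collected:.0%} of household wastewater "
      f"was both collected and safely treated in 2020")
```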
History
The history of sewage treatment had the following developments: It began with land application (sewage farms) in the 1840s in England, followed by chemical treatment and sedimentation of sewage in tanks, then biological treatment in the late 19th century, which led to the development of the activated sludge process starting in 1912.
Regulations
In most countries, sewage collection and treatment are subject to local and national regulations and standards.
Country Examples
Overview
Europe
In the European Union, 0.8% of total energy consumption goes to wastewater treatment facilities. The European Union needs to make extra investments of €90 billion in the water and waste sector to meet its 2030 climate and energy goals.
In October 2021, British Members of Parliament voted to continue allowing untreated sewage from combined sewer overflows to be released into waterways.
Asia
India
The Delhi Jal Board (DJB) is overseeing the construction of the largest sewage treatment plant in India, expected to be operational by the end of 2022 with a capacity of 564 MLD. It is intended to address the existing situation in which untreated sewage is discharged directly into the river Yamuna.
Japan
Africa
Libya
Americas
United States
More information
Decentralized wastewater system
List of largest wastewater treatment plants
List of water supply and sanitation by country
Nutrient Recovery and Reuse: producing agricultural nutrients from sewage
Organisms involved in water purification
Sanitary engineering
Waste disposal
References
External links
Water Environment Federation – Professional association focusing on municipal wastewater treatment
Environmental engineering
Pollution control technologies
Sanitation
Treatment
Sewerage infrastructure
Water pollution | Sewage treatment | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 8,920 | [
"Water treatment",
"Chemical engineering",
"Sewerage infrastructure",
"Pollution control technologies",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
16,080,632 | https://en.wikipedia.org/wiki/Bedford%20Research%20Foundation | Bedford Research Foundation is a non-profit institute that conducts stem cell research for diseases and conditions that currently have no known cure. The institute also created the Special Program of Assisted Reproduction (SPAR), a program that helps serodiscordant couples successfully achieve pregnancy. Dr. Ann Kiessling, the founder of Bedford Stem Cell Research Foundation, is the Laboratory Director.
Background
Bedford Research Foundation was founded to satisfy the need for a research and development clinical laboratory that could facilitate technology transfer from basic science discoveries to clinical test applications. BRF was founded and incorporated in 1996 by Dr. Ann Kiessling and through the efforts of men and women whose lives were altered by blood products tainted with the AIDS virus (Human Immunodeficiency Virus, HIV) and Hepatitis C virus. Faced with unprecedented disease obstacles, the men and women insisted that biomedical technology be developed to fight their infections, and allow them to conceive children of their own. Research to ensure the safety of conception by assisted reproductive technologies in general was not funded by the National Institutes of Health because of the U.S. Congress decisions in 1996 and 1998 that research on fertilized human eggs "...is meritorious and should be done for society..., but will not be funded by taxpayer dollars."
The Foundation conducts research within its own laboratories (Stem Cell, Prostate, Infectious disease) as well as in collaboration with other laboratories and raises money to award research grants to qualified investigators seeking to improve the safety and success of assisted reproduction to mothers and babies. Much of the research supported by the Foundation cannot be funded by federal grants-in-aid because of the U.S. moratorium on funding research on human eggs activated either artificially or by sperm.
For this reason, the men and women themselves raised the money to fund the Special Program of Assisted Reproduction (SPAR). Within two years, technology was developed to protect against virus transmission at conception. As a result, Baby Ryan was born in 1999 to a healthy Mom and a Dad with hemophilia who was infected with Hepatitis C and HIV by tainted blood factors.
In conjunction with stem cell research, Foundation scientists also apply patented processes to help diagnose male reproductive tract disorders. Research done at the Foundation has led to the development of additional tests that may provide valuable information about overall men's health. A current focus is detection of bacteria in semen by molecular biology methods instead of standard laboratory culture. Studies to date reveal that semen contains bacteria not previously identified. Such studies hold the promise of developing new tests for the health of semen producing organs such as the prostate, which is a site of significant disease in men, including infection (prostatitis) and cancer.
SARS2 (Coronavirus) Testing
On April 10, 2020 it was reported that Bedford Research Foundation had expanded its operations to include SARS2 testing, making it one of 66 sites in the United States with a Food and Drug Administration-approved test for COVID-19. The lab began testing samples from Sturdy Hospital in Attleboro and Emerson in Concord. On April 21, 2020, Bedford Research Foundation piloted a program to expand their SARS2 (Coronavirus) testing to the public. The test was well-received and successful. The foundation is currently making plans to expand the program.
References
External links
Bedford Research Foundation
Embryology
Obstetrics and gynaecology organizations
HIV/AIDS research organisations
Non-profit organizations based in Massachusetts
Stem cell research
Medical and health organizations based in Massachusetts
HIV/AIDS organizations in the United States | Bedford Research Foundation | [
"Chemistry",
"Biology"
] | 718 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
16,081,202 | https://en.wikipedia.org/wiki/String%20graph | In graph theory, a string graph is an intersection graph of curves in the plane; each curve is called a "string". Given a graph G, G is a string graph if and only if there exists a set of curves, or strings, such that the graph having a vertex for each curve and an edge for each intersecting pair of curves is isomorphic to G.
Background
Benzer (1959) described a concept similar to string graphs as it applied to genetic structures. In that context, he also posed the specific case of intersecting intervals on a line, namely the now classical family of interval graphs. Later, Sinden (1966) applied the same idea to electrical networks and printed circuits. The mathematical study of string graphs grew out of this work and out of a collaboration between Sinden and Ronald Graham, and the characterization of string graphs eventually came to be posed as an open question at the 5th Hungarian Colloquium on Combinatorics in 1976. However, the recognition of string graphs was eventually proven to be NP-complete, implying that no simple characterization is likely to exist.
Related graph classes
Every planar graph is a string graph: one may form a string graph representation of an arbitrary plane-embedded graph by drawing a string for each vertex that loops around the vertex and around the midpoint of each adjacent edge. For any edge uv of the graph, the strings for u and v cross each other twice near the midpoint of uv, and there are no other crossings, so the pairs of strings that cross represent exactly the adjacent pairs of vertices of the original planar graph. Alternatively, by the circle packing theorem, any planar graph may be represented as a collection of circles, any two of which cross if and only if the corresponding vertices are adjacent; these circles (with a starting and ending point chosen to turn them into open curves) provide a string graph representation of the given planar graph. It has also been proved that every planar graph has a string representation in which each pair of strings has at most one crossing point, unlike the representations described above.
Scheinerman's conjecture, now proven, is the even stronger statement that every planar graph may be represented by the intersection graph of straight line segments, a very special case of strings.
If every edge of a given graph G is subdivided, the resulting graph is a string graph if and only if G is planar. In particular, the subdivision of the complete graph K5 is not a string graph, because K5 is not planar.
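The planarity criterion in the preceding paragraph can be checked mechanically. Below is a minimal sketch, not part of this article's sources, using the NetworkX library: it subdivides every edge of K5 and reports whether the original graph is planar, which by the result above decides whether the subdivision is a string graph. The helper name subdivide_edges is our own.

```python
import networkx as nx

def subdivide_edges(G):
    """Return the graph obtained by replacing every edge uv with a path u-w-v."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for i, (u, v) in enumerate(G.edges):
        w = ("sub", i)          # new vertex placed on edge uv
        H.add_edge(u, w)
        H.add_edge(w, v)
    return H

G = nx.complete_graph(5)                 # K5, the non-planar example in the text
H = subdivide_edges(G)
is_planar = nx.check_planarity(G)[0]     # False for K5
# By the criterion above, H is a string graph exactly when G is planar:
print(f"K5 planar: {is_planar} -> its subdivision is a string graph: {is_planar}")
```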
Every circle graph, as an intersection graph of line segments (the chords of a circle), is also a string graph. Every chordal graph may be represented as a string graph: chordal graphs are intersection graphs of subtrees of trees, and one may form a string representation of a chordal graph by forming a planar embedding of the corresponding tree and replacing each subtree by a string that traces around the subtree's edges.
The complement graph of every comparability graph is also a string graph.
Other results
Computing the chromatic number of string graphs has been shown to be NP-hard. String graphs have also been shown to form a class of graphs closed under taking induced minors, but not under taking minors.
Every m-edge string graph can be partitioned into two subsets, each a constant fraction of the size of the whole graph, by the removal of O(m^{3/4} log^{1/2} m) vertices. It follows that the biclique-free string graphs, string graphs containing no K_{t,t} subgraph for some constant t, have O(n) edges and more strongly have polynomial expansion.
Notes
References
Topological graph theory
Intersection classes of graphs
NP-complete problems | String graph | [
"Mathematics"
] | 755 | [
"Graph theory",
"Computational problems",
"Topology",
"Mathematical relations",
"Mathematical problems",
"Topological graph theory",
"NP-complete problems"
] |
16,085,319 | https://en.wikipedia.org/wiki/Rubber%20toughening | Rubber toughening is a process in which rubber nanoparticles are interspersed within a polymer matrix to increase the mechanical robustness, or toughness, of the material. By "toughening" a polymer it is meant that the ability of the polymeric substance to absorb energy and plastically deform without fracture is increased. Considering the significant advantages in mechanical properties that rubber toughening offers, most major thermoplastics are available in rubber-toughened versions; for many engineering applications, material toughness is a deciding factor in final material selection.
The effects of disperse rubber nanoparticles are complex and differ across amorphous and partly crystalline polymeric systems. Rubber particles toughen a system by a variety of mechanisms such as when particulates concentrate stress causing cavitation or initiation of dissipating crazes. However the effects are not one-sided; excess rubber content or debonding between the rubber and polymer can reduce toughness. It is difficult to state the specific effects of a given particle size or interfacial adhesion parameter due to numerous other confounding variables.
The presence of a given failure mechanism is determined by many factors: those intrinsic to the continuous polymer phase, and those that are extrinsic, pertaining to the stress, loading speed, and ambient conditions. The action of a given mechanism in a toughened polymer can be studied with microscopy. The addition of rubbery domains occurs via processes such as melt blending in a Rheomix mixer and atom-transfer radical-polymerization.
Current research focuses on how optimizing the secondary phase composition and dispersion affects mechanical properties of the blend. Questions of interest include those to do with fracture toughness, tensile strength, and glass transition temperature.
Toughening mechanisms
Different theories describe how a dispersed rubber phase toughens a polymeric substance; most employ methods of dissipating energy throughout the matrix. These theories include: microcrack theory, shear-yielding theory, multiple-crazing theory, shear band and crazing interaction theory, and more recently those including the effects of critical ligament thickness, critical plastic area, voiding and cavitation, damage competition and others.
Microcrack theory
In 1956, the microcrack theory became the first to explain the toughening effect of a dispersed rubber phase in a polymer. Two key observations that went into the initial theory and subsequent expansion were as follows: (1) microcracks form voids over which styrene-butadiene copolymer fibrils form to prevent propagation, and (2) energy stored during elongation of toughened epoxies is released upon breaking of rubber particles. The theory concluded that the combined energy to initiate microcracks and the energy to break rubber particles could account for the increased energy absorption of toughened polymers. This theory was limited, only accounting for a small fraction of the observed increase in fracture energy.
Matrix crazing
The matrix crazing theory focuses on explaining the toughening effects of crazing. Crazes start at the equator where principal strain is highest, propagate perpendicular to the stress, and end when they meet another particle. Crazes with perpendicular fibrils can eventually become a crack if the fibrils break. The volume expansion associated with small crazes distributed through a large volume compared to the small volume of a few large cracks in untoughened polymer accounts for a large fraction of the increase in fracture energy.
Interaction between rubber particles and crazes puts elongation pressures onto the particles in the direction of stress. If this force overcomes the surface adhesion between the rubber and polymer, debonding will occur, thereby diminishing the toughening effect associated with crazing. If the particle is harder, it will be less able to deform, and thus debonding occurs under less stress. This is one reason why dispersed rubbers, below their own glass transition temperature, do not toughen plastics effectively.
Shear yielding
Shear yielding theory is one that, like matrix crazing, can account for a large fraction of the increase in energy absorption of a toughened polymer. Evidence of shear yielding in a toughened polymer can be seen where there is "necking, drawing or orientation hardening." Shear yielding will result if rubber particles act as stress concentrators and initiate volume-expansion through crazing, debonding and cavitation, to halt the formation of cracks. Overlapping stress fields from one particle to its neighbor will contribute to a growing shear-yielding region. The closer the particles are the more overlap and the larger shear-yielding region. Shear yielding is an energy absorbing process in itself, but furthermore initiation of shear bands also aids in craze arrest. The occurrence of cavitation is important to shear yielding theory because it acts to lower the yield stress. Cavitation precedes shear yielding, however shear yielding accounts for a much larger increase in toughness than does the cavitation itself.
Cavitation
Cavitation is common in epoxy resins and other craze resistant toughened polymers, and is prerequisite to shearing in Izod impact strength testing. During the deformation and fracture of a toughened polymer, cavitation of the strained rubber particles occurs in crazing-prone and non-crazing-prone plastics, including, ABS, PVC, nylon, high impact polystyrene, and CTBN toughened epoxies. Engineers use an energy-balance approach to model how particle size and rubber modulus factors influence material toughness. Both particle size and modulus show positive correlation with brittle-tough transition temperatures. They are both shown to affect the cavitation process occurring at the crack tip process zone early in deformation, preceding large-scale crazing and shear yielding.
In order to show increased toughness under strain, the volumetric strain must overcome the energy of void formation, as modeled by the equation:

\[ U(r) = \frac{2\pi R^3 K_r}{3}\left(\Delta V - \frac{r^3}{R^3}\right)^2 + 4\pi r^2 \Gamma + 2\pi r^3 G_r F(\lambda_f) \]

where G_r and K_r are the shear modulus and bulk modulus of the rubber, ΔV is the volume strain in the rubber particle, Γ is the surface energy of the rubber phase, R and r are the particle and void radii, and the function F(λ_f) is dependent on the failure strain of the rubber under biaxial stretching conditions.
The energy-balancing model applies the physical properties of the whole material to describe the microscopic behavior during triaxial stress. The volume stress and particle radius conditions for cavitation can be calculated, giving the theoretical minimum particle radius for cavitation, useful for practical applications in rubber toughening. Typically cavitation will occur when the average stress on the rubber particles is between 10 and 20 megapascals. The volume strain on the particle is relieved and voiding occurs. The energy absorption due to this increase in volume is theoretically negligible. Instead, it is the consequent shear band formation that accounts for increased toughness. Before debonding, as the strain increases, the rubber phase is forced to stretch, further strengthening the matrix. Debonding between the matrix and the rubber reduces the toughness, creating the need for strong adhesion between the polymer and rubber phases.
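As a rough numerical illustration of the energy balance reconstructed above, the sketch below evaluates U(r) over a range of void radii and checks whether opening a void lowers the stored energy. All parameter values (moduli, surface energy, particle radius, volume strain, and F(λ_f) ≈ 1) are assumptions chosen only for illustration and are not taken from this article or its sources.

```python
# Hedged numerical sketch of the cavitation energy balance; all values assumed.
import numpy as np

K_r   = 2.0e9      # bulk modulus of rubber, Pa (assumed)
G_r   = 0.4e6      # shear modulus of rubber, Pa (assumed)
Gamma = 0.03       # surface energy of rubber, J/m^2 (assumed)
F_lam = 1.0        # value of F(lambda_f), taken as ~1 for illustration
R     = 150e-9     # particle radius, m (assumed)
dV    = 0.005      # imposed volume strain (assumed)

def U(r):
    """Stored energy of a particle of radius R containing a void of radius r."""
    elastic = (2 * np.pi * R**3 * K_r / 3) * (dV - (r / R)**3)**2
    surface = 4 * np.pi * r**2 * Gamma
    shear   = 2 * np.pi * r**3 * G_r * F_lam
    return elastic + surface + shear

r = np.linspace(0, 0.5 * R, 500)
print("Cavitation lowers stored energy for this parameter set:", U(r).min() < U(0.0))
```

Whether the answer is yes or no depends on the assumed particle size and volume strain, which is the point of the minimum-particle-radius argument in the text.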
Damage competition theory
The damage competition theory models the relative contributions of shear yielding and craze failure when both are present. There are two main assumptions: crazing, microcracks, and cavitation dominate in brittle systems, and shearing dominates in ductile systems. Systems that are in between brittle and ductile will show a combination of these mechanisms. The damage competition theory defines the brittle-ductile transition as the point at which the opposite mechanism (shear or craze damage) appears in a system dominated by the other mechanism.
Characterization of failure
The dominant failure mechanism can usually be observed directly using TEM, SEM and light microscopy. If cavitation or crazing is dominant, tensile dilatometry (see dilatometer) can be used to measure the extent of the mechanism by measuring volume strain. However, if multiple dilatational mechanisms are present, it is difficult to measure the separate contributions. Shear yielding is a constant-volume process and cannot be measured with tensile dilatometry. Voiding can be seen with optical microscopy; however, one of two methods, using polarized light or low-angle light scattering, is necessary to observe the connection between cavitation and shear bands.
Characteristics of the continuous phase relevant to toughening theory
In order to gauge the toughening effects of a dispersed secondary phase, it is important to understand the relevant characteristics of the continuous polymer phase. The mechanical failure characteristics of the pure polymeric continuous phase will strongly influence how rubber toughened polymer failure occurs. When a polymer usually fails due to crazing, rubber toughening particles will act as craze initiators. When it fails by shear yielding, the rubber particles will initiate shear bands. It is also possible to having multiple mechanisms come into play if the polymer is prone to failing by multiple stresses equally. Polystyrene and styrene-acrylonitrile are brittle materials that are prone to craze failure while polycarbonate, polyamides, and polyethylene terephthalate (PET) are prone to shear yield failure.
Glass transition temperature
Amorphous plastics are used below their glass transition temperature (Tg). They are brittle and notch sensitive but creep resistant. Molecules are immobile and the plastic responds to rapidly applied stress by fracturing. Partly crystalline thermoplastics are used at temperatures between Tg and the melting temperature (Tm). Partly crystalline thermoplastics are tough and creep-prone because the amorphous regions surrounding the rigid crystals are afforded some mobility. Often they are brittle at room temperature because they have high glass transition temperatures. Polyethylene is tough at room temperature because its Tg is lower than room temperature. Polyamide 66 and polyvinylchloride have secondary transitions below their Tg that allow for some energy-absorbing molecular mobility.
Chemical structure
There are some general guidelines to follow when trying to determine a plastic's toughness from its chemical structure. Vinyl polymers like polystyrene and styrene-acrylonitrile tend to fail by crazing. They have low crack initiation and propagation energies. Polymers with aromatic backbones, such as polyethylene terephthalate and polycarbonate, tend to fail by shear yielding with high crack initiation energy but low propagation energy. Other polymers, including poly(methyl methacrylate) and polyacetal(polyoxymethylene), are not as brittle as "brittle polymers" and are also not as ductile as "ductile polymers".
Entanglement density and flexibility of unperturbed real chain
The following equations relate the entanglement density, ν_e, and the characteristic ratio of the unperturbed real chain, C_∞ (a measure of chain flexibility), of a given plastic to its fracture mechanics:

\[ \nu_e = \frac{\rho_a}{3\, C_\infty^{2}\, M_v} \]

where ρ_a is the mass density of the amorphous polymer, and M_v is the average molecular weight per statistical unit. Crazing stress is related to the entanglement density by:

\[ \sigma_z \propto \nu_e^{1/2} \]
The normalized yield stress is related to C_∞ by

\[ \sigma_y = A\, C_\infty \]

where A is a constant. The ratio of the crazing stress to the normalized yield stress is used to determine whether a polymer fails due to crazing or yielding:

\[ \frac{\sigma_z}{\sigma_y} \propto \frac{\nu_e^{1/2}}{C_\infty} \]
When the ratio is higher, the matrix is prone to yielding; when the ratio is lower, the matrix is prone to failure by crazing. These formulas form the basis of crazing theory, shear-yielding theory, and damage competition theory.
Relationship between the secondary phase properties and toughening effect
Rubber selection and miscibility with continuous phase
In material selection it is important to look at the interaction between the matrix and the secondary phase. For example, crosslinking within the rubber phase promotes high strength fibril formation that toughens the rubber, preventing particle fracture.
Carboxyl-terminated butadiene-acrylonitrile (CTBN) is often used to toughen epoxies, but using CTBN alone increases the toughness at the cost of stiffness and heat resistance. Amine-terminated butadiene acrylonitrile (ATBN) is also used. Using ultra-fine full-vulcanized powdered rubber (UFPR) researchers have been able to improve all three, toughness, stiffness, and heat resistance simultaneously, resetting the stage for rubber toughening with particles smaller than previously thought to be effective.
In applications where high optical transparency is necessary, examples being poly(methyl methacrylate) and polycarbonate it is important to find a secondary phase that does not scatter light. To do so it is important to match refractive indices of both phases. Traditional rubber particles do not offer this quality. Modifying the surface of nanoparticles with polymers of comparable refractive indices is an interest of current research.
Secondary phase concentration
Increasing the rubber concentration in a nanocomposite decreases the modulus and tensile strength. In one study, looking at PA6-EPDM blend, increasing the concentration of rubber up to 30 percent showed a negative linear relationship with the brittle-tough transition temperature, after which the toughness decreased. This suggests that the toughening effect of adding rubber particles is limited to a critical concentration. This is examined further in a study on PMMA from 1998; using SAXS to analyze crazing density, it was found that crazing density increases and yield stress decreases until the critical point when the relationship flips.
Rubber particle size
A material that is expected to fail by crazing is more likely to benefit from larger particles than a shear prone material, which would benefit from a smaller particle. In materials where crazing and yielding are comparable, a bimodal distribution of particle size may be useful for toughening. At fixed rubber concentrations, one can find that an optimal particle size is a function of the entanglement density of the polymer matrix. The neat polymer entanglement densities of PS, SAN, and PMMA are 0.056, 0.093, and 0.127 respectively. As entanglement density increases, the optimum particle size decreases linearly, ranging between 0.1 and 3 micrometers.
The effect of particle size on toughening is dependent on the type of test performed. This can be explained because for different test conditions, the failure mechanism changes. For impact strength testing on PMMA where failure occurs by shear-yielding, the optimum size of filler PBA-core PMMA-shell particle was shown in one case to be 250 nm. In the three-point bend test, where failure is due to crazing, 2000 nm particles had the most significant toughening effect.
Temperature effects
Temperature has a direct effect on the fracture mechanics. At low temperatures, below the glass transition temperature of the rubber, the dispersed phase behaves like a glass rather than like a rubber that toughens the polymer. As a result, the continuous phase fails by mechanisms characteristic of the pure polymer, as if the rubber was not present. As temperature increases past the glass transition temperature, the rubber phase increases the crack initiation energy. At this point the crack self-propagates due to the stored elastic energy in the material. As temperature rises further past the glass transition of the rubber phase, the impact strength of a rubber-polymer composite still dramatically increases as crack propagation requires additional energy input.
Sample applications
Epoxy resins
Epoxy resins are a highly useful class of materials used in engineering applications. Some of these include use for adhesives, fiber-reinforced composites, and electronics coatings. Their rigidity and low crack propagation resistance makes epoxies a candidate of interest for rubber toughening research to fine-tune the toughening processes.
Some of the factors affecting the toughness of epoxy nanocomposites include the chemical identity of the epoxy curing agent, entanglement density, and interfacial adhesion. Curing epoxy 618 with piperidine, for example, produces tougher epoxies than when boron trifluoride-ethylamine is used. Low entanglement density increases the toughness. Bisphenol A can be added to lower the crosslinking density of epoxy 618, thereby increasing the fracture toughness. Bisphenol A and a rubber filler increase toughness synergistically.
In textbooks and literature before 2002 it was assumed that there is a lower limit of about 200 nm for the diameter of rubber-toughening particles; it was then discovered that ultra-fine full-vulcanized powdered rubber particles with a diameter of 90 nm show significant toughening of epoxies. This finding underlines how this field is constantly growing and more work can be done to better model the rubber toughening effect.
ABS
Acrylonitrile butadiene styrene (ABS) polymer is an application of rubber toughening. The properties of this polymer come mainly from rubber toughening. The polybutadiene rubber domains in the main styrene-acrylonitrile matrix act as a stop to crack propagation.
Optically transparent plastics
PMMA’s high optical transparency, low cost, and compressibility make it a viable option for practical applications in architecture and car manufacturing as a substitute for glass when high transparency is necessary. Incorporating a rubber filler phase increases the toughness. Such fillers need to form strong interfacial bonds with the PMMA matrix. In applications where optical transparency is important, measures must be taken to limit light scattering.
It is common in toughening PMMA, and in other composites, to synthesize core-shell particles via atom-transfer radical-polymerization that have an outer polymer layer that has properties similar to those of the primary phase that increases the particle’s adhesion to the matrix. Developing PMMA compatible core-shell particles with low glass transition temperature while maintaining optical transparency is of interest to architects and car companies.
For optimal transparency the disperse rubber phase needs the following:
Small average particle radius
Narrow particle size distribution
Refractive index matching that of matrix across range of temperatures and wavelengths
Strong adhesion to matrix
Similar viscosity to matrix at processing temperature
Cyclic olefin copolymer, an optically transparent plastic with low moisture uptake and solvent resistance among other useful properties, can be toughened effectively with a styrene-butadiene-styrene rubber with the above properties. The Notched-Izod strength more than doubled from 21 J/m to 57 J/m with an optical haze of 5%.
Improving polystyrene
Polystyrene generally has stiffness, transparency, processibility, and dielectric qualities that make it useful. However, its low impact resistance at low temperatures makes catastrophic fracture failure when cold more likely. The most widely used version of toughened polystyrene is called high impact polystyrene or HIPS. Being cheap and easy to thermoform (see thermoforming), it is utilized for many everyday uses. HIPS is made by polymerizing styrene in a polybutadiene rubber solution. After the polymerization reaction begins, the polystyrene and rubber phases separate. When phase separation begins, the two phases compete for volume until phase inversion occurs and the rubber can distribute throughout the matrix. The alternative emulsion polymerization with styrene-butadiene-styrene or styrene-butadiene copolymers allows fine-tuned manipulation of particle size distribution. This method makes use of the core-shell architecture.
In order to study the fracture microstructure of HIPS in a transmission electron microscope it is necessary to stain one of the phases with a heavy metal, Osmium tetroxide for example. This produces substantially different electron density between phases. Given a constant particle size, it is the cross-linking density that determines the toughness of a HIPS material. This can be measured by exploiting the negative relationship between the cis-polybutadiene content of the rubber and the crosslink density that can be measured with the swelling index. Lower crosslink density leads to increased toughness.
The generation of vast quantities of waste rubber from car tires has sparked interest in finding uses for this discarded rubber. The rubber can be turned into a fine powder, which can then be used as a toughening agent for polystyrene. However, poor miscibility between the waste rubber and polystyrene weakens the material. This problem requires the use of a compatibilizer (see compatibilization) in order to reduce interfacial tension and ultimately make rubber toughening of polystyrene effective. A polystyrene/styrene-butadiene copolymer acts to increase the adhesion between the dispersed and continuous phases.
References
Plastics
Polymers
Materials science | Rubber toughening | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,261 | [
"Applied and interdisciplinary physics",
"Unsolved problems in physics",
"Materials science",
"Polymer chemistry",
"nan",
"Polymers",
"Amorphous solids",
"Plastics"
] |
16,085,364 | https://en.wikipedia.org/wiki/Organisms%20involved%20in%20water%20purification | Most organisms involved in water purification originate from the waste, wastewater or water stream itself or arrive as resting spores of some form from the atmosphere. In a very few cases, mostly associated with constructed wetlands, specific organisms are planted to maximise the efficiency of the process.
Role of biota
Biota are an essential component of most sewage treatment processes and many water purification systems. Most of the organisms involved are derived from the waste, wastewater or water stream itself or from the atmosphere or soil water. However some processes, especially those involved in removing very low concentrations of contaminants, may use engineered eco-systems created by the introduction of specific plants and sometimes animals. Some full scale sewage treatment plants also use constructed wetlands to provide treatment.
Pollutants in wastewater
Pathogens
Parasites, bacteria and viruses may be injurious to the health of people or livestock ingesting the polluted water. These pathogens may have originated from sewage or from domestic or wild bird or mammal feces. Pathogens may be killed by ingestion by larger organisms, oxidation, infection by phages or irradiation by ultraviolet sunlight unless that sunlight is blocked by plants or suspended solids.
Suspended solids
Particles of soil or organic matter may be suspended in the water. Such materials may give the water a cloudy or turbid appearance. The anoxic decomposition of some organic materials may give rise to obnoxious or unpleasant smells as sulphur containing compounds are released.
Nutrients
Compounds containing nitrogen, potassium or phosphorus may encourage growth of aquatic plants and thus increase the available energy in the local food-web. This can lead to increased concentrations of suspended organic material. In some cases specific micro-nutrients may be required to allow the available nutrients to be fully utilised by living organisms. In other cases, the presence of specific chemical species may produce toxic effects limiting growth and abundance of living matter.
Metals
Many dissolved or suspended metal salts exert harmful effects in the environment sometimes at very low concentrations. Some aquatic plants are able to remove very low metal concentrations, with the metals ending up bound to clay or other mineral particles.
Organisms
Saprophytic bacteria and fungi can convert organic matter into living cell mass, carbon dioxide, water and a range of metabolic by-products. These saprophytic organisms may then be predated upon by protozoa, rotifers and, in cleaner waters, Bryozoa which consume suspended organic particles including viruses and pathogenic bacteria. Clarity of the water may begin to improve as the protozoa are subsequently consumed by rotifers and cladocera. Purifying bacteria, protozoa, and rotifers must either be mixed throughout the water or have the water circulated past them to be effective. Sewage treatment plants mix these organisms as activated sludge or circulate water past organisms living on trickling filters or rotating biological contactors.
Aquatic vegetation may provide similar surface habitat for purifying bacteria, protozoa, and rotifers in a pond or marsh setting, although water circulation is often less effective. Plants and algae have the additional advantage of removing nutrients from the water, but some of those nutrients will be returned to the water when the plants die unless the plants are removed from the water. Because of the complex chemistry of phosphorus, much of this element is in an unavailable form unless decomposition creates anoxic conditions which render the phosphorus available for re-uptake. Plants also provide shade, a refuge for fish, and oxygen for aerobic bacteria. In addition, fish can limit pests such as mosquitoes. Fish and waterfowl feces return waste to the water, and their feeding habits may increase turbidity. Cyanobacteria have the disadvantageous ability to add nutrients from the air to the water being purified and to generate toxins in some cases.
The choice of organism depends on the local climate, the species available and other factors. Indigenous species are usually better adapted to the local environment.
Macrophytes
The choice of plants in engineered wet-lands or managed lagoons is dependent on the purification requirements of the system and this may involve plantings of varying plant species at a range of depths to achieve the required goal.
Plants purify water by consuming excess nutrients and by providing surfaces upon which a wide range of other purifying organisms can live. They are also effective oxygenators in sunlight. They can also translocate chemicals between their submerged foliage and their root systems, which is of significance in engineered wet-lands designed to de-toxify waste waters. Plants that have been used in temperate climates include Nymphaea alba, Phragmites australis, Sparganium erectum, Iris pseudacorus, Schoenoplectus lacustris and Carex acutiformis.
Where oxygenation is a critical requirement Stratiotes aloides, Hydrocharis morsus-ranae, Acorus calamus, Myriophyllum species and Elodea have been used.
Hydrocharis morsus-ranae and Nuphar lutea have been used where shade and cover are required.
Fish
Fish are frequently the top-level predators in a managed treatment eco-system and in some cases may simply be a mono-culture of herbivorous species. Multi-species fisheries require careful management and may involve a range of fish species, including bottom-feeders and predatory species, to limit population growth of the herbivorous fish.
Rotifers
Rotifers are microscopic complex organisms and are filter feeders removing fine particulate matter from water. They occur naturally in aerobic lagoons, activated sludge processes, in trickling filters and in final settlement tanks and are a significant factor in removing suspended bacterial cells and algae from the water column.
Annelids
Annelid worms are essential to the effective operation of trickling filters, helping to remove excess bio-mass and enhancing natural sloughing of the bio-film. Supernumerary worms are very commonly found in the drainage troughs around trickling filters and in the final settlement sludge. Annelids also play a key role in lagoon treatment systems and in the effective working of engineered wet-lands. In this environment worms are a principal force in mixing the upper few centimetres of the sediment layer, exposing organic material to both oxidative and anoxic environments and aiding the complete breakdown of most organics. They are also a key ingredient in the food-chain, transferring energy upwards to fish and aquatic birds.
Protozoa
The range of protozoan species found is very wide but may include species of the following genera:
Amoeba
Arcella
Blepharisma
Didinium
Euglena
Hypotrich
Paramecium
Suctoria
Stylonychia
Vorticella
Insects
Chironomidae bloodworm larva
Podura aquatica water springtail
Psychodidae drain fly or filter fly larva
Bacteria
Bacteria are probably the most significant group of organisms involved in water purification and are ubiquitous in all biological purification environments. Some such as Sphaerotilus natans are typically associated with grossly polluted waters, but even in such environments the bacteria are degrading the organic material present.
See also
Aquatic plant
Water purification
Treatment pond
Detoxification
Sources
Fair, Gordon Maskew, Geyer, John Charles & Okun, Daniel Alexander Water and Wastewater Engineering (Volume 2) John Wiley & Sons (1968)
Hammer, Mark J. Water and Waste-Water Technology John Wiley & Sons (1975)
Metcalf & Eddy Wastewater Engineering McGraw-Hill (1972)
Notes
Anaerobic digestion
Sewerage
Water technology
Water pollution
Water treatment | Organisms involved in water purification | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,547 | [
"Water treatment",
"Water pollution",
"Sewerage",
"Anaerobic digestion",
"Environmental engineering",
"Water technology"
] |
16,085,921 | https://en.wikipedia.org/wiki/National%20Environmental%20Engineering%20Research%20Institute | The National Environmental Engineering Research Institute (NEERI) in Nagpur was originally established in 1958 as the Central Public Health Engineering Research Institute (CPHERI). It has been described as the "premier and oldest institute in India." It is an institution listed on the Integrated Government Online Directory. It operates under the aegis of the Council of Scientific and Industrial Research (CSIR), based in New Delhi. Indira Gandhi, the Prime Minister of India at the time, renamed the Institute NEERI in 1974.
The Institute primarily focused on human health issues related to water supply, sewage disposal, diseases, and industrial pollution.
NEERI operates as a laboratory in the field of environmental science and engineering and is one of the constituent laboratories of the Council of Scientific and Industrial Research (CSIR). The institute has six zonal laboratories located in Chennai, Delhi, Hyderabad, Kolkata, Nagpur, and Mumbai. NEERI operates under the Ministry of Science and Technology of the Indian government. NEERI is a partner organization of India's POP National Implementation Plan (NIP).
History
In 1958, the Central Public Health Engineering Research Institute (CPHERI) was established. It was created by the Council of Scientific and Industrial Research (CSIR). In 1974, after participating in the "United Nations Inter-Governmental Conference on Human Environment" and with its renaming by Prime Minister Indira Gandhi, CPHERI became the National Environmental Engineering Research Institute (NEERI). NEERI has headquarters in Nagpur and five zonal laboratories in Mumbai, Kolkata, Delhi, Chennai, and Hyderabad.
The study for the location of a new municipal solid waste landfill site in Kolkata used the institute's 2005 guidelines.
During the COVID-19 crisis, the institute developed a saline gargling sample method to trace the disease.
Fields
Environmental monitoring
Since 1978, the institute has operated a nationwide air quality monitoring network, sponsored since 1990 by the Central Pollution Control Board (CPCB). Receptor modelling techniques are used. CSIR-NEERI is also involved in the design and development of air pollution control systems.
The institute has also developed a water purification system called 'NEERI ZAR'. In the 1960s and 1970s, the institute developed guidelines for defluoridation techniques, which have sometimes formed a departure point for the development of other techniques. The institute tests samples for research on defluoridation and the measurement of particulate matter in air.
The institute has been entrusted by the courts to provide an inspection of the current environmental and legal framework.
Skill development
The institute has set up a Centre for Skill Development, offering certificate courses in the areas of environmental impact and water quality assessment. Prof. V. Rajagopalan (1993 Vice President of the World Bank) had in his time (1955–65) with the institute created a national program for water industry professionals. Graduate programmes were established in Public Health Engineering at the Guindy Engineering College, Madras, Roorkee Engineering University, and VJTI in Mumbai.
Assessment of research
In 1989–2013, 1,236 publications of the National Environmental Engineering Research Institute were assessed. The institute's technique for enrichment of ilmenite with titanium dioxide has been evaluated externally.
Patent development
The institute has national and international patents for a method to manufacture zeolite-A using fly ash instead of sodium silicate and aluminate.
Selected publications
Kumar, A., et al. "Sustainability in Environmental Engineering and Science." (2021): 253–262.
Sharma, Abhinav. "Effect of ozone pretreatment on biodegradability enhancement and biogas production of biomethane distillery effluent."
Sharma, Asheesh, et al. "NutriL-GIS: A Tool for Assessment of Agricultural Runoff and Nutrient Pollution in a Watershed." National Environmental Engineering Research Institute (NEERI). India (2010).
Sinnarkar, S. N., and Rajesh Kumar Lohiya. "External user in an environmental research library." Annals of library and information studies 55.4 (2008): 275–280.
"Greywater Reuse in Rural Schools: Guidance Manual." National Environmental Engineering Research Institute (2007).
Thawale, P. R., Asha A. Juwarkar, and S. K. Singh. "Resource conservation through land treatment of municipal wastewater." Current Science (2006): 704–711.
Rao, Padma S., et al. "Performance evaluation of a green belt in a petroleum refinery: a case study". Ecological engineering 23.2 (2004): 77–84.
Murty, K. S. "Groundwater in India." Studies in Environmental Science. Vol. 17. Elsevier, 1981. 733–736.
References
Research institutes in Nagpur
Council of Scientific and Industrial Research
Environmental engineering
Science and technology in Maharashtra
Ministry of Science and Technology (India)
Research institutes established in 1958
1958 establishments in Bombay State | National Environmental Engineering Research Institute | [
"Chemistry",
"Engineering"
] | 1,007 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
593,680 | https://en.wikipedia.org/wiki/Statistical%20process%20control | Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. Manufacturing lines are a typical example of processes to which SPC is applied.
SPC must be practiced in two phases: The first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision of the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).
An advantage of SPC over other methods of quality control, such as "inspection," is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.
History
Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George D. Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.
W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry.
'Common' and 'special' sources of variation
Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control.
Application to non-manufacturing processes
Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where for example ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system.
In the 1988 Capability Maturity Model (CMM) the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.
The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial.
In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, e.g., manufacturing.
Variation in manufacturing
In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product. Each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. SPC, by contrast, uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article.
Any source of variation at any point of time in a process will fall into one of two classes.
(1) Common causes 'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. These causes collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes 'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced).
From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation.
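To make the distinction concrete, the sketch below simulates the cereal-box example: a long run of purely common-cause (random) variation around the 500 gram target, followed by a sudden shift of the kind described above. The standard deviation, shift size, and seed are illustrative assumptions, not properties of any real filling line.

```python
import random
from statistics import mean, stdev

random.seed(1)
TARGET_G = 500.0
COMMON_CAUSE_SD = 2.0   # assumed routine filling variation, in grams

# 200 boxes with only common-cause (random) variation...
stable = [random.gauss(TARGET_G, COMMON_CAUSE_SD) for _ in range(200)]
# ...then 20 boxes after a sudden malfunction adds 10 g to every fill.
shifted = [random.gauss(TARGET_G + 10.0, COMMON_CAUSE_SD) for _ in range(20)]

m, s = mean(stable), stdev(stable)
print(f"stable period: mean {m:.1f} g, standard deviation {s:.1f} g")
print(f"first shifted box is {(shifted[0] - m) / s:.1f} standard deviations above the stable mean")
```

The shifted boxes sit many standard deviations away from the stable-period mean, which is exactly the kind of signal a control chart is designed to flag.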
Application
The application of SPC involves three main phases of activity:
Understanding the process and the specification limits.
Eliminating assignable (special) sources of variation, so that the process is stable.
Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations.
Control charts
The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time.
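A minimal illustration of how a control chart separates the two kinds of variation is an individuals chart with moving-range-based limits: the centre line is the mean of the observations and the control limits sit roughly three estimated standard deviations away (the 1.128 divisor is the conventional d2 constant for a moving range of two points). This is a sketch of the general idea rather than a full SPC implementation, and the sample weights are made up.

```python
def individuals_control_limits(values):
    """Return (centre, LCL, UCL) for an individuals (X) chart, estimating sigma
    from the average moving range of consecutive points."""
    n = len(values)
    centre = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_hat = mr_bar / 1.128          # d2 constant for subgroups of size 2
    return centre, centre - 3 * sigma_hat, centre + 3 * sigma_hat

weights = [499.8, 501.2, 500.4, 498.9, 500.7, 499.5, 500.1, 501.0, 499.2, 500.6]
centre, lcl, ucl = individuals_control_limits(weights)
signals = [w for w in weights if w < lcl or w > ucl]
print(f"centre={centre:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, out-of-control points={signals}")
```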
Stable process
When the process does not trigger any of the control chart "detection rules" for the control chart, it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.
A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
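For a stable process, capability is commonly summarized by the Cp and Cpk indices, which compare the specification width to the process spread; the formulas below are the standard textbook definitions, and the specification limits and data are made-up examples rather than values taken from this article.

```python
from statistics import mean, stdev

def cp_cpk(values, lsl, usl):
    """Standard process capability indices for a stable process:
    Cp  = (USL - LSL) / (6 * sigma)             -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma) -- capability allowing for centring"""
    mu, sigma = mean(values), stdev(values)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

weights = [499.8, 501.2, 500.4, 498.9, 500.7, 499.5, 500.1, 501.0, 499.2, 500.6]
cp, cpk = cp_cpk(weights, lsl=495.0, usl=505.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```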
Excessive variations
When the process triggers any of the control chart "detection rules", (or alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation.
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.
Process stability metrics
When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger.
They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
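As one concrete example, a stability ratio of the general kind described above can be computed as the overall (long-term) variance divided by the pooled within-subgroup (short-term) variance; values well above 1 suggest the process mean drifts between subgroups. The precise definitions in Ramirez and Runger differ in detail, so the sketch below should be read as an assumed simplification with made-up data.

```python
from statistics import variance, mean

def stability_ratio(subgroups):
    """Ratio of overall (long-term) variance to the pooled within-subgroup
    (short-term) variance.  Values near 1 indicate a stable process."""
    all_values = [x for sg in subgroups for x in sg]
    long_term_var = variance(all_values)
    within_vars = [variance(sg) for sg in subgroups]
    short_term_var = mean(within_vars)   # simple pooling; equal subgroup sizes assumed
    return long_term_var / short_term_var

stable   = [[10.1, 9.9, 10.0, 10.2], [9.8, 10.1, 10.0, 9.9], [10.0, 10.2, 9.9, 10.1]]
drifting = [[10.0, 10.1, 9.9, 10.0], [11.0, 11.2, 10.9, 11.1], [12.1, 11.9, 12.0, 12.2]]
print(f"stable process:   SR = {stability_ratio(stable):.2f}")
print(f"drifting process: SR = {stability_ratio(drifting):.2f}")
```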
Mathematics of control charts
Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,
derived value = last value + average absolute difference between the last N numbers.
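Read literally, that rule takes only a few lines of code. The window length N and the readings below are arbitrary illustrative choices, and the rule is interpreted here as the average of absolute differences between consecutive values in the window.

```python
def derived_value(history, n=5):
    """Last value plus the average absolute difference between the last n values
    (taking differences between consecutive readings in the window)."""
    window = history[-n:]
    diffs = [abs(window[i] - window[i - 1]) for i in range(1, len(window))]
    return window[-1] + sum(diffs) / len(diffs)

readings = [100.0, 100.4, 99.8, 100.2, 100.9, 101.6]
print(f"derived value: {derived_value(readings):.2f}")
```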
See also
ANOVA Gauge R&R
Distribution-free control chart
Electronic design automation
Industrial engineering
Process Window Index
Process capability index
Quality assurance
Reliability engineering
Six sigma
Stochastic control
Total quality management
References
Bibliography
External links
MIT Course - Control of Manufacturing Processes | Statistical process control | [
"Engineering"
] | 2,348 | [
"Statistical process control",
"Engineering statistics"
] |
593,693 | https://en.wikipedia.org/wiki/Point%20%28geometry%29 | In geometry, a point is an abstract idealization of an exact position, without size, in physical space, or its generalization to other kinds of mathematical spaces. As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves, two-dimensional surfaces, and higher-dimensional objects consist; conversely, a point can be determined by the intersection of two curves or three surfaces, called a vertex or corner.
In classical Euclidean geometry, a point is a primitive notion, defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms, that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points". As physical diagrams, geometric figures are made with tools such as a compass, scriber, or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve.
Since the advent of analytic geometry, points are often defined or represented in terms of numerical coordinates. In modern mathematics, a space of points is typically treated as a set, a point set.
An isolated point is an element of some subset of points which has some neighborhood containing no other points of the subset.
Points in Euclidean geometry
Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuplet of n terms, (a_1, a_2, ..., a_n), where n is the dimension of the space in which the point is located.
Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form
L = {(a_1, a_2, ..., a_n) : a_1 c_1 + a_2 c_2 + ... + a_n c_n = d},
where c_1 through c_n and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment.
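As a concrete check of the definition above, the following sketch tests whether a candidate point satisfies the linear equation defining such a line; the coefficients and test points are arbitrary examples, not values from this article.

```python
def on_line(point, coeffs, d, tol=1e-9):
    """True if the point (a_1, ..., a_n) satisfies a_1*c_1 + ... + a_n*c_n = d."""
    return abs(sum(a * c for a, c in zip(point, coeffs)) - d) < tol

# The line x + 2y = 5 in the plane, i.e. c = (1, 2) and d = 5:
print(on_line((1.0, 2.0), (1.0, 2.0), 5.0))   # True: 1 + 4 = 5
print(on_line((0.0, 0.0), (1.0, 2.0), 5.0))   # False
```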
In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions.
Dimension of a point
There are several inequivalent definitions of dimension in mathematics. In all of the common definitions, a point is 0-dimensional.
Vector space dimension
The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no linearly independent subset. The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: 1 · 0 = 0.
Topological dimension
The topological dimension of a topological space X is defined to be the minimum value of n such that every finite open cover A of X admits a finite open cover B of X which refines A and in which no point is included in more than n + 1 elements. If no such minimal n exists, the space is said to be of infinite covering dimension.
A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set.
Hausdorff dimension
Let X be a metric space. If S ⊂ X and d ∈ [0, ∞), the d-dimensional Hausdorff content of S is the infimum of the set of numbers δ ≥ 0 such that there is some (indexed) collection of balls {B(x_i, r_i) : i ∈ I} covering S, with r_i > 0 for each i ∈ I, that satisfies ∑_{i ∈ I} r_i^d < δ.
The Hausdorff dimension of X is defined by dim_H(X) = inf{ d ≥ 0 : C_H^d(X) = 0 }.
A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius.
Geometry without points
Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology. A "pointless" or "pointfree" space is defined not as a set, but via some structure (algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of inclusion or connection.
Point masses and the Dirac delta function
Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism, where electrons are idealized as points with non-zero charge). The Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1.
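In symbols, the informal description above is usually summarized by the following standard properties (the third, the sifting property, expresses the "evaluate at a point" behaviour); this is a conventional formulation rather than text taken from this article:

```latex
\delta(x) = 0 \quad \text{for } x \neq 0, \qquad
\int_{-\infty}^{\infty} \delta(x)\, dx = 1, \qquad
\int_{-\infty}^{\infty} f(x)\, \delta(x - a)\, dx = f(a).
```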
See also
Accumulation point
Affine space
Boundary point
Critical point
Cusp
Foundations of geometry
Position (geometry)
Point at infinity
Point cloud
Point process
Point set registration
Pointwise
Singular point of a curve
Whitehead point-free geometry
Notes
References
Whitehead, A. N. The Concept of Nature. Cambridge University Press, 1920; 2004 paperback, Prometheus Books. Being the 1919 Tarner Lectures delivered at Trinity College.
External links | Point (geometry) | [
"Mathematics"
] | 1,370 | [
"Point (geometry)"
] |