id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
51,776 | https://en.wikipedia.org/wiki/Hydrostatic%20equilibrium | In fluid mechanics, hydrostatic equilibrium (hydrostatic balance, hydrostasy) is the condition of a fluid or plastic solid at rest, which occurs when external forces, such as gravity, are balanced by a pressure-gradient force. In the planetary physics of Earth, the pressure-gradient force prevents gravity from collapsing the planetary atmosphere into a thin, dense shell, whereas gravity prevents the pressure-gradient force from diffusing the atmosphere into outer space. In general, it is what causes objects in space to be spherical.
Hydrostatic equilibrium is the distinguishing criterion between dwarf planets and small solar system bodies, and features in astrophysics and planetary geology. Qualification as being in equilibrium indicates that the shape of the object is symmetrically rounded, mostly due to rotation, into an ellipsoid, where any irregular surface features are a consequence of a relatively thin solid crust. In addition to the Sun, there are a dozen or so equilibrium objects confirmed to exist in the Solar System.
Mathematical consideration
For a hydrostatic fluid on Earth: $\frac{dP}{dh} = -\rho g$
Derivation from force summation
Newton's laws of motion state that a volume of a fluid that is not in motion or that is in a state of constant velocity must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This force balance is called a hydrostatic equilibrium.
The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived.
There are three forces: the force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is, from the definition of pressure, $F_{\mathrm{top}} = -P_{\mathrm{top}}\, A$.
Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is $F_{\mathrm{bottom}} = P_{\mathrm{bottom}}\, A$.
Finally, the weight of the volume element causes a force downwards. If the density is ρ, the volume is V and g the standard gravity, then: $F_{\mathrm{weight}} = -\rho g V$.
The volume of this cuboid is equal to the area of the top or bottom, times the height: $V = A h$, the formula for finding the volume of a cuboid.
By balancing these forces, the total force on the fluid is $\sum F = P_{\mathrm{bottom}} A - P_{\mathrm{top}} A - \rho g A h.$
This sum equals zero if the fluid's velocity is constant. Dividing by A, $0 = P_{\mathrm{bottom}} - P_{\mathrm{top}} - \rho g h.$
Or, $P_{\mathrm{top}} - P_{\mathrm{bottom}} = -\rho g h.$
$P_{\mathrm{top}} - P_{\mathrm{bottom}}$ is a change in pressure, and h is the height of the volume element—a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form: $dP = -\rho g\, dh.$
Density changes with pressure, and gravity changes with height, so the equation would be: $dP = -\rho(P)\, g(h)\, dh.$
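To make the differential form concrete, here is a minimal numerical sketch that integrates dP = −ρ(P) g(h) dh upward from sea level for an isothermal ideal-gas atmosphere; the temperature, step size, and other parameter values are illustrative assumptions, not part of the article.

```python
# Minimal sketch: integrate dP = -rho(P) * g(h) * dh upward from sea level.
# Isothermal ideal-gas atmosphere; all parameter values are illustrative.
M = 0.02896      # molar mass of air, kg/mol
R = 8.314        # universal gas constant, J/(mol K)
T = 288.0        # assumed constant temperature, K
G = 6.674e-11    # gravitational constant
M_E = 5.972e24   # Earth mass, kg
R_E = 6.371e6    # Earth radius, m

def g(h):
    """Gravity weakens with height above the surface."""
    return G * M_E / (R_E + h) ** 2

def rho(P):
    """Ideal-gas density as a function of pressure (isothermal)."""
    return P * M / (R * T)

P, h, dh = 101_325.0, 0.0, 10.0   # sea-level pressure, 10 m steps
while h < 10_000.0:
    P -= rho(P) * g(h) * dh       # hydrostatic balance: dP = -rho g dh
    h += dh

print(f"Pressure at {h:.0f} m: {P:.0f} Pa")
```

With these assumptions the result is close to the analytic isothermal value of about 31 kPa at 10 km; the real atmosphere gives roughly 26 kPa there because temperature also falls with altitude.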
Derivation from Navier–Stokes equations
Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where $u = v = \frac{\partial p}{\partial x} = \frac{\partial p}{\partial y} = 0.$
Then the only non-trivial equation is the $z$-equation, which now reads $\frac{\partial p}{\partial z} + \rho g = 0.$
Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations.
Derivation from general relativity
By plugging the energy–momentum tensor for a perfect fluid, $T^{\mu\nu} = \left(\rho + \frac{P}{c^2}\right) u^\mu u^\nu + P g^{\mu\nu},$
into the Einstein field equations, $R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu},$
and using the conservation condition $\nabla_\mu T^{\mu\nu} = 0,$
one can derive the Tolman–Oppenheimer–Volkoff equation for the structure of a static, spherically symmetric relativistic star in isotropic coordinates: $\frac{dP}{dr} = -\frac{G M(r)\, \rho(r)}{r^2} \left(1 + \frac{P(r)}{\rho(r) c^2}\right) \left(1 + \frac{4\pi r^3 P(r)}{M(r) c^2}\right) \left(1 - \frac{2 G M(r)}{r c^2}\right)^{-1}$
In practice, P and ρ are related by an equation of state of the form f(P, ρ) = 0, with f specific to the makeup of the star. M(r) is a foliation of spheres weighted by the mass density ρ(r), with the largest sphere having radius r: $M(r) = 4\pi \int_0^r dr'\, r'^2 \rho(r').$
Per standard procedure in taking the nonrelativistic limit, we let $c \to \infty$, so that the factor $\left(1 + \frac{P}{\rho c^2}\right) \left(1 + \frac{4\pi r^3 P}{M c^2}\right) \left(1 - \frac{2 G M}{r c^2}\right)^{-1}$ tends to 1.
Therefore, in the nonrelativistic limit the Tolman–Oppenheimer–Volkoff equation reduces to Newton's hydrostatic equilibrium: $\frac{dP}{dh} = -g(h)\, \rho(h)$
(we have made the trivial notation change h = r and have used f(P, ρ) = 0 to express ρ in terms of P). A similar equation can be computed for rotating, axially symmetric stars, which in its gauge-independent form reads:
Unlike the TOV equilibrium equation, these are two equations (for instance, if as usual when treating stars, one chooses spherical coordinates as basis coordinates $(r, \theta, \varphi)$, the index i runs over the coordinates r and $\theta$).
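To illustrate the Newtonian limit just derived, here is a minimal sketch (not the TOV equation itself) that integrates dP/dr = −G M(r) ρ/r² together with dM/dr = 4πr²ρ using forward Euler steps and an assumed polytropic equation of state P = Kρ^(5/3); K is set to the standard nonrelativistic degenerate-electron value, and the central density is an arbitrary white-dwarf-like choice.

```python
import math

# Sketch: integrate Newton's hydrostatic equilibrium, dP/dr = -G M(r) rho / r^2,
# together with dM/dr = 4 pi r^2 rho, for a polytropic EOS P = K rho^(5/3).
# K and the central density are assumptions chosen for illustration.
G = 6.674e-11
K, gamma = 3.2e6, 5.0 / 3.0      # nonrelativistic degenerate-electron polytrope
rho_c = 1.0e9                    # assumed central density, kg/m^3

P = K * rho_c ** gamma           # central pressure from the EOS
M, r, dr = 0.0, 1.0, 1.0e4       # start just off r = 0; 10 km steps

while P > 0.0:
    rho = (P / K) ** (1.0 / gamma)
    M += 4.0 * math.pi * r**2 * rho * dr   # enclosed mass: dM = 4 pi r^2 rho dr
    P -= G * M * rho / r**2 * dr           # hydrostatic balance
    r += dr

print(f"radius ~ {r / 1e3:.0f} km, mass ~ {M / 1.989e30:.2f} solar masses")
```

With these assumptions the integration terminates where the pressure reaches zero, giving a white-dwarf-like radius of order 10,000 km and a mass of order half a solar mass.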
Applications
Fluids
The hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a particular balance for weighing substances in water. Hydrostatic balance allows the discovery of their specific gravities. This equilibrium is strictly applicable when an ideal fluid is in steady horizontal laminar flow, and when any fluid is at rest or in vertical motion at constant speed. It can also be a satisfactory approximation when flow speeds are low enough that acceleration is negligible.
Astrophysics and planetary science
From the time of Isaac Newton much work has been done on the subject of the equilibrium attained when a fluid rotates in space. This has application to both stars and objects like planets, which may have been fluid in the past or in which the solid material deforms like a fluid when subjected to very high stresses.
In any given layer of a star, there is a hydrostatic equilibrium between the outward-pushing pressure gradient and the weight of the material above pressing inward. One can also study planets under the assumption of hydrostatic equilibrium. A rotating star or planet in hydrostatic equilibrium is usually an oblate spheroid, an ellipsoid in which two of the principal axes are equal and longer than the third.
An example of this phenomenon is the star Vega, which has a rotation period of 12.5 hours. Consequently, Vega is about 20% larger at the equator than from pole to pole.
In his 1687 Philosophiæ Naturalis Principia Mathematica Newton correctly stated that a rotating fluid of uniform density under the influence of gravity would take the form of a spheroid and that the gravity (including the effect of centrifugal force) would be weaker at the equator than at the poles by an amount equal (at least asymptotically) to five fourths the centrifugal force at the equator. In 1742, Colin Maclaurin published his treatise on fluxions, in which he showed that the spheroid was an exact solution. If we designate the equatorial radius by $r_e$, the polar radius by $r_p$, and the eccentricity by $\epsilon$, with $\epsilon = \sqrt{1 - r_p^2 / r_e^2},$
he found that the gravity at the poles is
where $G$ is the gravitational constant, $\rho$ is the (uniform) density, and $M$ is the total mass. The ratio of this to the gravity if the fluid is not rotating is asymptotic to
as $\epsilon$ goes to zero, where $f$ is the flattening: $f = \frac{r_e - r_p}{r_e}$
The gravitational attraction on the equator (not including centrifugal force) is
Asymptotically, we have:
Maclaurin showed (still in the case of uniform density) that the component of gravity toward the axis of rotation depended only on the distance from the axis and was proportional to that distance, and the component in the direction toward the plane of the equator depended only on the distance from that plane and was proportional to that distance. Newton had already pointed out that the gravity felt on the equator (including the lightening due to centrifugal force) has to be $g_p\, r_p / r_e$ (where $g_p$ is the gravity at the poles) in order to have the same pressure at the bottom of channels from the pole or from the equator to the centre, so the centrifugal force at the equator must be the equatorial attraction less $g_p\, r_p / r_e$.
Defining the latitude to be the angle between a tangent to the meridian and axis of rotation, the total gravity felt at latitude (including the effect of centrifugal force) is
This spheroid solution is stable up to a certain (critical) angular momentum, but in 1834, Carl Jacobi showed that it becomes unstable once the eccentricity reaches 0.81267 (or the normalized angular momentum reaches 0.3302).
Above the critical value, the solution becomes a Jacobi, or scalene, ellipsoid (one with all three axes different). Henri Poincaré in 1885 found that at still higher angular momentum it will no longer be ellipsoidal but piriform or oviform. The symmetry drops from the 8-fold D2h point group to the 4-fold C2v, with its axis perpendicular to the axis of rotation. Other shapes satisfy the equations beyond that, but are not stable, at least not near the point of bifurcation. Poincaré was unsure what would happen at higher angular momentum but concluded that eventually the blob would split into two.
The assumption of uniform density may apply more or less to a molten planet or a rocky planet but does not apply to a star or to a planet like the Earth which has a dense metallic core. In 1737, Alexis Clairaut studied the case of density varying with depth. Clairaut's theorem states that the variation of the gravity (including centrifugal force) is proportional to the square of the sine of the latitude, with the proportionality depending linearly on the flattening (f) and the ratio at the equator of centrifugal force to gravitational attraction. (Compare with the exact relation above for the case of uniform density.) Clairaut's theorem is a special case for an oblate spheroid of a connexion found later by Pierre-Simon Laplace between the shape and the variation of gravity.
If the star has a massive nearby companion object, tidal forces come into play as well, which distort the star into a scalene shape if rotation alone would make it a spheroid. An example of this is Beta Lyrae.
Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid that can be present in the core of a cluster of galaxies.
We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute X-ray luminosity per unit volume takes the form $\mathcal{L}_X = \Lambda(T_B)\, \rho_B^2$, where $T_B$ and $\rho_B$ are the temperature and density of the baryonic matter, and $\Lambda(T)$ is some function of temperature and fundamental constants. The baryonic density satisfies the above equation $dp_B = -\rho_B(r)\, g(r)\, dr$:
The integral is a measure of the total mass of the cluster, with $r$ being the proper distance to the center of the cluster. Using the ideal gas law $p_B = k T_B \rho_B / m_B$ ($k$ is the Boltzmann constant and $m_B$ is a characteristic mass of the baryonic gas particles) and rearranging, we arrive at
Multiplying by $\frac{r^2}{\rho_B}$ and differentiating with respect to $r$ yields
If we make the assumption that cold dark matter particles have an isotropic velocity distribution, the same derivation applies to these particles, and their density satisfies the non-linear differential equation
With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus the dark matter density. We could then calculate the velocity dispersion of the dark matter, which is given by
The central density ratio is dependent on the redshift of the cluster and is given by
where $\theta$ is the angular width of the cluster and $L$ is the proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys.
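As a rough illustration of how hydrostatic equilibrium converts observed gas profiles into a cluster mass estimate, the sketch below evaluates the standard hydrostatic mass relation M(<r) = −(k T r)/(G μ m_p) · (d ln ρ_B/d ln r + d ln T_B/d ln r); the power-law profile slopes, temperature, and mean molecular weight are illustrative assumptions, not fits to any survey.

```python
import math

# Hydrostatic mass estimate for a galaxy cluster from assumed gas profiles.
# rho_B ~ r^-a and T_B ~ r^-b are illustrative power laws, not real data.
k_B = 1.381e-23    # Boltzmann constant, J/K
G   = 6.674e-11
m_p = 1.673e-27    # proton mass, kg
mu  = 0.6          # assumed mean molecular weight of the intracluster gas

a, b = 2.0, 0.3    # assumed logarithmic slopes of density and temperature
T0   = 8.0e7       # assumed gas temperature at the reference radius, K
r    = 3.086e22    # reference radius: 1 Mpc in metres

# M(<r) = -(k T r)/(G mu m_p) * (dln(rho_B)/dln(r) + dln(T_B)/dln(r))
M = -(k_B * T0 * r) / (G * mu * m_p) * (-a - b)
print(f"M(<1 Mpc) ~ {M:.2e} kg ~ {M / 1.989e30:.2e} solar masses")
```

With these assumed profiles the estimate comes out near 10^45 kg, a few times 10^14 solar masses, which is the right order of magnitude for a rich cluster.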
Planetary geology
The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is a planet, dwarf planet, or small Solar System body. According to the definition of planet that was adopted by the International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body often has the differentiated interior and geology of a world (a planemo), but near-hydrostatic or formerly hydrostatic bodies such as the proto-planet 4 Vesta may also be differentiated and some hydrostatic bodies (notably Callisto) have not thoroughly differentiated since their formation. Often, the equilibrium shape is an oblate spheroid, as is the case with Earth. However, in the cases of moons in synchronous orbit, nearly unidirectional tidal forces create a scalene ellipsoid. Also, the purported dwarf planet Haumea is scalene because of its rapid rotation, though it may not currently be in equilibrium.
Icy objects were previously believed to need less mass to attain hydrostatic equilibrium than rocky objects. The smallest object that appears to have an equilibrium shape is the icy moon Mimas at 396 km, but the largest icy object known to have an obviously non-equilibrium shape is the icy moon Proteus at 420 km, and the largest rocky bodies in an obviously non-equilibrium shape are the asteroids Pallas and Vesta at about 520 km. However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The smallest body confirmed to be in hydrostatic equilibrium is the dwarf planet Ceres, which is icy, at 945 km, and the largest known body with a noticeable deviation from hydrostatic equilibrium is the icy moon Iapetus, which is made of mostly permeable ice and almost no rock. At 1,469 km, Iapetus is neither spherical nor ellipsoidal. Instead, it has a strange walnut-like shape due to its unique equatorial ridge. Some icy bodies may be in equilibrium at least partly due to a subsurface ocean, which is not the definition of equilibrium used by the IAU (gravity overcoming internal rigid-body forces). Even larger bodies deviate from hydrostatic equilibrium, although they are ellipsoidal: examples are Earth's Moon at 3,474 km (mostly rock), and the planet Mercury at 4,880 km (mostly metal).
In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down because of tidal forces from its moon Weywot. If so, this would resemble the situation of Iapetus, which is too oblate for its current spin. Even so, Iapetus is generally, though not universally, still considered a planetary-mass moon.
Solid bodies have irregular surfaces, but local irregularities may be consistent with global equilibrium. For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level of the surrounding crust and so the overall distribution of mass approaches equilibrium.
Atmospheric modeling
In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an upward force called the pressure-gradient force. The force of gravity balances this out, keeping the atmosphere bound to Earth and maintaining pressure differences with altitude.
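For a worked example of this balance, combining the hydrostatic equation with the ideal gas law for an atmosphere of assumed constant temperature T gives the familiar barometric formula:

```latex
% Hydrostatic balance and the ideal gas law, with T assumed constant:
\frac{dP}{dh} = -\rho g, \qquad \rho = \frac{P M}{R T}
\quad\Longrightarrow\quad
\frac{dP}{P} = -\frac{M g}{R T}\,dh
\quad\Longrightarrow\quad
P(h) = P_0\, e^{-M g h/(R T)} .
```

The scale height RT/(Mg) is about 8.4 km for T ≈ 288 K, so pressure falls by a factor of e roughly every 8–9 km in the lower atmosphere.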
See also
List of gravitationally rounded objects of the Solar System; a list of objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium)
Statics
Two-balloon experiment
References
External links
Strobel, Nick. (May, 2001). Nick Strobel's Astronomy Notes.
by Richard Pogge, Ohio State University, Department of Astronomy
Concepts in astrophysics
Concepts in astronomy
Definition of planet
Fluid mechanics
Hydrostatics | Hydrostatic equilibrium | [
"Physics",
"Astronomy",
"Engineering"
] | 3,110 | [
"Definition of planet",
"Concepts in astrophysics",
"Concepts in astronomy",
"Astrophysics",
"Civil engineering",
"Astronomical controversies",
"Astronomical classification systems",
"Fluid mechanics"
] |
51,784 | https://en.wikipedia.org/wiki/Map%20projection | In cartography, a map projection is any of a broad set of transformations employed to represent the curved two-dimensional surface of a globe on a plane. In a map projection, coordinates, often expressed as latitude and longitude, of locations from the surface of the globe are transformed to coordinates on a plane.
Projection is a necessary step in creating a two-dimensional map and is one of the essential elements of cartography.
All projections of a sphere on a plane necessarily distort the surface in some way. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore, different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties. The study of map projections is primarily about the characterization of their distortions. There is no limit to the number of possible map projections.
More generally, projections are considered in several fields of pure mathematics, including differential geometry, projective geometry, and manifolds. However, the term "map projection" refers specifically to a cartographic projection.
Despite the name's literal meaning, projection is not limited to perspective projections, such as those resulting from casting a shadow on a screen, or the rectilinear image produced by a pinhole camera on a flat film plate. Rather, any mathematical function that transforms coordinates from the curved surface distinctly and smoothly to the plane is a projection. Few projections in practical use are perspective.
Most of this article assumes that the surface to be mapped is that of a sphere. The Earth and other large celestial bodies are generally better modeled as oblate spheroids, whereas small objects such as asteroids often have irregular shapes. The surfaces of planetary bodies can be mapped even if they are too irregular to be modeled well with a sphere or ellipsoid. Therefore, more generally, a map projection is any method of flattening a continuous curved surface onto a plane.
The most well-known map projection is the Mercator projection. This map projection has the property of being conformal. However, it has been criticized throughout the 20th century for enlarging regions further from the equator. In contrast, equal-area projections such as the Sinusoidal projection and the Gall–Peters projection show the correct sizes of countries relative to each other, but distort angles. The National Geographic Society and most atlases favor map projections that compromise between area and angular distortion, such as the Robinson projection and the Winkel tripel projection.
Metric properties of maps
Many properties can be measured on the Earth's surface independently of its geography:
Area
Shape
Direction
Bearing
Distance
Map projections can be constructed to preserve some of these properties at the expense of others. Because the Earth's curved surface is not isometric to a plane, preservation of shapes inevitably requires a variable scale and, consequently, non-proportional presentation of areas. Similarly, an area-preserving projection can not be conformal, resulting in shapes and bearings distorted in most places of the map. Each projection preserves, compromises, or approximates basic metric properties in different ways. The purpose of the map determines which projection should form the base for the map. Because maps have many different purposes, a diversity of projections have been created to suit those purposes.
Another consideration in the configuration of a projection is its compatibility with data sets to be used on the map. Data sets are geographic information; their collection depends on the chosen datum (model) of the Earth. Different datums assign slightly different coordinates to the same location, so in large scale maps, such as those from national mapping systems, it is important to match the datum to the projection. The slight differences in coordinate assignation between different datums is not a concern for world maps or those of large regions, where such differences are reduced to imperceptibility.
Distortion
Carl Friedrich Gauss's Theorema Egregium proved that a sphere's surface cannot be represented on a plane without distortion. The same applies to other reference surfaces used as models for the Earth, such as oblate spheroids, ellipsoids, and geoids. Since any map projection is a representation of one of those surfaces on a plane, all map projections distort.
The classical way of showing the distortion inherent in a projection is to use Tissot's indicatrix. For a given point, using the scale factor h along the meridian, the scale factor k along the parallel, and the angle θ′ between them, Nicolas Tissot described how to construct an ellipse that illustrates the amount and orientation of the components of distortion. By spacing the ellipses regularly along the meridians and parallels, the network of indicatrices shows how distortion varies across the map.
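A minimal numerical sketch of Tissot's construction, assuming a spherical Earth of unit radius and approximating the derivatives by finite differences (the Mercator projection here is just an example target):

```python
import math

def mercator(lon, lat):
    """Forward spherical Mercator on a unit sphere; lon/lat in radians."""
    return lon, math.log(math.tan(math.pi / 4 + lat / 2))

def tissot_factors(project, lon, lat, eps=1e-6):
    """Scale factor h along the meridian, k along the parallel, and the
    angle theta' between their images, estimated by finite differences."""
    x0, y0 = project(lon, lat)
    x1, y1 = project(lon, lat + eps)     # small step along the meridian
    x2, y2 = project(lon + eps, lat)     # small step along the parallel
    h = math.hypot(x1 - x0, y1 - y0) / eps                    # ground length eps
    k = math.hypot(x2 - x0, y2 - y0) / (eps * math.cos(lat))  # ground length eps*cos(lat)
    theta = math.atan2(y1 - y0, x1 - x0) - math.atan2(y2 - y0, x2 - x0)
    return h, k, math.degrees(abs(theta))

h, k, theta = tissot_factors(mercator, 0.0, math.radians(60))
print(f"h={h:.3f} k={k:.3f} theta'={theta:.1f} deg")  # ~2.0, ~2.0, 90: conformal
```

At 60° latitude both factors come out near sec 60° = 2 and the angle stays 90°, which is exactly the conformal behaviour the indicatrix is designed to reveal.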
Other distortion metrics
Many other ways have been described of showing the distortion in projections. Like Tissot's indicatrix, the Goldberg-Gott indicatrix is based on infinitesimals, and depicts flexion and skewness (bending and lopsidedness) distortions.
Rather than the original (enlarged) infinitesimal circle as in Tissot's indicatrix, some visual methods project finite shapes that span a part of the map.
For example, a small circle of fixed radius (e.g., 15 degrees angular radius). Sometimes spherical triangles are used.
In the first half of the 20th century, projecting a human head onto different projections was common to show how distortion varies across one projection as compared to another.
In dynamic media, shapes of familiar coastlines and boundaries can be dragged across an interactive map to show how the projection distorts sizes and shapes according to position on the map.
Another way to visualize local distortion is through grayscale or color gradations whose shade represents the magnitude of the angular deformation or areal inflation. Sometimes both are shown simultaneously by blending two colors to create a bivariate map.
To measure distortion globally across areas instead of at just a single point necessarily involves choosing priorities to reach a compromise. Some schemes use distance distortion as a proxy for the combination of angular deformation and areal inflation; such methods arbitrarily choose what paths to measure and how to weight them in order to yield a single result. Many have been described.
Design and construction
The creation of a map projection involves two steps:
Selection of a model for the shape of the Earth or planetary body (usually choosing between a sphere or ellipsoid). Because the Earth's actual shape is irregular, information is lost in this step.
Transformation of geographic coordinates (longitude and latitude) to Cartesian (x,y) or polar (r, θ) plane coordinates. In large-scale maps, Cartesian coordinates normally have a simple relation to eastings and northings defined as a grid superimposed on the projection. In small-scale maps, eastings and northings are not meaningful, and grids are not superimposed.
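As a concrete illustration of step 2, here is a minimal sketch of two common forward transformations, assuming a spherical model of radius R; the sample coordinates are arbitrary:

```python
import math

R = 6_371_000.0  # assumed spherical Earth radius, metres

def equirectangular(lon_deg, lat_deg, lat0_deg=0.0):
    """Equirectangular: x proportional to longitude, y to latitude."""
    lon, lat, lat0 = map(math.radians, (lon_deg, lat_deg, lat0_deg))
    return R * lon * math.cos(lat0), R * lat

def mercator(lon_deg, lat_deg):
    """Spherical Mercator: parallels spaced so the map is conformal."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return R * lon, R * math.log(math.tan(math.pi / 4 + lat / 2))

print(equirectangular(2.35, 48.86))  # an arbitrary test point
print(mercator(2.35, 48.86))
```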
Some of the simplest map projections are literal projections, as obtained by placing a light source at some definite point relative to the globe and projecting its features onto a specified surface. Although most projections are not defined in this way, picturing the light source-globe model can be helpful in understanding the basic concept of a map projection.
Choosing a projection surface
A surface that can be unfolded or unrolled into a plane or sheet without stretching, tearing or shrinking is called a developable surface. The cylinder, cone and the plane are all developable surfaces. The sphere and ellipsoid do not have developable surfaces, so any projection of them onto a plane will have to distort the image. (To compare, one cannot flatten an orange peel without tearing and warping it.)
One way of describing a projection is first to project from the Earth's surface to a developable surface such as a cylinder or cone, and then to unroll the surface into a plane. While the first step inevitably distorts some properties of the globe, the developable surface can then be unfolded without further distortion.
Aspect of the projection
Once a choice is made between projecting onto a cylinder, cone, or plane, the aspect of the shape must be specified. The aspect describes how the developable surface is placed relative to the globe: it may be normal (such that the surface's axis of symmetry coincides with the Earth's axis), transverse (at right angles to the Earth's axis) or oblique (any angle in between).
Notable lines
The developable surface may also be either tangent or secant to the sphere or ellipsoid. Tangent means the surface touches but does not slice through the globe; secant means the surface does slice through the globe. Moving the developable surface away from contact with the globe never preserves or optimizes metric properties, so that possibility is not discussed further here.
Tangent and secant lines (standard lines) are represented undistorted. If these lines are a parallel of latitude, as in conical projections, it is called a standard parallel. The central meridian is the meridian to which the globe is rotated before projecting. The central meridian (usually written λ₀) and a parallel of origin (usually written φ₀) are often used to define the origin of the map projection.
Scale
A globe is the only way to represent the Earth with constant scale throughout the entire map in all directions. A map cannot achieve that property for any area, no matter how small. It can, however, achieve constant scale along specific lines.
Some possible properties are:
The scale depends on location, but not on direction. This is equivalent to preservation of angles, the defining characteristic of a conformal map.
Scale is constant along any parallel in the direction of the parallel. This applies for any cylindrical or pseudocylindrical projection in normal aspect.
Combination of the above: the scale depends on latitude only, not on longitude or direction. This applies for the Mercator projection in normal aspect.
Scale is constant along all straight lines radiating from a particular geographic location. This is the defining characteristic of an equidistant projection such as the azimuthal equidistant projection. There are also projections (Maurer's two-point equidistant projection, Close) where true distances from two points are preserved.
Choosing a model for the shape of the body
Projection construction is also affected by how the shape of the Earth or planetary body is approximated. In the following section on projection categories, the earth is taken as a sphere in order to simplify the discussion. However, the Earth's actual shape is closer to an oblate ellipsoid. Whether spherical or ellipsoidal, the principles discussed hold without loss of generality.
Selecting a model for a shape of the Earth involves choosing between the advantages and disadvantages of a sphere versus an ellipsoid. Spherical models are useful for small-scale maps such as world atlases and globes, since the error at that scale is not usually noticeable or important enough to justify using the more complicated ellipsoid. The ellipsoidal model is commonly used to construct topographic maps and for other large- and medium-scale maps that need to accurately depict the land surface. Auxiliary latitudes are often employed in projecting the ellipsoid.
A third model is the geoid, a more complex and accurate representation of Earth's shape coincident with what mean sea level would be if there were no winds, tides, or land. Compared to the best fitting ellipsoid, a geoidal model would change the characterization of important properties such as distance, conformality and equivalence. Therefore, in geoidal projections that preserve such properties, the mapped graticule would deviate from a mapped ellipsoid's graticule. Normally the geoid is not used as an Earth model for projections, however, because Earth's shape is very regular, with the undulation of the geoid amounting to less than 100 m from the ellipsoidal model out of the 6.3 million m Earth radius. For irregular planetary bodies such as asteroids, however, sometimes models analogous to the geoid are used to project maps from.
Other regular solids are sometimes used as generalizations for smaller bodies' geoidal equivalent. For example, Io is better modeled by a triaxial ellipsoid or prolate spheroid with small eccentricities. Haumea's shape is a Jacobi ellipsoid, with its major axis twice as long as its minor and with its middle axis one and a half times as long as its minor.
See map projection of the triaxial ellipsoid for further information.
Classification
One way to classify map projections is based on the type of surface onto which the globe is projected. In this scheme, the projection process is described as placing a hypothetical projection surface the size of the desired study area in contact with part of the Earth, transferring features of the Earth's surface onto the projection surface, then unraveling and scaling the projection surface into a flat map. The most common projection surfaces are cylindrical (e.g., Mercator), conic (e.g., Albers), and planar (e.g., stereographic). Many mathematical projections, however, do not neatly fit into any of these three projection methods. Hence other peer categories have been described in the literature, such as pseudoconic, pseudocylindrical, pseudoazimuthal, retroazimuthal, and polyconic.
Another way to classify projections is according to properties of the model they preserve. Some of the more common categories are:
Preserving direction (azimuthal or zenithal), a trait possible only from one or two points to every other point
Preserving shape locally (conformal or orthomorphic)
Preserving area (equal-area or equiareal or equivalent or authalic)
Preserving distance (equidistant), a trait possible only between one or two points and every other point
Preserving shortest route, a trait preserved only by the gnomonic projection
Because the sphere is not a developable surface, it is impossible to construct a map projection that is both equal-area and conformal.
Projections by surface
The three developable surfaces (plane, cylinder, cone) provide useful models for understanding, describing, and developing map projections. However, these models are limited in two fundamental ways. For one thing, most world projections in use do not fall into any of those categories. For another thing, even most projections that do fall into those categories are not naturally attainable through physical projection. As L. P. Lee notes,
Lee's objection refers to the way the terms cylindrical, conic, and planar (azimuthal) have been abstracted in the field of map projections. If maps were projected as in light shining through a globe onto a developable surface, then the spacing of parallels would follow a very limited set of possibilities. Such a cylindrical projection (for example) is one which:
Is rectangular;
Has straight vertical meridians, spaced evenly;
Has straight parallels symmetrically placed about the equator;
Has parallels constrained to where they fall when light shines through the globe onto the cylinder, with the light source someplace along the line formed by the intersection of the prime meridian with the equator, and the center of the sphere.
(If you rotate the globe before projecting then the parallels and meridians will not necessarily still be straight lines. Rotations are normally ignored for the purpose of classification.)
Where the light source emanates along the line described in this last constraint is what yields the differences between the various "natural" cylindrical projections. But the term cylindrical as used in the field of map projections relaxes the last constraint entirely. Instead the parallels can be placed according to any algorithm the designer has decided suits the needs of the map. The famous Mercator projection is one in which the placement of parallels does not arise by projection; instead parallels are placed how they need to be in order to satisfy the property that a course of constant bearing is always plotted as a straight line.
Cylindrical
Normal cylindrical
A normal cylindrical projection is any projection in which meridians are mapped to equally spaced vertical lines and circles of latitude (parallels) are mapped to horizontal lines.
The mapping of meridians to vertical lines can be visualized by imagining a cylinder whose axis coincides with the Earth's axis of rotation. This cylinder is wrapped around the Earth, projected onto, and then unrolled.
By the geometry of their construction, cylindrical projections stretch distances east-west. The amount of stretch is the same at any chosen latitude on all cylindrical projections, and is given by the secant of the latitude as a multiple of the equator's scale. The various cylindrical projections are distinguished from each other solely by their north-south stretching (where latitude is given by φ):
North-south stretching equals east-west stretching (sec φ): The east-west scale matches the north-south scale: conformal cylindrical or Mercator; this distorts areas excessively in high latitudes.
North-south stretching grows with latitude faster than east-west stretching (sec² φ): The cylindric perspective (or central cylindrical) projection; unsuitable because distortion is even worse than in the Mercator projection.
North-south stretching grows with latitude, but less quickly than the east-west stretching: such as the Miller cylindrical projection (sec 4φ/5).
North-south distances neither stretched nor compressed (1): equirectangular projection or "plate carrée".
North-south compression equals the cosine of the latitude (the reciprocal of east-west stretching): equal-area cylindrical. This projection has many named specializations differing only in the scaling constant, such as the Gall–Peters or Gall orthographic (undistorted at the 45° parallels), Behrmann (undistorted at the 30° parallels), and Lambert cylindrical equal-area (undistorted at the equator). Since this projection scales north-south distances by the reciprocal of east-west stretching, it preserves area at the expense of shapes.
In the first case (Mercator), the east-west scale always equals the north-south scale. In the second case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from the equator. Each remaining case has a pair of secant lines—a pair of identical latitudes of opposite sign (or else the equator) at which the east-west scale matches the north-south-scale.
Normal cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases, where the rectangle stretches infinitely tall while retaining constant width.
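The family resemblance described above is easy to see in code: every normal cylindrical projection shares x = Rλ and differs only in the parallel-spacing function y(φ). A sketch on a unit sphere (the four cases chosen mirror the list above):

```python
import math

# y(phi) for several normal cylindrical projections on a unit sphere.
# Each differs only in how parallels are spaced; x = lambda in all cases.
def y_mercator(phi):          # north-south stretch sec(phi): conformal
    return math.log(math.tan(math.pi / 4 + phi / 2))

def y_central(phi):           # stretch sec^2(phi): true perspective projection
    return math.tan(phi)

def y_equirectangular(phi):   # stretch 1: plate carree
    return phi

def y_equal_area(phi):        # compression cos(phi): Lambert equal-area
    return math.sin(phi)

for deg in (0, 30, 60, 80):
    phi = math.radians(deg)
    print(deg, round(y_mercator(phi), 3), round(y_central(phi), 3),
          round(y_equirectangular(phi), 3), round(y_equal_area(phi), 3))
```

The printout shows the central cylindrical racing off toward infinity fastest, Mercator more slowly, and the equal-area case compressing toward the poles.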
Transverse cylindrical
A transverse cylindrical projection is a cylindrical projection that in the tangent case uses a great circle along a meridian as contact line for the cylinder.
See: transverse Mercator.
Oblique cylindrical
An oblique cylindrical projection aligns with a great circle, but not the equator and not a meridian.
Pseudocylindrical
Pseudocylindrical projections represent the central meridian as a straight line segment. Other meridians are longer than the central meridian and bow outward, away from the central meridian. Pseudocylindrical projections map parallels as straight lines. Along parallels, each point from the surface is mapped at a distance from the central meridian that is proportional to its difference in longitude from the central meridian. Therefore, meridians are equally spaced along a given parallel. On a pseudocylindrical map, any point further from the equator than some other point has a higher latitude than the other point, preserving north-south relationships. This trait is useful when illustrating phenomena that depend on latitude, such as climate. Examples of pseudocylindrical projections include:
Sinusoidal, which was the first pseudocylindrical projection developed. On the map, as in reality, the length of each parallel is proportional to the cosine of the latitude. The area of any region is true.
Collignon projection, which in its most common forms represents each meridian as two straight line segments, one from each pole to the equator.
Hybrid
The HEALPix projection combines an equal-area cylindrical projection in equatorial regions with the Collignon projection in polar areas.
Conic
The term "conic projection" is used to refer to any projection in which meridians are mapped to equally spaced lines radiating out from the apex and circles of latitude (parallels) are mapped to circular arcs centered on the apex.
When making a conic map, the map maker arbitrarily picks two standard parallels. Those standard parallels may be visualized as secant lines where the cone intersects the globe—or, if the map maker chooses the same parallel twice, as the tangent line where the cone is tangent to the globe. The resulting conic map has low distortion in scale, shape, and area near those standard parallels. Distances along the parallels to the north of both standard parallels or to the south of both standard parallels are stretched; distances along parallels between the standard parallels are compressed. When a single standard parallel is used, distances along all other parallels are stretched.
Conic projections that are commonly used are:
Equidistant conic, which keeps parallels evenly spaced along the meridians to preserve a constant distance scale along each meridian, typically the same or similar scale as along the standard parallels.
Albers conic, which adjusts the north-south distance between non-standard parallels to compensate for the east-west stretching or compression, giving an equal-area map.
Lambert conformal conic, which adjusts the north-south distance between non-standard parallels to equal the east-west stretching, giving a conformal map.
Pseudoconic
Bonne, an equal-area projection on which most meridians and parallels appear as curved lines. It has a configurable standard parallel along which there is no distortion.
Werner cordiform, upon which distances are correct from one pole, as well as along all parallels.
American polyconic and other projections in the polyconic projection class.
Azimuthal (projections onto a plane)
Azimuthal projections have the property that directions from a central point are preserved and therefore great circles through the central point are represented by straight lines on the map. These projections also have radial symmetry in the scales and hence in the distortions: map distances from the central point are computed by a function r(d) of the true distance d, independent of the angle; correspondingly, circles with the central point as center are mapped into circles which have as center the central point on the map.
The mapping of radial lines can be visualized by imagining a plane tangent to the Earth, with the central point as tangent point.
The radial scale is r′(d) and the transverse scale r(d)/(R sin(d/R)), where R is the radius of the Earth.
Some azimuthal projections are true perspective projections; that is, they can be constructed mechanically, projecting the surface of the Earth by extending lines from a point of perspective (along an infinite line through the tangent point and the tangent point's antipode) onto the plane:
The gnomonic projection displays great circles as straight lines. It can be constructed by using a point of perspective at the center of the Earth. r(d) = c tan(d/R); so even just a hemisphere is already infinite in extent.
The orthographic projection maps each point on the Earth to the closest point on the plane. It can be constructed from a point of perspective an infinite distance from the tangent point; r(d) = c sin(d/R). It can display up to a hemisphere on a finite circle. Photographs of Earth from far enough away, such as the Moon, approximate this perspective.
Near-sided perspective projection, which simulates the view from space at a finite distance and therefore shows less than a full hemisphere, as used in The Blue Marble (2012).
The General Perspective projection can be constructed by using a point of perspective outside the Earth. Photographs of Earth (such as those from the International Space Station) give this perspective. It is a generalization of near-sided perspective projection, allowing tilt.
The stereographic projection, which is conformal, can be constructed by using the tangent point's antipode as the point of perspective. r(d) = c tan(d/2R); the scale is c/(2R cos²(d/2R)). It can display nearly the entire sphere's surface on a finite circle. The sphere's full surface requires an infinite map.
Other azimuthal projections are not true perspective projections:
Azimuthal equidistant: r(d) = cd; it is used by amateur radio operators to know the direction to point their antennas toward a point and see the distance to it. Distance from the tangent point on the map is proportional to surface distance on the Earth; for the case where the tangent point is the North Pole, see the flag of the United Nations.
Lambert azimuthal equal-area. Distance from the tangent point on the map is proportional to straight-line distance through the Earth: r(d) = c sin(d/2R).
Logarithmic azimuthal is constructed so that each point's distance from the center of the map is the logarithm of its distance from the tangent point on the Earth. r(d) = c ln(d/d₀); locations closer than the distance d₀ from the tangent point are not shown.
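The r(d) expressions above can be compared directly. A minimal sketch, assuming a unit sphere and an arbitrary overall scale c = 1:

```python
import math

R, c = 1.0, 1.0  # unit sphere; c is an arbitrary overall scale constant

# Map radius r as a function of true distance d from the tangent point.
azimuthal = {
    "gnomonic":      lambda d: c * math.tan(d / R),
    "stereographic": lambda d: c * math.tan(d / (2 * R)),
    "orthographic":  lambda d: c * math.sin(d / R),
    "equidistant":   lambda d: c * d,
    "equal-area":    lambda d: c * math.sin(d / (2 * R)),
}

for name, r in azimuthal.items():
    # Sample a quarter of the way to the antipode (d = pi/4 on a unit sphere).
    print(f"{name:14s} r(pi/4) = {r(math.pi / 4):.4f}")
```

Evaluating at larger d makes the qualitative differences obvious: the gnomonic radius diverges at a hemisphere, the orthographic radius peaks there, and the others keep growing smoothly.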
Polyhedral
Polyhedral map projections use a polyhedron to subdivide the globe into faces, and then project each face of the globe onto the polyhedron. The most well-known polyhedral map projection is Buckminster Fuller's Dymaxion map.
Projections by preservation of a metric property
Conformal
Conformal, or orthomorphic, map projections preserve angles locally, implying that they map infinitesimal circles of constant size anywhere on the Earth to infinitesimal circles of varying sizes on the map. In contrast, mappings that are not conformal distort most such small circles into ellipses of distortion. An important consequence of conformality is that relative angles at each point of the map are correct, and the local scale (although varying throughout the map) in every direction around any one point is constant. These are some conformal projections:
Mercator: Rhumb lines are represented by straight segments
Transverse Mercator
Stereographic: Any circle of a sphere, great and small, maps to a circle or straight line.
Roussilhe
Lambert conformal conic
Peirce quincuncial projection
Adams hemisphere-in-a-square projection
Guyou hemisphere-in-a-square projection
Equal-area
Equal-area maps preserve area measure, generally distorting shapes in order to do so. Equal-area maps are also called equivalent or authalic. These are some projections that preserve area:
Albers conic
Boggs eumorphic
Bonne
Bottomley
Collignon
Cylindrical equal-area
Eckert II, IV and VI
Equal Earth
Gall orthographic (also known as Gall–Peters, or Peters, projection)
Goode's homolosine
Hammer
Hobo–Dyer
Lambert azimuthal equal-area
Lambert cylindrical equal-area
Mollweide
Sinusoidal
Strebe 1995
Snyder's equal-area polyhedral projection, used for geodesic grids.
Tobler hyperelliptical
Werner
Equidistant
If the length of the line segment connecting two projected points on the plane is proportional to the geodesic (shortest surface) distance between the two unprojected points on the globe, then we say that distance has been preserved between those two points. An equidistant projection preserves distances from one or two special points to all other points. The special point or points may get stretched into a line or curve segment when projected. In that case, the point on the line or curve segment closest to the point being measured to must be used to measure the distance.
Plate carrée: Distances from the two poles are preserved, in equatorial aspect.
Azimuthal equidistant: Distances from the center and edge are preserved.
Equidistant conic: Distances from the two poles are preserved, in equatorial aspect.
Werner cordiform: Distances from the North Pole are preserved, in equatorial aspect.
Two-point equidistant: Two "control points" are arbitrarily chosen by the map maker; distances from each control point are preserved.
Gnomonic
Great circles are displayed as straight lines:
Gnomonic projection
Retroazimuthal
Direction to a fixed location B (the bearing at the starting location A of the shortest route) corresponds to the direction on the map from A to B:
Littrow—the only conformal retroazimuthal projection
Hammer retroazimuthal—also preserves distance from the central point
Craig retroazimuthal aka Mecca or Qibla—also has vertical meridians
Compromise projections
Compromise projections give up the idea of perfectly preserving metric properties, seeking instead to strike a balance between distortions, or to simply make things look right. Most of these types of projections distort shape in the polar regions more than at the equator. These are some compromise projections:
Robinson
van der Grinten
Miller cylindrical
Winkel Tripel
Buckminster Fuller's Dymaxion
B. J. S. Cahill's Butterfly Map
Kavrayskiy VII projection
Wagner VI projection
Chamberlin trimetric
Oronce Finé's cordiform
AuthaGraph projection
Suitability of projections for application
The mathematics of projection do not permit any particular map projection to be best for everything. Something will always be distorted. Thus, many projections exist to serve the many uses of maps and their vast range of scales.
Modern national mapping systems typically employ a transverse Mercator or close variant for large-scale maps in order to preserve conformality and low variation in scale over small areas. For smaller-scale maps, such as those spanning continents or the entire world, many projections are in common use according to their fitness for the purpose, such as Winkel tripel, Robinson and Mollweide. Reference maps of the world often appear on compromise projections. Due to distortions inherent in any map of the world, the choice of projection becomes largely one of aesthetics.
Thematic maps normally require an equal area projection so that phenomena per unit area are shown in correct proportion.
However, representing area ratios correctly necessarily distorts shapes more than many maps that are not equal-area.
The Mercator projection, developed for navigational purposes, has often been used in world maps where other projections would have been more appropriate. This problem has long been recognized even outside professional circles. For example, a 1943 New York Times editorial states:
A controversy in the 1980s over the Peters map motivated the American Cartographic Association (now the Cartography and Geographic Information Society) to produce a series of booklets (including Which Map Is Best) designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution recommending against using any rectangular projection (including Mercator and Gall–Peters) for reference maps of the world.
See also
References
Citations
Sources
Fran Evanisko, American River College, lectures for Geography 20: "Cartographic Design for GIS", Fall 2002
Map Projections—PDF versions of numerous projections, created and released into the Public Domain by Paul B. Anderson ... member of the International Cartographic Association's Commission on Map Projections
External links
An Album of Map Projections, U.S. Geological Survey Professional Paper 1453, by John P. Snyder (USGS) and Philip M. Voxland (U. Minnesota), 1989.
A Cornucopia of Map Projections, a visualization of distortion on a vast array of map projections in a single image.
G.Projector, free software can render many projections (NASA GISS).
Color images of map projections and distortion (Mapthematics.com).
Geometric aspects of mapping: map projection (KartoWeb.itc.nl).
Java world map projections, Henry Bottomley (SE16.info).
Map Projections (MathWorld).
MapRef: The Internet Collection of MapProjections and Reference Systems in Europe
PROJ.4 – Cartographic Projections Library.
Projection Reference Table of examples and properties of all common projections (RadicalCartography.net).
, Melita Kennedy (Esri).
World Map Projections, Stephen Wolfram based on work by Yu-Sung Chang (Wolfram Demonstrations Project).
Compare Map Projections
"the true size" page show size of countries without distortion from Mercator projection
Cartography
Infographics
Descriptive geometry
Geodesy | Map projection | [
"Mathematics"
] | 6,685 | [
"Map projections",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
51,791 | https://en.wikipedia.org/wiki/E.164 | E.164 is an international standard (ITU-T Recommendation), titled The international public telecommunication numbering plan, that defines a numbering plan for the worldwide public switched telephone network (PSTN) and some other data networks.
E.164 defines a general format for international telephone numbers. Plan-conforming telephone numbers are limited to digits only and to a maximum of fifteen digits. The specification divides the digit string into a country code of one to three digits and a subscriber telephone number of a maximum of twelve digits.
Alternative formats (with area codes and country specific numbers) are available. Any country-specific international call prefixes are not contained in the specification.
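A minimal validation sketch of the general format just described — digits only, at most fifteen of them, written in globalized form with a leading plus sign. The regular expression is an illustration of the shape of the rule, not a normative or country-aware check:

```python
import re

# E.164 globalized format: '+' then up to 15 digits, first digit non-zero.
# This checks only the general shape, not any country-specific numbering plan.
E164_RE = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number: str) -> bool:
    return bool(E164_RE.fullmatch(number))

print(is_e164("+14155552671"))        # True: plausible globalized number
print(is_e164("+0123"))               # False: country codes do not start with 0
print(is_e164("+12345678901234567"))  # False: more than 15 digits
```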
The title of the original version and first revision of the E.164 standard was Numbering Plan for the ISDN Era.
Recommendations
E.163
E.163 was the former ITU-T recommendation for describing telephone numbers for the public switched telephone network (PSTN). In the United States, this was formerly referred to as a directory number. E.163 was withdrawn, and some recommendations were incorporated into revision 1 of E.164 in 1997.
E.164.1
This recommendation describes the procedures and criteria for the reservation, assignment, and reclamation of E.164 country codes and associated identification code (IC) assignments. The criteria and procedures are provided as a basis for the effective and efficient utilization of the available E.164 numbering resources.
E.164.2
This recommendation contains the criteria and procedures for an applicant to be temporarily assigned a three-digit identification code within the shared E.164 country code +991 for the purpose of conducting an international non-commercial trial.
E.164.3
This recommendation describes the principles, criteria, and procedures for the assignment and reclamation of resources within a shared E.164 country code for groups of countries. These shared country codes will coexist with all other E.164-based country codes assigned by the ITU. The resource of the shared country code consists of a country code and a group identification code (CC + GIC) and provides the capability for a group of countries to provide telecommunication services within the group. The Secretariat of the ITU Standardization Sector (ITU-T), the Telecommunication Standardization Bureau (TSB) is responsible for the assignment of the CC + GIC.
Numbering formats
The E.164 recommendation provides the telephone number structure and functionality for five categories of telephone numbers used in international public telecommunications.
For each of the categories, it details the components of the numbering structure and the digit analysis required for successful routing of calls. Annex A provides additional information on the structure and function of E.164 numbers. Annex B provides information on network identification, service parameters, calling/connected line identity, dialing procedures, and addressing for Geographic-based ISDN calls. Specific E.164-based applications which differ in usage are defined in separate recommendations.
The number categories are all based on a fifteen-digit numbering space. Before 1997, only twelve digits were allowed. The definition does not include any international call prefixes, necessary for a call to reach international circuits from inside the country of call origination.
Geographic areas
Global services
Figure 2
Networks
Groups of countries
Trials
Uses of E.164 numbers
E.164 numbers were originally defined for use in the worldwide public switched telephone network (PSTN). The early PSTN collected routing digits from users (e.g. on a dial pad), signaled those digits to each telephony switch, and used the numbers to determine how to ultimately reach the called party.
ITU-T E.123 entitled Notation for national and international telephone numbers, e-mail addresses and web addresses provides guidance when printing E.164 telephone numbers. This format includes the recommendation of prefixing international telephone numbers with a plus sign (+) and using only spaces for digit grouping.
The presentation of a telephone number with the plus sign (+) indicates that the number should be dialed with an international calling prefix, in place of the plus sign. The number is presented starting with the country calling code. This is called the globalized format of an E.164 number, and is defined by the Internet Engineering Task Force. The international calling prefix is a trunk code used to reach an international circuit in the country of call origination.
DNS Mapping of E.164 numbers
Some national telephone administrations and telephone companies have implemented an Internet-based database for their numbering spaces. E.164 numbers may be registered in the Domain Name System (DNS) of the Internet, in which the second-level domain e164.arpa has been reserved for telephone number mapping (ENUM). In this system, a telephone number is mapped into a domain name by using a reverse sequence of subdomains, one for each digit, under e164.arpa (a worked sketch follows below). When a number is mapped, a DNS query may be used to locate the service facilities on the Internet that accept and process telephone calls to the owner of record of the number, using, for example, the Session Initiation Protocol (SIP), a call-signaling VoIP protocol whose SIP addresses are similar in format (user@domain...) to e-mail addresses. This allows a direct, end-to-end Internet connection without passing through the public switched telephone network.
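A sketch of the digit-reversal mapping described above; the example number is hypothetical and the helper name is invented for illustration:

```python
def enum_domain(e164_number: str) -> str:
    """Map an E.164 number to its ENUM domain under e164.arpa:
    keep only the digits, reverse them, and dot-separate each one."""
    digits = [ch for ch in e164_number if ch.isdigit()]  # drop '+' and spaces
    return ".".join(reversed(digits)) + ".e164.arpa"

# Hypothetical example number, for illustration only:
print(enum_domain("+1 555 123 4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```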
See also
Carrier of Record
E.123
List of country calling codes
External Sources
ITU National Number Plans Reference
References
External links
Text of the Recommendation, Amd. 1 and supplement 6 (E.164)
List of ITU-T Recommendation E.164 assigned country codes as of 15 December 2016
List of ITU-T Recommendation E.164 Dialling Procedures as of 15 December 2011
Numbering plan for the international telephone service (E.163) (incorporated in E.164)
World Telephone Numbering Guide
Telephone numbers
Identifiers
ITU-T E Series Recommendations | E.164 | [
"Mathematics"
] | 1,187 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
51,834 | https://en.wikipedia.org/wiki/Geneva%20Protocol | The Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or other Gases, and of Bacteriological Methods of Warfare, usually called the Geneva Protocol, is a treaty prohibiting the use of chemical and biological weapons in international armed conflicts. It was signed at Geneva on 17 June 1925 and entered into force on 8 February 1928. It was registered in League of Nations Treaty Series on 7 September 1929. The Geneva Protocol is a protocol to the Convention for the Supervision of the International Trade in Arms and Ammunition and in Implements of War signed on the same date, and followed the Hague Conventions of 1899 and 1907.
It prohibits the use of "asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices" and "bacteriological methods of warfare". This is now understood to be a general prohibition on chemical weapons and biological weapons between state parties, but has nothing to say about production, storage or transfer. Later treaties did cover these aspects – the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).
A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying to other parties and that these obligations would cease to apply if the prohibited weapons were used against them.
Negotiation history
In the Hague Conventions of 1899 and 1907, the use of dangerous chemical agents was outlawed. In spite of this, the First World War saw large-scale chemical warfare. France used tear gas in 1914, but the first large-scale successful deployment of chemical weapons was by the German Empire in Ypres, Belgium in 1915, when chlorine gas was released as part of a German attack at the Battle of Gravenstafel. Following this, a chemical arms race began, with the United Kingdom, Russia, Austria-Hungary, the United States, and Italy joining France and Germany in the use of chemical weapons.
This resulted in the development of a range of horrific chemicals affecting lungs, skin, or eyes. Some, like hydrogen cyanide, were intended to be lethal on the battlefield, and efficient methods of deploying agents were invented. At least 124,000 tons were produced during the war. In 1918, about one grenade out of three was filled with dangerous chemical agents. Around 500,000 to 1.3 million casualties of the conflict were attributed to the use of gas, and the psychological effect on troops may have been much greater still. A few thousand civilians also became casualties as collateral damage or due to production accidents.
The Treaty of Versailles included some provisions that banned Germany from either manufacturing or importing chemical weapons. Similar treaties banned the First Austrian Republic, the Kingdom of Bulgaria, and the Kingdom of Hungary from chemical weapons, all belonging to the losing side, the Central Powers. The Russian Bolsheviks and Britain continued to use chemical weapons in the Russian Civil War and possibly in the Middle East in 1920.
Three years after World War I, the Allies wanted to reaffirm the Treaty of Versailles, and in 1922 the United States introduced the Treaty relating to the Use of Submarines and Noxious Gases in Warfare at the Washington Naval Conference. Four of the war victors, the United States, the United Kingdom, the Kingdom of Italy and the Empire of Japan, gave consent for ratification, but it failed to enter into force as the French Third Republic objected to the submarine provisions of the treaty.
At the 1925 Geneva Conference for the Supervision of the International Traffic in Arms the French suggested a protocol for non-use of poisonous gases. The Second Polish Republic suggested the addition of bacteriological weapons. It was signed on 17 June.
Historical assessment
Eric Croddy, assessing the Protocol in 2005, took the view that the historic record showed it had been largely ineffectual. Specifically it does not prohibit:
use against non-ratifying parties
retaliation using such weapons, effectively making it a no-first-use agreement
use within a state's own borders in a civil conflict
research and development of such weapons, or stockpiling them
In light of these shortcomings, Jack Beard notes that "the Protocol (...) resulted in a legal framework that allowed states to conduct [biological weapons] research, develop new biological weapons, and ultimately engage in [biological weapons] arms races".
As such, the use of chemical weapons inside a nation's own territory against its citizens or subjects did not breach the Geneva Protocol: examples include its use by Spain in the Rif War until 1927, by Japan against Seediq indigenous rebels in Taiwan (then part of the Japanese colonial empire) during the 1930 Musha Incident, by Iraq against ethnic Kurdish civilians in the 1988 attack on Halabja during the Iran–Iraq War, and by Syria or Syrian opposition forces during the Syrian civil war, as well as use of such agents on Black Lives Matter protesters in the United States.
Despite the U.S. having been a proponent of the protocol, the U.S. military and American Chemical Society lobbied against it, causing the U.S. Senate not to ratify the protocol until 1975, the same year when the United States ratified the Biological Weapons Convention.
Violations
Several state parties have deployed chemical weapons for combat in spite of the treaty. Italy used mustard gas against the Ethiopian Empire in the Second Italo-Ethiopian War. In World War II, Germany employed chemical weapons in combat on several occasions along the Black Sea, notably in Sevastopol, where they used toxic smoke to force Russian resistance fighters out of caverns below the city. They also used asphyxiating gas in the catacombs of Odesa in November 1941, following their capture of the city, and in late May 1942 during the Battle of the Kerch Peninsula in eastern Crimea, perpetrated by the Wehrmacht's Chemical Forces and organized by a special detail of SS troops with the help of a field engineer battalion. After the battle in mid-May 1942, the Germans gassed and killed almost 3,000 of the besieged and non-evacuated Red Army soldiers and Soviet civilians hiding in a series of caves and tunnels in the nearby Adzhimushkay quarry.
During the 1980-1988 Iran-Iraq War, Iraq is known to have employed a variety of chemical weapons against Iranian forces. Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the war.
Subsequent interpretation of the protocol
In 1966, United Nations General Assembly resolution 2162B called, without any dissent, for all states to strictly observe the protocol. In 1969, United Nations General Assembly resolution 2603 (XXIV) declared that the prohibitions on the use of chemical and biological weapons in international armed conflicts, as embodied in the protocol (though restated in a more general form), were generally recognized rules of international law. Following this, there was discussion of whether the main elements of the protocol form part of customary international law, and this is now widely accepted to be the case.
There have been differing interpretations over whether the protocol covers the use of harassing agents, such as adamsite and tear gas, and defoliants and herbicides, such as Agent Orange, in warfare. The 1977 Environmental Modification Convention prohibits the military use of environmental modification techniques having widespread, long-lasting or severe effects. Many states do not regard this as a complete ban on the use of herbicides in warfare, but it does require case-by-case consideration. The 1993 Chemical Weapons Convention effectively banned riot control agents from being used as a method of warfare, though still permitting it for riot control.
In recent times, the protocol has been interpreted to cover non-international armed conflicts as well as international ones. In 1995, an appellate chamber in the International Criminal Tribunal for the former Yugoslavia stated that "there had undisputedly emerged a general consensus in the international community on the principle that the use of chemical weapons is also prohibited in internal armed conflicts." In 2005, the International Committee of the Red Cross concluded that customary international law includes a ban on the use of chemical weapons in internal as well as international conflicts.
However, such views have drawn criticism from legal authors, who note that most chemical arms control agreements stem from the context of international conflicts. Furthermore, the application of customary international law to banning chemical warfare in non-international conflicts fails to meet two requirements: state practice and opinio juris. Jillian Blake and Aqsa Mahmud cited the periodic use of chemical weapons in non-international conflicts since the end of WWI (as stated above) as well as the lack of existing international humanitarian law (such as the Geneva Conventions) and national legislation and manuals prohibiting their use in such conflicts. Anne Lorenzat stated the 2005 ICRC study was rooted in "political and operational issues rather than legal ones".
State parties
To become party to the Protocol, states must deposit an instrument with the government of France (the depositary power). Thirty-eight states originally signed the Protocol. France was the first signatory to ratify the Protocol on 10 May 1926. El Salvador, the final signatory to ratify the Protocol, did so on 26 February 2008. As of April 2021, 146 states have ratified, acceded to, or succeeded to the Protocol, most recently Colombia on 24 November 2015.
Reservations
A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying with respect to other parties to the Protocol and/or that these obligations would cease to apply with respect to any state, or its allies, which used the prohibited weapons. Several Arab states also declared that their ratification did not constitute recognition of, or diplomatic relations with, Israel, or that the provision of the Protocol were not binding with respect to Israel.
Generally, reservations not only modify treaty provisions for the reserving party, but also symmetrically modify the provisions for previously ratifying parties in dealing with the reserving party. Subsequently, numerous states have withdrawn their reservations, including the former Czechoslovakia in 1990 prior to its dissolution, or the Russian reservation on biological weapons that "preserved the right to retaliate in kind if attacked" with them, which was dissolved by President Yeltsin.
According to the Vienna Convention on Succession of States in respect of Treaties, states which succeed to a treaty after gaining independence from a state party "shall be considered as maintaining any reservation to that treaty which was applicable at the date of the succession of States in respect of the territory to which the succession of States relates unless, when making the notification of succession, it expresses a contrary intention or formulates a reservation which relates to the same subject matter as that reservation." While some states have explicitly either retained or renounced their reservations inherited on succession, states which have not clarified their position on their inherited reservations are listed as "implicit" reservations.
Non-signatory states
The remaining UN member states and UN observers that have not acceded or succeeded to the Protocol are:
Chemical weapons prohibitions
References
Further reading
Bunn, George. "Gas and germ warfare: international legal history and present status." Proceedings of the National Academy of Sciences of the United States of America 65.1 (1970): 253+. online
Webster, Andrew. "Making Disarmament Work: The implementation of the international disarmament provisions in the League of Nations Covenant, 1919–1925." Diplomacy and Statecraft 16.3 (2005): 551–569.
External links
The text of the protocol
Weapons of War: Poison Gas
Biological warfare
Chemical warfare
Chemical weapons demilitarization
Arms control treaties
Human rights instruments
Hague Conventions of 1899 and 1907
Treaties concluded in 1925
Treaties entered into force in 1928
Treaties of the Democratic Republic of Afghanistan
Treaties of the People's Socialist Republic of Albania
Treaties of Algeria
Treaties of the People's Republic of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Australia
Treaties of the First Austrian Republic
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belgium
Treaties of the People's Republic of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of the military dictatorship in Brazil
Treaties of the Kingdom of Bulgaria
Treaties of Burkina Faso
Treaties of the People's Republic of Kampuchea
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chile
Treaties of the Republic of China (1912–1949)
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Denmark
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of the Kingdom of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Estonia
Treaties of the Ethiopian Empire
Treaties of Fiji
Treaties of Finland
Treaties of the French Third Republic
Treaties of the Gambia
Treaties of the Weimar Republic
Treaties of Ghana
Treaties of the Kingdom of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea-Bissau
Treaties of the Holy See
Treaties of the Hungarian People's Republic
Treaties of Iceland
Treaties of British India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Mandatory Iraq
Treaties of Ireland
Treaties of Israel
Treaties of the Kingdom of Italy (1861–1946)
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kenya
Treaties of North Korea
Treaties of South Korea
Treaties of Kuwait
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Republic
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Malta
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Morocco
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Pakistan
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of the Second Polish Republic
Treaties of the Estado Novo (Portugal)
Treaties of Qatar
Treaties of the Kingdom of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of the Union of South Africa
Treaties of the Soviet Union
Treaties of Spain under the Restoration
Treaties of the Dominion of Ceylon
Treaties of the Democratic Republic of the Sudan
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tanzania
Treaties of Thailand
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Venezuela
Treaties of Vietnam
Treaties of the Yemen Arab Republic
Treaties of South Yemen
Treaties extended to Curaçao and Dependencies
Treaties extended to Greenland
Treaties extended to the Faroe Islands
Treaties extended to the Dutch East Indies
Treaties extended to Surinam (Dutch colony)
Treaties concluded in Geneva | Geneva Protocol | [
"Chemistry",
"Biology"
] | 2,989 | [
"Biological warfare",
"Chemical weapons demilitarization",
"nan",
"Chemical weapons"
] |
51,836 | https://en.wikipedia.org/wiki/Chemical%20Weapons%20Convention | The Chemical Weapons Convention (CWC), officially the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction, is an arms control treaty administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), an intergovernmental organization based in The Hague, The Netherlands. The treaty entered into force on 29 April 1997. It prohibits the use of chemical weapons, and the large-scale development, production, stockpiling, or transfer of chemical weapons or their precursors, except for very limited purposes (research, medical, pharmaceutical or protective). The main obligation of member states under the convention is to effect this prohibition, as well as the destruction of all current chemical weapons. All destruction activities must take place under OPCW verification.
193 states have become parties to the CWC and accept its obligations. Israel has signed but not ratified the agreement, while three other UN member states (Egypt, North Korea and South Sudan) have neither signed nor acceded to the treaty. Most recently, the State of Palestine deposited its instrument of accession to the CWC on 17 May 2018. In September 2013, Syria acceded to the convention as part of an agreement for the destruction of Syria's chemical weapons.
As of February 2021, 98.39% of the world's declared chemical weapons stockpiles had been destroyed. The convention has provisions for systematic evaluation of chemical production facilities, as well as for investigations of allegations of use and production of chemical weapons based on the intelligence of other state parties.
Some chemicals which have been used extensively in warfare but have numerous large-scale industrial uses (such as phosgene) are highly regulated; however, certain notable exceptions exist. Chlorine gas is highly toxic, but being a pure element and widely used for peaceful purposes, is not officially listed as a chemical weapon. Certain state powers (e.g., the former Assad regime in Syria) regularly manufactured and deployed such chemicals in combat munitions. Although these chemicals are not specifically listed as controlled by the CWC, the use of any toxic chemical as a weapon (when used to produce fatalities solely or mainly through its toxic action) is in and of itself forbidden by the treaty. Other chemicals, such as white phosphorus, are highly toxic but are legal under the CWC when they are used by military forces for reasons other than their toxicity.
History
The CWC augments the Geneva Protocol of 1925, which bans the use of chemical and biological weapons in international armed conflicts, but not their development or possession. The CWC also includes extensive verification measures such as on-site inspections, in stark contrast to the 1975 Biological Weapons Convention (BWC), which lacks a verification regime.
After several changes of name and composition, the Eighteen Nation Committee on Disarmament (ENDC) evolved into the Conference on Disarmament (CD) in 1984. On 3 September 1992 the CD submitted to the U.N. General Assembly its annual report, which contained the text of the Chemical Weapons Convention. The General Assembly approved the convention on 30 November 1992, and the U.N. Secretary-General then opened the convention for signature in Paris on 13 January 1993. The CWC remained open for signature until its entry into force on 29 April 1997, 180 days after the deposit at the UN by Hungary of the 65th instrument of ratification.
Organisation for the Prohibition of Chemical Weapons (OPCW)
The convention is administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), which acts as the legal platform for specification of the CWC provisions. The Conference of the States Parties is mandated to change the CWC and pass regulations on the implementation of CWC requirements. The Technical Secretariat of the organization conducts inspections to ensure compliance of member states. These inspections target destruction facilities (where constant monitoring takes place during destruction), chemical weapons production facilities which have been dismantled or converted for civil use, as well as inspections of the chemical industry. The Secretariat may furthermore conduct "investigations of alleged use" of chemical weapons and give assistance after use of chemical weapons.
The 2013 Nobel Peace Prize was awarded to the organization because it had, with the Chemical Weapons Convention, "defined the use of chemical weapons as a taboo under international law" according to Thorbjørn Jagland, Chairman of the Norwegian Nobel Committee.
Key points of the Convention
Prohibition of production and use of chemical weapons
Destruction (or monitored conversion to other functions) of chemical weapons production facilities
Destruction of all chemical weapons (including chemical weapons abandoned outside the state parties territory)
Assistance between State Parties and the OPCW in the case of use of chemical weapons
An OPCW inspection regime for the production of chemicals which might be converted to chemical weapons
International cooperation in the peaceful use of chemistry in relevant areas
Controlled substances
The convention distinguishes three classes of controlled substance, chemicals that can either be used as weapons themselves or used in the manufacture of weapons. The classification is based on the quantities of the substance produced commercially for legitimate purposes. Each class is split into Part A, which are chemicals that can be used directly as weapons, and Part B, which are chemicals useful in the manufacture of chemical weapons. Separate from the precursors, the convention defines toxic chemicals as "[a]ny chemical which through its chemical action on life processes can cause death, temporary incapacitation or permanent harm to humans or animals. This includes all such chemicals, regardless of their origin or of their method of production, and regardless of whether they are produced in facilities, in munitions or elsewhere."
Schedule 1 chemicals have few, or no uses outside chemical weapons. These may be produced or used for research, medical, pharmaceutical or chemical weapon defence testing purposes but production at sites producing more than 100 grams per year must be declared to the OPCW. A country is limited to possessing a maximum of 1 tonne of these materials. Examples are sulfur mustard and nerve agents, and substances which are solely used as precursor chemicals in their manufacture. A few of these chemicals have very small scale non-military applications, for example, milligram quantities of nitrogen mustard are used to treat certain cancers.
Schedule 2 chemicals have legitimate small-scale applications. Manufacture must be declared and there are restrictions on export to countries that are not CWC signatories. An example is thiodiglycol which can be used in the manufacture of mustard agents, but is also used as a solvent in inks.
Schedule 3 chemicals have large-scale uses apart from chemical weapons. Plants which manufacture more than 30 tonnes per year must be declared and can be inspected, and there are restrictions on export to countries which are not CWC signatories. Examples of these substances are phosgene (the most lethal chemical weapon employed in WWI), which has been used as a chemical weapon but which is also a precursor in the manufacture of many legitimate organic compounds (e.g. pharmaceutical agents and many common pesticides), and triethanolamine, used in the manufacture of nitrogen mustard but also commonly used in toiletries and detergents.
Many of the chemicals named in the schedules are simply examples from a wider class, defined with Markush-like language. For example, all chemicals in the class "O-Alkyl (≤C10, incl. cycloalkyl) alkyl (Me, Et, n-Pr or i-Pr)-phosphonofluoridates" are controlled, despite only a few named examples being given, such as soman.
This can make it more challenging for companies to identify whether chemicals they handle are subject to the CWC, especially Schedule 2 and 3 chemicals (such as alkylphosphorus chemicals). For example, Amgard 1045 is a flame retardant, but it falls within Schedule 2B as part of the alkylphosphorus chemical class. This approach is also used in controlled drug legislation in many countries; such definitions are often termed "class-wide controls" or "generic statements".
Due to the added complexity these statements bring to identifying regulated chemicals, many companies carry out these assessments computationally, examining a chemical's structure with in silico tools that compare it to the legislative statements, either using in-house systems maintained by the company or commercial compliance software.
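As an illustration of such in silico screening, here is a minimal sketch using the open-source RDKit toolkit. The SMARTS pattern is a deliberately simplified stand-in for the O-alkyl alkylphosphonofluoridate class quoted above (it ignores the chain-length and alkyl-group restrictions in the schedule text), so it should be read as a sketch of the approach rather than a legally accurate encoding:

```python
from rdkit import Chem

# Illustrative, simplified pattern: an alkyl group and a fluorine on a
# phosphoryl centre that also bears an O-alkyl group. The real schedule
# entry further constrains the alkyl group (Me, Et, n-Pr or i-Pr) and
# the O-alkyl chain length (<= C10); this pattern does not check those.
PHOSPHONOFLUORIDATE = Chem.MolFromSmarts("[CX4]P(=O)(F)O[CX4]")

def flagged(smiles: str) -> bool:
    """Return True if a structure matches the simplified class pattern."""
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and mol.HasSubstructMatch(PHOSPHONOFLUORIDATE)

print(flagged("CC(C)OP(C)(=O)F"))         # sarin        -> True
print(flagged("CC(C)(C)C(C)OP(C)(=O)F"))  # soman        -> True
print(flagged("OCCSCCO"))                 # thiodiglycol -> False
```

Real compliance tools layer many such patterns together and combine them with the quantitative declaration thresholds in the schedules.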
A treaty party may declare a "single small-scale facility" that produces up to 1 tonne of Schedule 1 chemicals for research, medical, pharmaceutical or protective purposes each year, and also another facility may produce 10 kg per year for protective testing purposes. An unlimited number of other facilities may produce Schedule 1 chemicals, subject to a total 10 kg annual limit, for research, medical or pharmaceutical purposes, but any facility producing more than 100 grams must be declared.
The treaty also deals with carbon compounds, called "discrete organic chemicals" in the treaty, the majority of which exhibit moderate to high direct toxicity or can be readily converted into compounds with toxicity sufficient for practical use as a chemical weapon. These are any carbon compounds apart from long-chain polymers, oxides, sulfides and metal carbonates, such as organophosphates. The OPCW must be informed of, and can inspect, any plant producing (or expecting to produce) more than 200 tonnes per year, or 30 tonnes if the chemical contains phosphorus, sulfur or fluorine, unless the plant solely produces explosives or hydrocarbons.
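The declaration thresholds described above can be summarized as a small rule table. The sketch below encodes only the production-related limits named in this section (100 g/year for Schedule 1 sites, any manufacture for Schedule 2, 30 tonnes/year for Schedule 3 plants, and 200 tonnes/year, or 30 tonnes with phosphorus, sulfur or fluorine, for discrete organic chemicals); it is a reading aid for the text, not a compliance tool:

```python
def must_declare(kind: str, tonnes_per_year: float,
                 contains_psf: bool = False) -> bool:
    """Production-declaration thresholds as described in this section.

    kind is 'schedule1', 'schedule2', 'schedule3' or 'doc' (discrete
    organic chemical); contains_psf flags phosphorus, sulfur or
    fluorine content, which matters only for DOCs.
    """
    if kind == "schedule1":
        return tonnes_per_year > 100e-6   # more than 100 g per year
    if kind == "schedule2":
        return tonnes_per_year > 0        # any manufacture is declared
    if kind == "schedule3":
        return tonnes_per_year > 30
    if kind == "doc":
        return tonnes_per_year > (30 if contains_psf else 200)
    raise ValueError(f"unknown kind: {kind}")

print(must_declare("schedule1", 0.0005))           # 500 g/year -> True
print(must_declare("doc", 50, contains_psf=True))  # over 30 t  -> True
```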
Category definitions
Chemical weapons are divided into three categories:
Category 1 - based on Schedule 1 substances
Category 2 - based on non-Schedule 1 substances
Category 3 - devices and equipment designed to use chemical weapons, without the substances themselves
Member states
Before the CWC came into force in 1997, 165 states signed the convention, allowing them to ratify the agreement after obtaining domestic approval. Following the treaty's entry into force, it was closed for signature and the only method for non-signatory states to become a party was through accession. As of March 2021, 193 states, representing over 98 percent of the world's population, are party to the CWC. Of the four United Nations member states that are not parties to the treaty, Israel has signed but not ratified the treaty, while Egypt, North Korea, and South Sudan have neither signed nor acceded to the convention. Taiwan, though not a member state, has stated on 27 August 2002 that it fully complies with the treaty.
Key organizations of member states
Member states are represented at the OPCW by their Permanent Representative. This function is generally combined with the function of Ambassador. For the preparation of OPCW inspections and preparation of declarations, member states have to constitute a National Authority.
World stockpile of chemical weapons
A total of 72,304 metric tonnes of chemical agent and 97 production facilities have been declared to the OPCW.
Treaty deadlines
The treaty set up several steps with deadlines toward complete destruction of chemical weapons, with a procedure for requesting deadline extensions. No country reached total elimination by the original treaty date although several have finished under allowed extensions.
Progress of destruction
At the end of 2019, 70,545 of 72,304 metric tonnes (about 97.6%) of chemical agent had been verifiably destroyed. More than 57% (4.97 million) of chemical munitions and containers had been destroyed.
Seven state parties have completed the destruction of their declared stockpiles: Albania, India, Iraq, Libya, Syria, the United States, and an unspecified state party (believed to be South Korea). Russia also completed the destruction of its declared stockpile. According to the US Arms Control Association, the poisoning of Sergei and Yulia Skripal in 2018 and the poisoning of Alexei Navalny in 2020 indicated that Russia maintained an illicit chemical weapons program.
Japan and China in October 2010 began the destruction of World War II era chemical weapons abandoned by Japan in China by means of mobile destruction units and reported destruction of 35,203 chemical weapons (75% of the Nanjing stockpile).
Iraqi stockpile
The U.N. Security Council ordered the dismantling of Iraq's chemical weapon stockpile in 1991. By 1998, UNSCOM inspectors had accounted for the destruction of 88,000 filled and unfilled chemical munitions, over 690 metric tons of weaponized and bulk chemical agents, approximately 4,000 tonnes of precursor chemicals, and 980 pieces of key production equipment. The UNSCOM inspectors left in 1998.
In 2009, before Iraq joined the CWC, the OPCW reported that the United States military had destroyed almost 5,000 old chemical weapons in open-air detonations since 2004. These weapons, produced before the 1991 Gulf War, contained sarin and mustard agents but were so badly corroded that they could not have been used as originally intended.
When Iraq joined the CWC in 2009, it declared "two bunkers with filled and unfilled chemical weapons munitions, some precursors, as well as five former chemical weapons production facilities" according to OPCW Director General Rogelio Pfirter. The bunker entrances were sealed with 1.5 meters of reinforced concrete in 1994 under UNSCOM supervision. As of 2012, the plan to destroy the chemical weapons was still being developed, in the face of significant difficulties. In 2014, ISIS took control of the site.
On 13 March 2018, the Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW), Ambassador Ahmet Üzümcü, congratulated the Government of Iraq on the completion of the destruction of the country's chemical weapons remnants.
Syrian destruction
Following the August 2013 Ghouta chemical attack, Syria, which had long been suspected of possessing chemical weapons, acknowledged them in September 2013 and agreed to put them under international supervision. On 14 September Syria deposited its instrument of accession to the CWC with the United Nations as the depositary and agreed to its provisional application pending entry into force effective 14 October. An accelerated destruction schedule was devised by Russia and the United States on 14 September, and was endorsed by United Nations Security Council Resolution 2118 and the OPCW Executive Council Decision EC-M-33/DEC.1. Their deadline for destruction was the first half of 2014. Syria gave the OPCW an inventory of its chemical weapons arsenal and began its destruction in October 2013, two weeks before the convention formally entered into force for Syria, while applying it provisionally. All declared Category 1 materials were destroyed by August 2014. However, the Khan Shaykhun chemical attack in April 2017 indicated that undeclared stockpiles probably remained in the country. A chemical attack on Douma occurred on 7 April 2018 that killed at least 49 civilians with scores injured, and which has been blamed on the Assad government.
Controversy arose in November 2019 over the OPCW's finding on the Douma chemical weapons attack when Wikileaks published emails by an OPCW staff member saying a report on this incident "misrepresents the facts" and contains "unintended bias". The OPCW staff member questioned the report's finding that OPCW's inspectors had "sufficient evidence at this time to determine that chlorine, or another reactive chlorine-containing chemical, was likely released from cylinders". The staff member alleged this finding was "highly misleading and not supported by the facts" and said he would attach his own differing observations if this version of the report was released. On 25 November 2019, OPCW Director General Fernando Arias, in a speech to the OPCW's annual conference in The Hague, defended the Organization's report on the Douma incident, stating "While some of these diverse views continue to circulate in some public discussion forums, I would like to reiterate that I stand by the independent, professional conclusion" of the probe.
Financial support for destruction
Financial support for the Albanian and Libyan stockpile destruction programmes was provided by the United States. Russia received support from a number of countries, including the United States, the United Kingdom, Germany, the Netherlands, Italy and Canada; with some $2 billion given by 2004. Costs for Albania's program were approximately US$48 million. The United States has spent $20 billion and expected to spend a further $40 billion.
Known chemical weapons production facilities
Fourteen states parties declared chemical weapons production facilities (CWPFs), one of them a non-disclosed state party (referred to as "A State Party" in OPCW communications; said to be South Korea).
Currently all 97 declared production facilities have been deactivated and certified as either destroyed (74) or converted (23) to civilian use.
See also
Related international law
Australia Group of countries and the European Commission that helps member nations identify exports which need to be controlled so as not to contribute to the spread of chemical and biological weapons
1990 US-Soviet Arms Control Agreement
General-purpose criterion, a concept in international law that broadly governs international agreements with respect to chemical weapons
Geneva Protocol, a treaty prohibiting the use of chemical and biological weapons among signatory states in international armed conflicts
Worldwide treaties for other types of weapons of mass destruction
Biological Weapons Convention (BWC) (states parties)
Nuclear Non-Proliferation Treaty (NPT) (states parties)
Treaty on the Prohibition of Nuclear Weapons (TPNW) (states parties)
Chemical weapons
Chemical warfare
Weapons of mass destruction
Tear gas
Related remembrance day
Day of Remembrance for all Victims of Chemical Warfare
References
External links
Full text of the Chemical Weapons Convention, OPCW
Online text of the Chemical Weapons Convention: Articles, Annexes including Chemical Schedules, OPCW
Fact Sheets , OPCW
Chemical Weapons Convention: Ratifying Countries, OPCW
Chemical Weapons Convention Website, United States
The Chemical Weapons Convention at a Glance , Arms Control Association
Chemical Warfare Chemicals and Precursors, Chemlink Pty Ltd, Australia
Introductory note by Michael Bothe, procedural history note and audiovisual material on the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction in the Historic Archives of the United Nations Audiovisual Library of International Law
Lecture by Santiago Oñate Laborde entitled The Chemical Weapons Convention: an Overview in the Lecture Series of the United Nations Audiovisual Library of International Law
Arms control treaties
Chemical warfare
Human rights instruments
Chemical weapons demilitarization
Non-proliferation treaties
Treaties concluded in 1993
Treaties entered into force in 1997
Treaties of the Afghan Transitional Administration
Treaties of Albania
Treaties of Algeria
Treaties of Andorra
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Brunei
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Myanmar
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of the Democratic Republic of the Congo
Treaties of the Cook Islands
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Denmark
Treaties of the Dominican Republic
Treaties of Djibouti
Treaties of Dominica
Treaties of Ecuador
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of Ethiopia
Treaties of the Federated States of Micronesia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Haiti
Treaties of the Holy See
Treaties of Honduras
Treaties of Hungary
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Iraq
Treaties of Ireland
Treaties of Italy
Treaties of Jamaica
Treaties of Jordan
Treaties of Japan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of North Macedonia
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Namibia
Treaties of Nauru
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Niue
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Samoa
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of Somalia
Treaties of South Africa
Treaties of South Korea
Treaties of Spain
Treaties of Sri Lanka
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Tanzania
Treaties of Thailand
Treaties of Timor-Leste
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Tuvalu
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties establishing intergovernmental organizations
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties extended to Guernsey
Treaties extended to Jersey
Treaties extended to the Isle of Man
Treaties extended to Anguilla
Treaties extended to Bermuda
Treaties extended to the British Antarctic Territory
Treaties extended to the British Indian Ocean Territory
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to South Georgia and the South Sandwich Islands
Treaties extended to Akrotiri and Dhekelia
Treaties extended to the Turks and Caicos Islands
Treaties extended to Greenland
Treaties extended to the Faroe Islands | Chemical Weapons Convention | [
"Chemistry"
] | 4,650 | [
"Chemical weapons demilitarization",
"nan",
"Chemical weapons"
] |
5,540,555 | https://en.wikipedia.org/wiki/Yotari | The yotari mouse is an autosomal recessive mutant. It has a mutated disabled homolog 1 (Dab1) gene. This mutant mouse is recognized by an unstable gait ("Yota-ru" in Japanese means "unstable gait") and tremor, and by early death around the time of weaning. The cytoarchitectures of the cerebellar and cerebral cortices and the hippocampal formation of the yotari mouse are abnormal. These malformations resemble those of the reeler mouse.
References
Molecular neuroscience
Molecular genetics | Yotari | [
"Chemistry",
"Biology"
] | 119 | [
"Molecular neuroscience",
"Molecular genetics",
"Molecular biology"
] |
5,540,651 | https://en.wikipedia.org/wiki/Microwave%20transmission | Microwave transmission is the transmission of information by electromagnetic waves with wavelengths in the microwave frequency range of 300 MHz to 300 GHz (1 m - 1 mm wavelength) of the electromagnetic spectrum. Microwave signals are normally limited to the line of sight, so long-distance transmission using these signals requires a series of repeaters forming a microwave relay network. It is possible to use microwave signals in over-the-horizon communications using tropospheric scatter, but such systems are expensive and generally used only in specialist roles.
Although an experimental microwave telecommunication link across the English Channel was demonstrated in 1931, the development of radar in World War II provided the technology for practical exploitation of microwave communication. During the war, the British Army introduced the Wireless Set No. 10, which used microwave relays to multiplex eight telephone channels over long distances. A link across the English Channel allowed General Bernard Montgomery to remain in continual contact with his group headquarters in London.
In the post-war era, the development of microwave technology was rapid, which led to the construction of several transcontinental microwave relay systems in North America and Europe. In addition to carrying thousands of telephone calls at a time, these networks were also used to send television signals for cross-country broadcast, and later, computer data. Communication satellites took over the television broadcast market during the 1970s and 80s, and the introduction of long-distance fibre optic systems in the 1980s and especially 90s led to the rapid rundown of the relay networks, most of which are abandoned.
In recent years, there has been an explosive increase in use of the microwave spectrum by new telecommunication technologies such as wireless networks, and direct-broadcast satellites which broadcast television and radio directly into consumers' homes. Larger line-of-sight links are once again popular for handling connections between mobile telephone towers, although these are generally not organized into long relay chains.
Uses
Microwaves are widely used for point-to-point communications because their small wavelength allows conveniently-sized antennas to direct them in narrow beams, which can be pointed directly at the receiving antenna. This use of tightly-focused direct beams allows microwave transmitters in the same area to use the same frequencies, without interfering with each other as lower frequency radio waves would. This frequency reuse conserves scarce radio spectrum bandwidth. Another advantage is that the high frequency of microwaves gives the microwave band a very large information-carrying capacity; the microwave band has a bandwidth 30 times that of all the rest of the radio spectrum below it. A disadvantage is that microwaves are limited to line of sight propagation; they cannot pass around hills or mountains as lower frequency radio waves can.
Microwave radio transmission is commonly used in point-to-point communication systems on the surface of the Earth, in satellite communications, and in deep space radio communications. Other parts of the microwave radio band are used for radars, radio navigation systems, sensor systems, and radio astronomy.
The next higher frequency band of the radio spectrum, between 30 GHz and 300 GHz, is called the "millimeter wave" band because wavelengths there range from 10 mm to 1 mm. Radio waves in the millimeter wave band are strongly attenuated by the gases of the atmosphere, which limits their practical transmission distance to a few kilometers, not enough for long-distance communication. The electronic technologies needed in the millimeter wave band are also in an earlier state of development than those of the microwave band.
Wireless transmission of information
One-way and two-way telecommunication using communications satellite
Terrestrial microwave relay links in telecommunications networks including backbone or backhaul carriers in cellular networks
More recently, microwaves have been used for wireless power transmission.
Microwave radio relay
Microwave radio relay is a technology widely used in the 1950s and 1960s for transmitting information, such as long-distance telephone calls and television programs between two terrestrial points on a narrow beam of microwaves. In microwave radio relay, a microwave transmitter and directional antenna transmits a narrow beam of microwaves carrying many channels of information on a line of sight path to another relay station where it is received by a directional antenna and receiver, forming a fixed radio connection between the two points. The link was often bidirectional, using a transmitter and receiver at each end to transmit data in both directions. The requirement of a line of sight limits the separation between stations to the visual horizon, about . For longer distances, the receiving station could function as a relay, retransmitting the received information to another station along its journey. Chains of microwave relay stations were used to transmit telecommunication signals over transcontinental distances. Microwave relay stations were often located on tall buildings and mountaintops, with their antennas on towers to get maximum range.
Beginning in the 1950s, networks of microwave relay links, such as the AT&T Long Lines system in the U.S., carried long-distance telephone calls and television programs between cities. The first system, dubbed TDX and built by AT&T, connected New York and Boston in 1947 with a series of eight radio relay stations. Through the 1950s, they deployed a network of a slightly improved version across the U.S., known as TD2. These included long daisy-chained links that traversed mountain ranges and spanned continents. The launch of communication satellites in the 1970s provided a cheaper alternative. Much of the transcontinental traffic is now carried by satellites and optical fibers, but microwave relay remains important for shorter distances.
Planning
Because in microwave transmission the waves travel in narrow beams confined to a line-of-sight path from one antenna to the other, they do not interfere with other microwave equipment, so nearby microwave links can use the same frequencies. The antennas must therefore be highly directional (high gain), and are installed in elevated locations such as large radio towers in order to avoid obstructions closer to the ground and to transmit across long distances. Typical types of antenna used in radio relay link installations are parabolic antennas, dielectric lens antennas, and horn-reflector antennas, which have a diameter of up to . Highly directive antennas permit an economical use of the available frequency spectrum, despite long transmission distances.
Because of the high frequencies used, a line-of-sight path between the stations is required. Additionally, in order to avoid attenuation of the beam, an area around the beam called the first Fresnel zone must be free from obstacles. Obstacles in the signal field cause unwanted attenuation. High mountain peaks or ridges are often ideal positions for the antennas.
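The radius of the first Fresnel zone at any point along the path follows from the standard formula r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2)), where d1 and d2 are the distances to the two antennas. A minimal sketch (the hop length and frequency are illustrative):

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius_m(freq_hz: float, d1_m: float, d2_m: float,
                     n: int = 1) -> float:
    """Radius of the n-th Fresnel zone at distances d1/d2 from the ends."""
    wavelength = C / freq_hz
    return sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

# Illustrative 40 km hop at 6 GHz, evaluated at mid-path:
print(round(fresnel_radius_m(6e9, 20_000, 20_000), 1))  # ~22.4 m
```

A common planning rule of thumb is to keep at least about 60% of this first-zone radius free of obstacles.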
In addition to the use of conventional repeaters with back-to-back radios transmitting on different frequencies, obstructions in microwave paths can also be dealt with by using passive repeaters or on-frequency repeaters.
Obstacles, the curvature of the Earth, the geography of the area and reception issues arising from the use of nearby land (such as in manufacturing and forestry) are important issues to consider when planning radio links. In the planning process, it is essential that "path profiles" are produced, which provide information about the terrain and Fresnel zones affecting the transmission path. The presence of a water surface, such as a lake or river, along the path also must be taken into consideration since it can reflect the beam, and the direct and reflected beam can interfere with each other at the receiving antenna, causing multipath fading. Multipath fades are usually deep only in a small spot and a narrow frequency band, so space and/or frequency diversity schemes can be applied to mitigate these effects.
The effects of atmospheric stratification typically cause the radio path to bend downward, so a greater distance is possible, as if the Earth's equivalent radius were increased (the 4/3 equivalent radius effect). Rare events of temperature, humidity and pressure profile versus height may produce large deviations and distortion of the propagation and affect transmission quality. High-intensity rain and snow cause rain fade, which must also be considered as an impairment factor, especially at frequencies above 10 GHz. All of the detrimental factors mentioned in this section, collectively known as path loss, make it necessary to compute suitable power margins in order to keep the link operating for a high percentage of time, like the standard 99.99% or 99.999% used in 'carrier class' services of most telecommunication operators.
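The 4/3 equivalent radius effect mentioned above gives the usual rule of thumb for the radio horizon, d ≈ 4.12·sqrt(h) (d in km, h in m). A small sketch for estimating the longest line-of-sight hop between two towers (tower heights are illustrative):

```python
from math import sqrt

def radio_horizon_km(antenna_height_m: float, k: float = 4 / 3) -> float:
    """Distance to the radio horizon using a k-factor effective Earth radius."""
    effective_radius_km = 6371 * k
    # d = sqrt(2 * R * h), with h converted from metres to kilometres
    return sqrt(2 * effective_radius_km * antenna_height_m / 1000)

def max_hop_km(h1_m: float, h2_m: float) -> float:
    """Longest hop at which the two antennas can still see each other."""
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

print(round(radio_horizon_km(60), 1))  # ~31.9 km for a 60 m tower
print(round(max_hop_km(60, 60), 1))    # ~63.9 km tower to tower
```

This is consistent with the hop lengths of a few tens of kilometers quoted below for backbone networks.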
The longest known microwave radio relay crosses the Red Sea with a hop between Jebel Erba ( a.s.l., , Sudan) and Jebel Dakka ( a.s.l., , Saudi Arabia). The link was built in 1979 by Telettra to transmit 300 telephone channels and one TV signal, in the 2 GHz frequency band. (Hop distance is the distance between two microwave stations).
The considerations above represent typical problems characterizing terrestrial radio links using microwaves for so-called backbone networks: hop lengths of a few tens of kilometers (typically ) were largely used until the 1990s. Frequency bands below 10 GHz were used and, above all, the information to be transmitted was a stream containing a fixed-capacity block. The target was to supply the requested availability for the whole block (Plesiochronous digital hierarchy, PDH, or synchronous digital hierarchy, SDH). Fading and/or multipath affecting the link for short periods during the day had to be counteracted by the diversity architecture. During the 1990s, microwave radio links began to be widely used for urban links in cellular networks. Requirements regarding link distance changed to shorter hops (less than , typically ), and frequency increased to bands between 11 and 43 GHz and, more recently, up to 86 GHz (E-band). Furthermore, link planning deals more with intense rainfall and less with multipath, so diversity schemes became less used. Another big change that occurred during the last decade was an evolution toward packet radio transmission. Therefore, new countermeasures, such as adaptive modulation, have been adopted.
The emitted power is regulated for cellular and microwave systems. These microwave transmissions use emitted power typically from 0.03 to 0.30 W, radiated by a parabolic antenna on a narrow beam diverging by a few degrees (1° to 3-4°). The microwave channel arrangement is regulated by the International Telecommunication Union (ITU-R) and local regulations (ETSI, FCC). In the last decade the dedicated spectrum for each microwave band has become extremely crowded, motivating the use of techniques to increase transmission capacity such as frequency reuse, polarization-division multiplexing, XPIC, MIMO.
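The beam divergence of a degree or so quoted above follows from the dish size and the wavelength. Common engineering approximations are a half-power beamwidth of about 70·λ/D degrees and a gain of η·(πD/λ)²; a sketch with illustrative numbers (aperture efficiency assumed to be 0.6):

```python
from math import pi, log10

C = 299_792_458.0  # speed of light, m/s

def dish_beamwidth_deg(freq_hz: float, diameter_m: float) -> float:
    """Approximate half-power beamwidth of a parabolic dish (70 * lambda / D)."""
    return 70 * (C / freq_hz) / diameter_m

def dish_gain_dbi(freq_hz: float, diameter_m: float,
                  efficiency: float = 0.6) -> float:
    """Approximate gain of a parabolic dish, in dBi."""
    wavelength = C / freq_hz
    return 10 * log10(efficiency * (pi * diameter_m / wavelength) ** 2)

# Illustrative 1.2 m dish on an 18 GHz urban backhaul hop:
print(round(dish_beamwidth_deg(18e9, 1.2), 2))  # ~0.97 degrees
print(round(dish_gain_dbi(18e9, 1.2), 1))       # ~44.9 dBi
```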
History
The history of radio relay communication began in 1898 with the publication by Johann Mattausch in the Austrian journal, Zeitschrift für Elektrotechnik. But his proposal was primitive and not suitable for practical use. The first experiments with radio repeater stations to relay radio signals were done in 1899 by Emile Guarini-Foresio. However the low frequency and medium frequency radio waves used during the first 40 years of radio proved to be able to travel long distances by ground wave and skywave propagation.
In 1931, an Anglo-French consortium headed by Andre C. Clavier demonstrated an experimental microwave relay link across the English Channel using dishes. Telephony, telegraph, and facsimile data was transmitted over the bidirectional 1.7 GHz beams between Dover, UK, and Calais, France. The radiated power, produced by a miniature Barkhausen–Kurz tube located at the dish's focus, was one-half watt. A 1933 military microwave link between airports at St. Inglevert, France, and Lympne, UK, a distance of , was followed in 1935 by a 300 MHz telecommunication link, the first commercial microwave relay system.
The development of radar during World War II provided much of the microwave technology which made practical microwave communication links possible, particularly the klystron oscillator and techniques of designing parabolic antennas. Though not commonly known, the British Army used the Wireless Set Number 10 in this role during World War II. The need for radio relay did not really begin until the 1940s exploitation of microwaves, which traveled by line of sight and so were limited to a propagation distance of about by the visual horizon.
After the war, telephone companies used this technology to build large microwave radio relay networks to carry long-distance telephone calls. During the 1950s a unit of the US telephone carrier, AT&T Long Lines, built a transcontinental system of microwave relay links across the US which grew to carry the majority of US long distance telephone traffic, as well as television network signals. The main motivation in 1946 to use microwave radio instead of cable was that a large capacity could be installed quickly and at less cost. It was expected at that time that the annual operating costs for microwave radio would be greater than for cable. There were two main reasons that a large capacity had to be introduced suddenly: pent-up demand for long-distance telephone service because of the hiatus during the war years, and the new medium of television, which needed more bandwidth than radio. The prototype was called TDX and was tested with a connection between New York City and Murray Hill, the location of Bell Laboratories, in 1946. The TDX system was set up between New York and Boston in 1947. The TDX was upgraded to the TD2 system, which used the Morton tube (the Western Electric 416B, and later the 416C) in the transmitters, and then later to TD3, which used solid-state electronics.
The microwave relay links to West Berlin during the Cold War were remarkable: because of the large distance between West Germany and Berlin, they had to be built and operated at the edge of technical feasibility. In addition to the telephone network, microwave relay links were also used for the distribution of TV and radio broadcasts, including connections from the studios to the broadcasting systems distributed across the country, as well as between the radio stations, for example for program exchange.
Military microwave relay systems continued to be used into the 1960s, when many of these systems were supplanted with tropospheric scatter or communication satellite systems. When the NATO military arm was formed, much of this existing equipment was transferred to communications groups. The typical communications systems used by NATO during that time period consisted of the technologies which had been developed for use by the telephone carrier entities in host countries. One example from the USA is the RCA CW-20A 1–2 GHz microwave relay system which utilized flexible UHF cable rather than the rigid waveguide required by higher frequency systems, making it ideal for tactical applications. The typical microwave relay installation or portable van had two radio systems (plus backup) connecting two line of sight sites. These radios would often carry 24 telephone channels frequency-division multiplexed on the microwave carrier (e.g., Lenkurt 33C FDM). Any channel could be designated to carry up to 18 teletype communications instead. Similar systems from Germany and other member nations were also in use.
Long-distance microwave relay networks were built in many countries until the 1980s, when the technology lost its share of fixed operation to newer technologies such as fiber-optic cable and communication satellites, which offer a lower cost per bit.
During the Cold War, the US intelligence agencies, such as the National Security Agency (NSA), were reportedly able to intercept Soviet microwave traffic using satellites such as Rhyolite/Aquacade. Much of the beam of a microwave link passes the receiving antenna and radiates toward the horizon, into space. By positioning a geosynchronous satellite in the path of the beam, the microwave beam can be received.
At the turn of the 21st century, microwave radio relay systems were used increasingly in portable radio applications. The technology is particularly suited to this application because of lower operating costs, a more efficient infrastructure, and provision of direct hardware access to the portable radio operator.
Microwave link
A microwave link is a communications system that uses a beam of radio waves in the microwave frequency range to transmit video, audio, or data between two locations, which can be from just a few feet or meters to several miles or kilometers apart. Microwave links are commonly used by television broadcasters to transmit programmes across a country, for instance, or from an outside broadcast back to a studio.
Mobile units can be camera mounted, allowing cameras the freedom to move around without trailing cables. These are often seen on the touchlines of sports fields on Steadicam systems.
Properties of microwave links
Involve line of sight (LOS) communication technology
Affected greatly by environmental constraints, including rain fade
Have very limited penetration capabilities through obstacles such as hills, buildings and trees
Sensitive to high pollen count
Signals can be degraded during Solar proton events
Propagation delays are lower than in fiber optic networks because the speed of light in air is faster than in optical cable (see the worked example after this list)
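To put rough numbers on the latency point above, the one-way propagation delay of a path can be compared for air and for silica fiber, whose refractive index of about 1.47 slows light by roughly a third. This is a minimal sketch with illustrative figures, not measurements of any particular link:

```python
# Illustrative one-way propagation delay in air vs. optical fiber.
# Assumes a straight path and a typical silica-fiber refractive index (~1.47).
C = 299_792_458.0   # speed of light in vacuum, m/s
N_AIR = 1.0003      # refractive index of air (essentially 1)
N_FIBER = 1.47      # typical refractive index of a silica fiber core

def one_way_delay_ms(path_km: float, refractive_index: float) -> float:
    """Propagation delay in milliseconds over path_km at the given index."""
    return path_km * 1_000 * refractive_index / C * 1_000

path_km = 1_000.0   # e.g., a long chain of relay hops
print(f"air:   {one_way_delay_ms(path_km, N_AIR):.2f} ms")    # ~3.34 ms
print(f"fiber: {one_way_delay_ms(path_km, N_FIBER):.2f} ms")  # ~4.90 ms
```

The nearly 50% penalty for fiber over long paths is also why microwave routes appeal to the high-frequency traders mentioned below.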
Uses of microwave links
In communications between satellites and base stations
As backbone carriers for cellular systems
In short-range indoor communications
Linking remote and regional telephone exchanges to larger (main) exchanges without the need for copper/optical fibre lines
Measuring the intensity of rain between two locations
To give financial advantage to high frequency traders at one stock exchange via faster knowledge of price changes at a distant exchange
Troposcatter
Terrestrial microwave relay links are limited in distance to the visual horizon, a few tens of miles or kilometers depending on tower height. Tropospheric scatter ("troposcatter" or "scatter") was a technology developed in the 1950s to allow microwave communication links beyond the horizon, to a range of several hundred kilometers. The transmitter radiates a beam of microwaves into the sky, at a shallow angle above the horizon toward the receiver. As the beam passes through the troposphere, a small fraction of the microwave energy is scattered back toward the ground by water vapor and dust in the air. A sensitive receiver beyond the horizon picks up this reflected signal. Signal clarity obtained by this method depends on the weather and other factors, and as a result a high level of technical difficulty is involved in creating a reliable over-the-horizon radio relay link. Troposcatter links are therefore only used in special circumstances where satellites and other long-distance communication channels cannot be relied on, such as in military communications.
See also
Wireless power transfer
Fresnel zone
Passive repeater
Radio repeater
Relay (disambiguation)
Transmitter station
Path loss
British Telecom microwave network
Trans Canada Microwave
Antenna array
References
Microwave Radio Transmission Design Guide, Trevor Manning, Artech House, 1999
External links
RF / Microwave Design at Oxford University
AT&T's Microwave Radio-Relay Skyway introduced in 1951
Bell System 1951 magazine ad for Microwave Radio-Relay systems.
RCA vintage magazine ad for Microwave-Radio Relay equipment used for Western Union Telegraph Co.
AT&T Long Lines Microwave Towers Remembered
AT&T Long Lines
IEEE Global History Network Microwave Link Networks (Wollschlager, Anthony)
Electromagnetic radiation
Energy development
Wireless energy transfer
Microwave technology
Wireless networking
Television technology
Television terminology | Microwave transmission | [
"Physics",
"Technology",
"Engineering"
] | 3,820 | [
"Information and communications technology",
"Physical phenomena",
"Television technology",
"Electromagnetic radiation",
"Wireless networking",
"Computer networks engineering",
"Radiation"
] |
5,541,228 | https://en.wikipedia.org/wiki/Fragrance%20extraction | Fragrance extraction refers to the separation process of aromatic compounds from raw materials, using methods such as distillation, solvent extraction, expression, sieving, or enfleurage. The results of the extracts are either essential oils, absolutes, concretes, or butters, depending on the amount of waxes in the extracted product.
To a certain extent, all of these techniques tend to produce an extract with an aroma that differs from the aroma of the raw materials. Heat, chemical solvents, or exposure to oxygen in the extraction process may denature some aromatic compounds, either changing their odour character or rendering them odourless, and the proportion of each aromatic component that is extracted can differ.
Maceration/solvent extraction
Certain plant materials contain too little volatile oil to undergo expression, or their chemical components are too delicate and easily denatured by the high heat used in hydrodistillation. Instead, the oils are extracted using their solvent properties.
Organic solvent extraction
Organic solvent extraction is the most common and most economically important technique for extracting aromatics in the modern perfume industry. Raw materials are submerged and agitated in a solvent that can dissolve the desired aromatic compounds. Commonly used solvents for maceration/solvent extraction include hexane and dimethyl ether.
In organic solvent extraction, aromatic compounds, as well as other hydrophobic soluble substances such as waxes and pigments, are obtained, since these solvents effectively remove all hydrophobic compounds in the raw material. The solvent is then removed by vacuum processing or low-temperature distillation and reclaimed for re-use. The process can last anywhere from hours to months. Fragrant compounds for woody and fibrous plant materials are often obtained in this manner, as are all aromatics from animal sources. The technique can also be used to extract odorants that are too volatile for distillation or easily denatured by heat. The remaining waxy mass is known as a concrete, a mixture of essential oil, waxes, resins, and other lipophilic (oil-soluble) plant material.
Although highly fragrant, concretes are too viscous – even solid – at room temperature to be useful. This is due to the presence of high-molecular-weight, non-fragrant waxes and resins. Another solvent, often ethyl alcohol, which dissolves only the fragrant low-molecular-weight compounds, must be used to extract the fragrant oil from the concrete. The alcohol is removed by a second distillation, leaving behind the absolute. These extracts, from plants such as jasmine and rose, are called absolutes.
Due to the low temperatures in this process, the absolute may be more faithful to the original scent of the raw material, which is subjected to high heat during the distillation process.
Supercritical fluid extraction
Supercritical fluid extraction is a relatively new technique for extracting fragrant compounds from a raw material, which often employs supercritical CO2 as the extraction solvent. When carbon dioxide is put under high pressure at slightly above room temperature, a supercritical fluid forms. (Under normal pressure, CO2 changes directly from a solid to a gas in a process known as sublimation.) Since CO2 is a non-polar compound with low surface tension that wets easily, it can be used to extract the typically hydrophobic aromatics from the plant material. This process is identical to one of the techniques for making decaffeinated coffee.
In supercritical fluid extraction, high-pressure carbon dioxide (up to 100 atm) is used as a solvent. Due to the low heat of the process and the relatively unreactive solvent used in the extraction, the fragrant compounds derived often closely resemble the original odour of the raw material. Like solvent extraction, CO2 extraction takes place at a low temperature, extracts a wide range of compounds, and leaves the aromatics unaltered by heat, rendering an essence more faithful to the original. Since CO2 is a gas at normal atmospheric pressure, it leaves no trace of itself in the final product, allowing one to obtain the absolute directly without having to deal with a concrete. Extracts produced using this process are known as CO2 extracts.
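For orientation, the supercritical region for CO2 begins at its critical point, roughly 31 °C and 74 bar. The sketch below simply checks whether given conditions exceed both critical values; the constants are approximate handbook figures and the function is an illustration, not a process-engineering tool:

```python
# Check whether CO2 at a given temperature and pressure is supercritical.
# Approximate critical constants for CO2 (handbook values).
T_CRIT_C = 31.1      # critical temperature, degrees Celsius
P_CRIT_BAR = 73.8    # critical pressure, bar (~72.8 atm)

def is_supercritical_co2(temp_c: float, pressure_bar: float) -> bool:
    """True if both temperature and pressure exceed the critical point."""
    return temp_c > T_CRIT_C and pressure_bar > P_CRIT_BAR

# "High pressure at slightly above room temperature", as described above:
print(is_supercritical_co2(temp_c=40.0, pressure_bar=100.0))  # True
print(is_supercritical_co2(temp_c=25.0, pressure_bar=100.0))  # False
```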
Ethanol extraction
Ethanol extraction is a type of solvent extraction used to extract fragrant compounds directly from dry raw materials, as well as the impure oils or concrete resulting from organic solvent extraction, expression, or enfleurage. Ethanol extracts from dry materials are called tinctures, while ethanol washes for purifying oils and concretes are called absolutes.
The impure substances or oils are mixed with ethanol, which, being less hydrophobic than the solvents used for organic extraction, dissolves more of the oxidized aromatic constituents (alcohols, aldehydes, etc.) while leaving behind the waxes, fats, and other generally hydrophobic substances. The alcohol is evaporated under low pressure, leaving behind the absolute. The absolute may be further processed to remove any impurities remaining from the solvent extraction.
Ethanol extraction is not typically used to extract fragrance from fresh plant materials; these contain large quantities of water, which will be extracted into the ethanol, although this is sometimes not a concern.
Distillation
Distillation is a common technique for obtaining aromatic compounds from plants, such as orange blossoms and roses. The raw material is heated and the fragrant compounds are re-collected through condensation of the distilled vapor. Distilled products, whether obtained through steam or dry distillation, are known as either essential oils or ottos.
Today, most common essential oils, such as lavender, peppermint, and eucalyptus, are distilled. Raw plant material, consisting of the flowers, leaves, wood, bark, roots, seeds, or peel, is put into an alembic (distillation apparatus) over water.
Steam distillation
Steam from boiling water is passed through the raw material for 60–105 minutes, which drives out most of the volatile fragrant compounds. The condensate from distillation, which contains both water and the aromatics, is settled in a Florentine flask. This allows for the easy separation of the fragrant oils from the water, as the oil floats to the top of the distillate, where it is removed, leaving behind the watery distillate. The water collected from the condensate, which retains some of the fragrant compounds and oils from the raw material, is called hydrosol and is sometimes sold for consumer and commercial use. This method is most commonly used for fresh plant materials such as flowers, leaves, and stems. Popular hydrosols are rose water, lavender water, and orange blossom water. Many plant hydrosols have unpleasant smells and are therefore not sold.
Most oils are distilled in a single process. One exception is Ylang-ylang (Cananga odorata), which takes 22 hours to complete distillation. It is fractionally distilled, producing several grades (Ylang-Ylang "extra", I, II, III and "complete", in which the distillation is run from start to finish with no interruption).
Dry/destructive distillation
In this method, also known as rectification, the raw materials are directly heated in a still without a carrier solvent such as water. Fragrant compounds that are released from the raw material by the high heat often undergo anhydrous pyrolysis, which results in the formation of different fragrant compounds, and thus different fragrant notes. This method is used to obtain fragrant compounds from fossil amber and fragrant woods (such as birch tar) where an intentional "burned" or "toasted" odour is desired.
Fractionation distillation
Through the use of a fractionation column, different fractions distilled from a material can be selectively excluded to manipulate the scent of the final product. Although the product is more expensive, this is sometimes performed to remove unpleasant or undesirable scents of a material and affords the perfumer more control over their composition process. This is often performed as a second step on material that has already been extracted rather than on raw material.
Expression
Expression is a method of fragrance extraction in which raw materials are pressed, squeezed or compressed and the essential oils are collected. In contemporary times, the only fragrant oils obtained using this method come from the peels of fruits in the citrus family, because these peels contain enough oil to make the extraction economically feasible. Citrus peel oils are expressed mechanically, or cold-pressed. Due to the large quantities of oil in citrus peel and the relatively low cost of growing and harvesting the raw materials, citrus-fruit oils are cheaper than most other essential oils, to the extent that purified limonene extracted from these fruits is available as an affordable naturally derived solvent. Lemon or sweet orange oils that are obtained as by-products of the commercial citrus industry are among the cheapest citrus oils.
Expression was mainly used prior to the discovery of distillation, and this is still the case in cultures such as Egypt. Traditional Egyptian practice involves pressing the plant material, then burying it in unglazed ceramic vessels in the desert for a period of months to drive out water. The water has a smaller molecular size, so it diffuses through the ceramic vessels, while the larger essential oils do not. The lotus oil in Tutankhamen's tomb, which retained its scent after 3000 years sealed in alabaster vessels, was pressed in this manner.
Enfleurage
Enfleurage is a process in which the odour of aromatic materials is absorbed into wax or fat, which is then often extracted with alcohol. Extraction by enfleurage was commonly used when distillation was not possible because some fragrant compounds denature through high heat. This technique is not commonly used in modern industry, due to both its prohibitive cost and the existence of more efficient and effective extraction methods.
See also
Perfume
Rose oil
Clove oil
References
Oils
Aromatherapy
Perfumery
Flavor technology | Fragrance extraction | [
"Chemistry"
] | 2,100 | [
"Essential oils",
"Oils",
"Carbohydrates",
"Natural products"
] |
5,541,517 | https://en.wikipedia.org/wiki/POTS%20codec | A POTS codec is a type of audio coder-decoder (codec) that uses digital signal processing to transmit audio digitally over standard telephone lines (plain old telephone service, POTS) at a higher level of audio quality than the telephone line would normally provide in its analog mode. The POTS codec is one of a family of broadcast codecs differentiated by the type of telecommunications circuit used for transmission. The ISDN codec, which instead uses ISDN lines, and the IP codec which uses private or public IP networks are also common.
Primarily used in broadcast engineering to link remote broadcast locations to the host studio, a hardware codec, implemented with digital signal processing, is used to compress the audio data enough to travel through a pair of 33.6 kbit/s modems.
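To see why aggressive compression is needed, compare the bit rate of uncompressed broadcast-quality audio with the capacity of a dial-up connection. The sample rate and bit depth below are illustrative assumptions for a 15 kHz mono feed, not the specification of any particular codec:

```python
# Compression ratio needed to fit broadcast audio through a 33.6 kbit/s modem.
SAMPLE_RATE_HZ = 32_000  # enough for ~15 kHz audio bandwidth (Nyquist)
BIT_DEPTH = 16           # bits per sample
CHANNELS = 1             # mono remote feed
MODEM_KBPS = 33.6        # nominal capacity of one POTS modem connection

raw_kbps = SAMPLE_RATE_HZ * BIT_DEPTH * CHANNELS / 1_000
ratio = raw_kbps / MODEM_KBPS
print(f"raw audio: {raw_kbps:.0f} kbit/s, compression needed: ~{ratio:.0f}:1")
# raw audio: 512 kbit/s, compression needed: ~15:1 -- and real lines often
# train below 33.6 kbit/s, so the codec must degrade gracefully.
```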
POTS codecs have the disadvantages of being restricted to relatively low bit rates and being susceptible to variable line quality. ISDN and IP codecs have the advantage of being natively digital, and operate at much higher bitrates, which results in fewer compression artifacts. However, special ISDN lines must be run to a location and ordered well in advance of the event so that there is ample time for installation of equipment. Since POTS lines are almost universally available, the POTS codec can be set up nearly anywhere with little or no notice.
Functions
Codecs usually come in two types of units: rackmount for the studio and portable for the remote. Audio can be sent in either direction, and most can also pass low-speed non-audio data, allowing the remote DJ to control broadcast automation or other studio equipment via RS-232. Many have an automatic redial if the line should become disconnected. The remote unit usually has some basic mixer functions, while the studio unit usually has some kind of digital output.
Some codecs can be configured to use ISDN, POTS or IP rather than requiring a different device for each network, while others are exclusively designed for POTS operation. ISDN and IP connections implement algorithms like G.722, MPEG, AAC, aacPlus, Apt-X and AAC-LD (low-delay), while POTS connections almost always use proprietary low-bitrate algorithms. Consequently, while ISDN connections can usually be established between codecs from different manufacturers, POTS connections (and usually IP connections) can only be established between codecs from the same family. Some codecs can use GSM networks, and some have variable bitrate to compensate for poor connections. It is sometimes possible to bond two POTS lines together for redundancy, fault tolerance, and improved bandwidth.
Codecs are made by Comrex, Sonifex, Tieline, APT, Telos and Prodys among others.
References
https://www.radioworld.com/news-and-business/get-the-most-out-of-your-pots-codec
http://www.tieline.com/manuals/TLR5200D/en/v2_14/about_pots_modules.htm
https://www.telosalliance.com/legacy/telos-zephyr-xport
https://www.comrex.com/support/access-2usb/pots-primer/
Broadcast engineering
Telephony equipment | POTS codec | [
"Engineering"
] | 686 | [
"Broadcast engineering",
"Electronic engineering"
] |
5,541,533 | https://en.wikipedia.org/wiki/Bull%20of%20the%20Woods%20Wilderness | The Bull of the Woods Wilderness is a wilderness area located in the Mount Hood National Forest in the northwestern Cascades of Oregon, United States. It was created in 1984 and consists of including prime low-elevation old-growth forest, about a dozen lakes of at least and many large creeks and streams. Adjacent areas, including Opal Creek Wilderness to the west, create a pristine area of nearly . There are seven trails that access the wilderness area with an additional seven trails within the protection boundaries themselves. Combined the system provides
of challenging terrain for both pedestrian and equestrian recreation.
The name of the peak and thus the wilderness area comes from logging jargon in which the "bull of the woods" was the most experienced logging foreman in an operation.
Topography
The summit of Battle Ax is the highest point in the Wilderness. Among other tall peaks are Schreiner Peak, Big Slide Mountain and Bull of the Woods Mountain, from which the area derives its name. An abandoned fire lookout stands at the top of Bull of the Woods Mountain, from which views of the Cascades and the surrounding territory can be seen. The mountain slopes are quite steep, with lower inclines ranging from 30 to 60 degrees and upper inclines from 60 to 90 degrees. The wilderness contains the headwaters of the Collawash and Little North Santiam rivers.
Vegetation
The forest consists almost solely of coniferous species such as Douglas fir, western hemlock, and western red cedar, but deciduous red alder is also prevalent along creeks. Pacific yew is common in certain parts of the wilderness, and rhododendrons can be seen blooming profusely throughout many areas around early June. Bull of the Woods contains one of the last stands of old growth in western Oregon, and is home to the northern spotted owl.
Recreation
Primary recreational activities in Bull of the Woods include camping, hiking, wildlife watching, and soaking in the hot springs. It is possible to see relics of the 19th century gold rush, such as deserted mine shafts and old mining equipment. Various trails lead to a fire lookout at the peak of Bull of the Woods Mountain, with fantastic views of the Wilderness.
Gallery
See also
List of Oregon Wildernesses
List of U.S. Wilderness Areas
List of old growth forests
Wilderness Act
References
External links
Cascade Range
Protected areas of Clackamas County, Oregon
Protected areas of Marion County, Oregon
Old-growth forests
Wilderness areas of Oregon
Mount Hood National Forest
1984 establishments in Oregon
Protected areas established in 1984 | Bull of the Woods Wilderness | [
"Biology"
] | 498 | [
"Old-growth forests",
"Ecosystems"
] |
5,541,680 | https://en.wikipedia.org/wiki/Race%20to%20Space | Race to Space is a 2001 fictional American family drama film. The film was shot on location at Cape Canaveral and Cocoa Beach and Edwards AFB in cooperation with NASA and the U.S. Air Force.
Plot
During the 1960s space race between the United States and the Soviet Union, Dr. Wilhelm von Huber, a top NASA scientist, relocates to Cape Canaveral with his 12-year-old son, Billy. Their relationship has become strained in the wake of the recent death of Billy's mother, and the ever-widening gap between father and son has become obvious.
Billy finds his father old-fashioned and boring. He wants to lead an exciting life: to be a hero like the astronaut Alan Shepard.
However, Billy's life takes an exciting turn when he is hired by Dr. Donni McGuinness, the Director of Veterinary Sciences, to help train the chimpanzees for NASA space missions. Billy begins to develop a close bond with one particular chimpanzee named Mac. With Billy's help and companionship, Mac is chosen to become the first American astronaut launched into space.
All seems like a wonderful game until Billy realizes that his new friend is being prepared to be hurled hundreds of miles into orbit on a historical mission - and that someone at NASA is about to sabotage the mission. Mac's big chance to explore the farthest frontier and hurtle America ahead in the race to space might easily cost him his life.
Cast
James Woods as Dr. Wilhelm von Huber
Annabeth Gish as Dr. Donni McGuinness
Alex D. Linz as Wilhelm 'Billy' von Huber
William Devane as Roger Thornhill
William Atherton as Ralph Stanton
Wesley Mann as Rudolph
Mark Moses as Alan Shepard
Tony Jay as Narrator
Reception
Common Sense Media rated the film 3 out of 5 stars.
See also
Monkeys and apes in space
Ham, the first chimpanzee in space
Enos, the second chimpanzee in space and only one to orbit the Earth
References
External links
2001 films
2001 drama films
American drama films
Films directed by Sean McNamara
Brookwell McNamara Entertainment films
Films set in 1961
Films set in Florida
Films about space programs
Animals in space
2000s English-language films
Films about father–son relationships
2000s American films | Race to Space | [
"Chemistry",
"Biology"
] | 462 | [
"Animal testing",
"Space-flown life",
"Animals in space"
] |
5,542,058 | https://en.wikipedia.org/wiki/Isoleucine%20%28data%20page%29 |
References
Chemical data pages
Chemical data pages cleanup | Isoleucine (data page) | [
"Chemistry"
] | 10 | [
"Chemical data pages",
"nan"
] |
5,542,380 | https://en.wikipedia.org/wiki/Uim | uim (short for "universal input method") is a multilingual input method framework. Applications can use it through so-called bridges.
Supported applications
uim supports the X Window System legacy XIM (short for X Input Method) through the uim-xim bridge. Many X applications are written in either GTK+ or Qt, which have their own modules dealing with input methods, and uim supports both of these with its GTK+ and Qt immodules.
uim has a bridge for the console (uim-fep), Emacs (uim.el), and macOS (MacUIM).
See also
List of input methods for UNIX platforms
References
External links
Homepage
Source code repository
Mailing list
Bug tracking system
Freedesktop.org
Software using the BSD license | Uim | [
"Technology"
] | 174 | [
"Input methods",
"Natural language and computing"
] |
5,542,455 | https://en.wikipedia.org/wiki/Outrageous%20Betrayal | Outrageous Betrayal: The Dark Journey of Werner Erhard from est to Exile is a non-fiction book written by freelance journalist Steven Pressman and first published in 1993 by St. Martin's Press. The book gives an account of Werner Erhard's early life as Jack Rosenberg, his exploration of various forms of self-help techniques, and his foundation of Erhard Seminars Training "est" and later of Werner Erhard and Associates and of the est successor course, "The Forum". Pressman details the rapid financial success Erhard had with these companies, as well as controversies relating to litigation involving former participants in his courses. The work concludes by going over the impact of a March 3, 1991 60 Minutes broadcast on CBS where members of Erhard's family made allegations against him, and Erhard's decision to leave the United States.
Representatives of Werner Erhard and of Landmark Worldwide, the successor company to The Forum, regarded the book as being "defamatory".
Author
After graduating from college in 1977, Pressman worked as a journalist for the Orange City News, the Los Angeles Daily Journal, California Lawyer magazine, and Congressional Quarterly's Weekly Report. During his time researching and writing Outrageous Betrayal, Pressman published articles for the Legal Times newspaper and wrote articles and served as a senior editor for California Republic. In 1993, Pressman worked as a San Francisco-based legal journalist for California Lawyer.
Research
In the "Acknowledgments" section of Outrageous Betrayal, Pressman wrote that he relied upon both named and unnamed sources for information in the book, in addition to "previously published accounts, court transcripts, depositions, and other documents in which various individuals have recounted earlier conversations". In an article on fair use for Columbia Journalism Review, Pressman noted that he "gathered reams of written materials -- some of it private and confidential -- that were helpful in drawing a comprehensive portrait of my subject". In the Daily Journal, Pressman wrote that legal counsel for the book's publisher insisted on numerous changes to the book "in order to reduce, if not eliminate, the possibility of a successful suit for copyright infringement".
By 1993, Pressman and St. Martin's Press had received approximately two dozen letters from Erhard's attorney Walter Maksym, though Erhard's representatives had yet to see the book itself. Maksym told the San Francisco Daily Journal in March 1993 that he wanted to "fact check the book", because he believed that "this is a first-time unknown author who apparently has interviewed only people who have negative things to say", and stated "We have cautioned the publisher that they are responsible for the accuracy of the book." Charlie Spicer, a senior editor at St. Martin's Press, described the actions of Erhard's representatives with regard to the book as "a desperate campaign by someone with something to hide". The author himself made specific reference to his legal support, mentioning "the potential legal rapids that confront authors writing these days about controversial subjects".
Contents
In Outrageous Betrayal, Steven Pressman gives a chronological account of Erhard's life and businesses, from high-school years through his formation of companies that delivered awareness training and the later controversies surrounding his business and family life. The book goes into detail regarding his early life as Jack Rosenberg and his name-change to Werner Erhard, his move to California, and the initial inspirations behind the training that would become "est". Pressman writes that Erhard took inspiration from the self-help course Mind Dynamics, cybernetics, from the books Think and Grow Rich by Napoleon Hill, and Psycho-Cybernetics by Maxwell Maltz, and from Scientology and the writings of L. Ron Hubbard. He also notes how an attorney skilled in tax law helped Erhard in forming his first awareness-training company, Erhard Seminars Training.
Pressman notes how Erhard and his businesses became successful within two years of foundation, and writes that his awareness-training programs trained over half a million people in his courses and brought in tens of millions of dollars in revenue. The book then describes controversies relating to both Erhard's businesses and his reported treatment of his family. Pressman also describes the successor company to Est, Werner Erhard and Associates, and Erhard's decision to sell the "technology" of his course The Forum to his employees and to leave the United States. The book's epilogue includes a firsthand account of a Landmark Forum seminar led by the former Est-trainer Laurel Scheaf in 1992.
Reception
St. Martin's Press first published Outrageous Betrayal in 1993, and Random House published a second edition of the text in 1995.
An analysis in Kirkus Reviews, noting the choice of title by the author, asserted that Pressman: "makes no pretense to objectivity here." Kirkus Reviews criticized the book, saying "What the author dramatically fails to provide by bearing down on the negative (to the extent that nearly all his informants denounce est and its founder) is any real understanding of est's teachings--and of why they appealed so deeply to so many." Paul S. Boyer, professor of history at the University of Wisconsin–Madison, reviewed the book in The Washington Post. Boyer wrote that the book "nicely recounts the bizarre tale" of Werner Erhard, saying "Pressman tells his fascinating story well." However he also commented that the book gives "only the sketchiest historical context" of est and its roots in societal experiences.
A review by Mary Carroll published in the American Library Association's Booklist noted that the controversy surrounding Erhard was not new, but she wrote that "Pressman pulls the details together effectively." Carroll went on to comment: "Outrageous Betrayal is a disturbing but fascinating object lesson in the power of charisma divorced from conscience." Frances Halpern of the Los Angeles Times called the book a "damning biography".
In 1995, Outrageous Betrayal was cited in a report on the United States Department of Transportation by the United States House of Representatives Committee on Appropriations in a case unrelated to Erhard or Est. This was in reference to a Congressional investigation of Gregory May and controversial trainings given by his company Gregory May Associates (GMA) to the Federal Aviation Administration. The testimony given stated that, according to Outrageous Betrayal, a member of GMA's board had been influenced by Erhard Seminars Training and the Church of Scientology.
See also
Human Potential Movement
Journalism sourcing
Large Group Awareness Training
Notes
References
1993 non-fiction books
Human Potential Movement
Personal development
Werner Erhard | Outrageous Betrayal | [
"Biology"
] | 1,353 | [
"Personal development",
"Behavior",
"Human behavior"
] |
5,542,720 | https://en.wikipedia.org/wiki/Nitramex%20and%20Nitramon%20Explosives | Nitramex and Nitramon Explosives are compositions of various chemical compounds. They are explosives based on ammonium nitrate, with other ingredients such as paraffin wax, aluminum and dinitrotoluene. The inclusion of these additional ingredients creates a more stable explosive. Nitramex and Nitramon have in modern times been replaced by more advanced high explosives based on ammonium nitrate, such as ANFO.
Nitramon
A typical nitramon formula contains approximately 92 percent ammonium nitrate, 4 percent dinitrotoluene and 4 percent paraffin wax.
Nitramon is insensitive to shock, friction, flame and impact. It cannot be detonated by a blasting cap alone and requires a booster charge to set it off. Different grades of Nitramon were produced, including S and WW for seismic exploration and HH for blasting in high-temperature environments, such as coal-seam fires.
Nitramex
Nitramex has much the same formula as nitramon but with the addition of trinitrotoluene (TNT). It has higher density and explosive strength than Nitramon. Nitramex was developed for blasting hard rock.
This explosive was used in the removal of Ripple Rock. Large quantities of Nitramex 2H (over a thousand tonnes) were packed into tunnels. The explosion, in 1958, was one of the largest non-nuclear explosions in history.
References
Explosive chemicals | Nitramex and Nitramon Explosives | [
"Chemistry"
] | 292 | [
"Explosive chemicals"
] |
5,542,769 | https://en.wikipedia.org/wiki/Neutral%20mutation | Neutral mutations are changes in DNA sequence that are neither beneficial nor detrimental to the ability of an organism to survive and reproduce. In population genetics, mutations in which natural selection does not affect the spread of the mutation in a species are termed neutral mutations. Neutral mutations that are inheritable and not linked to any genes under selection will be lost or will replace all other alleles of the gene. That loss or fixation of the gene proceeds based on random sampling known as genetic drift. A neutral mutation that is in linkage disequilibrium with other alleles that are under selection may proceed to loss or fixation via genetic hitchhiking and/or background selection.
While many mutations in a genome may decrease an organism’s ability to survive and reproduce, also known as fitness, those mutations are selected against and are not passed on to future generations. The most commonly-observed mutations that are detectable as variation in the genetic makeup of organisms and populations appear to have no visible effect on the fitness of individuals and are therefore neutral. The identification and study of neutral mutations has led to the development of the neutral theory of molecular evolution, which is an important and often-controversial theory that proposes that most molecular variation within and among species is essentially neutral and not acted on by selection. Neutral mutations are also the basis for using molecular clocks to identify such evolutionary events as speciation and adaptive or evolutionary radiations.
History
Charles Darwin commented on the idea of neutral mutation in his work, hypothesizing that mutations that do not give an advantage or disadvantage may fluctuate or become fixed apart from natural selection. "Variations neither useful nor injurious would not be affected by natural selection, and would be left either a fluctuating element, as perhaps we see in certain polymorphic species, or would ultimately become fixed, owing to the nature of the organism and the nature of the conditions." While Darwin is widely credited with introducing the idea of natural selection which was the focus of his studies, he also saw the possibility for changes that did not benefit or hurt an organism.
Darwin's view of change being mostly driven by traits that provide advantage was widely accepted until the 1960s. While researching mutations that produce nucleotide substitutions in 1968, Motoo Kimura found that the rate of substitution was so high that if each mutation improved fitness, the gap between the most fit and typical genotype would be implausibly large. However, Kimura explained this rapid rate of mutation by suggesting that the majority of mutations were neutral, i.e. had little or no effect on the fitness of the organism. Kimura developed mathematical models of the behavior of neutral mutations subject to random genetic drift in biological populations. This theory has become known as the neutral theory of molecular evolution.
As technology has allowed for better analysis of genomic data, research has continued in this area. While natural selection may encourage adaptation to a changing environment, neutral mutation may push divergence of species due to nearly random genetic drift.
Impact on evolutionary theory
Neutral mutation has become a part of the neutral theory of molecular evolution, proposed in the 1960s. This theory suggests that neutral mutations are responsible for a large portion of DNA sequence changes in a species. For example, bovine and human insulin, while differing in amino acid sequence are still able to perform the same function. The amino acid substitutions between species were seen therefore to be neutral or not impactful to the function of the protein. Neutral mutation and the neutral theory of molecular evolution are not separate from natural selection but add to Darwin's original thoughts. Mutations can give an advantage, create a disadvantage, or make no measurable difference to an organism's survival.
A number of observations associated with neutral mutation were predicted in neutral theory including: amino acids with similar biochemical properties should be substituted more often than biochemically different amino acids; synonymous base substitutions should be observed more often than nonsynonymous substitutions; introns should evolve at the same rate as synonymous mutations in coding exons; and pseudogenes should also evolve at a similar rate. These predictions have been confirmed with the introduction of additional genetic data since the theory’s introduction.
Types
Synonymous mutation of bases
When an incorrect nucleotide is inserted during replication or transcription of a coding region, it can affect the eventual translation of the sequence into amino acids. Since multiple codons are used for the same amino acids, a change in a single base may still lead to translation of the same amino acid. This phenomenon is referred to as degeneracy and allows for a variety of codon combinations leading to the same amino acid being produced. For example, the codes TCT, TCC, TCA, TCG, AGT, and AGC all code for the amino acid serine. This can be explained by the wobble concept. Francis Crick proposed this theory to explain why specific tRNA molecules could recognize multiple codons. The area of the tRNA that recognizes the codon called the anticodon is able to bind multiple interchangeable bases at its 5' end due to its spatial freedom. A fifth base called inosine can also be substituted on a tRNA and is able to bind with A, U, or C. This flexibility allows for changes in bases in codons leading to translation of the same amino acid. The changing of a base in a codon without the changing of the translated amino acid is called a synonymous mutation. Since the amino acid translated remains the same a synonymous mutation has traditionally been considered a neutral mutation. Some research has suggested that there is bias in selection of base substitution in synonymous mutation. This could be due to selective pressure to improve translation efficiency associated with the most available tRNAs or simply mutational bias. If these mutations influence the rate of translation or an organism’s ability to manufacture protein they may actually influence the fitness of the affected organism.
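The degeneracy described above is straightforward to check mechanically. The following sketch uses a deliberately partial codon table, just the six serine codons named in the text plus one alanine codon for contrast, to classify a single-base substitution as synonymous or nonsynonymous; a real implementation would load the full standard genetic code:

```python
# Classify a point mutation as synonymous or nonsynonymous.
# Partial codon table for illustration only: the serine codons named above,
# plus alanine's GCT. A real tool would use the full standard genetic code.
CODON_TABLE = {
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser",
    "TCG": "Ser", "AGT": "Ser", "AGC": "Ser",
    "GCT": "Ala",
}

def is_synonymous(codon: str, position: int, new_base: str) -> bool:
    """True if substituting new_base at position leaves the amino acid unchanged."""
    mutant = codon[:position] + new_base + codon[position + 1:]
    return CODON_TABLE[codon] == CODON_TABLE[mutant]

print(is_synonymous("TCT", 2, "C"))  # True: TCT -> TCC, both serine
print(is_synonymous("TCT", 0, "G"))  # False: TCT -> GCT, serine -> alanine
```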
Neutral amino acid substitution
While substitution of a base in a noncoding area of a genome may make little difference and be considered neutral, base substitutions in or around genes may impact the organism. Some base substitutions lead to synonymous mutation and no difference in the amino acid translated as noted above. However, a base substitution can also change the genetic code so that a different amino acid is translated. This sort of substitution usually has a negative effect on the protein being formed and will be eliminated from the population through purifying selection. However, if the change has a positive influence, the mutation may become more and more common in a population until it becomes a fixed genetic piece of that population. Organisms changing via these two options comprise the classic view of natural selection. A third possibility is that the amino acid substitution makes little or no positive or negative difference to the affected protein. Proteins demonstrate some tolerance to changes in amino acid structure. This is somewhat dependent on where in the protein the substitution takes place. If it occurs in an important structural area or in the active site, one amino acid substitution may inactivate or substantially change the functionality of the protein. Substitutions in other areas may be nearly neutral and drift randomly over time.
Identification and measurement of neutrality
Neutral mutations are measured in population and evolutionary genetics often by looking at variation in populations. These have been measured historically by gel electrophoresis to determine allozyme frequencies. Statistical analyses of this data is used to compare variation to predicted values based on population size, mutation rates and effective population size. Early observations that indicated higher than expected heterozygosity and overall variation within the protein isoforms studied, drove arguments as to the role of selection in maintaining this variation versus the existence of variation through the effects of neutral mutations arising and their random distribution due to genetic drift. The accumulation of data based on observed polymorphism led to the formation of the neutral theory of evolution. According to the neutral theory of evolution, the rate of fixation in a population of a neutral mutation will be directly related to the rate of formation of the neutral allele.
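This proportionality follows from a short, standard calculation: in a diploid population of size N with neutral mutation rate μ per generation, each generation introduces 2Nμ new neutral mutations, and each has a fixation probability equal to its initial frequency, 1/(2N). The substitution rate k is therefore

$$k = 2N\mu \cdot \frac{1}{2N} = \mu,$$

independent of population size, which is the property that underpins the molecular clock discussed below.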
In Kimura’s original calculations, mutations with |2 Ns|<1 or |s|≤1/(2N) are defined as neutral. In this equation, N is the effective population size and is a quantitative measurement of the ideal population size that assumes such constants as equal sex ratios and no emigration, migration, mutation nor selection. Conservatively, it is often assumed that effective population size is approximately one fifth of the total population size. s is the selection coefficient and is a value between 0 and 1. It is a measurement of the contribution of a genotype to the next generation where a value of 1 would be completely selected against and make no contribution and 0 is not selected against at all. This definition of neutral mutation has been criticized due to the fact that very large effective population sizes can make mutations with small selection coefficients appear non neutral. Additionally, mutations with high selection coefficients can appear neutral in very small populations. The testable hypothesis of Kimura and others showed that polymorphism within species are approximately that which would be expected in a neutral evolutionary model.
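A minimal sketch of this criterion, treating a mutation as effectively neutral when |s| ≤ 1/(2N), with N the effective population size, and applying the rough rule of thumb from the text that effective size is about one fifth of census size; the numbers in the example are invented for illustration:

```python
# Effective neutrality under Kimura's criterion: |s| <= 1 / (2 * Ne).
def effective_population_size(census_size: float) -> float:
    """Rough rule of thumb from the text: Ne is ~1/5 of the census size."""
    return census_size / 5.0

def is_effectively_neutral(s: float, ne: float) -> bool:
    """True if selection is too weak for drift to 'see' the mutation."""
    return abs(s) <= 1.0 / (2.0 * ne)

ne = effective_population_size(50_000)        # Ne = 10,000
print(is_effectively_neutral(s=1e-5, ne=ne))  # True:  |s| <= 5e-5
print(is_effectively_neutral(s=1e-3, ne=ne))  # False: selection dominates
# The same s can be neutral in a small population yet selected in a large
# one -- the criticism of the definition noted above.
```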
For many molecular biology approaches, as opposed to mathematical genetics, neutral mutations are generally assumed to be those mutations that cause no appreciable effect on gene function. This simplification eliminates the effect of minor allelic differences in fitness and avoids problems when a selection has only a minor effect.
Early convincing evidence of this definition of neutral mutation was shown through the lower mutational rates in functionally important parts of genes such as cytochrome c versus less important parts and the functionally interchangeable nature of mammalian cytochrome c in in vitro studies. Nonfunctional pseudogenes provide more evidence for the role of neutral mutations in evolution. The rates of mutation in mammalian globin pseudogenes has been shown to be much higher than rates in functional genes. According to neo-Darwinian evolution, such mutations should rarely exist as these sequences are functionless and positive selection would not be able to operate.
The McDonald-Kreitman test has been used to study selection over long periods of evolutionary time. This is a statistical test that compares polymorphism in neutral and functional sites and estimates what fraction of substitutions have been acted on by positive selection. The test often uses synonymous substitutions in protein coding genes as the neutral component; however, synonymous mutations have been shown to be under purifying selection in many instances.
Molecular clocks
Molecular clocks can be used to estimate the amount of time since divergence of two species and for placing evolutionary events in time. Pauling and Zuckerkandl proposed the idea of the molecular clock in 1962, based on the observation that the random mutation process occurs at an approximately constant rate. Individual proteins were shown to have linear rates of amino acid changes over evolutionary time. Despite controversy from some biologists arguing that morphological evolution would not proceed at a constant rate, many amino acid changes were shown to accumulate in a constant fashion. Kimura and Ohta explained these rates as part of the framework of the neutral theory. These mutations were reasoned to be neutral since positive selection should be rare and deleterious mutations should be eliminated quickly from a population. By this reasoning, the accumulation of these neutral mutations should only be influenced by the mutation rate. Therefore, the neutral mutation rate in individual organisms should match the molecular evolution rate in species over evolutionary time. The neutral mutation rate is affected by the amount of neutral sites in a protein or DNA sequence versus the amount of mutation in sites that are functionally constrained. By quantifying these neutral mutations in protein and/or DNA and comparing them between species or other groups of interest, rates of divergence can be determined.
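As a worked illustration of the clock logic: if neutral substitutions accumulate at rate r per site per year along each lineage, two lineages separated for time T accumulate a pairwise distance of roughly d = 2rT, so T = d/(2r). The rate and distance below are invented round numbers, not estimates for any real pair of species:

```python
# Molecular-clock estimate of divergence time: T = d / (2 * r).
def divergence_time_years(pairwise_distance: float,
                          rate_per_site_per_year: float) -> float:
    """Time since divergence, assuming a constant neutral substitution rate
    acting independently along both lineages (hence the factor of 2)."""
    return pairwise_distance / (2.0 * rate_per_site_per_year)

d = 0.02   # 2% of neutral sites differ between the two sequences
r = 1e-9   # substitutions per site per year along each lineage
print(f"estimated divergence: {divergence_time_years(d, r):,.0f} years ago")
# estimated divergence: 10,000,000 years ago
```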
Molecular clocks have caused controversy due to the dates they derive for events such as the explosive radiations seen after extinction events, like the Cambrian explosion and the radiations of mammals and birds. Two-fold differences exist between dates derived from molecular clocks and those from the fossil record. While some paleontologists argue that molecular clocks are systematically inaccurate, others attribute the discrepancies to a lack of robust fossil data and bias in sampling. Despite these discrepancies with the fossil record, data from molecular clocks have shown how evolution is dominated by the mechanisms of a neutral model and is less influenced by the action of natural selection.
See also
Codon degeneracy
Silent mutation
References
External links
Standard and generalized McDonald-Kreitman test
Neutrality and Molecular Clocks
Mutation
Evolutionary biology
Neutral theory | Neutral mutation | [
"Biology"
] | 2,509 | [
"Evolutionary biology",
"Non-Darwinian evolution",
"Neutral theory",
"Biology theories"
] |
5,543,013 | https://en.wikipedia.org/wiki/Ragone%20plot | A Ragone plot ( ) is a plot used for comparing the energy density of various energy-storing devices. On such a chart the values of specific energy (in W·h/kg) are plotted versus specific power (in W/kg). Both axes are logarithmic, which allows comparing performance of very different devices. Ragone plots can reveal information about gravimetric energy density, but do not convey details about volumetric energy density.
The Ragone plot was first used to compare performance of batteries. However, it is suitable for comparing any energy-storage devices, as well as energy devices such as engines, gas turbines, and fuel cells. The plot is named after David V. Ragone.
Conceptually, the vertical axis describes how much energy is available per unit mass, while the horizontal axis shows how quickly that energy can be delivered, otherwise known as power per unit mass. A point in a Ragone plot represents a particular energy device or technology.
The amount of time (in hours) during which a device can be operated at its rated power is given as the ratio between the specific energy (Y-axis) and the specific power (X-axis). This is true regardless of the overall scale of the device, since a larger device would have proportional increases in both power and energy. Consequently, the iso curves (constant operating time) in a Ragone plot are straight lines.
For electrical systems, the quantities plotted can be written as

$$E_{\mathrm{sp}} = \frac{V I t}{m}, \qquad P_{\mathrm{sp}} = \frac{V I}{m}$$

where E_sp is the specific energy, P_sp is the specific power, V is voltage (V), I is electric current (A), t is discharge time (s) and m is mass (kg).
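Since the ratio of the two axes gives the rated operating time, a device's position on the plot reduces to a one-line calculation. The specific-energy and specific-power figures below are rough, illustrative values for a lithium-ion cell and a supercapacitor, not manufacturer data:

```python
# Operating time at rated power, as read off a Ragone plot: t = E_sp / P_sp.
# Illustrative (not measured) device figures.
devices = {
    "lithium-ion cell": {"wh_per_kg": 150.0, "w_per_kg": 300.0},
    "supercapacitor":   {"wh_per_kg": 5.0,   "w_per_kg": 5_000.0},
}

for name, d in devices.items():
    hours = d["wh_per_kg"] / d["w_per_kg"]  # specific energy / specific power
    print(f"{name}: {hours * 3600:.1f} s at rated power")
# lithium-ion cell: 1800.0 s (0.5 h) -- high energy, modest power
# supercapacitor:   3.6 s            -- low energy, very high power
```

Because this ratio is scale-invariant, lines of constant operating time appear as the straight iso curves mentioned above.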
References
Capacitors
Electric battery
Charts | Ragone plot | [
"Physics"
] | 331 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
5,543,603 | https://en.wikipedia.org/wiki/Tripedalism | Tripedalism (from the Latin tri = three + ped = foot) is locomotion by the use of three limbs. Real-world tripedalism is rare, in contrast to the common bipedalism of two-legged animals and quadrupedalism of four-legged animals. Bilateral symmetry seems to have become entrenched very early in evolution, appearing even before appendages like legs, fins or flippers had evolved.
In nature
It has been said that parrots (birds of the order Psittaciformes) display tripedalism during climbing gaits, which was tested and proven in a 2022 paper on the subject, making parrots the only creatures to truly use tripedal forms of locomotion. Tripedal gaits were also observed by K. Hunt in primates. This is usually observed when the animal is using one limb to grasp a carried object and is thus a non-standard gait. Apart from climbing in parrots, there are no known animal behaviours where the same three extremities are routinely used to contact environmental supports, although the movement of some macropods such as kangaroos, which can alternate between resting their weight on their muscular tails and their two hind legs and hop on all three, may be an example of tripedal locomotion in animals. There are also the tripod fish. Several species of these fish rest on the ocean bottom on two rays from their two pelvic fins and one ray from their caudal fin.
Quadrupedal amputees and mutations
There are some three-legged creatures in the world today, namely four-legged animals (such as pet dogs and cats) which have had one limb amputated. With proper medical treatment most of these injured animals can go on to live fairly normal lives, despite being artificially tripedal. There are also cases of mutations or birth abnormalities in animals (including humans) which have resulted in three legs.
See also
Bipedalism
Quadrupedalism
Terrestrial locomotion
Tetrapod
Uniped
References
Ethology
Terrestrial locomotion
Animal anatomy
Pedalism | Tripedalism | [
"Biology"
] | 435 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
5,544,616 | https://en.wikipedia.org/wiki/Near-threatened%20species | A near-threatened species is a species which has been categorized as "Near Threatened" (NT) by the International Union for Conservation of Nature (IUCN) as that may be vulnerable to endangerment in the near future, but it does not currently qualify for the threatened status.
The IUCN notes the importance of reevaluating near-threatened taxa at appropriate intervals.
The rationale used for near-threatened taxa usually includes criteria for the vulnerable category that are nearly met or plausibly met in the near future, such as reduction in numbers or range. Those designated since 2001 that depend on conservation efforts to avoid becoming threatened are no longer separately considered conservation-dependent species.
IUCN Categories and Criteria version 2.3
Before 2001, the IUCN used the version 2.3 Categories and Criteria to assign conservation status, which included a separate category for conservation-dependent species ("Conservation Dependent", LR/cd). With this category system, Near Threatened and Conservation Dependent were both subcategories of the category "Lower Risk". Taxa which were last evaluated before 2001 may retain their LR/cd or LR/nt status, although had the category been assigned with the same information today the species would be designated simply "Near Threatened (NT)" in either case.
Gallery
See also
IUCN Red List near threatened species, ordered by taxonomic rank.
:Category:IUCN Red List near threatened species, ordered alphabetically.
List of near threatened amphibians
List of near threatened arthropods
List of near threatened birds
List of near threatened fishes
List of near threatened insects
List of near threatened invertebrates
List of near threatened mammals
List of near threatened molluscs
List of near threatened reptiles
References
External links
List of Near Threatened species as identified by the IUCN Red List of Threatened Species
Biota by conservation status
IUCN Red List
Environmental conservation | Near-threatened species | [
"Biology"
] | 360 | [
"Near threatened species",
"Biota by conservation status",
"Biodiversity"
] |
5,544,986 | https://en.wikipedia.org/wiki/Ingress%20router | An ingress router is a label switch router that is a starting point (source) for a given label-switched path (LSP). An ingress router may be an egress router or an intermediate router for any other LSP(s). Hence the role of ingress and egress routers is LSP specific. Usually, the MPLS label is attached with an IP packet at the ingress router and removed at the egress router, whereas label swapping is performed on the intermediate routers. However, in special cases (such as LSP Hierarchy in RFC 4206, LSP Stitching and MPLS local protection) the ingress router could be pushing label in label stack of an already existing MPLS packet (instead of an IP packet). Note that, although the ingress router is the starting point of an LSP, it may or may not be the source of the under-lying IP packets.
MPLS networking | Ingress router | [
"Technology"
] | 207 | [
"Computing stubs",
"Computer network stubs"
] |
5,544,993 | https://en.wikipedia.org/wiki/Florentine%20flask | A florentine flask, also known as florentine receiver, florentine separator or essencier (from the French), other shapes called florentine vase or florentine vessel, is an oil–water separator fed with condensed vapors of a steam distillation in a fragrance extraction process.
Description
When the raw material is heated with steam from boiling water, volatile fragrant compounds and steam leave the still. The vapours are cooled in the condenser and become liquid. The liquid runs into the florentine receiver where the water and essential oil phases separate.
The essential oils phase separates from water because the oils have a different density than water, and are not water-soluble.
There are two main types of florentines in use. One separates essential oils of lower density than water, for example lavender oil, accumulating in a layer floating on the water. This kind of florentine has to be airtight to reduce the loss of volatile substances. The other type is intended for oils that are denser than water, where the oil accumulates beneath the water phase, for instance cinnamon, wintergreen, vetiver, patchouli or cloves. The floating water phase avoids the loss of volatile compounds from the oil. There are also florentines that are able to accommodate oils that are denser or less dense than water.
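The choice between the two designs follows directly from the oil's density relative to water, as a short sketch shows; the densities given are approximate literature values used purely for illustration:

```python
# Pick the florentine design from the oil's density relative to water.
# Densities in g/mL are approximate, illustrative values.
WATER_DENSITY = 1.00
OILS = {"lavender": 0.89, "cinnamon": 1.01, "clove": 1.04}

for oil, density in OILS.items():
    if density < WATER_DENSITY:
        print(f"{oil} oil ({density} g/mL) floats: use the airtight design")
    else:
        print(f"{oil} oil ({density} g/mL) sinks: use the denser-than-water design")
```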
The separated water is a herbal distillate and can be fed back into the still or may in some cases be sold as herbal water. Because small droplets of oil are entrained with the water flowing out of the florentine, the yield can sometimes be increased by using more than one florentine in series.
For laboratory use, a small glass florentine without a base is called a florentine vase, as it has a slight resemblance to a small amphora. Larger glass receivers with base are called florentine flasks or essenciers.
Glass is normally only used up to 15 liter vessels; above this size, glass is too fragile, so that metal is used for larger capacities.
References
Essential oils
Flavor technology
Liquid-liquid separation
Distillation | Florentine flask | [
"Chemistry"
] | 446 | [
"Natural products",
"Separation processes by phases",
"Separation processes",
"Distillation",
"Liquid-liquid separation",
"Essential oils"
] |
5,545,113 | https://en.wikipedia.org/wiki/Will%20o%27%20the%20Wisp%20%28comics%29 | Will o' the Wisp (Dr. Jackson Arvad) is a supervillain appearing in American comic books published by Marvel Comics. He is a physicist who gained control over the electromagnetic attraction between his body's molecules, allowing him to adjust his density (like the Vision). He is most often a foe of Spider-Man.
Publication history
The character first appeared in The Amazing Spider-Man #167 (Apr 1977).
Fictional character biography
Jackson Arvad was born in Scranton, Pennsylvania. A former employee at Roxxon Oil, he worked as a scientist in the division dedicated to electromagnetic research. Under the constant threat of being fired, Arvad spent much time furthering his work on electromagnetism, getting little sleep in the process. He eventually fell asleep on the job, unable to save himself from a laboratory accident that would change his life.
He was caught in the electromagnetic field of a device he was working on, which weakened the electromagnetic attraction between the molecules in his body, threatening his life. When his boss learned of the accident, he decided to let Arvad die, but not before demanding whatever scientific applications could be salvaged from the device.
However, Arvad was able to save himself when he learned he was suddenly able to control the level of attraction between his body's molecules.
Will o' the Wisp was forced by his employer, Jonas Harrow, to carry out criminal activities. Spider-Man persuaded him instead to resist Harrow.
He attempted to kill Harrow multiple times but was stopped each time, either by Spider-Man or inadvertently by the Tarantula. Finally, he opted simply to hypnotize the man into confessing his crimes to the police.
Will o' the Wisp later took control of Killer Shrike's battle-suit and kidnapped Dr. Marla Madison, who restored him to his corporeal form. Will o' the Wisp later forced his former partner, James Melvin, to expose the Brand Corporation's illicit activities to the news media.
Will o' the Wisp first encountered the Outlaws while hunting down Spider-Man in connection with a crime. He eventually joined the Outlaws as an adventurer, to rescue the kidnapped daughter of a Canadian official.
Civil War
Years later, Will o' the Wisp would show up in league with the Scarecrow, and later the Molten Man, as part of the Chameleon's plot to get revenge on Peter Parker after Parker unmasked during Civil War.
He was seen among an army of supervillains organized by Hammerhead that was captured by Iron Man and S.H.I.E.L.D. agents.
In The Punisher War Journal vol. 2 #4, the Punisher blew up a bar where Will o' the Wisp was attending a wake for Stilt-Man, after poisoning the guests. It was later mentioned in She-Hulk vol. 2 #17 that "they all had to get their stomachs pumped and be treated for third-degree burns."
Powers and abilities
As a result of exposure to Jackson Arvad and James Melvin's "magno-chamber", Will-O'-The-Wisp has the ability to control the electromagnetic particles that make up his body. This enables him to vary the density of his body, making part or all of it intangible or rock-hard, similar to the synthezoid Vision. He has superhuman strength at higher densities, as well as superhuman speed and durability. He is also capable of flight; in flight at subsonic speeds he appears to be nothing more than a glowing sphere. Will-O'-The-Wisp can mesmerize people for a short period of time, and can will the molecules of his body to oscillate at a small distance from his body, making him look like an ethereal glowing sphere.
Jackson Arvad is a brilliant scientist, especially in the field of electromagnetics, with a Master of Science degree in electrical engineering.
References
External links
Will o' the Wisp at Marvel.com
Will o' the Wisp at Marvel Wiki
Will o' the Wisp at Comic Vine
Will o' the Wisp at MarvelDirectory.com
Characters created by Len Wein
Characters created by Ross Andru
Comics characters introduced in 1977
Fictional characters from Pennsylvania
Fictional characters with density control abilities
Fictional electrical engineers
Fictional physicists
Marvel Comics characters with superhuman strength
Marvel Comics mutates
Marvel Comics scientists
Marvel Comics superheroes
Marvel Comics supervillains
Spider-Man characters | Will o' the Wisp (comics) | [
"Physics"
] | 959 | [
"Density",
"Fictional characters with density control abilities",
"Physical quantities"
] |
5,545,351 | https://en.wikipedia.org/wiki/Recombinase | Recombinases are genetic recombination enzymes.
Site specific recombinases
DNA recombinases are widely used in multicellular organisms to manipulate the structure of genomes and to control gene expression. These enzymes, derived from bacteria, bacteriophages and fungi, catalyze directionally sensitive DNA exchange reactions between short (30–40 nucleotide) target site sequences that are specific to each recombinase. These reactions enable four basic functional modules (excision/insertion, inversion, translocation and cassette exchange), which have been used individually or combined in a wide range of configurations to control gene expression.
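As a rough conceptual sketch of the excision and inversion modules, the toy model below treats target sites as oriented markers in a list of sequence segments. The simplified rules (sites in the same orientation excise the intervening DNA, opposite orientations invert it) loosely follow the common Cre/loxP description; the code is illustrative only, not a model of any real enzyme's biochemistry.

```python
def recombine(segments):
    """Toy site-specific recombination between two target sites.

    `segments` is a list such as ["P", ">site", "geneA", "<site", "T"],
    where ">site" / "<site" mark target sites in forward / reverse
    orientation and the other strings are arbitrary sequence segments.
    """
    idx = [i for i, s in enumerate(segments) if s.lstrip("<>") == "site"]
    if len(idx) != 2:
        return segments  # this toy model only handles exactly two sites
    i, j = idx
    if segments[i][0] == segments[j][0]:
        # Same orientation: excision -- the DNA between the sites is
        # removed (as a circle, in the real reaction); one site remains.
        return segments[:i + 1] + segments[j + 1:]
    # Opposite orientation: inversion -- the intervening DNA is flipped
    # in place (segment-level reversal; internal orientation ignored).
    return segments[:i + 1] + segments[i + 1:j][::-1] + segments[j:]

print(recombine(["P", ">site", "geneA", ">site", "geneB"]))
# ['P', '>site', 'geneB']  (geneA excised)
print(recombine(["P", ">site", "geneA", "geneB", "<site", "T"]))
# ['P', '>site', 'geneB', 'geneA', '<site', 'T']  (segment inverted)
```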
Types include:
Cre recombinase
Hin recombinase
Tre recombinase
FLP recombinase
Homologous recombination
Recombinases have a central role in homologous recombination in a wide range of organisms. Such recombinases have been described in archaea, bacteria, eukaryotes and viruses.
Archaea
The archaeon Sulfolobus solfataricus RadA recombinase catalyzes DNA pairing and strand exchange, central steps in recombinational repair. The RadA recombinase has greater similarity to the eukaryotic Rad51 recombinase than to the bacterial RecA recombinase.
Bacteria
RecA recombinase appears to be universally present in bacteria. RecA has multiple functions, all related to DNA repair. RecA has a central role in the repair of replication forks stalled by DNA damage and in the bacterial sexual process of natural genetic transformation.
Eukaryotes
Eukaryotic Rad51 and its related family members are homologous to the archaeal RadA and bacterial RecA recombinases. Rad51 is highly conserved from yeast to humans. It has a key function in the recombinational repair of DNA damages, particularly double-strand damages such as double-strand breaks. In humans, over- or under-expression of Rad51 occurs in a wide variety of cancers.
During meiosis Rad51 interacts with another recombinase, Dmc1, to form a presynaptic filament that is an intermediate in homologous recombination. Dmc1 function appears to be limited to meiotic recombination. Like Rad51, Dmc1 is homologous to bacterial RecA.
Viruses
Some DNA viruses encode a recombinase that facilitates homologous recombination. A well-studied example is the UvsX recombinase encoded by bacteriophage T4. UvsX is homologous to bacterial RecA. UvsX, like RecA, can facilitate the assimilation of linear single-stranded DNA into a homologous DNA duplex to produce a D-loop.
References
External links
Modification of genetic information
Molecular biology | Recombinase | [
"Chemistry",
"Biology"
] | 610 | [
"Biochemistry",
"Modification of genetic information",
"Molecular genetics",
"Molecular biology"
] |
5,546,292 | https://en.wikipedia.org/wiki/UNESCO%20Science%20Prize | The UNESCO Science Prize is a biennial scientific prize awarded by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to "a person or group of persons for an outstanding contribution they have made to the technological development of a developing member state or region through the application of scientific and technological research (particularly in the fields of education, engineering and industrial development)."
The candidates for the Science Prize are proposed to the Director-General of UNESCO by the governments of member states or by non-governmental organizations. All proposals are judged by a panel of six scientists and engineers. The prize consists of a monetary award and an Albert Einstein Silver Medal, and is awarded in odd-numbered years to coincide with UNESCO's General Conference.
Past Laureates
1968: Robert Simpson Silver "for his discovery of a process for the demineralization of sea water."
1970: The International Maize and Wheat Improvement Centre and the International Rice Research Institute "for their work which made it possible to produce, in the space of a few years, improved strains of cereals."
1972: Viktor Kovda "for his theory on the hydromorphic origin of the soils of the great plains of Asia, Africa, Europe and the Americas" and a team of nine researchers "for their development of the L-D process designed for recovery of steel from low-phosphorus pig iron."
1976: Alfred Champagnat "for his findings on the low-cost mass production of new proteins from petroleum."
1978: A team of research workers from the Lawes Agricultural Trust "for their work on synthetic insecticides related to natural pyrethrum."
1980: Leonardo Mata "for his work on the relationship between malnutrition and infection, particularly in infants" and Vincent Barry's group of scientists from the Medical Research Council (Ireland) "for their work on the synthesis of an anti-leprosy agent, B-663."
1983: Roger Whitehead "for his work on the role of maternal nutrition and lactation in infant growth."
1985: A group of six scientists from the Commonwealth Scientific and Industrial Research Organisation "for their work on the biological control of Salvinia molesta infestations in the Sepik River Basin of Papua New Guinea."
1987: Yuan Longping "for his work leading to the creation of a hybrid rice with high yield potential."
1989: Johanna Döbereiner "for her work in exploiting biological nitrogen fixation as the major source of nitrogen in tropical agriculture."
1991: A group of researchers and engineers from the Instituto Tecnológico Venezolano del Petróleo "for their contribution to the development of hydrocracking distillation and hydrotreatment technology."
1993: Octavio Novaro "for his contribution to the phenomenon of catalysis."
1995: Wang Xuan "for his contribution to the Chinese photocomposition system."
1997: Marcos Moshinsky "for his work in nuclear physics."
1999: Atta ur Rahman "for his work in organic chemistry which has contributed to the development of plant-based therapies for cancer, AIDS and diabetes" and José Leite Lopes "for his contribution to the development of physics in Latin America."
2001: Baltasar Mena Iniesta "for his ability to relate his research in rheology and new materials to technological applications."
2003: Somchart Soponronnarit "for research on areas of renewable energy and drying technology."
2005: Alexander Balankin "for his pioneer contributions in development of fractal mechanics and improving exploration techniques for the oil industry."
References
Science and technology awards
Science
Awards established in 1968 | UNESCO Science Prize | [
"Technology"
] | 782 | [
"Science and technology awards"
] |
5,547,122 | https://en.wikipedia.org/wiki/Prony%27s%20method | Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal.
The method
Let $f(t)$ be a signal consisting of $N$ evenly spaced samples. Prony's method fits a function

$$\hat f(t) = \sum_{i=1}^{M} A_i e^{\sigma_i t} \cos(2\pi f_i t + \phi_i)$$

to the observed $f(t)$. After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms:

$$\hat f(t) = \sum_{i=1}^{M} A_i e^{\sigma_i t} \cos(2\pi f_i t + \phi_i) = \frac{1}{2}\sum_{i=1}^{M} \left( A_i e^{j\phi_i} e^{\lambda_i^{+} t} + A_i e^{-j\phi_i} e^{\lambda_i^{-} t} \right)$$

where
$\lambda_i^{\pm} = \sigma_i \pm j\,2\pi f_i$ are the eigenvalues of the system,
$\sigma_i$ are the damping components,
$2\pi f_i$ are the angular-frequency components,
$\phi_i$ are the phase components,
$A_i$ are the amplitude components of the series,
$j$ is the imaginary unit ($j^2 = -1$).
Representations
Prony's method is essentially a decomposition of a signal with $M$ complex exponentials via the following process:

Regularly sample $\hat f(t)$ so that the $n$-th of $N$ samples may be written as

$$F_n = \hat f(\Delta t\, n) = \sum_{m=1}^{M} B_m e^{\lambda_m \Delta t\, n}, \qquad n = 0, 1, \dots, N-1.$$

If $\hat f(t)$ happens to consist of damped sinusoids, then the complex exponentials occur in conjugate pairs such that

$$B_a = \tfrac{1}{2} A_i e^{j\phi_i}, \quad B_b = \tfrac{1}{2} A_i e^{-j\phi_i}, \quad \lambda_a = \sigma_i + j\,2\pi f_i, \quad \lambda_b = \sigma_i - j\,2\pi f_i,$$

where $a$ and $b$ index the pair of exponentials contributed by the $i$-th sinusoid.

Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist:

$$\hat f(\Delta t\, n) = -\sum_{m=1}^{M} \hat f\big(\Delta t\,(n-m)\big)\, P_m, \qquad n = M, \dots, N-1.$$

The key to Prony's method is that the coefficients in the difference equation are related to the following polynomial:

$$z^{M} + P_1 z^{M-1} + \cdots + P_M = \prod_{m=1}^{M} \left( z - e^{\lambda_m \Delta t} \right).$$

These facts lead to the following three steps within Prony's method:

1) Construct and solve the matrix equation for the $P_m$ values:

$$\begin{bmatrix} F_M \\ \vdots \\ F_{N-1} \end{bmatrix} = -\begin{bmatrix} F_{M-1} & \cdots & F_0 \\ \vdots & \ddots & \vdots \\ F_{N-2} & \cdots & F_{N-M-1} \end{bmatrix} \begin{bmatrix} P_1 \\ \vdots \\ P_M \end{bmatrix}.$$

Note that if $N \neq 2M$, a generalized matrix inverse may be needed to find the values $P_m$.

2) After finding the $P_m$ values, find the roots (numerically if necessary) of the polynomial

$$z^{M} + P_1 z^{M-1} + \cdots + P_M.$$

The $m$-th root of this polynomial will be equal to $e^{\lambda_m \Delta t}$.

3) With the $e^{\lambda_m \Delta t}$ values, the $F_n$ values are part of a system of linear equations that may be used to solve for the $B_m$ values:

$$\begin{bmatrix} F_{k_1} \\ \vdots \\ F_{k_M} \end{bmatrix} = \begin{bmatrix} e^{\lambda_1 \Delta t\, k_1} & \cdots & e^{\lambda_M \Delta t\, k_1} \\ \vdots & \ddots & \vdots \\ e^{\lambda_1 \Delta t\, k_M} & \cdots & e^{\lambda_M \Delta t\, k_M} \end{bmatrix} \begin{bmatrix} B_1 \\ \vdots \\ B_M \end{bmatrix},$$

where $M$ unique values $k_i \in \{0, 1, \dots, N-1\}$ are used. It is possible to use a generalized matrix inverse if more than $M$ samples are used.

Note that solving for $\lambda_m$ will yield ambiguities, since only $e^{\lambda_m \Delta t}$ was solved for, and $e^{\lambda_m \Delta t} = e^{(\lambda_m + j\,2\pi q/\Delta t)\,\Delta t}$ for any integer $q$. This leads to the same Nyquist sampling criteria that discrete Fourier transforms are subject to.
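The three steps map directly onto a few lines of linear algebra. The sketch below is a minimal NumPy illustration under the notation above; the function name and the use of least squares as the generalized inverse are our own choices, not a standard API.

```python
import numpy as np

def prony(F, M):
    """Fit F_n ~ sum_m B_m * exp(lam_m * dt * n) to N uniform samples F,
    returning (lam_m * dt, B_m) for m = 1..M."""
    F = np.asarray(F, dtype=float)
    N = len(F)
    # Step 1: solve F_n = -sum_m P_m F_{n-m} for the P coefficients
    # (least squares plays the role of the generalized inverse when N != 2M).
    A = np.column_stack([F[M - 1 - m : N - 1 - m] for m in range(M)])
    P, *_ = np.linalg.lstsq(A, -F[M:N], rcond=None)
    # Step 2: the roots of z^M + P_1 z^{M-1} + ... + P_M are exp(lam_m * dt).
    roots = np.roots(np.concatenate(([1.0], P))).astype(complex)
    lam_dt = np.log(roots)  # ambiguous up to j*2*pi*q, per the note above
    # Step 3: solve the Vandermonde-like linear system for the B_m.
    V = np.exp(np.outer(np.arange(N), lam_dt))
    B, *_ = np.linalg.lstsq(V, F.astype(complex), rcond=None)
    return lam_dt, B
```

For real damped sinusoids, the recovered lam_dt values occur in complex-conjugate pairs: the real part of each is the damping $\sigma_i \Delta t$ and the imaginary part is the angular frequency $2\pi f_i \Delta t$.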
See also
Generalized pencil-of-function method
Computation of Prony decomposition using Autoregression analysis
Application of Prony decomposition in Time-frequency analysis
Notes
References
Signal processing | Prony's method | [
"Technology",
"Engineering"
] | 483 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
5,547,210 | https://en.wikipedia.org/wiki/Lehmstedt%E2%80%93Tanasescu%20reaction | The Lehmstedt–Tanasescu reaction is a method in organic chemistry for the organic synthesis of acridone derivatives (3) from a 2-nitrobenzaldehyde (1) and an arene compound (2):
The reaction is named after two chemists who devoted part of their careers to research into this synthetic method, the German chemist Kurt Lehmstedt and the Romanian chemist Ion Tănăsescu. Variations of the reaction name include Lehmsted–Tănăsescu reaction, Lehmsted–Tănăsescu acridone synthesis and Lehmsted–Tanasescu acridone synthesis.
Reaction mechanism
In the first step of the reaction mechanism the precursor molecule 2-nitrobenzaldehyde 4 is protonated, often by sulfuric acid, to intermediate 5, followed by an electrophilic attack on benzene (other arenes can be used as well). The resulting benzhydrol 6 cyclizes to 7 and finally to compound 8. Treatment of this intermediate with nitrous acid (sodium nitrite and sulfuric acid) leads to the N-nitroso acridone 11 via intermediates 9 and 10. The N-nitroso group is removed by an acid in the final step. The procedure is an example of a one-pot synthesis.
References
Heterocycle forming reactions
Name reactions | Lehmstedt–Tanasescu reaction | [
"Chemistry"
] | 280 | [
"Name reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
5,547,258 | https://en.wikipedia.org/wiki/Coloured%20Book%20protocols | The Coloured Book protocols were a set of communication protocols for computer networks developed in the United Kingdom in the 1970s. These protocols were designed to enable communication and data exchange between different computer systems and networks. The name originated with each protocol being identified by the colour of the cover of its specification document. The protocols were in use until the 1990s when the Internet protocol suite came into widespread use.
History
In the mid-1970s, the British Post Office Telecommunications division (BPO-T) worked with the academic community in the United Kingdom and the computer industry to develop a set of standards to enable interoperability among different computer systems based on the X.25 protocol suite for packet-switched wide area network (WAN) communication. First defined in 1975, the standards evolved through experience developing protocols for the NPL network in the late 1960s and the Experimental Packet Switched Service in the early 1970s.
The Coloured Book protocols were used on SERCnet from 1980, and SWUCN from 1982, both of which became part of the JANET academic network from 1984. The protocols were influential in the development of computer networks, particularly in the UK, gained some acceptance internationally as the first complete X.25 standard, and gave the UK "several years lead over other countries".
From late 1991, Internet protocols were adopted on the Janet network instead; they were operated simultaneously for a while, until X.25 support was phased out entirely in August 1997.
Protocols
The standards were defined in several documents, each addressing different aspects of computer network communication. They were identified by the colour of the cover:
Pink Book
The Pink Book defined protocols for transport over Ethernet. The protocol was basically X.25 level 3 running over LLC2.
Orange Book
The Orange Book defined protocols for transport over local networks using the Cambridge Ring (computer network).
Yellow Book
The Yellow Book defined the Yellow Book Transport Service (YBTS) protocol, also known as Network Independent Transport Service (NITS), which was mainly run over X.25. It was developed by the Data Communications Protocols Unit of the Department of Industry in the late 1970s. It could also run over TCP. The Simple Mail Transfer Protocol was extended to allow running over NITS.
The Yellow Book Transport Service was somewhat misnamed, as it does not fulfill the Transport role in the OSI 7-layer model. It really occupies the top of the Network layer, making up for X.25's lack of NSAP addressing at the time, which did not appear until the X.25 (1980) revision, and was not available in implementations for some years afterward. YBTS used source routing addressing between YBTS nodes—there was no global addressing scheme at that time.
Green Book
The Green Book defined two protocols to connect terminals across a network: an early version of what became Triple-X PAD running over X.25, and the TS29 protocol modelled on Triple-X PAD, but running over YBTS. It was developed by Post Office Telecommunications. These protocols are similar in functionality to TELNET.
Fawn Book
The Fawn Book defined the Simple Screen Management Protocol (SSMP).
Blue Book
The Blue Book defined the Network-Independent File Transfer Protocol (NIFTP), analogous to Internet FTP, running over YBTS. Unlike Internet FTP, NIFTP was intended for batch mode rather than interactive usage.
Grey Book
The Grey Book defined protocols for e-mail transfer (not file transfer as is sometimes claimed), running over Blue Book FTP.
Red Book
The Red Book defined the Job Transfer and Manipulation Protocol (JTMP), a mechanism for jobs to be transferred from one computer to another, and for the output to be returned to the originating (or another) computer, running over Blue Book FTP.
Legacy
Over time, as technology evolved, many of the concepts and principles from the Coloured Book Protocols were integrated into broader international standards. They remain an important part of the history and evolution of computer networking, showcasing an early effort to establish standards and protocols for efficient and reliable communication between computers. One famous quirk of Coloured Book was that components of hostnames used reverse domain name notation as compared to the Internet standard. For example, an address might be user@UK.AC.HATFIELD.STAR instead of user@star.hatfield.ac.uk.
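For illustration, converting between the two orderings amounts to reversing the dot-separated components of the host part. The helper below is purely hypothetical, not part of any historical gateway software:

```python
def jnt_to_internet(address: str) -> str:
    """Convert a Coloured Book style address (user@UK.AC.HATFIELD.STAR)
    to Internet ordering (user@star.hatfield.ac.uk)."""
    user, host = address.split("@", 1)
    # Reverse the component order and fold to the Internet's usual lowercase.
    return user + "@" + ".".join(reversed(host.split("."))).lower()

print(jnt_to_internet("user@UK.AC.HATFIELD.STAR"))  # user@star.hatfield.ac.uk
```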
See also
Internet in the United Kingdom § History
Protocol Wars
References
Sources
A Dictionary of Computing. Oxford University Press, 2004, s.v. "coloured book"
External links
alt.folklore.computers: "What is the British Grey Book protocol?"
Janet website
History of computing in the United Kingdom
Network protocols
Wide area networks
X.25 | Coloured Book protocols | [
"Technology"
] | 940 | [
"History of computing",
"History of computing in the United Kingdom"
] |
5,547,312 | https://en.wikipedia.org/wiki/Vacuum%20deposition | Vacuum deposition is a group of processes used to deposit layers of material atom-by-atom or molecule-by-molecule on a solid surface. These processes operate at pressures well below atmospheric pressure (i.e., vacuum). The deposited layers can range from a thickness of one atom up to millimeters, forming freestanding structures. Multiple layers of different materials can be used, for example to form optical coatings. The process can be qualified based on the vapor source; physical vapor deposition uses a liquid or solid source and chemical vapor deposition uses a chemical vapor.
Description
The vacuum environment may serve one or more purposes:
reducing the particle density so that the mean free path for collision is long (a rough estimate is sketched after this list)
reducing the particle density of undesirable atoms and molecules (contaminants)
providing a low pressure plasma environment
providing a means for controlling gas and vapor composition
providing a means for mass flow control into the processing chamber.
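To see why low pressure makes the mean free path long, kinetic theory gives $\lambda = k_B T / (\sqrt{2}\,\pi d^2 p)$ for molecules of diameter $d$ at pressure $p$ and temperature $T$. A back-of-the-envelope sketch, with the diameter an assumed typical value for air:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temperature_k=300.0, diameter_m=3.7e-10):
    """Kinetic-theory mean free path lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temperature_k / (
        math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa
    )

print(mean_free_path(101325))  # atmospheric pressure: ~7e-8 m
print(mean_free_path(1e-4))    # high vacuum: tens of metres
```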
Condensing particles can be generated in various ways:
thermal evaporation
sputtering
cathodic arc vaporization
laser ablation
decomposition of a chemical vapor precursor, chemical vapor deposition
In reactive deposition, the depositing material reacts either with a component of the gaseous environment (Ti + N → TiN) or with a co-depositing species (Ti + C → TiC). A plasma environment aids in activating gaseous species (N2 → 2N) and in decomposition of chemical vapor precursors (SiH4 → Si + 4H). The plasma may also be used to provide ions for vaporization by sputtering or for bombardment of the substrate for sputter cleaning and for bombardment of the depositing material to densify the structure and tailor properties (ion plating).
Types
When the vapor source is a liquid or solid, the process is called physical vapor deposition (PVD), which is used in semiconductor devices, thin-film solar panels, and glass coatings. When the source is a chemical vapor precursor, the process is called chemical vapor deposition (CVD). The latter has several variants: low-pressure chemical vapor deposition (LPCVD), plasma-enhanced chemical vapor deposition (PECVD), and plasma-assisted CVD (PACVD). Often a combination of PVD and CVD processes are used in the same or connected processing chambers.
Applications
Electrical conduction: metallic films, resistors, transparent conductive oxides (TCOs), superconducting films & coatings
Semiconductor devices: semiconductor films, electrically insulating films
Solar cells
Optical films: anti-reflective coatings, optical filters
Reflective coatings: mirrors, hot mirrors
Tribological coating: hard coatings, erosion resistant coatings, solid film lubricants
Energy conservation & generation: low emissivity glass coatings, solar absorbing coatings, mirrors, solar thin film photovoltaic cells, smart films
Magnetic films: magnetic recording
Diffusion barrier: gas permeation barriers, vapor permeation barriers, solid state diffusion barriers
Corrosion protection:
Automotive applications: lamp reflectors and trim applications
Vinyl record pressing, manufacture of gold and platinum records
A thickness of less than one micrometre is generally called a thin film, while a thickness greater than one micrometre is called a coating.
See also
Ion plating
Sputter deposition
Cathodic arc deposition
Spin coating
Metallised film
Molecular vapor deposition
References
Bibliography
SVC, "51st Annual Technical Conference Proceedings" (2008) SVC Publications (previous proceeding available on CD)
Anders, Andre (editor) "Handbook of Plasma Immersion Ion Implantation and Deposition" (2000) Wiley-Interscience
Bach, Hans and Dieter Krause (editors) "Thin Films on Glass" (2003) Springer-Verlag
Bunshah, Roitan F (editor). "Handbook of Deposition Technologies for Films and Coatings", second edition (1994)
Glaser, Hans Joachim "Large Area Glass Coating" (2000) Von Ardenne Anlagentechnik GmbH
Glocker and I. Shah (editors), "Handbook of Thin Film Process Technology", Vol.1&2 (2002) Institute of Physics (2 vol. set)
Mahan, John E. "Physical Vapor Deposition of Thin Films" (2000) John Wiley & Sons
Mattox, Donald M. "Handbook of Physical Vapor Deposition (PVD) Processing" 2nd edition (2010) Elsevier
Mattox, Donald M. "The Foundations of Vacuum Coating Technology" (2003) Noyes Publications
Mattox, Donald M. and Vivivenne Harwood Mattox (editors) "50 Years of Vacuum Coating Technology and the Growth of the Society of Vacuum Coaters" (2007), Society of Vacuum Coaters
Westwood, William D. "Sputter Deposition", AVS Education Committee Book Series, Vol. 2 (2003) AVS
Willey, Ronald R. "Practical Monitoring and Control of Optical Thin Films (2007)" Willey Optical, Consultants
Willey, Ronald R. "Practical Equipment, Materials, and Processes for Optical Thin Films" (2007) Willey Optical, Consultants
Thin film deposition
Vacuum
Industrial processes | Vacuum deposition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,037 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Vacuum",
"Planes (geometry)",
"Solid state engineering",
"Matter"
] |
5,547,558 | https://en.wikipedia.org/wiki/Corey%E2%80%93Winter%20olefin%20synthesis | The Corey–Winter olefin synthesis (also known as Corey–Winter–Eastwood olefination) is a series of chemical reactions for converting 1,2-diols into olefins. It is named for the American chemist and Nobelist Elias James Corey and the American-Estonian chemist Roland Arthur Edwin Winter.
Often, thiocarbonyldiimidazole is used instead of thiophosgene as shown above, since thiophosgene has a toxicity profile similar to that of phosgene, whereas thiocarbonyldiimidazole is a much safer alternative.
Mechanism
The reaction mechanism involves the formation of a cyclic thiocarbonate from the diol and thiophosgene. The second step involves treatment with trimethyl phosphite, which attacks the sulfur atom, producing S=P(OMe)3 (driven by the formation of a strong P=S double bond) and leaving a carbene. This carbene collapses with loss of carbon dioxide to give the olefin.
An alternative mechanism does not involve a free carbene intermediate, but rather involves attack of the carbanion by a second molecule of trimethylphosphite with concomitant cleavage of the sulfur-carbon bond. The phosphorus stabilized carbanion then undergoes an elimination to give the alkene, along with an acyl phosphite, which then decarboxylates.
The Corey-Winter olefination is a stereospecific reaction: a trans-diol gives a trans-alkene, while a cis-diol gives a cis-alkene as the product. For instance, cis- and trans-1,2-cyclodecanediol gives the respective cis- and trans-cyclodecene.
References
Olefination reactions
Elimination reactions
Organic redox reactions
Name reactions | Corey–Winter olefin synthesis | [
"Chemistry"
] | 387 | [
"Name reactions",
"Organic redox reactions",
"Olefination reactions",
"Organic reactions"
] |
5,547,607 | https://en.wikipedia.org/wiki/Mode%20of%20action | In pharmacology and biochemistry, mode of action (MoA) describes a functional or anatomical change, resulting from the exposure of a living organism to a substance. In comparison, a mechanism of action (MOA) describes such changes at the molecular level.
A mode of action is important in classifying chemicals, as it represents an intermediate level of complexity in between molecular mechanisms and physiological outcomes, especially when the exact molecular target has not yet been elucidated or is subject to debate. A mechanism of action of a chemical could be "binding to DNA" while its broader mode of action would be "transcriptional regulation". However, there is no clear consensus and the term mode of action is also often used, especially in the study of pesticides, to describe molecular mechanisms such as action on specific nuclear receptors or enzymes. Despite this, there are classification attempts, such as the HRAC's classification to manage pesticide resistance.
See also
Mechanism of action in pharmaceuticals
Adverse outcome pathway
References
Pharmacodynamics
Medicinal chemistry | Mode of action | [
"Chemistry",
"Biology"
] | 211 | [
"Pharmacology",
"Pharmacodynamics",
"Medicinal chemistry stubs",
"Medicinal chemistry",
"nan",
"Biochemistry",
"Pharmacology stubs"
] |
5,548,053 | https://en.wikipedia.org/wiki/Coding%20best%20practices | Coding best practices or programming best practices are a set of informal, sometimes personal, rules (best practices) that many software developers, in computer programming follow to improve software quality. Many computer programs require being robust and reliable for long periods of time, so any rules need to facilitate both initial development and subsequent maintenance of source code by people other than the original authors.
In the ninety–ninety rule, Tom Cargill explains why programming projects often run late: "The first 90% of the code takes the first 90% of the development time. The last 10% takes another 90% of the time." Any guidance which can redress this lack of foresight is worth considering.
The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.
Software quality
As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both effort required and efficiency. Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes.
Sommerville has identified four generalized attributes which are not concerned with what a program does, but how well the program does it: Maintainability, dependability, efficiency and usability.
Weinberg has identified four targets which a good program should meet:
Does a program meet its specification ("correct output for each possible input")?
Is the program produced on schedule (and within budget)?
How adaptable is the program to cope with changing requirements?
Is the program efficient enough for the environment in which it is used?
Hoare has identified seventeen objectives related to software quality, including:
Clear definition of purpose.
Simplicity of use.
Ruggedness (difficult to misuse, kind to errors).
Early availability (delivered on time when needed).
Reliability.
Extensibility in the light of experience.
Brevity.
Efficiency (fast enough for the purpose to which it is put).
Minimum cost to develop.
Conformity to any relevant standards (including programming language-specific standards).
Clear, accurate and precise user documents.
Prerequisites
Before coding starts, it is important to ensure that all necessary prerequisites have been completed (or have at least progressed far enough to provide a solid foundation for coding). If the various prerequisites are not satisfied, then the software is likely to be unsatisfactory, even if it is completed.
From Meek & Heath: "What happens before one gets to the coding stage is often of crucial importance to the success of the project."
The prerequisites outlined below cover such matters as:
How is the development structured? (life cycle)
What is the software meant to do? (requirements)
What is the overall structure of the software system? (architecture)
What is the detailed design of individual components? (design)
What is the choice of programming language(s)?
For small simple projects it may be feasible to combine architecture with design and adopt a very simple life cycle.
Life cycle
A software development methodology is a framework that is used to structure, plan, and control the life cycle of a software product. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, agile software development, rapid application development, and extreme programming.
The waterfall model is a sequential development approach; in particular, it assumes that the requirements can be completely defined at the start of a project. However, McConnell quotes three studies that indicate that, on average, requirements change by around 25% during a project. The other methodologies mentioned above all attempt to reduce the impact of such requirement changes, often by some form of step-wise, incremental, or iterative approach. Different methodologies may be appropriate for different development environments.
Since its introduction in 2001, agile software development has grown in popularity, fueled by software developers seeking a more iterative, collaborative approach to software development.
Requirements
McConnell states: "The first prerequisite you need to fulfill before beginning construction is a clear statement of the problem the system is supposed to solve."
Meek and Heath emphasise that a clear, complete, precise, and unambiguous written specification is the target to aim for. Note that it may not be possible to achieve this target, and the target is likely to change anyway (as mentioned in the previous section).
Sommerville distinguishes between less detailed user requirements and more detailed system requirements. He also distinguishes between functional requirements (e.g. update a record) and non-functional requirements (e.g. response time must be less than 1 second).
Architecture
Hoare points out: "there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies; the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."
Software architecture is concerned with deciding what has to be done and which program component is going to do it (how something is done is left to the detailed design phase below). This is particularly important when a software system contains more than one program since it effectively defines the interface between these various programs. It should include some consideration of any user interfaces as well, without going into excessive detail.
Any non-functional system requirements (response time, reliability, maintainability, etc.) need to be considered at this stage.
The software architecture is also of interest to various stakeholders (sponsors, end-users, etc.) since it gives them a chance to check that their requirements can be met.
Design
The primary purpose of design is to fill in the details which have been glossed over in the architectural design. The intention is that the design should be detailed enough to provide a good guide for actual coding, including details of any particular algorithms to be used. For example, at the architectural level, it may have been noted that some data has to be sorted, while at the design level, it is necessary to decide which sorting algorithm is to be used. As a further example, if an object-oriented approach is being used, then the details of the objects must be determined (attributes and methods).
Choice of programming language(s)
Mayer states: "No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes. Understanding the problem and associated programming requirements is necessary for choosing the language best suited for the solution."
From Meek & Heath: "The essence of the art of choosing a language is to start with the problem, decide what its requirements are, and their relative importance since it will probably be impossible to satisfy them all equally well. The available languages should then be measured against the list of requirements, and the most suitable (or least unsatisfactory) chosen."
It is possible that different programming languages may be appropriate for different aspects of the problem. If the languages or their compilers permit, it may be feasible to mix routines written in different languages within the same program.
Even if there is no choice as to which programming language is to be used, McConnell provides some advice: "Every programming language has strengths and weaknesses. Be aware of the specific strengths and weaknesses of the language you're using."
Coding standards
This section is also really a prerequisite to coding, as McConnell points out: "Establish programming conventions before you begin programming. It's nearly impossible to change code to match them later."
As listed near the end of coding conventions, there are different conventions for different programming languages, so it may be counterproductive to apply the same conventions across different languages. It is important to note that there is no one particular coding convention for any programming language. Every organization has a custom coding standard for each type of software project. It is, therefore, imperative that the programmer chooses or makes up a particular set of coding guidelines before the software project commences. Some coding conventions are generic, which may not apply for every software project written with a particular programming language.
The use of coding conventions is particularly important when a project involves more than one programmer (there have been projects with thousands of programmers). It is much easier for a programmer to read code written by someone else if all code follows the same conventions.
For some examples of bad coding conventions, Roedy Green provides a lengthy (tongue-in-cheek) article on how to produce unmaintainable code.
Commenting
Due to time restrictions or enthusiastic programmers who want immediate results for their code, commenting of code often takes a back seat. Programmers working as a team have found it better to leave comments behind since coding usually follows cycles, or more than one person may work on a particular module. However, some commenting can decrease the cost of knowledge transfer between developers working on the same module.
In the early days of computing, one commenting practice was to leave a brief description of the following:
Name of the module
Purpose of the Module
Description of the Module
Original Author
Modifications
Authors who modified code with a description on why it was modified.
The "description of the module" should be as brief as possible but without sacrificing clarity and comprehensiveness.
However, the last two items have largely been obsoleted by the advent of revision control systems. Modifications and their authorship can be reliably tracked by using such tools rather than by using comments.
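A minimal sketch of such a header in a modern codebase might look like the module docstring below (the contents are entirely illustrative), with authorship and modification history deliberately left to the version control system:

```python
"""stock_report: summarize warehouse stock levels.

Purpose: produce the nightly stock summary consumed by operations.
Description: reads the stock table, aggregates quantities per warehouse
and SKU, and writes a CSV report. Kept deliberately brief; see the
repository history for authors and modifications.
"""
```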
Also, if complicated logic is being used, it is a good practice to leave a comment "block" near that part so that another programmer can understand what exactly is happening.
Unit testing can be another way to show how code is intended to be used.
Naming conventions
Use of proper naming conventions is considered good practice. Sometimes programmers tend to use X1, Y1, etc. as variables and forget to replace them with meaningful ones, causing confusion.
It is usually considered good practice to use descriptive names.
Example: A variable for taking in weight as a parameter for a truck can be named TrkWeight, TruckWeightKilograms or Truck_Weight_Kilograms, with TruckWeightKilograms (See Pascal case naming of variables) often being the preferable one since it is instantly recognizable, but naming convention is not always consistent between projects and/or companies.
Keep the code simple
The code that a programmer writes should be simple. Complicated logic for achieving a simple thing should be kept to a minimum since the code might be modified by another programmer in the future. The logic one programmer implemented may not make perfect sense to another. So, always keep the code as simple as possible.
Portability
Program code should not contain "hard-coded" (literal) values referring to environmental parameters, such as absolute file paths, file names, user names, host names, IP addresses, URLs, and UDP/TCP ports. Otherwise, the application will not run on a host that has a different design than anticipated. A careful programmer can parametrize such variables and configure them for the hosting environment outside of the application proper (for example, in property files, on an application server, or even in a database). Compare the mantra of a "single point of definition" (SPOD).
As an extension, resources such as XML files should also contain variables rather than literal values, otherwise, the application will not be portable to another environment without editing the XML files. For example, with J2EE applications running in an application server, such environmental parameters can be defined in the scope of the JVM, and the application should get the values from there.
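As a small sketch of such parametrization (the variable names here are illustrative, not a convention), environment-specific values can be read from the hosting environment rather than hard-coded:

```python
import os

# Supplied by the hosting environment (or a property file), giving a
# single point of definition instead of literals scattered through the code.
DB_HOST = os.environ.get("APP_DB_HOST", "localhost")
REPORT_DIR = os.environ.get("APP_REPORT_DIR", "/var/tmp/reports")

def report_path(filename: str) -> str:
    """Build an output path without hard-coding an absolute location."""
    return os.path.join(REPORT_DIR, filename)
```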
Scalability
Design code with scalability as a design goal, because software projects very often grow as new features are added. Therefore, the facility to add new features to a software code base becomes an invaluable aspect of writing software.
Reusability
Re-use is a very important design goal in software development. Re-use cuts development costs and also reduces development time if the components or modules which are reused are already tested. Very often, software projects start with an existing baseline that contains the project in its prior version and, depending on the project, many existing software modules and components are reused, which reduces development and testing time and therefore increases the probability of delivering a software project on schedule.
Construction guidelines in brief
A general overview of all of the above:
Know what the code block must perform
Maintain naming conventions which are uniform throughout.
Indicate a brief description of what a variable is for (reference to commenting)
Correct errors as they occur.
Keep your code simple
Design code with scalability and reuse in mind.
Code development
Code building
A best practice for building code involves daily builds and testing, or better still continuous integration, or even continuous delivery.
Testing
Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively; meaning that test cases are planned before coding starts, and test cases are developed while the application is being designed and coded.
Debugging the code and correcting errors
Programmers tend to write the complete code and then begin debugging and checking for errors. Though this approach can save time in smaller projects, bigger and more complex ones tend to have too many variables and functions that need attention. Therefore, it is good to debug every module once it is done, rather than the entire program. This saves time in the long run, so that one does not end up wasting a lot of time figuring out what is wrong. Unit tests for individual modules and/or functional tests for web services and web applications can help with this.
Deployment
Deployment is the final stage of releasing an application for users. Some best practices are:
Keep the installation structure simple: Files and directories should be kept to a minimum. Don’t install anything that’s never going to be used.
Keep only what is needed: The software configuration management activities must make sure this is enforced. Unused resources (old or failed versions of files, source code, interfaces, etc.) must be archived somewhere else to keep newer builds lean.
Keep everything updated: The software configuration management activities must make sure this is enforced. For delta-based deployments, make sure the versions of the resources that are already deployed are the latest before deploying the deltas. If not sure, perform a deployment from scratch (delete everything first and then re-deploy).
Adopt a multi-stage strategy: Depending on the size of the project, sometimes more deployments are needed.
Have a rollback strategy: There must be a way to roll back to a previous (working) version.
Rely on automation for repeatable processes: There's far too much room for human error, deployments should not be manual. Use a tool that is native to each operating system or, use a scripting language for cross-platform deployments.
Re-create the real deployment environment: Consider everything (routers, firewalls, web servers, web browsers, file systems, etc.)
Do not change deployment procedures and scripts on-the-fly and, document such changes: Wait for a new iteration and record such changes appropriately.
Customize deployment: Newer software products such as APIs, micro-services, etc. require specific considerations for successful deployment.
Reduce risk from other development phases: If other activities such as testing and configuration management are wrong, deployment surely will fail.
Consider the influence each stakeholder has: Organizational, social, governmental considerations.
See also
Best practice
List of tools for static code analysis
Motor Industry Software Reliability Association (MISRA)
Software Assurance
Software quality
List of software development philosophies
The Cathedral and the Bazaar - book comparing top-down vs. bottom-up open-source software
Davis 201 Principles of Software Development
Where's the Theory for Software Engineering?
Don't Make Me Think (Principles of intuitive navigation and information design)
Notes
References
Enhancing the Development Life Cycle to Product Secure Software, V2.0 Oct. 2008 describes the security principles and practices that software developers, testers, and integrators can adopt to achieve the twin objectives of producing more secure software-intensive systems, and verifying the security of the software they produce.
External links
Paul Burden, co-author of the MISRA C Coding Standards and PRQA's representative on the MISRA C working group for more than 10 years discusses a common coding standard fallacy: "we don't need a coding standard!, we just need to catch bugs!"
Software development process
Computer programming | Coding best practices | [
"Technology",
"Engineering"
] | 3,387 | [
"Software engineering",
"Computer programming",
"Computers"
] |
5,548,333 | https://en.wikipedia.org/wiki/Specialty%20engineering | In the domain of systems engineering, Specialty Engineering is defined as and includes the engineering disciplines that are not typical of the main engineering effort. More common engineering efforts in systems engineering such as hardware, software, and human factors engineering may be used as major elements in a majority of systems engineering efforts and therefore are not viewed as "special".
Examples of specialty engineering include electromagnetic interference, safety, and physical security.
Less common engineering domains such as electromagnetic interference, electrical grounding, safety, security, electrical power filtering/uninterruptible supply, manufacturability, and environmental engineering may be included in systems engineering efforts where they have been identified to address special system implementations. These less common but just as important engineering efforts are then viewed as "specialty engineering".
However, if the specific system has a standard implementation of environmental or security for example, the situation is reversed and the human factors engineering or hardware/software engineering may be the "specialty engineering" domain.
The key takeaway is that the context of the systems engineering project and the unique needs of the project are fundamental when considering which efforts are the specialty engineering efforts.
The benefit of citing "specialty engineering" in planning is the notice to all team levels that special management and science factors may need to be accounted for and may influence the project.
Specialty engineering may be cited by commercial entities and others to specify their unique abilities.
References
Eisner, Howard. (2002). "Essentials of Project and Systems Engineering Management". Wiley. p. 217.
Systems engineering
Engineering disciplines | Specialty engineering | [
"Engineering"
] | 311 | [
"Systems engineering",
"nan"
] |
5,548,352 | https://en.wikipedia.org/wiki/W-shingling | In natural language processing a w-shingling is a set of unique shingles (therefore n-grams) each of which is composed of contiguous subsequences of tokens within a document, which can then be used to ascertain the similarity between documents. The symbol w denotes the quantity of tokens in each shingle selected, or solved for.
The document, "a rose is a rose is a rose" can therefore be maximally tokenized as follows:
(a,rose,is,a,rose,is,a,rose)
The set of all contiguous sequences of 4 tokens (thus w = 4, giving 4-grams) is
{ (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is), (a,rose,is,a), (rose,is,a,rose) }, which can then be reduced (maximally shingled in this particular instance) to the set of unique shingles { (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is) }.
Resemblance
For a given shingle size, the degree to which two documents A and B resemble each other can be expressed as the ratio of the magnitudes of their shinglings' intersection and union:

$$r(A,B) = \frac{|S(A) \cap S(B)|}{|S(A) \cup S(B)|}$$

where $S(A)$ denotes the shingling of document A and $|A|$ is the size of set A. The resemblance is a number in the range [0,1], where 1 indicates that the two documents are identical. This definition is identical with the Jaccard coefficient describing similarity and diversity of sample sets.
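Both the shingling and the resemblance ratio are straightforward to compute. A small sketch, with whitespace tokenization as a simplification:

```python
def shingles(text, w=4):
    """Return the set of unique w-token shingles of a document."""
    tokens = text.split()
    return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def resemblance(a, b, w=4):
    """Jaccard resemblance of two documents' w-shinglings, in [0, 1]."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

doc = "a rose is a rose is a rose"
print(len(shingles(doc)))     # 3 unique 4-shingles, as in the example above
print(resemblance(doc, doc))  # 1.0 -- identical documents
```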
See also
Bag-of-words model
Jaccard index
Concept mining
k-mer
MinHash
N-gram
Rabin fingerprint
Rolling hash
Vector space model
References
Does not yet use the term "shingling".
Natural language processing | W-shingling | [
"Technology"
] | 376 | [
"Natural language processing",
"Natural language and computing"
] |
8,725,881 | https://en.wikipedia.org/wiki/Aspen%20Achievement%20Academy | Aspen Achievement Academy was a wilderness therapy program for adolescents, based in Loa, Utah.
It was operated as a part of Aspen Education Group.
In March 2011 the program was merged, in name only, with another wilderness therapy program in Utah, Outback Therapeutic Expeditions.
According to the program's promotional materials, Aspen Achievement Academy enrolled adolescent males and females, ages 13–17, with a history of moderate to severe emotional and behavioral problems, such as low self-esteem, academic underachievement, substance abuse, and family conflict. The program had a flexible length of stay, with a minimum of 35 days. Some parents used the services of a teen escort company to transport their children to the site.
The program's website stated that the program was JCAHO certified and licensed as an Outdoor Treatment Program by the State of Utah Department of Human Services. It had memberships in the National Association of Therapeutic Schools and Programs and the Outdoor Behavioral Healthcare Industry Council.
In news media and popular culture
Aspen Achievement Academy has been a subject of several media reports and works of popular culture:
The 1999 book Shouting at the Sky: Troubled Teens and the Promise of the Wild by Gary Ferguson, recounts the author's experiences and observations during several months he spent in the wilderness with teens at Aspen Achievement Academy.
The third season of the UK TV series Brat Camp was filmed at Aspen Achievement Academy, and aired in the UK beginning in February 2006.
In January 1996, six teenagers ran away from an Aspen group. They were found by law enforcement officials and returned to the program, but the incident raised concerns that future escapees might assault tourists, hikers or recreationists on the public lands that Aspen used. Afterward, the Bureau of Land Management, which manages these lands, was reported to have conducted a review to determine whether to renew or terminate Aspen's access permit.
In April 2007 a 16-year-old male student died after hanging himself with a piece of seatbelt webbing.
History
Aspen Achievement Academy (AAA) was founded in 1988 by Doug Nelson, Dr. Keith Hooker, Doug Cloward, and Madolyn Liebing, Ph.D. It was originally named Wilderness Academy. AAA is known for being the first wilderness therapy program to have a clinician (Liebing) who provided individual therapy. AAA was also the first Utah State-licensed wilderness therapy program.
References
White, W. (2012) Chapter 2: “A History of Adventure Therapy” in Adventure Therapy: Theory, Practice, and Research by Gass, M, Gillis, L. Russell, K. Routledge/Bruner-Mazel Press.
External links
Aspen Achievement Academy program homepage
Behavior modification
Troubled teen programs | Aspen Achievement Academy | [
"Biology"
] | 548 | [
"Behavior modification",
"Behavior",
"Human behavior",
"Behaviorism"
] |
8,726,320 | https://en.wikipedia.org/wiki/Control%20valve | A control valve is a valve used to control fluid flow by varying the size of the flow passage as directed by a signal from a controller. This enables the direct control of flow rate and the consequential control of process quantities such as pressure, temperature, and liquid level.
In automatic control terminology, a control valve is termed a "final control element".
Operation
The opening or closing of automatic control valves is usually done by electrical, hydraulic or pneumatic actuators. Normally with a modulating valve, which can be set to any position between fully open and fully closed, valve positioners are used to ensure the valve attains the desired degree of opening.
Air-actuated valves are commonly used because of their simplicity, as they only require a compressed air supply, whereas electrically operated valves require additional cabling and switchgear, and hydraulically actuated valves require high-pressure supply and return lines for the hydraulic fluid.
The pneumatic control signals are traditionally based on a pressure range of 3–15 psi (0.2–1.0 bar), or more commonly now, an electrical signal of 4-20mA for industry, or 0–10 V for HVAC systems. Electrical control now often includes a "Smart" communication signal superimposed on the 4–20 mA control current, such that the health and verification of the valve position can be signalled back to the controller. The HART, Fieldbus Foundation, and Profibus are the most common protocols.
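The mapping from a 4–20 mA signal to a commanded opening is a simple linear scaling. A minimal sketch for an "air or current to open" valve (the clamping behaviour is a design choice, not part of any standard):

```python
def signal_to_opening(current_ma: float) -> float:
    """Convert a 4-20 mA control signal to a valve opening in percent."""
    fraction = (current_ma - 4.0) / (20.0 - 4.0)
    return 100.0 * min(1.0, max(0.0, fraction))  # clamp to the 0-100% range

print(signal_to_opening(4.0))   # 0.0   -> fully closed
print(signal_to_opening(12.0))  # 50.0  -> half open
print(signal_to_opening(20.0))  # 100.0 -> fully open
```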
An automatic control valve consists of three main parts in which each part exist in several types and designs:
Valve actuator – which moves the valve's modulating element, such as ball or butterfly.
Valve positioner – which ensures the valve has reached the desired degree of opening. This overcomes the problems of friction and wear.
Valve body – in which the modulating element, a plug, globe, ball or butterfly, is contained.
Control action
Taking the example of an air-operated valve, there are two control actions possible:
"Air or current to open" – The flow restriction decreases with increased control signal value.
"Air or current to close" – The flow restriction increases with increased control signal value.
There can also be failure to safety modes:
"Air or control signal failure to close" – On failure of compressed air to the actuator, the valve closes under spring pressure or by backup power.
"Air or control signal failure to open" – On failure of compressed air to actuator, the valve opens under spring pressure or by backup power.
The modes of failure operation are requirements of the failure to safety process control specification of the plant. In the case of cooling water it may be to fail open, and the case of delivering a chemical it may be to fail closed.
Valve positioners
The fundamental function of a positioner is to deliver pressurized air to the valve actuator, such that the position of the valve stem or shaft corresponds to the set point from the control system. Positioners are typically used when a valve requires throttling action. A positioner requires position feedback from the valve stem or shaft and delivers pneumatic pressure to the actuator to open and close the valve. The positioner must be mounted on or near the control valve assembly. There are three main categories of positioners, depending on the type of control signal, the diagnostic capability, and the communication protocol: pneumatic, analog, and digital.
Pneumatic positioners
Processing units may use pneumatic pressure signaling as the control set point to the control valves. Pressure is typically modulated between 20.7 and 103 kPa (3 to 15 psig) to move the valve from 0 to 100% position. In a common pneumatic positioner, the position of the valve stem or shaft is compared with the position of a bellows that receives the pneumatic control signal. When the input signal increases, the bellows expands and moves a beam. The beam pivots about an input axis, which moves a flapper closer to the nozzle. The nozzle pressure increases, which increases the output pressure to the actuator through a pneumatic amplifier relay. The increased output pressure to the actuator causes the valve stem to move.
Stem movement is fed back to the beam by means of a cam. As the cam rotates, the beam pivots about the feedback axis to move the flapper slightly away from the nozzle. The nozzle pressure decreases and reduces the output pressure to the actuator. Stem movement continues, backing the flapper away from the nozzle until equilibrium is reached. When the input signal decreases, the bellows contracts (aided by an internal range spring) and the beam pivots about the input axis to move the flapper away from the nozzle. The nozzle pressure decreases and the relay permits the release of diaphragm casing pressure to the atmosphere, which allows the actuator stem to move upward.
Through the cam, stem movement is fed back to the beam to reposition the flapper closer to the nozzle. When equilibrium conditions are obtained, stem movement stops and the flapper is positioned to prevent any further decrease in actuator pressure.
Analog positioners
The second type of positioner is an analog I/P positioner. Most modern processing units use a 4 to 20 mA DC signal to modulate the control valves. This introduces electronics into the positioner design and requires that the positioner convert the electronic current signal into a pneumatic pressure signal (current-to-pneumatic, or I/P). In a typical analog I/P positioner, the converter receives a DC input signal and provides a proportional pneumatic output signal through a nozzle/flapper arrangement. The pneumatic output signal provides the input signal to the pneumatic positioner. Otherwise, the design is the same as the pneumatic positioner.
Digital positioners
While pneumatic positioners and analog I/P positioners provide basic valve position control, digital valve controllers add another dimension to positioner capabilities. This type of positioner is a microprocessor-based instrument. The microprocessor enables diagnostics and two-way communication to simplify setup and troubleshooting.
In a typical digital valve controller, the control signal is read by the microprocessor, processed by a digital algorithm, and converted into a drive current signal to the I/P converter. The microprocessor performs the position control algorithm rather than a mechanical beam, cam, and flapper assembly. As the control signal increases, the drive signal to the I/P converter increases, increasing the output pressure from the I/P converter. This pressure is routed to a pneumatic amplifier relay and provides two output pressures to the actuator. With increasing control signal, one output pressure always increases and the other output pressure decreases.
Double-acting actuators use both outputs, whereas single-acting actuators use only one output. The changing output pressure causes the actuator stem or shaft to move. Valve position is fed back to the microprocessor. The stem continues to move until the correct position is attained. At this point, the microprocessor stabilizes the drive signal to the I/P converter until equilibrium is obtained.
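A minimal sketch of such a position-control loop follows. A simple PI control law is assumed purely for illustration (vendors implement their own proprietary algorithms), and every gain below is an arbitrary round number.

    def digital_positioner(target=0.60, steps=1000, dt=0.01,
                           kp=2.0, ki=1.0, actuator_gain=1.5):
        """PI loop: microprocessor adjusts I/P drive until position ~ target."""
        position, integral = 0.0, 0.0
        for _ in range(steps):
            error = target - position
            integral += error * dt
            drive = kp * error + ki * integral   # drive signal to I/P converter
            # The I/P output pressure moves the actuator, modelled here as a
            # pure integrator clamped to the physical travel limits.
            position += actuator_gain * drive * dt
            position = min(max(position, 0.0), 1.0)
        return position

    print(f"stem position after settling: {digital_positioner():.3f} (target 0.600)")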
In addition to the function of controlling the position of the valve, a digital valve controller has two additional capabilities: diagnostics and two-way digital communication.
Widely used communication protocols include HART, FOUNDATION fieldbus, and PROFIBUS.
Advantages of placing a smart positioner on a control valve:
Automatic calibration and configuration of positioner.
Real-time diagnostics.
Reduced cost of loop commissioning, including installation and calibration.
Use of diagnostics to maintain loop performance levels.
Improved process control accuracy that reduces process variability.
Types of control valve
Control valves are classified by attributes and features.
Based on the pressure drop profile
High recovery valve: These valves typically regain, at the outlet, most of the static pressure drop from the inlet to the vena contracta. They are characterised by a lower recovery coefficient. Examples: butterfly valve, ball valve, plug valve, gate valve
Low recovery valve: These valves typically regain, at the outlet, little of the static pressure drop from the inlet to the vena contracta. They are characterised by a higher recovery coefficient. Examples: globe valve, angle valve. (A numeric sketch of the recovery factor follows.)
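One common way to quantify this distinction is the liquid pressure recovery factor F_L, with F_L^2 = (P1 - P2) / (P1 - Pvc). The sketch below uses made-up pressures in bar: a high-recovery valve yields a small F_L, a low-recovery valve an F_L near 1.

    from math import sqrt

    def recovery_factor(p_inlet, p_outlet, p_vena_contracta):
        """Liquid pressure recovery factor F_L from three static pressures."""
        return sqrt((p_inlet - p_outlet) / (p_inlet - p_vena_contracta))

    # Illustrative values only: inlet 10 bar, vena contracta 4 bar.
    print("butterfly-like (outlet 8.5 bar):", round(recovery_factor(10.0, 8.5, 4.0), 2))
    print("globe-like     (outlet 4.6 bar):", round(recovery_factor(10.0, 4.6, 4.0), 2))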
Based on the movement profile of the controlling element
Sliding stem: The valve stem / plug moves in a linear, or straight line motion. Examples: Globe valve, angle valve, wedge type gate valve
Rotary valve: The valve disc rotates. Examples: Butterfly valve, ball valve
Based on the functionality
Control valve: Controls flow parameters proportional to an input signal received from the central control system. Examples: Globe valve, angle valve, ball valve
Shut-off / On-off valve: These valves are either completely open or closed. Examples: Gate valve, ball valve, globe valve, angle valve, pinch valve, diaphragm valve
Check valve: Allows flow only in a single direction
Steam conditioning valve: Regulates the pressure and temperature of inlet media to required parameters at outlet. Examples: Turbine bypass valve, process steam letdown station
Spring-loaded safety valve: Held closed by the force of a spring, which retracts to open when the force from the inlet pressure equals or exceeds the spring force
Based on the actuating medium
Manual valve: Actuated by hand wheel
Pneumatic valve: Actuated using a compressible medium such as air, hydrocarbon gas, or nitrogen, with a spring-diaphragm, piston-cylinder or piston-spring type actuator
Hydraulic valve: Actuated by a non-compressible medium such as water or oil
Electric valve: Actuated by an electric motor
A wide variety of valve types and control operations exist. However, there are two main forms of action, the sliding stem and the rotary.
The most common and versatile types of control valves are sliding-stem globe, V-notch ball, butterfly and angle types. Their popularity derives from rugged construction and the many options available that make them suitable for a variety of process applications. Control valve bodies may be categorized as below:
List of common types of control valve
Sliding stem
Rotary
Other
See also
References
External links
Control Valve Handbook
Fluid Control Research Institute
Valve World Magazine
New era of valve design and engineering
Machine learning based Valve Design Application
Control devices
Valves | Control valve | [
"Physics",
"Chemistry",
"Engineering"
] | 2,113 | [
"Control devices",
"Physical systems",
"Control engineering",
"Valves",
"Hydraulics",
"Piping"
] |
8,726,659 | https://en.wikipedia.org/wiki/Structural%20Engineering%20exam | The Structural Engineering exam is a written examination given by state licensing boards in the United States as part of the testing for licensing structural engineers. This exam is written by the National Council of Examiners for Engineering and Surveying. It is given in eight-hour segments over two days, with the first day covering vertical forces. Problems involving lateral forces are covered on the second day. Each day's morning session features multiple-choice questions, while the afternoon sessions are devoted to essay questions.
References
Structural engineering
Standardized tests in the United States
Engineering education | Structural Engineering exam | [
"Engineering"
] | 109 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
8,726,682 | https://en.wikipedia.org/wiki/Etching%20%28microfabrication%29 | Etching is used in microfabrication to chemically remove layers from the surface of a wafer during manufacturing. Etching is a critically important process module in fabrication, and every wafer undergoes many etching steps before it is complete.
For many etch steps, part of the wafer is protected from the etchant by a "masking" material which resists etching. In some cases, the masking material is a photoresist which has been patterned using photolithography. Other situations require a more durable mask, such as silicon nitride.
Etching media and technology
The two fundamental types of etchants are liquid-phase ("wet") and plasma-phase ("dry"). Each of these exists in several varieties.
Wet etching
The first etching processes used liquid-phase ("wet") etchants. This process is now largely outdated: it was widely used until the late 1980s, when it was largely superseded by dry plasma etching. The wafer can be immersed in a bath of etchant, which must be agitated to achieve good process control. For instance, buffered hydrofluoric acid (BHF) is used commonly to etch silicon dioxide over a silicon substrate.
Different specialized etchants can be used to characterize the surface etched.
Wet etchants are usually isotropic, which leads to a large bias when etching thick films. They also require the disposal of large amounts of toxic waste. For these reasons, they are seldom used in state-of-the-art processes. However, the photographic developer used for photoresist resembles wet etching.
As an alternative to immersion, single wafer machines use the Bernoulli principle to employ a gas (usually pure nitrogen) to cushion and protect one side of the wafer while etchant is applied to the other side. This can be done to either the front side or the back side: the etch chemistry is dispensed onto the exposed side in the machine, while the cushioned side is not affected. This etching method is particularly effective just before "backend" processing (BEOL), where wafers are normally very much thinner after wafer backgrinding, and very sensitive to thermal or mechanical stress. Etching a thin layer of even a few micrometres will remove microcracks produced during backgrinding, resulting in the wafer having dramatically increased strength and flexibility without breaking.
Anisotropic wet etching (Orientation dependent etching)
Some wet etchants etch crystalline materials at very different rates depending upon which crystal face is exposed. In single-crystal materials (e.g. silicon wafers), this effect can allow very high anisotropy, as shown in the figure. The term "crystallographic etching" is synonymous with "anisotropic etching along crystal planes".
However, for some non-crystalline materials like glass, there are unconventional ways to etch in an anisotropic manner. The authors of one study employ a multistream laminar flow that contains both etching and non-etching solutions to fabricate a glass groove. The etching solution at the center is flanked by non-etching solutions, and the area contacting the etching solution is limited by the surrounding non-etching solutions. The etching direction is thereby mainly vertical to the glass surface. The scanning electron microscopy (SEM) images demonstrate the breaking of the conventional theoretical limit of aspect ratio (width/height = 0.5) and achieve a two-fold improvement (width/height = 1).
Several anisotropic wet etchants are available for silicon, all of them hot aqueous caustics. For instance, potassium hydroxide (KOH) displays an etch rate selectivity 400 times higher in <100> crystal directions than in <111> directions. EDP (an aqueous solution of ethylene diamine and pyrocatechol), displays a <100>/<111> selectivity of 17X, does not etch silicon dioxide as KOH does, and also displays high selectivity between lightly doped and heavily boron-doped (p-type) silicon. Use of these etchants on wafers that already contain CMOS integrated circuits requires protecting the circuitry. KOH may introduce mobile potassium ions into silicon dioxide, and EDP is highly corrosive and carcinogenic, so care is required in their use. Tetramethylammonium hydroxide (TMAH) presents a safer alternative than EDP, with a 37X selectivity between {100} and {111} planes in silicon.
Etching a (100) silicon surface through a rectangular hole in a masking material, like a hole in a layer of silicon nitride, creates a pit with flat sloping {111}-oriented sidewalls and a flat (100)-oriented bottom. The {111}-oriented sidewalls have an angle to the surface of the wafer of $\arctan\sqrt{2} \approx 54.74^\circ$.
If the etching is continued "to completion", i.e. until the flat bottom disappears, the pit becomes a trench with a V-shaped cross-section. If the original rectangle was a perfect square, the pit when etched to completion displays a pyramidal shape.
The undercut, δ, under an edge of the masking material is given by
$\delta = \frac{R_{111}\,T}{\sin(54.74^\circ)} = \frac{D}{S\,\sin(54.74^\circ)},$
where Rxxx is the etch rate in the <xxx> direction, T is the etch time, D = R100 T is the etch depth and S = R100/R111 is the anisotropy of the material and etchant.
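These relations can be evaluated numerically. The sketch below assumes the 54.74° sidewall angle derived above, a round-number etch rate of 1 μm/min and an anisotropy S = 400 of the order quoted for KOH; none of these are measured values.

    from math import radians, sin, tan

    ALPHA = radians(54.74)      # angle of a {111} sidewall to a (100) surface

    def pit_depth(r100_um_per_min, minutes):
        """Depth of the flat (100) bottom while it still exists."""
        return r100_um_per_min * minutes

    def undercut(r100_um_per_min, minutes, anisotropy):
        """delta = R111 * T / sin(alpha), with R111 = R100 / S."""
        return (r100_um_per_min / anisotropy) * minutes / sin(ALPHA)

    def v_groove_depth(mask_opening_um):
        """Etched to completion, depth = (W/2) * tan(alpha) = (W/2) * sqrt(2)."""
        return 0.5 * mask_opening_um * tan(ALPHA)

    print(f"depth after 60 min at 1.0 um/min: {pit_depth(1.0, 60):.1f} um")
    print(f"undercut for S = 400:             {undercut(1.0, 60, 400):.3f} um")
    print(f"V-groove depth, 100 um opening:   {v_groove_depth(100.0):.1f} um")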
Different etchants have different anisotropies; approximate <100>/<111> selectivities for the common silicon etchants KOH, EDP and TMAH are quoted above.
Plasma etching
Modern very large scale integration (VLSI) processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic.
Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching (DRIE). The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching.
The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.
Ion milling, or sputter etching, uses lower pressures, often as low as 10−4 Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10−3 and 10−1 Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
Figures of merit
If the etch is intended to make a cavity in a material, the depth of the cavity may be controlled approximately using the etching time and the known etch rate. More often, though, etching must entirely remove the top layer of a multilayer structure, without damaging the underlying or masking layers. The etching system's ability to do this depends on the ratio of etch rates in the two materials (selectivity).
Some etches undercut the masking layer and form cavities with sloping sidewalls. The distance of undercutting is called bias. Etchants with large bias are called isotropic, because they erode the substrate equally in all directions. Modern processes greatly prefer anisotropic etches, because they produce sharp, well-controlled features.
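Both figures of merit reduce to simple ratios, as in the following sketch; the rates and undercut values are arbitrary illustrations.

    def selectivity(rate_target_film, rate_other_layer):
        """Ratio of etch rates in the film to be removed vs. mask/underlayer."""
        return rate_target_film / rate_other_layer

    def anisotropy(lateral_etch, vertical_etch):
        """1.0 = perfectly anisotropic (no undercut); 0.0 = fully isotropic."""
        return 1.0 - lateral_etch / vertical_etch

    print("selectivity (film 300 nm/min, mask 10 nm/min):", selectivity(300.0, 10.0))
    print("anisotropy, isotropic wet etch (undercut ~ depth):", anisotropy(0.95, 1.0))
    print("anisotropy, DRIE-like etch (tiny undercut):       ", anisotropy(0.02, 1.0))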
Common etch processes used in microfabrication
See also
Chemical-Mechanical Polishing
Ingot sawing
Metal assisted chemical etching
Lift-off (microtechnology)
References
Ibid, "Processes for MicroElectroMechanical Systems (MEMS)"
Inline references
External links
Semiconductor technology
Semiconductor device fabrication
Etching
Microtechnology | Etching (microfabrication) | [
"Materials_science",
"Engineering"
] | 1,879 | [
"Microtechnology",
"Etching (microfabrication)",
"Materials science",
"Semiconductor device fabrication",
"Semiconductor technology"
] |
8,726,769 | https://en.wikipedia.org/wiki/Trilinos | Trilinos is a collection of open-source software libraries, called packages, intended to be used as building blocks for the development of scientific applications. The word "Trilinos" is Greek and conveys the idea of "a string of pearls", suggesting a number of software packages linked together by a common infrastructure. Trilinos was developed at Sandia National Laboratories from a core group of existing algorithms and utilizes the functionality of software interfaces such as BLAS, LAPACK, and MPI.
In 2004, Trilinos received an R&D100 Award.
Several supercomputing facilities provide an installed version of Trilinos for their users. These include the National Energy Research Scientific Computing Center (NERSC), Blue Waters at the National Center for Supercomputing Applications, and the Titan supercomputer at Oak Ridge National Laboratory.
Features
Trilinos contains packages for:
Constructing and using sparse graphs and matrices, and dense matrices and vectors.
Iterative and direct solution of linear systems (see the solver sketch below).
Parallel multilevel and algebraic preconditioning.
Solution of non-linear, eigenvalue and time-dependent problems.
PDE-constrained optimization problems.
Partitioning and load balancing of distributed data structures.
Automatic differentiation
Discretizing partial differential equations.
Trilinos supports distributed-memory parallel computation through the Message Passing Interface (MPI). In addition, some Trilinos packages have growing support for shared-memory parallel computation. They do so by means of the Kokkos package, which provides a common C++ interface over various parallel programming models, including OpenMP, POSIX Threads, and CUDA.
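Trilinos itself is written mainly in C++ and its APIs are not reproduced here. Purely as a language-neutral illustration of the kind of Krylov iterative solver such packages provide, the following is a textbook conjugate gradient iteration in NumPy for a symmetric positive-definite system; it is a sketch, not Trilinos or PyTrilinos code.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x              # residual
        p = r.copy()               # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = conjugate_gradient(A, b)
    print("solution:", x, " residual norm:", np.linalg.norm(b - A @ x))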
Programming languages
Most Trilinos packages are written in C++. Trilinos version 12.0 and later requires C++11 support. Some Trilinos packages, like ML and Zoltan, are written in C. A few packages, like Epetra, have optional implementations of some computational kernels in Fortran, but Fortran is not required to build these packages.
Some Trilinos packages have bindings for other programming languages. These include Python, C, Fortran, and MATLAB.
Software licenses
Each Trilinos package may have its own software license. Most packages are Open-source; most of these have a Modified BSD license, while a few packages are under the GNU Lesser General Public License (LGPL). The BLAS and LAPACK libraries are required dependencies.
See also
BLAS
LAPACK
Message Passing Interface
List of numerical-analysis software
Sandia National Laboratories
References
External links
"Kokkos: The Programming Model"
"KOKKOS PROGRAMMING MODEL"
"Kokkos Tutorial"
Numerical libraries
Concurrent programming libraries
Free mathematics software
C++ numerical libraries | Trilinos | [
"Mathematics"
] | 561 | [
"Free mathematics software",
"Mathematical software"
] |
8,726,934 | https://en.wikipedia.org/wiki/Xenobiotica | Xenobiotica is a peer-reviewed medical journal that publishes comprehensive research papers on all areas of xenobiotics. It is published by Informa plc and covers six main areas:
General xenobiochemistry, including in vitro studies concerned with the metabolism, disposition and excretion of drugs, and other xenobiotics, as well as the structure, function and regulation of associated enzymes
Clinical pharmacokinetics and metabolism, covering the pharmacokinetics and absorption, distribution, metabolism and excretion of drugs and other xenobiotics in man.
Animal pharmacokinetics and metabolism, covering the pharmacokinetics, and absorption, distribution, metabolism and excretion of drugs and other xenobiotics in animals.
Pharmacogenetics, defined as the identification and functional characterisation of polymorphic genes that encode xenobiotic metabolising enzymes and transporters that may result in altered enzymatic, cellular and clinical responses to xenobiotics.
Molecular toxicology, concerning the mechanisms of toxicity and the study of toxicology of xenobiotics at the molecular level.
Topics in xenobiochemistry, in the form of reviews and commentaries, are primarily intended to be a critical analysis of the issue, wherein the author offers opinions on the relevance of data or of a particular experimental approach or methodology.
According to the Journal Citation Reports, the journal received a 2014 impact factor of 2.199, ranking it 134th out of 254 journals in the category Pharmacology & Pharmacy and 50th out of 87 journals in the category Toxicology.
The editor in chief is Costas Ioannides (University of Surrey).
Abstracting and indexing
Xenobiotica is abstracted and indexed in Biochemistry and Biophysics Citation Index, BIOSIS, Chemical Abstracts; Current Contents/Life Science, EBSCO, Science Citation Index, PASCAL, SciSearch, Scopus, and Index Medicus/MEDLINE/PubMed.
References
External links
Academic journals established in 1971
Pharmacology journals
Toxicology journals
Taylor & Francis academic journals
Monthly journals
English-language journals | Xenobiotica | [
"Environmental_science"
] | 437 | [
"Toxicology journals",
"Toxicology"
] |
8,727,347 | https://en.wikipedia.org/wiki/Mortgage%20discrimination | Mortgage discrimination or mortgage lending discrimination is the practice of banks, governments or other lending institutions denying loans to one or more groups of people primarily on the basis of race, ethnic origin, sex or religion.
Instances of mortgage discrimination occurred in United States inner city neighborhoods from the 1930s and there is evidence that the practice continues to a degree in the United States today. In the United States, banks practiced redlining or denial of financial services including banking or insurance to residents of areas based upon the racial or ethnic composition of those areas, either directly or through selectively raising prices. Prior to the passage of the 1974 Equal Credit Opportunity Act and Housing and Community Development Act, lenders and the U.S. federal government frequently and explicitly discriminated against female mortgage loan applicants.
Background
African Americans and other minorities found it nearly impossible to secure mortgages for property located in redlined zones. The systematic denial of loans was a major contributor to the urban decay that plagued many American cities during this time period. Minorities who tried to buy homes continued to face direct discrimination from lending institutions into the late 1990s. The disparities are not simply due to differences in creditworthiness. With other factors held constant, rejection rates for Black and Hispanic applicants were about 1.6 times those for Whites in 1995.
Fairness in lending was improved by the Home Mortgage Disclosure Act, passed in 1975. It requires banks to disclose their lending practices in the communities they serve. In the 1970s, the private sector fight against mortgage discrimination began to be led by community development banks, such as ShoreBank in Chicago.
Contemporary
Several class action mortgage discrimination claims have been filed against lenders across the country, alleging that those lenders disproportionately targeted minorities for high cost, high risk subprime lending, which has resulted in disproportionately higher rates of default and foreclosure for minority African American and Hispanic borrowers.
FHA loans, a federal mortgage program, went to the white majority and reached few minorities. In a study done in Syracuse, between 1996 and 2000, of the 2,169 FHA loans issued only 29 or 1.3 percent went to predominantly minority neighborhoods compared with 1,694 or 78.1 percent that went to white neighborhoods. Mortgage discrimination played a significant part in the real estate bubble that popped during the later part of 2008: it was found that minorities were disproportionately steered by lenders into subprime loans.
In 1993 President Bill Clinton made changes to the Community Reinvestment Act to make mortgages more obtainable for lower and lower-middle-class families. In 1993 the Federal Reserve Bank of Boston issued a report entitled "Closing the Gap: A Guide to Equal Opportunity Lending". The 30-page document was intended to serve as a guide to loan officers to help curb discriminatory lending "Closing the Gap", instructs banks to hire based upon diversity needs, sweeten the compensation structure for working with lower income applicants, encourages shifting high risk, low income applications to the sub prime market, by saying "the secondary market [Subprime Market] is willing to consider ratios above the standard 28/36", and "Lack of credit history should not be seen as a negative factor".
While, "Closing the Gap" was not an industry-wide mandate, it illustrates the efforts banks made to meet public pressure to overcome mortgage discrimination. Under the Clinton administration community organizers pressured banks to increase their loans to minorities. Karen Wegmann, the head of Wells Fargo's community development group in 1993 told the New York Times, "The atmosphere now is one of saying yes." The same New York Times article echoed "Closing the Gap", writing, "The banks have also modified some standards for credit approval. Many low-income people do not have credit-bureau files because they do not have credit cards. So lenders are accepting records of continuously paid utility bills as evidence of creditworthiness. Similarly, they will accept steady income from several employers instead of the length of time at one job."
Because of looser loan restrictions many people who did not qualify for a mortgage before now could own a home. The banks issued loans with teaser rates, knowing that when higher variable rates kicked in later the borrowers would not be able to meet their payments. As long as housing prices kept rising and borrowers could refinance easily, everyone appeared to be doing well.
Minorities willingly entered sub-prime mortgages in far greater numbers than whites and represented a disproportionate percentage of foreclosures.
Recently, the NAACP has submitted a lawsuit concerning alleged injustices in the lending industry. An analysis, by N.Y.U.'s Furman Center for Real Estate and Urban Policy, illustrated stark racial differences between the New York City neighborhoods where subprime mortgages were common and those where they were rare. The 10 neighborhoods with the highest rates of mortgages from subprime lenders had black and Hispanic majorities, and the 10 areas with the lowest rates were mainly non-Hispanic white. The analysis showed that even when median income levels were comparable, home buyers in minority neighborhoods were more likely to get a loan from a subprime lender. Discrimination motivated by prejudice is contingent on the racial composition of neighborhoods where the loan is sought and the race of the applicant. Lending institutions have been shown to treat black and Latino mortgage applicants differently when buying homes in white neighborhoods than when buying homes in black neighborhoods. An example of this occurred in the 1960s and 1970s on the near northside of Chicago. Thousands of blacks, Latinos, and poor people were systematically dislocated and prevented from acquiring loans by realtors and lending institutions with the blessings of the city's urban renewal program.
A 2015 Measure of America study commissioned by the American Civil Liberties Union examined the likely effect of discriminatory lending leading up to the financial crisis on the racial wealth gap for the next generation, and found that, among families that owned homes, white households had started to rebound from the worst effects of the Great Recession while black households were still struggling to make up lost ground. The analysis projected that the racial wealth gap will be significantly greater in the next generation because of the differential impact of the Great Recession.
Reverse redlining
Reverse redlining is a term that was coined by Gregory D. Squires, a professor of Sociology and Public Policy and Public Administration at George Washington University. This phenomenon occurs when a lender or insurer particularly targets minority consumers, not to deny them loans or insurance, but rather to charge them more than would be charged to a similarly situated majority consumer, specifically marketing the most expensive and onerous loan products. These communities had largely been ignored by most lenders just a couple decades earlier. However these same financial institutions in the 2000s saw black communities as fertile ground for subprime mortgages. Wells Fargo for instance partnered with churches in black communities, where the pastor would deliver "wealth building" seminars in their sermons, and the bank would make a donation to the church in return for every new mortgage application. There was pressure on both sides, as working-class blacks wanted a part of the nation's home-owning trend.
A survey of two districts of similar incomes, one being largely white and the other largely black, found that branches in the black community offered largely subprime loans and almost no prime loans. Studies found that high-income blacks were almost twice as likely to end up with subprime home-purchase mortgages as low-income whites. Loan officers were clearly aware that they were exploiting their customers, in some cases referring to blacks as "mud people" and to subprime lending as "ghetto loans". A lower savings rate and a distrust of banks stemming from a legacy of redlining may help explain why there are fewer branches in minority neighborhoods. In recent years, while subprime loans were not sought out by borrowers, brokers and telemarketers actively pushed them. A majority of the loans were refinance transactions allowing homeowners to take cash out of their appreciating property or pay off credit card and other debt.
Several state attorneys general have begun investigating these practices which may violate fair lending laws, and the N.A.A.C.P. have filed a class-action lawsuit charging systematic racial discrimination by more than a dozen banks. These suits have met with some success.
Occupy Our Homes
Reverse redlining has been cited as justification for the Occupy Our Homes movement. In Occupy Our Homes, protesters camp out at a person's foreclosed home to gain concessions from the lender, such as a delay in eviction.
Laws
Equal Credit Opportunity Act
Under the Equal Credit Opportunity Act ("ECOA"), a creditor may not discriminate against an applicant based on the applicant's race, color, or national origin "with respect to any aspect of a credit transaction", 15 U.S.C. § 1991.
Fair Housing Act
Under the Fair Housing Act ("FHA") (Title VIII of the Civil Rights Act of 1968), it is "unlawful for any person or other entity whose business includes engaging in residential real estate-related transactions to discriminate against any person in making available such a transaction, or in the terms or conditions of such a transaction, because of race, color, religion, sex, handicap, familial status, or national origin". 42 U.S.C. § 3605. Section 3605, although not specifically naming foreclosures, discrimination in "the manner in which a lending institution forecloses a delinquent or defaulted mortgage note" falls under the realm of the "terms or conditions of such loan". Harper v. Union Savings Association, 429 F.Supp. 1254, 1258-59 (N.D. Ohio 1977). The Office of Fair Housing and Equal Opportunity is charged with administering and enforcing the Fair Housing Act. Any person who feels that they have faced lending discrimination can file a fair housing complaint.
FDIC
Consistent with many jurisdictions throughout the country, the Federal Deposit Insurance Corporation ("FDIC"), based in part on a study conducted by the Federal Reserve Bank of Boston, issued a "Policy Statement On Discrimination In Lending" on April 29, 2004, emphasizing the breadth of prohibitions on discriminatory conduct in lending under the ECOA and the FHA. The FDIC Policy Statement explained that "courts have recognized three methods of proof of lending discrimination under the ECOA and the FH Act", including: "Overt evidence of discrimination", when a lender blatantly discriminates on a prohibited basis; evidence of "disparate treatment", when a lender treats applicants differently based on one of the prohibited factors; and evidence of "disparate impact", when a lender applies a practice uniformly to all applicants but the practice has a discriminatory effect on a prohibited basis and is not justified by business necessity.
FDIC Policy Statement, p. 5399 (April 29, 2004).
Civil Rights Act of 1866
In addition to ECOA and FHA, the Civil Rights Act of 1866, as amended, provides that "[a]ll citizens of the United States shall have the same right, in every State and Territory, as is enjoyed by white citizens thereof to inherit, purchase, lease, sell, hold, and convey real and personal property". 42 U.S.C. § 1982.
See also
Black flight
Home Mortgage Disclosure Act
Office of Fair Housing and Equal Opportunity
Racial steering
Redlining
Segregation
White flight
Sources
External links
File a housing discrimination complaint
Office of Fair Housing and Equal Opportunity
Discrimination
Mortgage industry of the United States | Mortgage discrimination | [
"Biology"
] | 2,384 | [
"Behavior",
"Aggression",
"Discrimination"
] |
8,728,576 | https://en.wikipedia.org/wiki/Sybil%20attack | A Sybil attack is a type of attack on a computer network service in which an attacker subverts the service's reputation system by creating a large number of pseudonymous identities and uses them to gain a disproportionately large influence. It is named after the subject of the book Sybil, a case study of a woman diagnosed with dissociative identity disorder. The name was suggested in or before 2002 by Brian Zill at Microsoft Research. The term pseudospoofing had previously been coined by L. Detweiler on the Cypherpunks mailing list and used in the literature on peer-to-peer systems for the same class of attacks prior to 2002, but this term did not gain as much influence as "Sybil attack".
Description
The Sybil attack in computer security is an attack wherein a reputation system is subverted by creating multiple identities. A reputation system's vulnerability to a Sybil attack depends on how cheaply identities can be generated, the degree to which the reputation system accepts inputs from entities that do not have a chain of trust linking them to a trusted entity, and whether the reputation system treats all entities identically. Evidence has shown that large-scale Sybil attacks can be carried out in a very cheap and efficient way in extant realistic systems such as the BitTorrent Mainline DHT.
An entity on a peer-to-peer network is a piece of software that has access to local resources. An entity advertises itself on the peer-to-peer network by presenting an identity. More than one identity can correspond to a single entity. In other words, the mapping of identities to entities is many to one. Entities in peer-to-peer networks use multiple identities for purposes of redundancy, resource sharing, reliability and integrity. In peer-to-peer networks, the identity is used as an abstraction so that a remote entity can be aware of identities without necessarily knowing the correspondence of identities to local entities. By default, each distinct identity is usually assumed to correspond to a distinct local entity. In reality, many identities may correspond to the same local entity.
An adversary may present multiple identities to a peer-to-peer network in order to appear and function as multiple distinct nodes. The adversary may thus be able to acquire a disproportionate level of control over the network, such as by affecting voting outcomes.
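The arithmetic of such an attack is simple, as the toy sketch below shows; the counts are arbitrary.

    honest_votes = 100     # 100 honest entities, one identity (vote) each
    sybil_votes = 150      # one attacker entity presenting 150 identities

    total = honest_votes + sybil_votes
    print(f"attacker's share of the vote: {sybil_votes / total:.0%}, "
          "cast by a single real entity")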
In the context of (human) online communities, such multiple identities are sometimes known as sockpuppets.
The less common term inverse-Sybil attack has been used to describe an attack in which many entities appear as a single identity.
Example
A notable Sybil attack in conjunction with a traffic confirmation attack was launched against the Tor anonymity network for several months in 2014.
There are other examples of Sybil attacks run against Tor network users. This includes the 2020 Bitcoin address rewrite attacks. The attacker controlled a quarter of all Tor exit relays and employed SSL stripping to downgrade secure connections and divert funds to the wallet of the threat actor known as BTCMITM20.
Another notable example is the 2017–2021 attack run by threat actor KAX17. This entity controlled over 900 malicious servers, primarily middle points, in an attempt to deanonymize Tor users.
Prevention
Known approaches to Sybil attack prevention include identity validation, social trust graph algorithms, economic costs, personhood validation, and application-specific defenses.
Identity validation
Validation techniques can be used to prevent Sybil attacks and dismiss masquerading hostile entities. A local entity may accept a remote identity based on a central authority which ensures a one-to-one correspondence between an identity and an entity and may even provide a reverse lookup. An identity may be validated either directly or indirectly. In direct validation the local entity queries the central authority to validate the remote identities. In indirect validation the local entity relies on already-accepted identities which in turn vouch for the validity of the remote identity in question.
Practical network applications and services often use a variety of identity proxies to achieve limited Sybil attack resistance, such as telephone number verification, credit card verification, or even based on the IP address of a client. These methods have the limitations that it is usually possible to obtain multiple such identity proxies at some cost – or even to obtain many at low cost through techniques such as SMS spoofing or IP address spoofing. Use of such identity proxies can also exclude those without ready access to the required identity proxy: e.g., those without their own mobile phone or credit card, or users located behind carrier-grade network address translation who share their IP addresses with many others.
Identity-based validation techniques generally provide accountability at the expense of anonymity, which can be an undesirable tradeoff especially in online forums that wish to permit censorship-free information exchange and open discussion of sensitive topics. A validation authority can attempt to preserve users' anonymity by refusing to perform reverse lookups, but this approach makes the validation authority a prime target for attack. Protocols using threshold cryptography can potentially distribute the role of such a validation authority among multiple servers, protecting users' anonymity even if one or a limited number of validation servers is compromised.
Social trust graphs
Sybil prevention techniques based on the connectivity characteristics of social graphs can also limit the extent of damage that can be caused by a given Sybil attacker while preserving anonymity. Examples of such prevention techniques include SybilGuard, SybilLimit, the Advogato Trust Metric, SybilRank, and the sparsity based metric to identify Sybil clusters in a distributed P2P based reputation system.
These techniques cannot prevent Sybil attacks entirely, and may be vulnerable to widespread small-scale Sybil attacks. In addition, it is not clear whether real-world online social networks will satisfy the trust or connectivity assumptions that these algorithms assume.
Economic costs
Alternatively, imposing economic costs as artificial barriers to entry may be used to make Sybil attacks more expensive. Proof of work, for example, requires a user to prove that they expended a certain amount of computational effort to solve a cryptographic puzzle. In Bitcoin and related permissionless cryptocurrencies, miners compete to append blocks to a blockchain and earn rewards roughly in proportion to the amount of computational effort they invest in a given time period. Investments in other resources such as storage or stake in existing cryptocurrency may similarly be used to impose economic costs.
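A minimal proof-of-work sketch follows: creating each identity (or appending each block) requires finding a nonce whose SHA-256 hash falls below a target, so the expected cost grows as 2^difficulty hash evaluations. The difficulty and payload are toy values.

    import hashlib

    def solve_puzzle(payload: bytes, difficulty_bits: int = 16) -> int:
        """Return a nonce whose hash has ~difficulty_bits leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    nonce = solve_puzzle(b"identity:alice", difficulty_bits=16)
    print("found nonce:", nonce)   # expected work ~ 2**16 hash evaluations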
Personhood validation
As an alternative to identity verification that attempts to maintain a strict "one-per-person" allocation rule, a validation authority can use some mechanism other than knowledge of a user's real identity – such as verification of an unidentified person's physical presence at a particular place and time as in a pseudonym party – to enforce a one-to-one correspondence between online identities and real-world users. Such proof of personhood approaches have been proposed as a basis for permissionless blockchains and cryptocurrencies in which each human participant would wield exactly one vote in consensus. A variety of approaches to proof of personhood have been proposed, some with deployed implementations, although many usability and security issues remain.
Application-specific defenses
A number of distributed protocols have been designed with Sybil attack protection in mind. SumUp and DSybil are Sybil-resistant algorithms for online content recommendation and voting. Whānau is a Sybil-resistant distributed hash table algorithm. I2P's implementation of Kademlia also has provisions to mitigate Sybil attacks.
See also
Astroturfing
Ballot stuffing
Social bot
Sockpuppetry
References
External links
A Survey of Solutions to the Sybil Attack
On Network formation: Sybil attacks and Reputation systems
A Survey of DHT Security Techniques by Guido Urdaneta, Guillaume Pierre and Maarten van Steen. ACM Computing surveys, 2009.
An experiment on the weakness of reputation algorithms used in professional social networks: the case of Naymz by Marco Lazzari. Proceedings of the IADIS International Conference e-Society 2010.
Internet manipulation and propaganda
Computer network security
Reputation management | Sybil attack | [
"Engineering"
] | 1,663 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
8,728,891 | https://en.wikipedia.org/wiki/Chariot%20%28Chinese%20constellation%29 | The Chariot mansion () is one of the Twenty-Eight Mansions of the Chinese constellations. It is one of the southern mansions of the Vermilion Bird.
Asterisms
References
Chinese constellations | Chariot (Chinese constellation) | [
"Astronomy"
] | 42 | [
"Chinese constellations",
"Constellations"
] |
8,729,584 | https://en.wikipedia.org/wiki/Hess%20diagram | A Hess diagram plots the relative density of occurrence of stars at differing color–magnitude positions of the Hertzsprung–Russell diagram for a given galaxy or resolved stellar population. The diagram is named after R. Hess who originated it in 1924. Its use dates back to at least 1948.
Hess diagrams are widely used in the study of discrete resolved stellar systems in and around the Milky Way - specifically, in the analysis of globular clusters, satellite galaxies, and stellar streams.
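Computationally, a Hess diagram is a two-dimensional histogram over the colour-magnitude plane. The sketch below bins synthetic photometry (random mock values) simply to show that step; real studies would use observed or simulated catalogues.

    import numpy as np

    rng = np.random.default_rng(0)
    n_stars = 50_000
    color = rng.normal(0.8, 0.3, n_stars)        # e.g. B-V colour (mock)
    magnitude = rng.normal(18.0, 1.5, n_stars)   # e.g. V magnitude (mock)

    density, color_edges, mag_edges = np.histogram2d(
        color, magnitude, bins=(60, 80),
        range=[[-0.5, 2.5], [12.0, 24.0]])

    print("densest cell holds", int(density.max()), "stars")
    # To display it as a Hess diagram, plot the density (e.g. with
    # matplotlib's imshow) with the magnitude axis inverted so that
    # bright stars appear at the top, as in an H-R diagram.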
See also
Color-color diagram
References
Hertzsprung–Russell classifications
Stellar evolution
Galaxies
Diagrams | Hess diagram | [
"Physics",
"Astronomy"
] | 118 | [
"Galaxies",
"Astronomy stubs",
"Astrophysics",
"Stellar evolution",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Astronomical objects"
] |
8,729,683 | https://en.wikipedia.org/wiki/Stirling%20numbers%20and%20exponential%20generating%20functions%20in%20symbolic%20combinatorics | The use of exponential generating functions (EGFs) to study the properties of Stirling numbers is a classical exercise in combinatorial mathematics and possibly the canonical example of how symbolic combinatorics is used. It also illustrates the parallels in the construction of these two types of numbers, lending support to the binomial-style notation that is used for them.
This article uses the coefficient extraction operator $[z^n]$ for formal power series, as well as the (labelled) operators $\operatorname{CYC}$ (for cycles) and $\operatorname{SET}$ (for sets) on combinatorial classes, which are explained on the page for symbolic combinatorics. Given a combinatorial class, the cycle operator creates the class obtained by placing objects from the source class along a cycle of some length, where cyclical symmetries are taken into account, and the set operator creates the class obtained by placing objects from the source class in a set (symmetries from the symmetric group, i.e. an "unstructured bag".) The two combinatorial classes (shown without additional markers) are
permutations (for unsigned Stirling numbers of the first kind):
$\mathcal P = \operatorname{SET}(\operatorname{CYC}(\mathcal Z)),$
and
set partitions into non-empty subsets (for Stirling numbers of the second kind):
$\mathcal B = \operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal Z)),$
where $\mathcal Z$ is the singleton class.
Warning: The notation used here for the Stirling numbers is not that of the Wikipedia articles on Stirling numbers; square brackets denote the signed Stirling numbers here.
Stirling numbers of the first kind
The unsigned Stirling numbers of the first kind count the number of permutations of [n] with k cycles. A permutation is a set of cycles, and hence the set of permutations is given by
$\mathcal P = \operatorname{SET}(\mathcal U\,\operatorname{CYC}(\mathcal Z)),$
where the singleton $\mathcal U$ marks cycles. This decomposition is examined in some detail on the page on the statistics of random permutations.
Translating to generating functions we obtain the mixed generating function of the unsigned Stirling numbers of the first kind:
$G(z, u) = \exp\left(u \log\frac{1}{1-z}\right) = \left(\frac{1}{1-z}\right)^u = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \left[{n\atop k}\right] u^k \, \frac{z^n}{n!}.$
Now the signed Stirling numbers of the first kind are obtained from the unsigned ones through the relation
$s(n, k) = (-1)^{n-k} \left[{n\atop k}\right].$
Hence the generating function of these numbers is
$H(z, u) = \sum_{n=0}^{\infty} \sum_{k=0}^{n} s(n, k)\, u^k \, \frac{z^n}{n!} = (1 + z)^u.$
A variety of identities may be derived by manipulating this generating function:
In particular, the order of summation may be exchanged, and derivatives taken, and then z or u may be fixed.
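Computationally cross-checking the mixed EGF is straightforward. The pure-Python sketch below expands $L(z) = \log\frac{1}{1-z}$ as a truncated power series, uses $(1/(1-z))^u = \sum_k u^k L(z)^k / k!$, and compares the resulting coefficients with the standard recurrence $\left[{n+1\atop k}\right] = n\left[{n\atop k}\right] + \left[{n\atop k-1}\right]$; the truncation order N is an arbitrary choice.

    from fractions import Fraction
    from math import factorial

    N = 8
    # L(z) = log(1/(1-z)) = z + z^2/2 + z^3/3 + ... truncated at degree N
    L = [Fraction(0)] + [Fraction(1, m) for m in range(1, N + 1)]

    def poly_mul(p, q):
        """Product of coefficient lists, truncated at degree N."""
        out = [Fraction(0)] * (N + 1)
        for i, pi in enumerate(p):
            if pi:
                for j in range(min(len(q), N + 1 - i)):
                    out[i + j] += pi * q[j]
        return out

    def stirling1(n, k):
        """Unsigned Stirling numbers of the first kind via the recurrence."""
        if n == k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

    Lk = [Fraction(0)] * (N + 1)
    Lk[0] = Fraction(1)                      # L(z)^0 = 1
    for k in range(N + 1):
        for n in range(k, N + 1):
            from_egf = Fraction(factorial(n), factorial(k)) * Lk[n]
            assert from_egf == stirling1(n, k)
        Lk = poly_mul(Lk, L)                 # next power L(z)^(k+1)
    print("EGF coefficients match the recurrence up to n =", N)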
Finite sums
A simple sum is, for $n \ge 2$,
$\sum_{k=0}^{n} (-1)^k \left[{n\atop k}\right] = 0.$
This formula holds because the exponential generating function of the sum is
$\left.\left(\frac{1}{1-z}\right)^u\right|_{u=-1} = 1 - z.$
Infinite sums
Some infinite sums include
$\sum_{n=k}^{\infty} \left[{n\atop k}\right] \frac{z^n}{n!} = \frac{\left(\log\frac{1}{1-z}\right)^k}{k!},$
where $|z| < 1$ (the singularity nearest to $z = 0$ of $\log\frac{1}{1-z}$ is at $z = 1$).
This relation holds because
$[u^k] \left(\frac{1}{1-z}\right)^u = [u^k] \exp\left(u \log\frac{1}{1-z}\right) = \frac{1}{k!}\left(\log\frac{1}{1-z}\right)^k.$
Stirling numbers of the second kind
These numbers count the number of partitions of [n] into k nonempty subsets. First consider the total number of partitions, i.e. Bn, where
$B_n = \sum_{k=1}^{n} \left\{{n\atop k}\right\} \quad (n \ge 1), \qquad B_0 = 1,$
i.e. the Bell numbers. The Flajolet–Sedgewick fundamental theorem applies (labelled case).
The set $\mathcal B$ of partitions into non-empty subsets is given by ("set of non-empty sets of singletons")
$\mathcal B = \operatorname{SET}(\operatorname{SET}_{\ge 1}(\mathcal Z)).$
This decomposition is entirely analogous to the construction of the set $\mathcal P$ of permutations from cycles, which is given by
$\mathcal P = \operatorname{SET}(\operatorname{CYC}(\mathcal Z)),$
and yields the Stirling numbers of the first kind. Hence the name "Stirling numbers of the second kind."
The decomposition is equivalent to the EGF
$B(z) = \exp\left(e^z - 1\right).$
Differentiate to obtain
$\frac{d}{dz} B(z) = \exp\left(e^z - 1\right) e^z = B(z)\, e^z,$
which implies that
$B_{n+1} = \sum_{k=0}^{n} \binom{n}{k} B_k,$
by convolution of exponential generating functions and because differentiating an EGF drops the first coefficient and shifts Bn+1 to z n/n!.
The EGF of the Stirling numbers of the second kind is obtained by marking every subset that goes into the partition with the term $\mathcal U$, giving
$\mathcal B = \operatorname{SET}(\mathcal U\,\operatorname{SET}_{\ge 1}(\mathcal Z)).$
Translating to generating functions, we obtain
$B(z, u) = \exp\left(u\left(e^z - 1\right)\right) = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \left\{{n\atop k}\right\} u^k \, \frac{z^n}{n!}.$
This EGF yields the formula for the Stirling numbers of the second kind:
$\left\{{n\atop k}\right\} = \frac{n!}{k!} [z^n] \left(e^z - 1\right)^k$
or
$\left\{{n\atop k}\right\} = \frac{n!}{k!} [z^n] \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j} e^{jz},$
which simplifies to
$\left\{{n\atop k}\right\} = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} j^n.$
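The closed form just derived is easy to evaluate directly, and summing over k recovers the Bell numbers:

    from math import comb, factorial

    def stirling2(n, k):
        """{n, k} = (1/k!) * sum_j (-1)^(k-j) * C(k, j) * j^n."""
        return sum((-1) ** (k - j) * comb(k, j) * j ** n
                   for j in range(k + 1)) // factorial(k)

    def bell(n):
        return sum(stirling2(n, k) for k in range(n + 1))

    print([stirling2(4, k) for k in range(5)])   # [0, 1, 7, 6, 1]
    print([bell(n) for n in range(7)])           # [1, 1, 2, 5, 15, 52, 203]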
References
Ronald Graham, Donald Knuth, Oren Patashnik (1989): Concrete Mathematics, Addison–Wesley,
D. S. Mitrinovic, Sur une classe de nombre relies aux nombres de Stirling, C. R. Acad. Sci. Paris 252 (1961), 2354–2356.
A. C. R. Belton, The monotone Poisson process, in: Quantum Probability (M. Bozejko, W. Mlotkowski and J. Wysoczanski, eds.), Banach Center Publications 73, Polish Academy of Sciences, Warsaw, 2006
Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, USGPO, 1964, Washington DC,
Enumerative combinatorics | Stirling numbers and exponential generating functions in symbolic combinatorics | [
"Mathematics"
] | 888 | [
"Enumerative combinatorics",
"Combinatorics"
] |
8,730,871 | https://en.wikipedia.org/wiki/Dual%20quaternion | In mathematics, the dual quaternions are an 8-dimensional real algebra isomorphic to the tensor product of the quaternions and the dual numbers. Thus, they may be constructed in the same way as the quaternions, except using dual numbers instead of real numbers as coefficients. A dual quaternion can be represented in the form , where A and B are ordinary quaternions and ε is the dual unit, which satisfies and commutes with every element of the algebra.
Unlike quaternions, the dual quaternions do not form a division algebra.
In mechanics, the dual quaternions are applied as a number system to represent rigid transformations in three dimensions. Since the space of dual quaternions is 8-dimensional and a rigid transformation has six real degrees of freedom, three for translations and three for rotations, dual quaternions obeying two algebraic constraints are used in this application. Since unit dual quaternions are subject to exactly two such algebraic constraints, unit dual quaternions are the standard way to represent rigid transformations.
Similar to the way that rotations in 3D space can be represented by quaternions of unit length, rigid motions in 3D space can be represented by dual quaternions of unit length. This fact is used in theoretical kinematics (see McCarthy), and in applications to 3D computer graphics, robotics and computer vision. Polynomials with coefficients given by (non-zero real norm) dual quaternions have also been used in the context of mechanical linkages design.
History
W. R. Hamilton introduced quaternions in 1843, and by 1873 W. K. Clifford obtained a broad generalization of these numbers that he called biquaternions, which is an example of what is now called a Clifford algebra.
In 1898 Alexander McAulay used Ω with Ω² = 0 to generate the dual quaternion algebra. However, his terminology of "octonions" did not stick as today's octonions are another algebra.
In 1891 Eduard Study realized that this associative algebra was ideal for describing the group of motions of three-dimensional space. He further developed the idea in Geometrie der Dynamen in 1901.
B. L. van der Waerden called the structure "Study biquaternions", one of three eight-dimensional algebras referred to as biquaternions.
In 1895, Russian mathematician Aleksandr Kotelnikov developed dual vectors and dual quaternions for use in the study of mechanics.
Formulas
In order to describe operations with dual quaternions, it is helpful to first consider quaternions.
A quaternion is a linear combination of the basis elements 1, i, j, and k. Hamilton's product rule for i, j, and k is often written as
$i^2 = j^2 = k^2 = ijk = -1.$
Compute $ijk \cdot k = ij\,k^2 = -ij$, to obtain $ij = k$, and similarly $jk = i$ and $ki = j$. Now because $k = ij$, we see that this product yields $k^2 = ijij = -1$, which links quaternions to the properties of determinants.
A convenient way to work with the quaternion product is to write a quaternion as the sum of a scalar and a vector (strictly speaking a bivector), that is $A = a_0 + \mathbf A$, where a0 is a real number and $\mathbf A$ is a three dimensional vector. The vector dot and cross operations can now be used to define the quaternion product of $A = a_0 + \mathbf A$ and $C = c_0 + \mathbf C$ as
$AC = a_0 c_0 - \mathbf A\cdot\mathbf C + a_0\mathbf C + c_0\mathbf A + \mathbf A\times\mathbf C.$
A dual quaternion is usually described as a quaternion with dual numbers as coefficients. A dual number is an ordered pair $\hat a = (a, b)$. Two dual numbers add componentwise and multiply by the rule $(a, b)(c, d) = (ac, ad + bc)$. Dual numbers are often written in the form $\hat a = a + \varepsilon b$, where ε is the dual unit that commutes with i, j, k and has the property $\varepsilon^2 = 0$.
The result is that a dual quaternion can be written as an ordered pair of quaternions $\hat A = (A, B)$. Two dual quaternions add componentwise and multiply by the rule
$\hat A\,\hat C = (A, B)(C, D) = (AC,\; AD + BC).$
It is convenient to write a dual quaternion as the sum of a dual scalar and a dual vector, $\hat A = \hat a_0 + \mathbf{\hat A}$, where $\hat a_0 = a_0 + \varepsilon b_0$ and $\mathbf{\hat A} = \mathbf A + \varepsilon \mathbf B$ is the dual vector that defines a screw. This notation allows us to write the product of two dual quaternions as
$\hat A\,\hat C = \hat a_0 \hat c_0 - \mathbf{\hat A}\cdot\mathbf{\hat C} + \hat a_0\mathbf{\hat C} + \hat c_0\mathbf{\hat A} + \mathbf{\hat A}\times\mathbf{\hat C}.$
Addition
The addition of dual quaternions is defined componentwise so that given
$\hat A = (A, B)$
and
$\hat C = (C, D),$
then
$\hat A + \hat C = (A + C,\; B + D).$
Multiplication
Multiplication of two dual quaternions follows from the multiplication rules for the quaternion units i, j, k and commutative multiplication by the dual unit ε. In particular, given
$\hat A = A + \varepsilon B$
and
$\hat C = C + \varepsilon D,$
then
$\hat A\,\hat C = (A + \varepsilon B)(C + \varepsilon D) = AC + \varepsilon(AD + BC).$
Notice that there is no BD term, because the definition of dual numbers requires that $\varepsilon^2 = 0$. The products of the eight basis elements follow from the quaternion rules for i, j, k together with the commuting, square-zero dual unit ε; a runnable sketch of the resulting product rule follows.
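A minimal sketch of this product rule, representing each ordinary quaternion as a (w, x, y, z) tuple, is:

    def qmul(a, c):
        """Hamilton product of quaternions given as (w, x, y, z) tuples."""
        aw, ax, ay, az = a
        cw, cx, cy, cz = c
        return (aw*cw - ax*cx - ay*cy - az*cz,
                aw*cx + ax*cw + ay*cz - az*cy,
                aw*cy - ax*cz + ay*cw + az*cx,
                aw*cz + ax*cy - ay*cx + az*cw)

    def qadd(a, b):
        return tuple(x + y for x, y in zip(a, b))

    def dq_mul(A, B, C, D):
        """(A + eB)(C + eD) = AC + e(AD + BC); the BD term drops as e^2 = 0."""
        return qmul(A, C), qadd(qmul(A, D), qmul(B, C))

    one = (1.0, 0.0, 0.0, 0.0)
    i   = (0.0, 1.0, 0.0, 0.0)
    real, dual = dq_mul(one, i, i, one)      # (1 + e i)(i + e 1)
    print("real part:", real)                # i, i.e. (0, 1, 0, 0)
    print("dual part:", dual)                # 1 + i*i = 0, i.e. (0, 0, 0, 0)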
Conjugate
The conjugate of a dual quaternion is the extension of the conjugate of a quaternion, that is
$\hat A^* = A^* + \varepsilon B^*.$
As with quaternions, the conjugate of the product of dual quaternions, $(\hat A\hat C)^*$, is the product of their conjugates in reverse order,
$(\hat A\,\hat C)^* = \hat C^*\,\hat A^*.$
It is useful to introduce the functions Sc(∗) and Vec(∗) that select the scalar and vector parts of a quaternion, or the dual scalar and dual vector parts of a dual quaternion. In particular, if $\hat A = \hat a_0 + \mathbf{\hat A}$, then
$\operatorname{Sc}(\hat A) = \hat a_0, \qquad \operatorname{Vec}(\hat A) = \mathbf{\hat A}.$
This allows the definition of the conjugate of Â as
$\hat A^* = \operatorname{Sc}(\hat A) - \operatorname{Vec}(\hat A),$
or, equivalently, $\hat A^* = 2\operatorname{Sc}(\hat A) - \hat A$.
The product of a dual quaternion with its conjugate yields
$\hat A\,\hat A^* = \hat a_0^2 + \mathbf{\hat A}\cdot\mathbf{\hat A}.$
This is a dual scalar which is the magnitude squared of the dual quaternion.
Dual number conjugate
A second type of conjugate of a dual quaternion is given by taking the dual number conjugate, given by
$\overline{\hat A} = A - \varepsilon B.$
The quaternion and dual number conjugates can be combined into a third form of conjugate given by
$\overline{\hat A^*} = A^* - \varepsilon B^*.$
In the context of dual quaternions, the term "conjugate" can be used to mean the quaternion conjugate, dual number conjugate, or both.
Norm
The norm of a dual quaternion is computed using the conjugate to compute $\hat A\,\hat A^*$. This is a dual number called the magnitude of the dual quaternion. Dual quaternions with $\hat A\,\hat A^* = 1$ are unit dual quaternions.
Dual quaternions of magnitude 1 are used to represent spatial Euclidean displacements. Notice that the requirement that $\hat A\,\hat A^* = 1$ introduces two algebraic constraints on the components of Â, that is
$A\,A^* = 1, \qquad A\,B^* + B\,A^* = 0.$
The first of these constraints implies that A has magnitude 1, while the second implies that A and B are orthogonal.
Inverse
If $\hat A = p + \varepsilon q$ is a dual quaternion, and p is not zero, then the inverse dual quaternion is given by
$\hat A^{-1} = p^{-1}\left(1 - \varepsilon\, q\, p^{-1}\right).$
Thus the elements of the subspace $\{\varepsilon q : q \in \mathbb H\}$ do not have inverses. This subspace is called an ideal in ring theory. It happens to be the unique maximal ideal of the ring of dual numbers.
The group of units of the dual number ring then consists of numbers not in the ideal. The dual numbers form a local ring since there is a unique maximal ideal. The group of units is a Lie group and can be studied using the exponential mapping. Dual quaternions have been used to exhibit transformations in the Euclidean group. A typical element can be written as a screw transformation.
Dual quaternions and spatial displacements
A benefit of the dual quaternion formulation of the composition of two spatial displacements DB = ([RB], b) and DA = ([RA], a) is that the resulting dual quaternion yields directly the screw axis and dual angle of the composite displacement DC = DBDA.
In general, the dual quaternion associated with a spatial displacement D = ([A], d) is constructed from its screw axis S = (S, V) and the dual angle (φ, d), where φ is the rotation about, and d the slide along, this axis, which defines the displacement D. The associated dual quaternion is given by
$\hat S = \cos\frac{\hat\phi}{2} + \sin\frac{\hat\phi}{2}\,\mathsf S,$
where $\hat\phi = \phi + \varepsilon d$ is the dual angle and $\mathsf S = \mathbf S + \varepsilon\mathbf V$ is the dual vector of the screw axis.
Let the composition of the displacement DB with DA be the displacement DC = DBDA. The screw axis and dual angle of DC is obtained from the product of the dual quaternions of DA and DB, given by
$\hat C = \hat B\,\hat A.$
That is, the composite displacement DC = DBDA has the associated dual quaternion given by
$\cos\frac{\hat\gamma}{2} + \sin\frac{\hat\gamma}{2}\,\mathsf C = \left(\cos\frac{\hat\beta}{2} + \sin\frac{\hat\beta}{2}\,\mathsf B\right)\left(\cos\frac{\hat\alpha}{2} + \sin\frac{\hat\alpha}{2}\,\mathsf A\right),$
where $\hat\alpha$, $\hat\beta$ and $\hat\gamma$ are the dual angles of the three displacements and $\mathsf A$, $\mathsf B$ and $\mathsf C$ their screw axes.
Expand this product in order to obtain
$\cos\frac{\hat\gamma}{2} + \sin\frac{\hat\gamma}{2}\,\mathsf C = \left(\cos\frac{\hat\beta}{2}\cos\frac{\hat\alpha}{2} - \sin\frac{\hat\beta}{2}\sin\frac{\hat\alpha}{2}\,\mathsf B\cdot\mathsf A\right) + \sin\frac{\hat\beta}{2}\cos\frac{\hat\alpha}{2}\,\mathsf B + \cos\frac{\hat\beta}{2}\sin\frac{\hat\alpha}{2}\,\mathsf A + \sin\frac{\hat\beta}{2}\sin\frac{\hat\alpha}{2}\,\mathsf B\times\mathsf A.$
Divide both sides of this equation by the identity
$\cos\frac{\hat\gamma}{2} = \cos\frac{\hat\beta}{2}\cos\frac{\hat\alpha}{2} - \sin\frac{\hat\beta}{2}\sin\frac{\hat\alpha}{2}\,\mathsf B\cdot\mathsf A$
to obtain
$\tan\frac{\hat\gamma}{2}\,\mathsf C = \frac{\tan\frac{\hat\beta}{2}\,\mathsf B + \tan\frac{\hat\alpha}{2}\,\mathsf A + \tan\frac{\hat\beta}{2}\tan\frac{\hat\alpha}{2}\,\mathsf B\times\mathsf A}{1 - \tan\frac{\hat\beta}{2}\tan\frac{\hat\alpha}{2}\,\mathsf B\cdot\mathsf A}.$
This is Rodrigues' formula for the screw axis of a composite displacement defined in terms of the screw axes of the two displacements. He derived this formula in 1840.
The three screw axes A, B, and C form a spatial triangle and the dual angles at these vertices between the common normals that form the sides of this triangle are directly related to the dual angles of the three spatial displacements.
Matrix form of dual quaternion multiplication
The matrix representation of the quaternion product is convenient for programming quaternion computations using matrix algebra, which is true for dual quaternion operations as well.
The quaternion product AC is a linear transformation by the operator A of the components of the quaternion C, therefore there is a matrix representation of A operating on the vector formed from the components of C.
Assemble the components of the quaternion into the array $A = (A_1, A_2, A_3, a_0)$. Notice that the components of the vector part of the quaternion are listed first and the scalar is listed last. This is an arbitrary choice, but once this convention is selected we must abide by it.
The quaternion product AC can now be represented as the matrix product
$AC = [A^+]\, C,$
where $[A^+]$ is the 4×4 left-multiplication matrix built from the components of A (written out in the sketch below).
The product AC can also be viewed as an operation by C on the components of A, in which case we have
$AC = [C^-]\, A,$
with $[C^-]$ the corresponding right-multiplication matrix.
The dual quaternion product ÂĈ = (A, B)(C, D) = (AC, AD+BC) can be formulated as a matrix operation as follows. Assemble the components of Ĉ into the eight dimensional array Ĉ = (C1, C2, C3, c0, D1, D2, D3, d0); then ÂĈ is given by the 8×8 matrix product
$\hat A\,\hat C = \begin{pmatrix} [A^+] & 0 \\ [B^+] & [A^+] \end{pmatrix} \hat C.$
As we saw for quaternions, the product ÂĈ can be viewed as the operation of Ĉ on the coordinate vector Â, which means ÂĈ can also be formulated as an 8×8 matrix built from the components of Ĉ acting on the coordinate vector of Â.
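The left-multiplication matrix can be written out and checked numerically. The sketch below uses the vector-first, scalar-last component ordering adopted above; the function names are hypothetical.

    import numpy as np

    def left_mul_matrix(A1, A2, A3, a0):
        """4x4 matrix [A+] with [A+] @ (C1, C2, C3, c0) = components of AC."""
        return np.array([[ a0, -A3,  A2, A1],
                         [ A3,  a0, -A1, A2],
                         [-A2,  A1,  a0, A3],
                         [-A1, -A2, -A3, a0]])

    def qmul(a, c):
        """Direct Hamilton product, components ordered (x, y, z, w)."""
        ax, ay, az, aw = a
        cx, cy, cz, cw = c
        return np.array([aw*cx + ax*cw + ay*cz - az*cy,
                         aw*cy - ax*cz + ay*cw + az*cx,
                         aw*cz + ax*cy - ay*cx + az*cw,
                         aw*cw - ax*cx - ay*cy - az*cz])

    A = np.array([0.5, -1.0, 2.0, 0.25])
    C = np.array([1.5, 0.5, -0.5, 1.0])
    assert np.allclose(left_mul_matrix(*A) @ C, qmul(A, C))
    print("matrix form agrees with the direct quaternion product")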
More on spatial displacements
The dual quaternion of a displacement D = ([A], d) can be constructed from the quaternion S = cos(φ/2) + sin(φ/2)S that defines the rotation [A] and the vector quaternion constructed from the translation vector d, given by D = d1i + d2j + d3k. Using this notation, the dual quaternion for the displacement D = ([A], d) is given by
$\hat S = S + \frac{\varepsilon}{2}\, D\, S.$
Let the Plücker coordinates of a line in the direction x through a point p in a moving body and its coordinates in the fixed frame which is in the direction X through the point P be given by
$\hat x = \mathbf x + \varepsilon\,\mathbf p\times\mathbf x \qquad\text{and}\qquad \hat X = \mathbf X + \varepsilon\,\mathbf P\times\mathbf X.$
Then the dual quaternion of the displacement of this body transforms Plücker coordinates in the moving frame to Plücker coordinates in the fixed frame by the formula
$\hat X = \hat S\,\hat x\,\hat S^*.$
Using the matrix form of the dual quaternion product, this becomes a pair of 8×8 matrix multiplications.
This calculation is easily managed using matrix operations.
Dual quaternions and 4×4 homogeneous transforms
It might be helpful, especially in rigid body motion, to represent unit dual quaternions as homogeneous matrices. As given above a dual quaternion can be written as $\hat q = r + \varepsilon d$, where r and d are both quaternions. The quaternion r is known as the real or rotational part and the quaternion d is known as the dual or displacement part.
The rotation part can be given by
$r = \cos\frac{\theta}{2} + \sin\frac{\theta}{2}\,(n_x i + n_y j + n_z k),$
where $\theta$ is the angle of rotation about the direction given by the unit vector $\mathbf n = (n_x, n_y, n_z)$. The displacement part can be written as
$d = \frac{1}{2}\,(t_1 i + t_2 j + t_3 k)\, r,$
where $(t_1, t_2, t_3)$ is the translation.
The dual-quaternion equivalent of a 3D-vector $\mathbf v = (v_1, v_2, v_3)$ is
$\hat v = 1 + \varepsilon\,(v_1 i + v_2 j + v_3 k),$
and its transformation by the unit dual quaternion $\hat q$ is given by
$\hat v' = \hat q\,\hat v\,\overline{\hat q^*},$
where $\overline{\hat q^*}$ is the combined (third-form) conjugate defined above.
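A numeric sketch of this sandwich product follows; it rotates the point (1, 0, 0) by 90° about the z-axis and then translates by (1, 2, 3). Quaternion components are ordered (w, x, y, z) here, a convention chosen just for this sketch.

    import numpy as np

    def qmul(a, c):
        aw, ax, ay, az = a
        cw, cx, cy, cz = c
        return np.array([aw*cw - ax*cx - ay*cy - az*cz,
                         aw*cx + ax*cw + ay*cz - az*cy,
                         aw*cy - ax*cz + ay*cw + az*cx,
                         aw*cz + ax*cy - ay*cx + az*cw])

    def qconj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def dq_mul(A, B, C, D):
        """(A + eB)(C + eD) = AC + e(AD + BC)."""
        return qmul(A, C), qmul(A, D) + qmul(B, C)

    theta = np.pi / 2                                # rotation angle
    r = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
    t = np.array([0.0, 1.0, 2.0, 3.0])               # translation, pure vector
    Q_real, Q_dual = r, 0.5 * qmul(t, r)             # unit dual quaternion

    v = np.array([0.0, 1.0, 0.0, 0.0])               # the point (1, 0, 0)
    # v' = Q (1 + e v) (combined conjugate of Q), i.e. A* - e B* on the right
    P_real, P_dual = dq_mul(Q_real, Q_dual, np.array([1.0, 0.0, 0.0, 0.0]), v)
    R_real, R_dual = dq_mul(P_real, P_dual, qconj(Q_real), -qconj(Q_dual))
    print("transformed point:", R_dual[1:])          # expect ~ [1. 3. 3.]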
These dual quaternions (or actually their transformations on 3D-vectors) can be represented by the homogeneous transformation matrix
$T = \begin{pmatrix} R & \mathbf t \\ \mathbf 0^{\mathsf T} & 1 \end{pmatrix},$
where the 3×3 orthogonal matrix R is given, for the unit quaternion $r = w + xi + yj + zk$, by
$R = \begin{pmatrix} 1 - 2(y^2 + z^2) & 2(xy - wz) & 2(xz + wy) \\ 2(xy + wz) & 1 - 2(x^2 + z^2) & 2(yz - wx) \\ 2(xz - wy) & 2(yz + wx) & 1 - 2(x^2 + y^2) \end{pmatrix}.$
For the 3D-vector $\mathbf v = (v_1, v_2, v_3)$,
the transformation by T is given by
$\begin{pmatrix} \mathbf v' \\ 1 \end{pmatrix} = T \begin{pmatrix} \mathbf v \\ 1 \end{pmatrix}, \qquad \mathbf v' = R\,\mathbf v + \mathbf t.$
Connection to Clifford algebras
Besides being the tensor product of two Clifford algebras, the quaternions and the dual numbers, the dual quaternions have two other formulations in terms of Clifford algebras.
First, dual quaternions are isomorphic to the Clifford algebra generated by 3 anticommuting elements $i$, $j$, $e$ with $i^2 = j^2 = -1$ and $e^2 = 0$. If we define $k = ij$ and $\varepsilon = ije$, then the relations defining the dual quaternions are implied by these and vice versa. Second, the dual quaternions are isomorphic to the even part of the Clifford algebra generated by 4 anticommuting elements $e_1, e_2, e_3, e_4$ with
$e_1^2 = e_2^2 = e_3^2 = 1, \qquad e_4^2 = 0.$
For details, see Clifford algebras: dual quaternions.
Eponyms
Since both Eduard Study and William Kingdon Clifford used and wrote about dual quaternions, at times authors refer to dual quaternions as "Study biquaternions" or "Clifford biquaternions". The latter eponym has also been used to refer to split-biquaternions. Read the article by Joe Rooney linked below for view of a supporter of W.K. Clifford's claim. Since the claims of Clifford and Study are in contention, it is convenient to use the current designation dual quaternion to avoid conflict.
See also
Screw theory
Rational motion
Quaternions and spatial rotation
Conversion between quaternions and Euler angles
Olinde Rodrigues
Dual-complex number
References
Notes
Sources
A.T. Yang (1963) Application of quaternion algebra and dual numbers to the analysis of spatial mechanisms, Ph.D. thesis, Columbia University.
A.T. Yang (1974) "Calculus of Screws" in Basic Questions of Design Theory, William R. Spillers, editor, Elsevier, pages 266 to 281.
J.M. McCarthy (1990) An Introduction to Theoretical Kinematics, pp. 62–5, MIT Press.
L. Kavan, S. Collins, C. O'Sullivan, J. Zara (2006) Dual Quaternions for Rigid Transformation Blending, Technical report, Trinity College Dublin.
Joe Rooney, "William Kingdon Clifford", Department of Design and Innovation, the Open University, London.
Joe Rooney (2007) "William Kingdon Clifford", in Marco Ceccarelli, Distinguished figures in mechanism and machine science, Springer.
Eduard Study (1891) "Von Bewegungen und Umlegungen", Mathematische Annalen 39:520.
Further reading
E. Pennestri & R. Stefanelli (2007) Linear Algebra and Numerical Algorithms Using Dual Numbers, published in Multibody System Dynamics 18(3):323–349.
E. Pennestri and P. P. Valentini, Dual Quaternions as a Tool for Rigid Body Motion Analysis: A Tutorial with an Application to Biomechanics, Archiwum Budowy Maszyn, vol. 57, pp. 187–205, 2010.
E. Pennestri and P. P. Valentini, Linear Dual Algebra Algorithms and their Application to Kinematics, Multibody Dynamics, October 2008, pp. 207–229.
D.P. Chevallier (1996) "On the transference principle in kinematics: its various forms and limitations", Mechanism and Machine Theory 31(1):57–76.
M.A. Gungor (2009) "Dual Lorentzian spherical motions and dual Euler-Savary formulas", European Journal of Mechanics A Solids 28(4):820–6.
External links
Dual quaternion toolbox, a Matlab toolbox.
DQrobotics: a standalone open-source library for using dual quaternions within robot modelling and control.
Machines
Kinematics
Quaternions | Dual quaternion | [
"Physics",
"Technology",
"Engineering"
] | 3,285 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Mechanical engineering"
] |
8,730,922 | https://en.wikipedia.org/wiki/P1%20phage | P1 is a temperate bacteriophage that infects Escherichia coli and some other bacteria. When undergoing a lysogenic cycle the phage genome exists as a plasmid in the bacterium unlike other phages (e.g. the lambda phage) that integrate into the host DNA. P1 has an icosahedral head containing the DNA attached to a contractile tail with six tail fibers.
The P1 phage has gained research interest because it can be used to transfer DNA from one bacterial cell to another in a process known as transduction. As it replicates during its lytic cycle it captures fragments of the host chromosome. If the resulting viral particles are used to infect a different host the captured DNA fragments can be integrated into the new host's genome. This method of in vivo genetic engineering was widely used for many years and is still used today, though to a lesser extent. P1 can also be used to create the P1-derived artificial chromosome cloning vector which can carry relatively large fragments of DNA. P1 encodes a site-specific recombinase, Cre, that is widely used to carry out cell-specific or time-specific DNA recombination by flanking the target DNA with loxP sites (see Cre-Lox recombination).
Morphology
The virion is similar in structure to the T4 phage but simpler. It has an icosahedral head containing the genome attached at one vertex to the tail. The tail has a tube surrounded by a contractile sheath. It ends in a base plate with six tail fibres. The tail fibres are involved in attaching to the host and providing specificity.
Genome
The genome of the P1 phage is moderately large, around 93Kbp in length (compared to the genomes of e.g. T4 - 169Kbp, lambda - 48Kbp and Ff - 6.4Kbp). In the viral particle it is in the form of a linear double stranded DNA molecule. Once inserted into the host it circularizes and replicates as a plasmid.
In the viral particle the DNA molecule is longer (110Kbp) than the actual length of the genome. It is created by cutting an appropriately sized fragment from a concatemeric DNA chain having multiple copies of the genome (see the section below on lysis for how this is made). Due to this the ends of the DNA molecule are identical. This is referred to as being terminally redundant. This is important for the DNA to be circularized in the host. Another consequence of the DNA being cut out of a concatemer is that a given linear molecule can start at any location on the circular genome. This is called a cyclic permutation.
The genome is especially rich in Chi sequences recognized by the bacterial recombinase RecBCD. The genome contains two origins of replication: oriR which replicates it during the lysogenic cycle and oriL which replicates it during the lytic stage. The genome of P1 encodes three tRNAs which are expressed in the lytic stage.
Proteome
The genome of P1 encodes 112 proteins and 5 untranslated genes, and is thus about twice the size of that of bacteriophage lambda.
Life cycle
Infection and early stages
The phage particle adsorbs onto the surface of the bacterium using the tail fibers for specificity. The tail sheath contracts and the DNA of the phage is injected into the host cell. Either the host DNA recombination machinery or the Cre enzyme translated from the viral DNA recombines the terminally redundant ends and circularizes the genome. Depending on various physiological cues, the phage may immediately proceed to the lytic phase or it may enter a lysogenic state.
The gene that encodes the tail fibers has a set of sequences that can be targeted by the site-specific recombinase Cin. This causes the C-terminal end of the protein to switch between two alternate forms at a low frequency. The viral tail fibers are responsible for the specificity of binding to the host receptor. The targets of the viral tail fibers are under constant pressure to evolve and evade binding. This method of recombinational diversity of the tail allows the virus to keep up with the bacterium. This system has close sequence homologies to recombinational systems in the tail fibers of unrelated phages like the mu phage and the lambda phage.
Lysogeny
The genome of the P1 phage is maintained as a low copy number plasmid in the bacterium. The relatively large size of the plasmid requires it to keep a low copy number lest it become too large a metabolic burden while it is a lysogen. As there is usually only one copy of the plasmid per bacterial genome, the plasmid stands a high chance of not being passed to both daughter cells. The P1 plasmid combats this by several methods:
The plasmid replication is tightly regulated by a RepA protein dependent mechanism. This is similar to the mechanism used by several other plasmids. It ensures that the plasmid replicates in step with the host genome.
Interlocked plasmids are quickly unlinked by Cre-lox recombination
The plasmid encodes a plasmid addiction system that kills daughter cells that lose the plasmid. It consists of a stable protein toxin and an antitoxin that reversibly binds to and neutralizes it. Cells that lose the plasmid get killed as the antitoxin gets degraded faster than the toxin.
Lysis
The P1 plasmid has a separate origin of replication (oriL) that is activated during the lytic cycle. Replication begins by a regular bidirectional theta replication at oriL but later in the lytic phase, it switches to a rolling circle method of replication using the host recombination machinery. This results in numerous copies of the genome being present on a single linear DNA molecule called a concatemer. The end of the concatemer is cut at a specific site called the pac site or packaging site. This is followed by the packing of the DNA into the heads until they are full. The rest of the concatemer that does not fit into one head is separated and the machinery begins packing this into a new head. The location of the cut is not sequence specific. Each head holds around 110 kbp of DNA so there is a little more than one complete copy of the genome (~90 kbp) in each head, with the ends of the strand in each head being identical. After infecting a new cell this terminal redundancy is used by the host recombination machinery to cyclize the genome if it lacks two copies of the lox locus. If two lox sites are present (one in each terminally redundant end) the cyclization is carried out by the Cre recombinase.
Once the complete virions are assembled, the host cell is lysed, releasing the viral particles.
History
P1 was discovered in 1951 by Giuseppe Bertani in Salvador Luria's laboratory, but the phage was little studied until Ed Lennox, also in Luria's group, showed in 1954–5 that it could transduce genetic material between host bacteria. This discovery led to the phage being used for genetic exchange and genome mapping in E. coli, and stimulated its further study as a model organism. In the 1960s, Hideo Ikeda and Jun-ichi Tomizawa showed the phage's DNA genome to be linear and double-stranded, with redundancy at the ends. In the 1970s, Nat Sternberg characterised the Cre–lox site-specific recombination system, which allows the linear genome to circularise to form a plasmid after infection. During the 1980s, Sternberg developed P1 as a vector for cloning large pieces of eukaryotic DNA. A P1 gene map based on a partial DNA sequence was published in 1993 by Michael Yarmolinsky and Małgorzata Łobocka, and the genome was completely sequenced by Łobocka and colleagues in 2004.
References
External links
Viralzone: P1-like phage
Molecular biology
Myoviridae | P1 phage | [
"Chemistry",
"Biology"
] | 1,737 | [
"Biochemistry",
"Molecular biology"
] |
8,731,428 | https://en.wikipedia.org/wiki/Kile%20%28unit%29 | The kile () was an Ottoman unit of volume similar to a bushel, like other dry measures also often defined as a specific weight of a particular commodity. Its value varied widely by location, period, and commodity, from 8 to 132 oka. The 'standard' kile was 36 litres or 20 oka.
References
Diran Kélékian, Dictionnaire Turc-Français, Constantinople: Imprimerie Mihran, 1911.
A.D. Alderson and Fahir İz, The Concise Oxford Turkish Dictionary, 1959.
Halil İnalcık, Donald Quataert, An Economic and Social History of the Ottoman Empire, 1300-1914, Cambridge University Press, 1997. . Has extensive tables of values of the kile at various times and places.
Obsolete units of measurement
Units of mass
Units of volume
Turkish words and phrases
Ottoman units of measurement | Kile (unit) | [
"Physics",
"Mathematics"
] | 182 | [
"Obsolete units of measurement",
"Matter",
"Units of volume",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
8,731,629 | https://en.wikipedia.org/wiki/Life%20Sciences%20Greenhouse%20of%20Central%20Pennsylvania | Life Sciences Greenhouse of Central Pennsylvania (LSGPA) is a biotechnology initiative and non-profit organization based in Harrisburg, Pennsylvania. It was founded in 2001. It focuses on in the advancement of life sciences through technology to improve the healthcare and economic opportunities of Pennsylvanians.
Background
The initiative began in 2001, funded from the state's settlement with the tobacco industry. Other life sciences greenhouses in Philadelphia and Pittsburgh also received seed money from the settlement. LSGPA partners with a range of institutions, including local research universities, colleges, medical centers, economic development agencies and companies of various sizes to identify needs and opportunities. It then works to help transfer technologies, develop new companies, provide support for existing companies (particularly those seeking to expand or relocate), and ensure that the infrastructure to support a thriving life sciences industry keeps pace with development.
Research areas
Central Pennsylvania has three large research universities which contribute to the initiative. Collectively, these three institutions attract more than $600 million in sponsored research funding annually. They are:
Lehigh University, located in Bethlehem, Pennsylvania
Penn State University, located in State College, Pennsylvania
Penn State Hershey Medical Center, located in Hershey, Pennsylvania
References
External links
Life Sciences Greenhouse of Central Pennsylvania
Organizations based in Harrisburg, Pennsylvania
Healthcare in Harrisburg, Pennsylvania
Lehigh University
Pennsylvania State University
Penn State Milton S. Hershey Medical Center
Biotechnology organizations
Life sciences industry
2001 establishments in the United States
2001 establishments in Pennsylvania
Organizations established in 2001 | Life Sciences Greenhouse of Central Pennsylvania | [
"Engineering",
"Biology"
] | 289 | [
"Biotechnology organizations",
"Life sciences industry"
] |
8,731,811 | https://en.wikipedia.org/wiki/Peppercoin | Peppercoin is a cryptographic system for processing micropayments. Peppercoin Inc. was a company that offered services based on the peppercoin method.
The peppercoin system was developed by Silvio Micali and Ron Rivest and first presented at the RSA Conference in 2002 (although it had not yet been named). The core idea is to bill one randomly selected transaction a lump sum of money rather than bill each transaction a small amount. It uses "universal aggregation", which means that it aggregates transactions over users, merchants as well as payment service providers. The random selection is cryptographically secure: it cannot be influenced by any of the parties. It is claimed to reduce the transaction cost per dollar from 27 cents to "well below 10 cents."
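As a toy model of the statistical idea (not Peppercoin's actual cryptographic protocol: the lump-sum value, the selection probability, and the use of an ordinary pseudorandom generator here are all illustrative assumptions), one can check that billing a randomly selected transaction a lump sum preserves expected revenue:

```python
import random

LUMP = 10.00  # illustrative lump-sum billing unit, in dollars

def settle(transactions):
    # bill LUMP with probability value/LUMP, so the expected charge per
    # transaction equals its value while most transactions incur no
    # individual processing cost
    return sum(LUMP for value in transactions
               if random.random() < value / LUMP)

random.seed(1)
micro = [0.25] * 4000             # 4000 quarter-dollar micropayments
print(settle(micro), sum(micro))  # billed total fluctuates around 1000.0
```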
Peppercoin, Inc. was a privately held company founded in late 2001 by Micali and Rivest based in Waltham, MA. It has secured about $15M in venture capital in two rounds of funding. Its services have seen modest adoption. Peppercoin collects 5-9% of transaction cost from the merchant. Peppercoin, Inc. was bought out in 2007 by Chockstone for an undisclosed amount.
References
Financial services companies established in 2001
Financial cryptography
Payment systems | Peppercoin | [
"Engineering"
] | 260 | [
"Financial cryptography",
"Cybersecurity engineering"
] |
8,732,068 | https://en.wikipedia.org/wiki/New%20Holland%20Brewing%20Company | New Holland Brewing Company is an American independent craft brewing and distilling company headquartered in Holland, Michigan. It also owns and operates brewpub-style restaurants and spirits-tasting rooms located across West Michigan. The company's craft-style beer brands Dragon's Milk, Tangerine Space Machine, and spirits brands Dragon's Milk Origin, Beer Barrel Bourbon among others, are distributed throughout the United States and exported to Canada, Europe and Asia.
After the sale of Bell's to Kirin, New Holland Brewing Company became the largest craft brewery in the state of Michigan.
History
Brett VanderKamp and Jason Spaulding, the founders of New Holland Brewing Company, grew up together in Midland, Michigan, and later attended Hope College. In college Spaulding and VanderKamp cultivated a love of homebrewing, which would bring them together again shortly after graduation. Their business plan took two years to formulate, but once complete, the pair quickly lined up investors, and in 1997 New Holland was founded in Holland, Michigan.
Originally, their goal was to produce beer that was characteristically unique to Western Michigan. Their beer was well received, and the company increased production in 2006 and again in 2007.
New Holland began distilling bourbon, whiskey, rum, gin and vodka in 2005, and selling it in 2008.
On August 23, 2018, New Holland Brewing Company announced that it would be re-branding its flagship Dragon's Milk Bourbon Barrel-Aged Stout. The company launched the re-branded Dragon's Milk packaging in 2023 alongside new Dragon's Milk items, Dragon's Milk Crimson Keep BA Imperial Red Ale and Dragon's Milk Tales of Gold BA Imperial Golden Ale.
References
Breweries in the United States
American beer brands
Beer brewing companies based in Michigan
Distilleries
Bourbon whiskey
Cocktails
Restaurants in Michigan
Companies based in Michigan
Pub chains
Food- and drink-related organizations
Holland, Michigan
Grand Rapids, Michigan
Battle Creek, Michigan | New Holland Brewing Company | [
"Chemistry"
] | 409 | [
"Distilleries",
"Distillation"
] |
8,732,238 | https://en.wikipedia.org/wiki/Crowdreviewing | Crowdreviewing is the practice of gathering opinion or feedback from a large number of people, typically via the internet or an online community; a portmanteau of "crowd" and "reviews". Crowdreviewing is also often viewed as a form of crowd voting which occurs when a website gathers a large group's opinions and judgment. The concept is based on the principles of crowdsourcing and lets users submit online reviews to participate in building online metrics that measure performance. By harnessing social collaboration in the form of feedback individuals are generally able to form a more informed opinion.
Role of the crowd
In crowdreviewing the crowd becomes the source of information used in determining the relative performance of products and services. As crowdreviewing focuses on receiving input from a large number of parties, the resulting collaboration produces more credible feedback compared to the feedback left by a single party. The responsibility of identifying strengths and weaknesses falls to multiple individuals which each have had their own experience rather than on a single individual. Buyers will therefore be more likely to trust the feedback of a collective group of people rather than a single individual.
Common Parties
The crowd consists of a number of different parties which have various interests in regards to the outcome produced.
Potential Customers
A potential customer of a product or service would have an interest in viewing information on how a particular product or service stands in terms of subjective or objective quality before making a purchasing decision. Potential customers may also be interested in leaving feedback on a particular product or service to explain why they did not make their purchase.
Customers
Customers of products and services are a primary party in the process of reviewing. Customers are closely connected to the process as they would have first-hand experience with a product and service. Their primary role would be detailing their experiences with the product or service. A customer's interest in crowdreviewing would stem from an interest in showing their appreciation towards the quality of a product or service or in voicing their concerns or disappointment in a product or service.
Sellers
Sellers usually get their satisfied customers involved in leaving reviews for their products and services. A seller has an interest in having positive feedback on display as a means to influence potential buyers.
Competitors
Competitors would have an interest in reviewing feedback from the crowd as a means of obtaining competitive intelligence.
There may be other audiences involved in the process such as employees, suppliers, partners, and other relevant parties.
Benefits and risks
There are a number of benefits to the different parties which make up the crowd. Potential buyers are able to obtain information on products and services prior to making a purchase. Those which have already bought or used the product or service are able to post experiences both positive and negative in order to inform other potential buyers. As an additional benefit to the buyers, buyers may also post negative reviews in hopes of resolving their negative experiences with their seller. Sellers have the benefits of receiving positive feedback and also potentially resolving issues with dissatisfied customers. Competitors are able to learn more about what their competition is doing in order to improve their own products and services.
In addition to the benefits associated with crowdreviewing, there are a number of risks and challenges to overcome. For potential buyers there is always the risk that reviews may be sourced by the vendors themselves or other parties paid to leave a specific type of feedback on a product or service. Sellers have the possibility of receiving negative reviews which may in turn negatively influence their reputation and affect their bottom line revenue numbers. Competitors, while enjoying the benefit of being able to learn from their competitors are also subject to their competitors learning about their positives and negatives.
Limitations and controversies
Size of the Crowd
One of the major factors influencing crowdreviewing is the size of the crowd involved. A crowdreviewing venture is positively influenced by having a large number of parties leave reviews and feedback on products and services. In cases where a small number of individuals leave their feedback, more weight is placed on an individual reviewer or opinion and could therefore be of minimal value to potential customers. A smaller sample of reviews may also exhibit bias towards or against the product or service.
Industry Knowledge
A common limitation of allowing all parties to have an opportunity to review a product or service may involve having reviews written without a minimal or meaningful understanding of the product or service. A lack of industry or specialized knowledge may in turn minimize the value of a review or potentially inversely affect what would be considered a fair review.
Seller Manipulation
With allowing multiple parties to review a product or service there is a possibility that a seller may attempt to manipulate reviews in a number of ways. Sellers may hire third parties or create fake identities in order to leave positive reviews on their product or service. They may also do the same to create negative reviews on competing products and services.
Balance of Negative and Positive Reviews
Customers which have a negative experience with a product or service are more likely to offer their review in an effort to resolve buyer's remorse, in comparison to those which have had a positive experience.
One Side of the Story
Those reading reviews on products and services are likely to view reviews which only tell one side of the story. This is a disadvantage to both a potential customer and seller, as the review may not tell the other side of a story which may be based on a misunderstanding.
See also
Distributed thinking
Collective consciousness
Participatory monitoring
Crowdfunding
Crowdsourcing
References
Further reading
Is There an eBay for Ideas? European Management Review, 2011
Herding Behavior as a Network Externality, Proceedings of the International Conference on Information Systems, Shanghai, December 2011
The Geography of Crowdfunding, NET Institute Working Paper No. 10-08, Oct 2010
The micro-price of micropatronage, The Economist, September 27, 2010
Putting your money where your mouse is, The Economist, September 2, 2010
Cash-strapped entrepreneurs get creative in BBC News
Harter, J.K., Schmidt, F.L., & Keyes, C.L. (2002). Well-Being in the Workplace and its Relationship to Business Outcomes: A Review of the Gallup Studies. In C.L. Keyes & J. Haidt (Eds.), Flourishing: The Positive Person and the Good Life (pp. 205–224). Washington D.C.: American Psychological Association.
Surowiecki, James, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, 2004
Internet terminology
Customer experience
Collaboration
Collective intelligence
Human-based computation
Social information processing
Crowd psychology | Crowdreviewing | [
"Technology"
] | 1,315 | [
"Information systems",
"Computing terminology",
"Human-based computation",
"Internet terminology"
] |
8,732,281 | https://en.wikipedia.org/wiki/GOR%20method | The GOR method (short for Garnier–Osguthorpe–Robson) is an information theory-based method for the prediction of secondary structures in proteins. It was developed in the late 1970s shortly after the simpler Chou–Fasman method. Like Chou–Fasman, the GOR method is based on probability parameters derived from empirical studies of known protein tertiary structures solved by X-ray crystallography. However, unlike Chou–Fasman, the GOR method takes into account not only the propensities of individual amino acids to form particular secondary structures, but also the conditional probability of the amino acid to form a secondary structure given that its immediate neighbors have already formed that structure. The method is therefore essentially Bayesian in its analysis.
Method
The GOR method analyzes sequences to predict alpha helix, beta sheet, turn, or random coil secondary structure at each position based on 17-amino-acid sequence windows. The original description of the method included four scoring matrices of size 17×20, where the columns correspond to the log-odds score, which reflects the probability of finding a given amino acid at each position in the 17-residue sequence. The four matrices reflect the probabilities of the central, ninth amino acid being in a helical, sheet, turn, or coil conformation. In subsequent revisions to the method, the turn matrix was eliminated due to the high variability of sequences in turn regions (particularly over such a large window). The method was considered to perform best when requiring at least four contiguous residues to score as alpha helix before classifying a region as helical, and at least two contiguous residues for a beta sheet.
Algorithm
The mathematics and algorithm of the GOR method were based on an earlier series of studies by Robson and colleagues reported mainly in the Journal of Molecular Biology and The Biochemical Journal. The latter describes the information theoretic expansions in terms of conditional information measures. The use of the word "simple" in the title of the GOR paper reflected the fact that the above earlier methods provided proofs and techniques somewhat daunting by being rather unfamiliar in protein science in the early 1970s; even Bayes methods were then unfamiliar and controversial. An important feature of these early studies, which survived in the GOR method, was the treatment of the sparse protein sequence data of the early 1970s by expected information measures. That is, expectations on a Bayesian basis considering the distribution of plausible information measure values given the actual frequencies (numbers of observations). The expectation measures resulting from integration over this and similar distributions may now be seen as composed of "incomplete" or extended zeta functions, e.g. z(s, observed frequency) − z(s, expected frequency), with incomplete zeta function z(s, n) = 1 + (1/2)^s + (1/3)^s + (1/4)^s + … + (1/n)^s. The GOR method used s = 1. Also, in the GOR method and the earlier methods, the measure for the contrary state to e.g. helix H, i.e. ~H, was subtracted from that for H, and similarly for beta sheet, turns, and coil or loop. Thus the method can be seen as employing a zeta function estimate of log predictive odds. An adjustable decision constant could also be applied, which thus implies a decision theory approach; the GOR method allowed the option to use decision constants to optimize predictions for different classes of protein. The expected information measure used as a basis for the information expansion was less important by the time of publication of the GOR method because protein sequence data became more plentiful, at least for the terms considered at that time. Then, for s = 1, the expression z(s, observed frequency) − z(s, expected frequency) approaches the natural logarithm of (observed frequency / expected frequency) as frequencies increase. However, this measure (including use of other values of s) remains important in later more general applications with high-dimensional data, where data for more complex terms in the information expansion are inevitably sparse.
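A brief sketch of the measures described here; the incomplete zeta function follows the formula in the text, while the window scorer uses hypothetical log-odds matrices (the real GOR parameters are tabulated from solved structures):

```python
import math

def zeta_partial(s, n):
    # incomplete zeta function z(s, n) = 1 + (1/2)^s + ... + (1/n)^s
    return sum((1.0 / k) ** s for k in range(1, n + 1))

def expected_info(observed, expected, s=1):
    # expected information measure z(s, observed) - z(s, expected);
    # for s = 1 this tends to ln(observed/expected) as counts grow
    return zeta_partial(s, observed) - zeta_partial(s, expected)

print(expected_info(500, 100), math.log(500 / 100))  # ~1.605 vs ~1.609

def window_score(seq, i, matrix, half=8):
    # sum log-odds contributions over the 17-residue window centred at i;
    # matrix[k] maps a residue letter to its score at window position k (0..16)
    total = 0.0
    for off in range(-half, half + 1):
        j = i + off
        if 0 <= j < len(seq):
            total += matrix[off + half].get(seq[j], 0.0)
    return total
```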
See also
List of protein structure prediction software
References
Bioinformatics
Protein methods
Applications of Bayesian inference | GOR method | [
"Chemistry",
"Engineering",
"Biology"
] | 864 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics"
] |
8,733,255 | https://en.wikipedia.org/wiki/Nicol%C3%B2%20Pacassi | Nicolò Pacassi (5 March 1716 – 11 November 1790), also known as Nikolaus Pacassi, was an Italian-Austrian architect. He was born in Wiener Neustadt in Lower Austria in a family of merchants from Gorizia. In 1753, he was appointed court architect to Maria Theresa of Austria. He was commissioned many works throughout the Austrian Empire, mainly in Vienna, Prague, Innsbruck, Buda and his native Gorizia and Gradisca. He died in Vienna.
Works
1743 extension of Schloss Hetzendorf
1745–47 extension of Schönbrunn Palace including Schlosstheater Schönbrunn
1749–58 Buda Castle
1753–54 extension of Spanish Hall of Prague Castle
1753–75 Royal Palace of Prague Castle
1761–63 Rebuilt the Theater am Kärntnertor, Vienna
1770 Reconstruction of Prague's cathedral St Vitus' tower
1766 Extension of Ballhausplatz
Palazzo Attems Petzenstein in Gorizia
1784 Josephinum - designed as the Academy for Military Surgeons, sponsored by Emperor Joseph II
References
Architects from Vienna
Austrian people of Italian descent
People from Wiener Neustadt
People from Gorizia
1716 births
1790 deaths
18th-century Austrian architects | Nicolò Pacassi | [
"Engineering"
] | 253 | [
"Architecture stubs",
"Architecture"
] |
8,733,398 | https://en.wikipedia.org/wiki/Ben%20Franklin%20effect | The Ben Franklin effect is a psychological phenomenon in which people like someone more after doing a favor for them. An explanation for this is cognitive dissonance. People reason that they help others because they like them, even if they do not, because their minds struggle to maintain logical consistency between their actions and perceptions.
The Benjamin Franklin effect, in other words, is the result of one's concept of self coming under attack. Every person develops a persona, and that persona persists because inconsistencies in one's personal narrative get rewritten, redacted, and misinterpreted.
Franklin's observation of effect
Benjamin Franklin, after whom the effect is named, quoted what he described as an "old maxim" in his autobiography: "He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged."
Franklin explains how he dealt with the animosity of a rival legislator when he served in the Pennsylvania Assembly in the 18th century: he asked the rival to lend him a rare book from his library, returned it promptly with a note of thanks, and found that the man thereafter treated him with lasting civility and friendship.
Research
A study of the effect was done by Jecker and Landy in 1969, in which students were invited to take part in a Q&A competition run by the researcher in which they could win sums of money. After this competition was over, one-third of the students who had "won" were approached by the researcher, who asked them to return the money on the grounds that he had used his own funds to pay the winners and was running short of money; another third were asked by a secretary to return the money because it was from the psychology department and funds were low; and the final third were not approached at all. All three groups were then asked how much they liked the researcher. The second group liked him the least, and the first group the most, suggesting that a refund request by an intermediary had decreased their liking, while a direct request had increased their liking.
In 1971, University of North Carolina psychologists John Schopler and John Compere carried out the following experiment:
They had their subjects administer learning tests to accomplices pretending to be other students. The subjects were told the learners would watch as the teachers used sticks to tap out long patterns on a series of wooden cubes. The learners would then be asked to repeat the patterns. Each teacher was to try out two different methods on two different people, one at a time. In one run, the teachers would offer encouragement when the learner got the patterns correct. In the other run of the experiment, the teacher insulted and criticized the learner when they erred. Afterward, the teachers filled out a debriefing questionnaire that included questions about how attractive (as a human being, not romantically) and likable the learners were. Across the board, the subjects who received the insults were rated as less attractive than the ones who got encouragement.
The subjects' own conduct toward the accomplices shaped their perception of them: "You tend to like the people to whom you are kind and dislike the people to whom you are rude."
Results were reproduced in a more recent but smaller study by psychologist Yu Niiya with Japanese and American subjects.
Effect as an example of cognitive dissonance
This perception of Franklin has been cited as an example within cognitive dissonance theory, which says that people change their attitudes or behavior to resolve tensions, or "dissonance", between their thoughts, attitudes, and actions. In the case of the Ben Franklin effect, the dissonance is between the subject's negative attitudes to the other person and the knowledge that they did that person a favor.
Alternative explanations
Psychologist Yu Niiya attributes the phenomenon to the requestee reciprocating a perceived attempt by the requester to ignite friendly relations. This theory would explain the Ben Franklin effect's absence when an intermediary is used.
Uses
In the sales field, the Ben Franklin effect can be used to build rapport with a client. Instead of offering to help the potential client, a salesperson can instead ask the potential client for assistance: "For example, ask them to share with you what product benefits they find most compelling, where they think the market is headed, or what products may be of interest several years from now. This pure favor, left unrepaid, can build likability that will enhance your ability to earn that client's time and investment in the future."
The Benjamin Franklin effect can also be observed in successful mentor-protege relationships. Such relationships, one source points out, "are defined by their fundamental imbalance of knowledge and influence. Attempting to proactively reciprocate favors with a mentor can backfire, as the role reversal and unsolicited assistance may put your mentor in an unexpected, awkward situation".
The Ben Franklin effect was cited in Dale Carnegie's bestselling book How to Win Friends and Influence People. Carnegie interprets the request for a favor as "a subtle but effective form of flattery".
As Carnegie suggests:
...when we ask a colleague to do us a favour, we are signalling that we consider them to have something we don't, whether more intelligence, more knowledge, more skills, or whatever. This is another way of showing admiration and respect, something the other person may not have noticed from us before. This immediately raises their opinion of us and makes them more willing to help us again both because they enjoy the admiration and have genuinely started to like us.
Psychologist Yu Niiya suggests that the Ben Franklin effect vindicates Takeo Doi's theory of amae (甘え), as described in The Anatomy of Dependence. It states that dependent, childlike behavior can induce a parent-child bond where one partner sees themselves as the caretaker. In effect, amae creates a relationship where one person feels responsible for the other, who is then free to act immaturely and make demands.
One commentator has discussed the Ben Franklin effect in connection with dog training, thinking "more about the human side of the relationship rather than about the dogs themselves." While trainers often distinguish between the impact of positive and negative reinforcement-based training methods on the dogs, it can also be relevant to "consider the effects that these two approaches may have upon the trainer. The Ben Franklin Effect suggests that how we treat our dogs during training influences how we think about them as individuals – specifically, how much we like (or dislike) them. When we do nice things for our dogs in the form of treats, praise, petting and play to reinforce desired behaviors, such treatment may result in our liking them more. And, if we use harsh words, collar jerks or hitting in an attempt to change our dog's behavior, then...we will start to like our dog less."
Converse
The opposite case is also believed to be true, namely that we come to hate a person to whom we did wrong. We dehumanize them to justify the bad things we did to them.
It has been suggested that if soldiers who have killed enemy servicemen in combat situations later come to hate them, it is because this psychological maneuver helps to "decrease the dissonance of killing". Such a phenomenon might also "explain long-standing grudges like Hatfield vs. McCoy" or vendetta situations in various cultures: "Once we start, we may not be able to stop and engage in behavior we would normally never allow." As one commentator has put it, "Jailers come to look down on inmates; camp guards come to dehumanize their captives; soldiers create derogatory terms for their enemies. It's difficult to hurt someone you admire. It's even more difficult to kill a fellow human being. Seeing the casualties you create as something less than you, something deserving of damage, makes it possible to continue seeing yourself as a good and honest person, to continue being sane."
See also
Foot-in-the-door technique
Icebreaker (facilitation)
Reciprocity (social psychology)
Sunk-cost fallacy
Social proof
Notes
Further reading
Ben Franklin Effect at Tabroot
The Ben Franklin Effect: An Unexpected Way to Build Rapport
Attitude change
Benjamin Franklin
Cognition
Cognitive biases
Interpersonal relationships
Motivational theories
Psychological effects | Ben Franklin effect | [
"Biology"
] | 1,677 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
8,733,599 | https://en.wikipedia.org/wiki/Drug%20Effectiveness%20Review%20Project | The Drug Effectiveness Review Project (DERP) is a self-governed collaboration of state Medicaid and public pharmacy programs that commission high-quality evidence-based research products to assist policymakers and other decision-makers grappling with difficult drug coverage decisions. Housed at the Center for Evidence-based Policy at Oregon Health & Science University in Portland, Oregon, DERP produces concise, comparative, evidence-based research products that evaluate the efficacy, effectiveness, and safety of drugs in many widely used drug classes.
Nationally recognized for its clinical objectivity and high-quality research, DERP focuses on specialty and other high-impact drugs – particularly those that have potential to change clinical practice. The program's goal is to ultimately help improve appropriate patient access, safety, and quality of care while helping government programs contain exploding costs for new pharmaceutical therapies. DERP uses a collaborative governing model to develop work plans that provide independent and objective research on drug effectiveness and safety to bring evidence to health policy decisions.
DERP reports include up-to-date clinical evidence on efficacy, adverse events, and safety information for the drugs reviewed. These reports and research products are not usage guidelines, nor should they be read as an endorsement of or recommendation for any particular drug, use or approach. Rather, DERP reports are used by policy makers to develop criteria for drug coverage, such as prior authorizations, clinical edits, drug utilization management policies, and provider or patient education materials. DERP research products include a comprehensive search of the global evidence, an objective appraisal of the quality of the studies found, and a thorough synthesis of high-quality evidence. Policymakers are able to use these reports and research products to make informed policy decisions that improve patient outcomes and contain costs.
Sources
External links
The Drug Effectiveness Review Project
Pharmacological societies
Systematic review | Drug Effectiveness Review Project | [
"Chemistry"
] | 373 | [
"Pharmacological societies",
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
8,733,699 | https://en.wikipedia.org/wiki/Pegylated%20interferon | Pegylated interferon (PEG-IFN) is a class of medication that includes three different drugs as of 2012:
Pegylated interferon-alpha-2a
Pegylated interferon-alpha-2b
Pegylated interferon beta-1a
In these formulations, polyethylene glycol (PEG) is added to make interferon last longer in the body. They are used to treat hepatitis B, hepatitis C, and multiple sclerosis.
Pegylated interferon is contraindicated in patients with hyperbilirubinaemia.
References
Antiviral drugs
Immunostimulants
World Health Organization essential medicines | Pegylated interferon | [
"Biology"
] | 139 | [
"Antiviral drugs",
"Biocides"
] |
8,734,100 | https://en.wikipedia.org/wiki/Latency%20%28audio%29 | Latency refers to a short period of delay (usually measured in milliseconds) between when an audio signal enters a system, and when it emerges. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion, and the speed of sound in the transmission medium.
Latency can be a critical performance metric in professional audio, including sound reinforcement systems, foldback systems (especially those using in-ear monitors), live radio and television. Excessive audio latency has the potential to degrade call quality in telecommunications applications. Low latency audio in computers is important for interactivity.
Telephone calls
In all systems, latency can be said to consist of three elements: codec delay, playout delay and network delay.
Latency in telephone calls is sometimes referred to as delay; the telecommunications industry also uses the term quality of experience (QoE). Voice quality is measured according to the ITU model; measurable quality of a call degrades rapidly where the mouth-to-ear delay latency exceeds 200 milliseconds. The mean opinion score (MOS) is also comparable in a near-linear fashion with the ITU's quality scale - defined in standards G.107, G.108 and G.109 - with a quality factor R ranging from 0 to 100. An MOS of 4 ('Good') would have an R score of 80 or above; to achieve 100R requires an MOS exceeding 4.5.
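For reference, a sketch of the R-to-MOS conversion; the polynomial below is the mapping commonly given for the ITU-T G.107 E-model:

```python
def r_to_mos(r):
    # E-model R factor to mean opinion score (clamped to [1, 4.5])
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(r_to_mos(80), 2))  # ~4.02, i.e. 'Good', matching the text
```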
The ITU and 3GPP group end-user services into classes based on latency sensitivity.
Similarly, the G.114 recommendation regarding mouth-to-ear delay indicates that most users are "very satisfied" as long as latency does not exceed 200 ms, with a corresponding R of 90+. Codec choice also plays an important role; the highest quality (and highest bandwidth) codecs like G.711 are usually configured to incur the least encode-decode latency, so on a network with sufficient throughput low end-to-end latencies can be achieved. G.711 at a bitrate of 64 kbit/s is the encoding method predominantly used on the public switched telephone network.
Mobile calls
The AMR narrowband codec, used in GSM and UMTS networks, introduces latency in the encode and decode processes.
As mobile operators upgrade existing best-effort networks to support concurrent multiple types of service over all-IP networks, services such as Hierarchical Quality of Service (H-QoS) allow for per-user, per-service QoS policies to prioritise time-sensitive protocols like voice calls, and other wireless backhaul traffic.
Another aspect of mobile latency is the inter-network handoff; as a customer on Network A calls a Network B customer the call must traverse two separate Radio Access Networks, two core networks, and an interlinking Gateway Mobile Switching Centre (GMSC) which performs the physical interconnecting between the two providers.
IP calls
With end-to-end QoS managed and assured rate connections, latency can be reduced to analogue PSTN/POTS levels. On a stable connection with sufficient bandwidth and minimal latency, VoIP systems typically have a minimum of 20 ms inherent latency. Under less ideal network conditions a 150 ms maximum latency is sought for general consumer use. Many popular videoconferencing systems rely on data buffering and data redundancy to cope with network jitter and packet loss. Measurements have shown mouth-to-ear delays of between 160 and 300 ms over a 500-mile distance under average US network conditions. Latency is a larger consideration when an echo is present and systems must perform echo suppression and cancellation.
Computer audio
Latency can be a particular problem in audio platforms on computers. Supported interface optimizations reduce the delay down to times that are too short for the human ear to detect. By reducing buffer sizes, latency can be reduced. A popular optimization solution is Steinberg's ASIO, which bypasses the audio platform, and connects audio signals directly to the sound card's hardware. Many professional and semi-professional audio applications utilize the ASIO driver, allowing users to work with audio in real time. Pro Tools HD offers a low latency system similar to ASIO. Pro Tools 10 and 11 are also compatible with ASIO interface drivers.
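As an illustration of the buffer-size trade-off (single-buffer figures only; real round-trip latency adds converter, driver, and double-buffering overhead):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    # time to fill (or drain) one audio buffer
    return 1000.0 * buffer_samples / sample_rate_hz

for n in (64, 256, 1024):
    print(n, round(buffer_latency_ms(n, 48_000), 2))  # 1.33, 5.33, 21.33 ms
```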
The Linux realtime kernel is a modified kernel, that alters the standard timer frequency the Linux kernel uses and gives all processes or threads the ability to have realtime priority. This means that a time-critical process like an audio stream can get priority over another, less-critical process like network activity. This is also configurable per user (for example, the processes of user "tux" could have priority over processes of user "nobody" or over the processes of several system daemons).
Digital television audio
Many modern digital television receivers, set-top boxes and AV receivers use sophisticated audio processing, which can create a delay between the time when the audio signal is received and the time when it is heard on the speakers. Since TVs also introduce delays in processing the video signal this can result in the two signals being sufficiently synchronized to be unnoticeable by the viewer. However, if the difference between the audio and video delay is significant, the effect can be disconcerting. Some systems have a lip sync setting that allows the audio lag to be adjusted to synchronize with the video, and others may have advanced settings where some of the audio processing steps can be turned off.
Audio lag is also a significant detriment in rhythm games, where precise timing is required to succeed. Most of these games have a lag calibration setting whereupon the game will adjust the timing windows by a certain number of milliseconds to compensate. In these cases, the notes of a song will be sent to the speakers before the game even receives the required input from the player in order to maintain the illusion of rhythm. Games that rely upon musical improvisation, such as Rock Band drums or DJ Hero, can still suffer tremendously, as the game cannot predict what the player will hit in these cases, and excessive lag will still create a noticeable delay between hitting notes, and hearing them play.
Broadcast audio
Audio latency can be experienced in broadcast systems where someone is contributing to a live broadcast over a satellite or similar link with high delay. The person in the main studio has to wait for the contributor at the other end of the link to react to questions. Latency in this context could be between several hundred milliseconds and a few seconds. Dealing with audio latencies as high as this takes special training in order to make the resulting combined audio output reasonably acceptable to the listeners. Wherever practical, it is important to try to keep live production audio latency low in order to keep the reactions and interchange of participants as natural as possible. A latency of 10 milliseconds or better is the target for audio circuits within professional production structures.
Live performance audio
Latency in live performance occurs naturally from the speed of sound. It takes sound about 3 milliseconds to travel 1 meter. Small amounts of latency occur between performers depending on how they are spaced from each other and from stage monitors if these are used. This creates a practical limit to how far apart the artists in a group can be from one another. Stage monitoring extends that limit, as sound travels close to the speed of light through the cables that connect stage monitors.
Performers, particularly in large spaces, will also hear reverberation, or echo of their music, as the sound that projects from stage bounces off of walls and structures, and returns with latency and distortion. A primary purpose of stage monitoring is to provide artists with more primary sound so that they are not confused by the latency of these reverberations.
Live signal processing
While analog audio equipment has no appreciable latency, digital audio equipment has latency associated with two general processes: conversion from one format to another, and digital signal processing (DSP) tasks such as equalization, compression and routing.
Digital conversion processes include analog-to-digital converters (ADC), digital-to-analog converters (DAC), and various changes from one digital format to another, such as AES3 which carries low-voltage electrical signals to ADAT, an optical transport. Any such process takes a small amount of time to accomplish; typical latencies are in the range of 0.2 to 1.5 milliseconds, depending on sampling rate, software design and hardware architecture.
Different audio signal processing operations such as finite impulse response (FIR) and infinite impulse response (IIR) filters take different mathematical approaches to the same end and can have different latencies. In addition, input and output sample buffering add delay. Typical latencies range from 0.5 to ten milliseconds with some designs having as much as 30 milliseconds of delay.
Latency in digital audio equipment is most noticeable when a singer's voice is transmitted through their microphone, through digital audio mixing, processing and routing paths, then sent to their own ears via in-ear monitors or headphones. In this case, the singer's vocal sound is conducted to their own ear through the bones of the head, then through the digital pathway to their ears some milliseconds later. In one study, listeners found latency greater than 15 ms to be noticeable. Latency for other musical activities such as playing guitar does not have the same critical concern. Ten milliseconds of latency isn't as noticeable to a listener who is not hearing his or her own voice.
Delayed loudspeakers
In sound reinforcement for music or speech presentation in large venues, it is optimal to deliver sufficient sound volume to the back of the venue without resorting to excessive sound volumes near the front. One way for audio engineers to achieve this is to use additional loudspeakers placed at a distance from the stage but closer to the rear of the audience. Sound travels through air at the speed of sound (around 343 metres per second, depending on air temperature and humidity). By measuring or estimating the difference in latency between the loudspeakers near the stage and the loudspeakers nearer the audience, the audio engineer can introduce an appropriate delay in the audio signal going to the latter loudspeakers, so that the wavefronts from near and far loudspeakers arrive at the same time. Because of the Haas effect an additional 15 milliseconds can be added to the delay time of the loudspeakers nearer the audience, so that the stage's wavefront reaches them first, to focus the audience's attention on the stage rather than the local loudspeaker. The slightly later sound from delayed loudspeakers simply increases the perceived sound level without negatively affecting localization.
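A minimal version of this calculation, assuming 343 m/s for the speed of sound and the 15 ms Haas offset mentioned above:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C; varies with temperature/humidity

def fill_delay_ms(distance_m, haas_ms=15.0):
    # delay applied to a loudspeaker distance_m downstream of the stage, plus
    # a Haas offset so the stage wavefront still arrives first
    return distance_m / SPEED_OF_SOUND * 1000.0 + haas_ms

print(round(fill_delay_ms(40.0), 1))  # ~131.6 ms for a fill speaker 40 m out
```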
See also
Delay (audio effect)
Group delay and phase delay
References
External links
Music Collaboration Will Never Happen Online in Real Time
Audio engineering | Latency (audio) | [
"Engineering"
] | 2,269 | [
"Electrical engineering",
"Audio engineering"
] |
8,735,022 | https://en.wikipedia.org/wiki/Fluorescein%20diacetate%20hydrolysis | Fluorescein diacetate (FDA) hydrolysis assays can be used to measure the enzyme activity of microbes in a sample. A bright yellow-green glow is produced and is strongest when enzymatic activity is greatest. This can be quantified using a spectrofluorometer or a spectrophotometer.
Applications
FDA hydrolysis is often used to measure activity in soil and compost samples; however, it may not give an accurate reading if microbes are in lower-activity phases, or if free enzymes such as esterases cleave the FDA first.
It is also used in combination with propidium iodide (PI) to determine viability in eukaryotic cells. Living cells will actively convert the non-fluorescent FDA into the green fluorescent compound fluorescein, a sign of viability; while the nuclei of membrane-compromised cells will fluoresce red, a sign of cell death. Currently FDA/PI staining is the standard assessment of human pancreatic islet viability, with suitability for transplantation when the viability score is above 70%.
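A minimal sketch of the resulting viability score (the cell counts are hypothetical; the 70% threshold is the transplantation criterion cited above):

```python
def viability_percent(green_live, red_dead):
    # fraction of counted cells that hydrolyse FDA to green fluorescein
    total = green_live + red_dead
    return 100.0 * green_live / total if total else 0.0

score = viability_percent(850, 150)
print(score, score >= 70.0)  # 85.0 True -> islet prep acceptable per the text
```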
Preparation
FDA stock solution is prepared by dissolving 5 mg of fluorescein diacetate in 1 ml acetone, and sucrose may be added for live cell viability testing. FDA stain must be kept in the dark at 4°C or it will spoil.
References
3. "Fluorescein diacetate hydrolysis assay" http://www.eeescience.utoledo.edu/Faculty/Sigler/Von_Sigler/LEPR_Protocols_files/FDA%20assay.pdf
United States Department of Agriculture (USDA): Assay for Fluorescein Diacetate Hydrolytic Activity for soil samples
Fluorescein Diacetate: A Potential Biological Indicator for Arid Soils
Chemical tests | Fluorescein diacetate hydrolysis | [
"Chemistry",
"Biology"
] | 386 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"Chemical tests"
] |
8,735,112 | https://en.wikipedia.org/wiki/Cinnamon%20basil | Cinnamon basil is a type of basil (Ocimum basilicum). The term "cinnamon basil" can refer to a number of different varieties of basil, including as a synonym for Thai basil (O. basilicum var. thyrsiflora), as a particular cultivar of Thai basil, and as a separate cultivar in its own right (i.e., O. basilicum 'Cinnamon'). This article discusses the latter type.
Description
Cinnamon basil, also known as Mexican spice basil, has a spicy, fragrant aroma and flavor. It contains methyl cinnamate, giving it a flavor reminiscent of cinnamon. Cinnamon basil has somewhat narrow, slightly serrated, dark green, shiny leaves with reddish-purple veins, which can resemble certain types of mint, and produces small, pink flowers from July to September. Its stems are dark purple. Cinnamon basil grows to 18–30 inches tall.
Cultivation
Cinnamon basil is an easy-to-grow herb. It requires six to eight hours of bright sunlight per day. Although it is often grown as an annual, it is a perennial in USDA plant hardiness zones 9–11. Cinnamon basil is sometimes planted near tomatoes and roses to discourage pests such as whiteflies.
Uses
Cinnamon basil is used in teas and baked goods such as cookies and pies. It is also used in pastas, salads, jellies, and vinegars. Outside the kitchen, cinnamon basil is used in dried arrangements and as a potpourri.
Space
Cinnamon basil was taken into space by the Space Shuttle Endeavour during STS-118 and grown in an experiment in low Earth orbit on the International Space Station.
References
Ocimum
Herbs
Space-flown life | Cinnamon basil | [
"Biology"
] | 349 | [
"Space-flown life"
] |
8,735,392 | https://en.wikipedia.org/wiki/Blowing%20agent | A blowing agent is a substance which is capable of producing a cellular structure via a foaming process in a variety of materials that undergo hardening or phase transition, such as polymers, plastics, and metals. They are typically applied when the blown material is in a liquid stage. The cellular structure in a matrix reduces density, increasing thermal and acoustic insulation, while increasing relative stiffness of the original polymer.
Blowing agents (also known as 'pneumatogens') or related mechanisms to create holes in a matrix producing cellular materials, have been classified as follows:
Physical blowing agents include CFCs (however, these are ozone depletants, banned by the Montreal Protocol of 1987), HCFCs (replaced CFCs, but are still ozone depletants, therefore being phased out), hydrocarbons (e.g. pentane, isopentane, cyclopentane), and liquid CO2. The bubble/foam-making process is endothermic, i.e. it needs heat (e.g. from a melt process or the chemical exotherm due to cross-linking) to volatilize a liquid blowing agent. On cooling, the blowing agent will condense again, making the process reversible.
Chemical blowing agents include isocyanate and water for polyurethane, azodicarbonamide for vinyl, hydrazine and other nitrogen-based materials for thermoplastic and elastomeric foams, and sodium bicarbonate for thermoplastic foams. Gaseous products and other byproducts are formed by a chemical reaction of the chemical blowing agent, promoted by the heat of the foam production process or a reacting polymer's exothermic heat. Since the blowing reaction forms low-molecular-weight compounds that act as the blowing gas, additional exothermic heat is also released. Powdered titanium hydride is used as a foaming agent in the production of metal foams, as it decomposes to form hydrogen gas and titanium at elevated temperatures. Zirconium(II) hydride is used for the same purpose. Once formed, the low-molecular-weight compounds never revert to the original blowing agent; the reaction is irreversible.
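To put a number on the gas-generation step, here is a rough ideal-gas estimate of the CO2 yield from sodium bicarbonate, one of the chemical blowing agents named above; the 200°C processing temperature and the assumption of complete decomposition are illustrative, not figures from the text.

```python
# Decomposition (assumed complete): 2 NaHCO3 -> Na2CO3 + H2O + CO2
R = 8.314            # J/(mol*K), universal gas constant
T = 200 + 273.15     # K; 200 degC is an assumed melt-processing temperature
P = 101_325          # Pa, ambient pressure
M_NAHCO3 = 84.01     # g/mol

mol_co2_per_gram = 0.5 / M_NAHCO3            # 1 mol CO2 per 2 mol NaHCO3
v_per_gram = mol_co2_per_gram * R * T / P    # m^3 of CO2 per gram of agent

print(f"~{v_per_gram * 1e6:.0f} cm^3 of CO2 per gram of NaHCO3 at 200 degC")
# -> about 230 cm^3/g, before any losses to dissolution in the melt
```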
Mixed physical/chemical blowing agents are used to produce flexible PU foams with very low densities. Here chemical and physical blowing are used in tandem to balance each other out with respect to thermal energy released and absorbed, minimizing the temperature rise. Otherwise, the excessive exothermic heat from the high loading of chemical blowing agent needed for very low densities could cause thermal degradation of the developing thermoset or polyurethane material. For instance, to avoid this in polyurethane systems, isocyanate and water (which react to form carbon dioxide) are used in combination with liquid carbon dioxide (which boils to give the gaseous form) in the production of very low density flexible PU foams for mattresses.
Mechanically made foams and froths involve introducing bubbles into liquid polymerisable matrices (e.g. an unvulcanised elastomer in the form of a liquid latex). Methods include whisking air, other gases, or low-boiling volatile liquids into low-viscosity latices, or injecting a gas into an extruder barrel, a die, or injection-molding barrels or nozzles, and allowing the shear/mix action of the screw to disperse the gas uniformly to form very fine bubbles or a solution of gas in the melt. When the melt is molded or extruded and the part is at atmospheric pressure, the gas comes out of solution, expanding the polymer melt immediately before solidification. Frothing (akin to beating egg whites into a meringue) is also used to stabilize foamed liquid reactants, e.g. to prevent slumping on vertical walls before cure (i.e. avoiding foam collapse and sliding down a vertical face due to gravity).
Soluble fillers: for example, solid sodium chloride crystals are mixed into a liquid urethane system, which is then shaped into a solid polymer part; the sodium chloride is later washed out by immersing the solid molded part in water for some time, leaving small interconnected holes in relatively high-density polymer products (e.g. Porvair synthetic leather materials for shoe uppers).
Hollow spheres and porous particles (e.g. glass shells/spheres, epoxide shells, PVDC shells, fly ash, vermiculite, other reticulated materials) are mixed and dispersed in the liquid reactants, which are then shaped into a solid polymer part containing a network of voids.
The blowing agent can affect the physical and mechanical properties of natural rubber.
References
Materials science
Plastics additives | Blowing agent | [
"Physics",
"Materials_science",
"Engineering"
] | 1,005 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
8,735,466 | https://en.wikipedia.org/wiki/MacCharlie | The MacCharlie was a hardware add-on for the original Apple Macintosh (Macintosh 128K) that was made by Dayna Communications. It allowed users to run DOS software for the IBM PC by clipping a unit onto the chassis of the Macintosh 128K, and included a keyboard extender to provide the function keys and numeric keypad that are absent from Apple's original keyboard. The name refers to an IBM PC advertising campaign of the time featuring Charlie Chaplin's "Little Tramp" character.
The clip-on unit sits to the side of the Mac and, like the contemporary Amiga Sidecar, contains essentially a complete IBM PC compatible with an 8088 processor, 256 KB of RAM (expandable to 640 KB) and a single 5.25" floppy disk drive that stores 360 KB. A second floppy drive could be added.
While running DOS software using MacCharlie, users could still access the Macintosh menu bar and desk accessories. However, the DOS environment, which ran in a window, was text-only and did not permit Macintosh applications to run concurrently while in use. MacCharlie used the Mac as a terminal, performing all DOS processing itself, and sent video data over a relatively slow serial link to the Mac for display. This slowness, coupled with the declining prices of real IBM PC compatibles, contributed to the short market life of the MacCharlie.
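A back-of-the-envelope calculation suggests why a serial display link feels slow; the 9600 baud figure below is an assumption for illustration, since the actual link speed is not stated above.

```python
BAUD = 9600              # assumed serial bit rate (not stated in the text)
BITS_PER_BYTE = 10       # 8 data bits plus start and stop bits
COLS, ROWS = 80, 25      # a standard DOS text screen

bytes_per_screen = COLS * ROWS                      # 2,000 characters
seconds = bytes_per_screen * BITS_PER_BYTE / BAUD

print(f"Full 80x25 redraw: ~{seconds:.1f} s at {BAUD} baud")
# Even before protocol overhead, a complete text-screen update takes on
# the order of seconds -- consistent with the sluggishness described above.
```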
See also
Amiga Sidecar
References
External links
1985 Advertisement hosted by The Mac Mothership
Personal Computers; linking Mac to the I.B.M. PC, 1985 review article at the New York Times
IBM PC compatibles
Macintosh peripherals
Macintosh platform
Products introduced in 1985
Compatibility cards | MacCharlie | [
"Technology"
] | 342 | [
"Computing platforms",
"Macintosh platform"
] |
8,735,708 | https://en.wikipedia.org/wiki/Video%20game%20localization | Video game localization (or computer game localisation) is the process of preparing a video game for a market outside of where it was originally published. The game's name, art assets, packaging, manuals, and cultural and legal differences are typically altered.
Before localization, producers consider economic factors such as potential foreign profit. Most official localizations are done by the game's developers or a third-party translation company. Nevertheless, fan localizations are also popular.
Localization practice is largely inconsistent between platforms, engines and companies because the field is relatively young. Localizers aim to create an experience resembling the original game while exercising discretion for the target audience. A localization is considered a failure if it is confusing or difficult to understand, as this may break the player's immersion.
History
Since the beginning of video game history, video games have been localized. One of the first widely popular video games, Pac-Man was localized from Japanese. The original transliteration of the Japanese title would be "Puck-Man", but the decision was made to change the name when the game was imported to the United States out of fear that the word 'Puck' would be vandalized into an obscenity. In addition, the names of the ghosts were originally based on colors - roughly translating to "Reddie", "Pinky", "Bluey", and "Slowly". Rather than translate these names exactly, they were renamed to Blinky, Pinky, Inky, and Clyde. This choice maintained the odd-man-out style of the original names without adhering to their exact meaning. This is an early example of a change in cultural context.
Early localization had one main concern: due to the small memory size of NES and SNES cartridges, many translated text strings ended up too long. Ted Woolsey, translator of Final Fantasy VI, recounts having to continually cut down the English text due to limited capacity.
Early video game translation was often not a priority for companies, so budgets were low and localization time was cut short. Early translations were sometimes "literally done by a 'programmer with a phrase book'". For example, the original translation for the Sega Genesis game Beyond Oasis was discarded because the editor considered it nonsensical, and an entirely new story was written without any input from the translator. Occasionally a poor translation has made a game notable, as with Zero Wing, whose Engrish text "All Your Base Are Belong to Us" became an early Internet meme.
As technology improved in the early 2000s, localization became both easier and harder. Improvements allowed text to be stored in ASCII strings instead of in picture format, and better audio processing allowed voice acting to be included in video games. The addition of dubbing made the localization process harder, as producers had to choose whether to record entirely new voice lines or keep the original voice-over. Graphical capability also improved, making games more cinematic, so newly recorded voice lines had to match the lip movements of the characters, and the visual gestures of animated characters had to make sense to a different audience.
Modern video games are becoming increasingly complex in scope. As opposed to their older counterparts, video games can have a large amount of dialogue and voice over, making localization efforts significantly harder. The team in charge of localizing Fable II into five languages consisted of 270 actors and 130 personnel. Likewise, the dialogue scripts for Star Wars: The Old Republic contained over 200,000 lines. Director of audio and localization Shauna Perry said that the game had as much audio as ten Knights of the Old Republic recorded back-to-back.
Styles of localization
There are many styles of localizing a video game. "No localization" is when a game is released in an overseas territory with little to no effort to localize it. "Box and documentation localization" is when only the manual and box are translated into the target language, but the game itself is not; this style is mostly chosen for arcade games or when the target country is expected to have a reasonable command of the original language. In partial localization, the game's text is translated, but voice-over files are not re-recorded; this style is popular with many new Japanese role-playing games and visual novels. Full localization is when all assets of a game are translated and all voice-over is recorded in the target language; this option is usually undertaken by AAA game companies.
Academics in translation studies describe four primary methods for translating video games: foreignization (keeping a "foreign taste"), domestication (translating the game to suit the characteristics and cultural standards of the destination culture), no translation (leaving parts of the game in the source language), and transcreation (creating a new text in the destination language).
Production models
Officially produced localizations generally fit into one of two categories: "post-gold" or "sim-ship". Post-gold means that the localization begins after the game has been completed and released, which usually leaves a gap of time between the release of the original and the localized version. The post-gold model allows the producers of a localization to access and play the fully completed game, generally allowing more time to work on and complete a thorough translation. This model was commonly used by Japanese AAA producers, but these companies are now moving towards the sim-ship style.
The other main model is "sim-ship" (simultaneous shipment), in which a localization is produced before the original game has been released. Because games are prone to piracy at release, shipping all versions at once protects the release window and the associated profits. Sim-ship has its drawbacks, however: a completed game is unlikely to be available while the localization is being produced, so much of the crucial context and information may be missing, posing risks for the continuity of the game. Most Western games follow this model.
Under either model, a localization can be produced in one of two ways: outsourced or in-house. Most game companies in North America and Europe rely on outsourcing, and it is also popular in emerging video game markets such as Chile, Russia, and China. When outsourcing, a company that specializes in localization is hired to undertake the process. An issue that arises with outsourcing is that the company lacks the knowledge of the game that in-house developers have; a localization arising from this lack of knowledge is commonly known as a "blind localization".
If a localization is outsourced, the developers will usually provide the outsourced company with a localization kit. A localization kit may contain elements such as general information about the project (including deadlines, contact information, software details), resources about the game itself (a walk-through, plot or character descriptions, cheat codes), reference materials (glossaries of terms used in the game world or used for the specific hardware), software (such as computer-aided translation tools), code, and the assets to be translated.
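As a sketch of how one such asset might look in practice, the hypothetical example below loads an externalized string table keyed by message ID; the file layout and names are invented for illustration, since real localization kits vary by studio and engine.

```python
import json

def load_strings(path: str, lang: str) -> dict[str, str]:
    """Load the translated string table for one target language.

    Assumed file layout: {"msg_id": {"en": "...", "de": "..."}, ...}
    """
    with open(path, encoding="utf-8") as f:
        table = json.load(f)
    return {msg_id: texts[lang] for msg_id, texts in table.items()}

# strings = load_strings("strings.json", "de")
# print(strings["menu_start"])   # e.g. "Spiel starten"
```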
Companies may choose to localize in-house. This practice is common for Japanese developers, most notably Square Enix. When localized in-house, the process is completely controlled by the original developers. Although it is common practice to hire freelance translators to work alongside the development team, in-house producers usually have greater access to the original game and to the original artists and authors, who can be consulted about changing art assets or story concerns. Since Japanese companies prefer the post-gold method, in-house translation is favored. In-house productions usually have fewer mistakes and an overall smoother localization. The downside is that this causes a delay between the release of the international and home versions.
Another means of localization is through the unauthorized effort of fans. Fans of video games without an international release may be willing to put unpaid effort into localizing a game if the game is not released internationally. The most notable example of this is the Mother 3 (2006) localization. Fans attempted a petition to Nintendo to localize the game into English, and when this failed they undertook the process themselves. Sometimes, fan interest and fanmade localization is used as a metric of interest. For example, when The Great Ace Attorney was only released in Japan and fans localized it into other languages upon release, it was clear to Capcom that there was enough interest in their game to warrant an international release with an official localization.
When a game is released with a fan-deemed "inferior translation", or the game has been "blindly translated", it can prompt fan action to correct or completely re-do the process of localization. A fan group called DLAN has undertaken the work of localizing many games, mods, cheats, guides, and more into Castilian Spanish when the official versions were of poor quality, such as with The Elder Scrolls IV: Oblivion.
Tasks and challenges
The major types of localization are as follows:
Linguistic and cultural: the translation of language and cultural references maintaining the feel of the game but making it more appealing for the receiving locale.
Hardware and software: for example the change between PAL and NTSC, re-mapping of hotkeys, gameplay modifications.
Legal: age ratings may differ depending on the country of release. They are controlled by national or international bodies like PEGI (for Europe), ESRB (for US and Canada), ACB (for Australia), or CERO (for Japan).
Graphics and music: some games may exhibit different characters, or the same ones with a slightly different appearance in order to facilitate players identification with their avatar. Music may also vary according to national trends or the preferences of major fan communities.
Localization can be affected by the space on the screen allocated for text, which is often set based on the source language. This can include game elements such as dialogue, signage, captions, or narrative. German is an example of a destination language that presents difficulties because of the length of translated text under the constraints of screen space.
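A minimal sketch of the kind of length check a team might automate for such constraints is shown below; the character budget in the example is a rule of thumb assumed for illustration, not a fixed industry value.

```python
def overflow_report(source_en: str, translated: str,
                    budget_chars: int) -> str | None:
    """Warn when a translation exceeds the UI element's character budget.

    budget_chars is the screen space allotted to the English source."""
    if len(translated) <= budget_chars:
        return None
    factor = len(translated) / max(len(source_en), 1)
    return (f"'{translated}': {len(translated)} chars, "
            f"budget {budget_chars}, growth x{factor:.2f}")

print(overflow_report("Save settings", "Einstellungen speichern", 16))
# -> flags 23 chars against a 16-char budget (growth x1.77)
```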
When games are more story-driven than action-driven, culturalising them can be challenging because of all the premises the designers take for granted in developing the plot. Asian gamers seem to prefer more childlike characters, while Western countries might emphasize adult features. An example of the changes that are likely to happen during localization is Fatal Frame (known in Japan as Zero and in Europe as Project Zero). In the original Japanese version the protagonist, Miku, was a frightened seventeen-year-old girl looking for her brother Mafuyu, who disappeared after entering a haunted mansion. In the US and European versions Miku is nineteen, has Western features, and does not wear the original Japanese school uniform; developers did not, however, think it necessary to change her brother's appearance, so when players do find Mafuyu at the end of the game they do not seem to be blood-related. While most games need only small changes to be localized for another region, some have to be thematically overhauled for a new region. For example, efforts to localize the Nintendo DS rhythm game Osu! Tatakae! Ouendan for the western world led to a completely new and thematically different game, Elite Beat Agents, which reuses Ouendan's gameplay but is re-themed to feature special agents helping people around the world instead of an oendan cheering people on in Japan, because Ouendan's innate reliance on Japanese culture made a plain localization of that game unviable.
A similar thing happens with the depiction of blood and of real historical events; many things have to be readjusted to fit the country's tolerance and taste in order not to hurt sensibilities. This is probably one of the reasons why so many games take place in imaginary worlds. This customization effort draws on the knowledge of geopolitical strategists, like Kate Edwards from Englobe. During the 2006 Game Developers Conference in California she explained the importance of being culturally aware when internationalizing games in a presentation called "Fun vs. Offensive: Balancing the 'Cultural Edge' of Content for Global Games". Both developers and publishers want to please their clients. Gamers are not particularly interested in where a game comes from or who created it, any more than someone buying a new car or DVD player is. A product for mass consumption keeps only the branding features of the trademark; all the other characteristics might be subject to customization to appeal to the local market. Therefore, the translation will in some cases be an actual recreation or, in the words of Mangiron & O'Hagan (2006), a "transcreation", where translators are expected to produce a text with the right "feel" for the target market. It is important for translators to be aware of the logic behind this.
Video games are a software product, and as such they have manuals and instructions, as well as interactive menus and help files, which call for technical translation. On the other hand, players also encounter narration and dialogue closer to literary texts or film scripts, where a more creative translation is expected. Unlike most forms of translation, however, video games can adapt or even change the original script, as long as this is in search of enhanced fun and playability in the target culture. A parallel to this type of practice is found only in the translation of children's literature, where professionals often adapt or alter the original text to improve children's understanding and enjoyment of the book.
David Reeves of SCEE said that the main reason that Europe is often affected by significant content delays is because of language localization and "that there isn't enough incentive for developers to work on multiple language translations during development. Hence, Europeans suffer delays and may never see a particular title". He also commented on why the UK and Ireland, which are English speaking countries, also experience the same delays as those in continental Europe with many different languages despite little or no modification. In his words: "With PlayStation Store we could probably go in the UK almost day and date. But then what are the Germans and the French going to say to me? That I'm Anglo-centric", indicating that the reason that these countries also must wait is to avoid criticism from other large European gaming countries such as Germany and France.
Cultural changes
Often localization changes include adjusting a game to consider specific cultural sensitivities. These changes may be self-enforced by the developers themselves, or enacted by national or regional rating boards (Video game content rating system), but the games are still sometimes released with controversial or insensitive material, which can lead to controversy or recall of the product.
Games localized for import into Germany often have significant changes made due to the Unterhaltungssoftware Selbstkontrolle's (USK) strict policies against blood and gore, profanity, and symbols associated with racial hatred, such as Nazi symbolism (until 2019).
For instance, the German version of Team Fortress 2 (2007) has no blood or detached body parts as a result of this regulation, which can cause difficulty for players as it is hard to tell if an enemy has been hit or taken damage. As a result, mods known as "bloodpatches" have been created for this and many German games that allow the blood and gore of the original game to be unlocked. Despite a significant overhaul of the graphics, the German localization of the World War II game Wolfenstein (2009) contained a single visible swastika on an art asset. As a result, Raven Software recalled the game.
China also has strict censorship rules, and forbids content that endangers the "unity, sovereignty and territorial integrity of the state" or the "social moralities or fine national cultural traditions", amongst other qualifications. As a result, the Swedish PC game Hearts of Iron (2002), set during World War II, was banned because maps depicted Manchuria, West Xinjiang, and Tibet as independent states. Additionally, Taiwan was shown to be a territory of Japan, as was accurate for the time period, but these inclusions were considered harmful to China's territorial integrity, so the game was forbidden from being legally imported. The localization of Football Manager (2005) was similarly banned because Tibet, Taiwan, Hong Kong, and China were all treated as separate teams, putting them on equal footing.
Other localization challenges or controversies arise from material deemed too sexual for the cultural expectations of the target market. For example, when the Japanese game Xenoblade Chronicles X was localized for the North American market, the option to change a protagonist's bust size was removed, as were clothing options including bikinis. This resulted in complaints from American players who had been playing the Japanese version.
Some translators of video games favor glocalization over the process of localization. In this context, glocalization seeks from the outset to minimize localization requirements for video games intended to be universally appealing. Academic Douglas Eyman cites the Mists of Pandaria expansion for World of Warcraft as an example of glocalization because it was designed at the outset to appeal to global audiences while celebrating Chinese culture.
Linguistic assets
In video games there are a number of different types of text that require translation, such as manuals, subtitles and dubbing scripts. One type of text poses a particular problem for localizers: interactive text of the kind familiar from software such as web browsers or word processors, into which the user can input arbitrary commands or messages at any point. This interactivity introduces an element of randomness that strips away the linearity and contextual information a game normally provides, so translators lose both co-text and context in the translation process. When the game is unfinished or an inadequate localization kit has been supplied, the team must look elsewhere, and there are many resources they can draw from.
Because video games are produced in many different ways, there is no standard localization tool. In modern games, localization can often be handled within the game engine itself, but older titles lack such support. Several external programs can be used instead, the most popular being Catalyst and Passolo, which allow producers to work directly with the game code.
Producers of localizations deal with a variety of different linguistic assets, which include the game itself, the official website, promotional articles, game updates and patches.
Textual types and file formats
In a video game there are various types of text, and video games are also multimedia, containing a variety of other assets such as video. Producers of localizations have to be knowledgeable in dealing with all of these. When dealing with cut-scenes or pre-rendered video, producers have to ensure these stay relatively unchanged. The most important challenges are lip-syncing newly recorded dialogue and fitting the subtitles into each part of a pre-recorded or pre-rendered scene. The types of text and files commonly found in video games are as follows:
The instruction manual is a document that outlines important details relating to the purchased video game: instructions on how to use the game, a guide on how to complete it, and other information such as corporate and legal texts.
Packaging can include the slip inserted into the DVD or CD case the video game comes in or, before optical discs were adopted in gaming, the box that a game came in. Packaging usually features the title of a game, its rating, and logos of the companies involved, as well as pictures and other points of information relating to the game. The manual is usually found within the packaging.
A Readme file is a file usually included with digital video games. It contains information on how to install the game and run it.
An official website is a website created for the promotion and usually the sale of a video game. The information found on a website is similar to that of a manual.
Dialogue for dubbing is the translated dialogue that is prepared for a voice actor to read out.
Dialogue for subtitling is the translated dialogue that is applied to pre-rendered or pre-recorded video. Most subtitles are hard-coded to ensure that the video and subtitles remain in sync.
A user interface (UI) is what the player of a video game interacts with. It can contain a variety of different assets that need to be translated. Producers need to ensure that the interface's assets are big enough to contain readable text when translating, as well as confirming that graphics without text convey a clear message in the target market. Additionally, games must be able to support various special characters if the user is able to input text.
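The special-character requirement can be checked mechanically. The sketch below assumes the team can export the set of code points its font actually covers; the glyph set shown is hypothetical.

```python
# Assumed (hypothetical) glyph coverage exported from the game's font:
SUPPORTED = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789 .,!?'-äöüßÄÖÜéèêàç"
)

def unsupported_chars(user_text: str) -> set[str]:
    """Characters the font cannot render; add them to the font or filter."""
    return {ch for ch in user_text if ch not in SUPPORTED}

print(unsupported_chars("Łukasz"))   # -> {'Ł'}
```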
Controversy
During the 2010s there was significant debate surrounding the localisation of Japanese games, particularly for Nintendo platforms. Some fans consider resulting changes to plot and characterization as marring the original artistic vision, and some object to sexual content being removed or bowdlerized. Localization of Nintendo games is commonly handled by a Nintendo division called the Treehouse. In the face of Nintendo's unwillingness to communicate about localization, speculation and conspiracy theories circulated among enthusiasts, and several employees of the Treehouse were alleged to be responsible for unpopular changes.
Allison Rapp, a Treehouse employee not directly involved in localization, garnered controversy due to her comments on Twitter. Attention on Rapp was heightened as part of the Gamergate controversy by the circulation of an undergraduate essay by Rapp which favored cultural relativism regarding sexualization of minors in Japanese media. The essay argued against the sort of censorship that the Treehouse's critics decried. Some however interpreted the essay as defending the exploitation of children, and readers of the alt-right, neo-Nazi publication The Daily Stormer organized a letter-writing campaign to have her fired. That initiative was controversial within the Gamergate movement, with some supporters considering it justifiable treatment of an ideological opponent, while others considered the campaign against Rapp to be unethical or not aligned with the movement's goals. Rapp was subsequently fired, though Nintendo issued a statement that the reason was that Rapp had held a second job against company policy. She maintains that her controversial online presence was the true cause.
See also
Fan translation of video games
Undubbing
Accessibility
Localization of Square Enix video games
References
Bibliography
Bernal-Merino, M. 2006. "On the Translation of Video Games". The Journal of Specialised Translation, Issue 6: 22–36
Bernal-Merino, M. 2007. "Training translators for the video game industry", in J. Diaz-Cintas (ed.), The Didactics of Audiovisual Translation. Amsterdam / Philadelphia: John Benjamins.
Bernal-Merino, M. 2007. "Localization and the Cultural Concept of Play". Game Career Guide
Bernal-Merino, M. 2007. "Challenges in the Translation of Video Games". Tradumática, No. 5.
Bernal-Merino, Miguel. (2008). "Inside the Game Localisation Round Table". Develop. Retrieved December 2, 2014.
Chandler, H. 2005. The Game Localization Handbook. Massachusetts: Charles River Media
Chandler, Heather M and Stephanie O'Malley Deming. (2012). The Game Localization Handbook (2nd ed.). Sudbury, MA; Ontario and London: Jones & Bartlett Learning.
"Clan DLAN: Traducción de videojuegos, traducción y creación de mods, modding, revisiones, guías, rol y más. Todo en español". (2014). Retrieved December 2, 2014
Corliss, Jon. (2007). "All Your Base are Belong to Us! Videogame Localization and Thing Theory". Accessed July 15, 2012. Retrieved December 2, 2014
Dietz, F. 2006. Issues in localizing computer games. Perspectives on Localization edited by Keiran J. Dunne. Amsterdam and Philadelphia: John Benjamins, 121–134.
Dietz, Frank. (2007). "How Difficult Can That Be? The Work of Computer and Video Game Localization". Revista Tradumatica 5: "La localitzacio de videojocs". Accessed July 12, 2011.
Diaz Montón, Diana. (2007). "It's a Funny Game". The Linguist 46 (3). Accessed July 12, 2011. Retrieved December 2, 2014
Edwards, Kate. GDC 2006 presentation "Fun Vs. Offensive"
Esselink, B. 2000. A Practical Guide to Localization. Amsterdam and Philadelphia: John Benjamins.
Fahey, Mike. (2009). "Star Wars: The Old Republic Script More Than 40 Novels Long". Kotaku. Retrieved December 2, 2014
Good, Owen. (2009). "Swastika Gets Wolfenstein Pulled from German Shelves". Kotaku. Retrieved December 2, 2014
Heimburg, E, 2006. Localizing MMORPGs. Perspectives on Localization edited by Keiran J. Dunne. Amsterdam and Philadelphia: John Benjamins,135–154.
Kohler, Chris. (2005). Power-up: How Japanese Video Games Gave the World an Extra Life. Indianapolis: Brady Games.
Mangiron, C. & O'Hagan, M. 2006. "Game localization: unleashing imagination with 'restricted' translation". The Journal of Specialised Translation 6: 10–21
O'Hagan, Minako and Mangiron, Carme. (2013). Game Localization: translating for the global digital entertainment industry. Amsterdam/Philadelphia: John Benjamins Publishing Company.
Sutton-Smith, B. 1997. The Ambiguity of Play. Cambridge/London: Harvard University Press.
"The Mother 3 Fan Translation". Retrieved December 2, 2014
Zhang, Xiaochun. (2012). "Censorship and Digital Games Localisation in China". Meta: journal des traducteurs, 57(2), 338–350. Retrieved December 2, 2014
External links
Localization Production Pitfalls – excerpt from 'The Game Localization Handbook'
Game Localization and the Cultural Concept of Play
Best practices for game localization
You Spoony Bard!: An Analysis of Video Game Localization Practices
Videogame localization and internationalization
Video game development
Internationalization and localization | Video game localization | [
"Technology"
] | 5,521 | [
"Natural language and computing",
"Internationalization and localization"
] |
8,736,036 | https://en.wikipedia.org/wiki/Outline%20of%20the%20Internet | The following outline is provided as an overview of and topical guide to the Internet.
The Internet is a worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of interconnected smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.
Internet features
Hosting –
File hosting –
Web hosting
E-mail hosting
DNS hosting
Game servers
Wiki farms
World Wide Web –
Websites –
Web applications –
Webmail –
Online shopping –
Online auctions –
Webcomics –
Wikis –
Voice over IP
IPTV
Internet communication technology
Internet infrastructure
Critical Internet infrastructure –
Internet access –
Internet access in the United States –
Internet service provider –
Internet backbone –
Internet exchange point (IXP) –
Internet standard –
Request for Comments (RFC) –
Internet communication protocols
Internet protocol suite –
Link layer
Link layer –
Address Resolution Protocol (ARP/InARP) –
Neighbor Discovery Protocol (NDP) –
Open Shortest Path First (OSPF) –
Tunneling protocol (Tunnels) –
Layer 2 Tunneling Protocol (L2TP) –
Point-to-Point Protocol (PPP) –
Medium access control –
Ethernet –
Digital subscriber line (DSL) –
Integrated Services Digital Network (ISDN) –
Fiber Distributed Data Interface (FDDI) –
Internet layer
Internet layer –
Internet Protocol (IP) –
IPv4 –
IPv6 –
Internet Control Message Protocol (ICMP) –
ICMPv6 –
Internet Group Management Protocol (IGMP) –
IPsec –
Transport layer
Transport layer –
Transmission Control Protocol (TCP) –
User Datagram Protocol (UDP) –
Datagram Congestion Control Protocol (DCCP) –
Stream Control Transmission Protocol (SCTP) –
Resource reservation protocol (RSVP) –
Explicit Congestion Notification (ECN) –
QUIC
Application layer
Application layer –
Border Gateway Protocol (BGP) –
Dynamic Host Configuration Protocol (DHCP) –
Domain Name System (DNS) –
File Transfer Protocol (FTP) –
Hypertext Transfer Protocol (HTTP) –
Internet Message Access Protocol (IMAP) –
Internet Relay Chat (IRC) –
LDAP –
Media Gateway Control Protocol (MGCP) –
Network News Transfer Protocol (NNTP) –
Network Time Protocol (NTP) –
Post Office Protocol (POP) –
Routing Information Protocol (RIP) –
Remote procedure call (RPC) –
Real-time Transport Protocol (RTP) –
Session Initiation Protocol (SIP) –
Simple Mail Transfer Protocol (SMTP) –
Simple Network Management Protocol (SNMP) –
SOCKS –
Secure Shell (SSH) –
Telnet –
Transport Layer Security (TLS/SSL) –
Extensible Messaging and Presence Protocol (XMPP) –
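To illustrate how the layers listed above stack in practice, here is a minimal Python sketch that sends an application-layer HTTP request over a transport-layer TCP connection, which the operating system carries over IP; example.com is used only as a placeholder host.

```python
import socket

HOST = "example.com"   # placeholder host; any web server on port 80 works

# TCP connection (transport layer) over IP (internet layer):
with socket.create_connection((HOST, 80), timeout=10) as sock:
    # HTTP/1.1 request (application layer) as raw bytes on the TCP stream:
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):      # read until the server closes
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```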
History of the Internet
Networks prior to the Internet
NPL network – a local area computer network operated by a team from the National Physical Laboratory in England, the first to implement packet switching, the design of which influenced other networks that followed.
ARPANET – the first wide-area packet switching network, developed by the Advanced Research Projects Agency in the United States, and one of the first networks to implement the TCP/IP protocol suite which later became a technical foundation of the Internet.
SATNET – an early satellite packet-switched network, also developed by the Advanced Research Projects Agency, which implemented TCP/IP before the ARPANET.
Merit Network – a computer network created in 1966 to connect the mainframe computers at universities that is currently the oldest running regional computer network in the United States.
CYCLADES – a French research network created in the early 1970s that pioneered the concept of internetworking by making the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself.
Computer Science Network (CSNET) – a computer network created in the United States for computer science departments at academic and research institutions that could not be directly connected to ARPANET, due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to development of the global Internet.
National Science Foundation Network (NSFNET) – an American networking project, initially created to link researchers to the NSF-funded supercomputing centers that, through further public funding and private industry partnerships, developed into a major part of the early Internet backbone.
History of Internet components
History of packet switching – a method of grouping data into packets that are transmitted over a digital network, conceived independently by Paul Baran and Donald Davies in the early and mid-1960s.
History of communication protocols – the set of rules to enable data communication between computers on a network.
History of internetworking – networking between computers on different networks.
very high speed Backbone Network Service (vBNS) –
Network access point (NAP) –
Federal Internet Exchange (FIX) –
Commercial Internet eXchange (CIX) –
List of Internet pioneers
Timeline of Internet conflicts
Internet usage
Global Internet usage
Internet traffic
List of countries by number of Internet users
List of European countries by number of Internet users
List of sovereign states by number of broadband Internet subscriptions
List of sovereign states by number of Internet hosts
Languages used on the Internet
List of countries by IPv4 address allocation
Internet Census of 2012
Internet politics
Internet privacy – a subset of data privacy concerning the right to privacy from third parties including corporations and governments on the Internet.
Censorship – the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, politically incorrect or "inconvenient" as determined by government authorities or by community consensus.
Censorship by country – the extent of censorship varies between countries and sometimes includes restrictions to freedom of the Press, freedom of speech, and human rights.
Internet censorship – the control or suppression of what can be accessed, published, or viewed on the Internet enacted by regulators or self-censorship.
Content control software – a type of software that restricts or controls the content an Internet user is able to access.
Internet censorship and surveillance by country
Internet censorship circumvention – the use of techniques and processes to bypass filtering and censored online materials.
Internet law – law governing the Internet, including dissemination of information and software, information security, electronic commerce, intellectual property in computing, privacy, and freedom of expression.
Internet organizations
Domain name registry or Network Information Center (NIC) – a database of all domain names and the associated registrant information in the top level domains of the Domain Name System of the Internet that allow third party entities to request administrative control of a domain name.
Private sub-domain registry – an NIC which allocates domain names in a subset of the Domain Name System under a domain registered with an ICANN-accredited or ccTLD registry.
Internet Society (ISOC) – an American non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy.
InterNIC (historical) – the organization primarily responsible for Domain Name System (DNS) domain name allocations until 1998, when that role was assumed by ICANN.
Internet Corporation for Assigned Names and Numbers (ICANN) – a nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces of the Internet, ensuring the network's stable and secure operation.
Internet Assigned Numbers Authority (IANA) – a department of ICANN which allocates domain names and maintains IP addresses.
Internet Activities Board (IAB) –
Internet Engineering Task Force (IETF) –
Non-profit Internet organizations
Advanced Network and Services (ANS) (historical) –
Internet2 –
Merit Network –
North American Network Operators' Group (NANOG) –
Commercial Internet organizations
Amazon.com –
ANS CO+RE (historical) –
Google – an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, search engine, cloud computing, software, and hardware.
Cultural and societal implications of the Internet
Sociology – the scientific study of society, including patterns of social relationships, social interaction, and culture.
Sociology of the Internet – the application of sociological theory and methods to the Internet, including analysis of online communities, virtual worlds, and organizational and social change catalyzed through the Internet.
Digital sociology – a sub-discipline of sociology that focuses on understanding the use of digital media as part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships and concepts of the self.
Internet culture
List of web awards
Underlying technology
MOSFET (MOS transistor)
CMOS (complementary MOS)
LDMOS (lateral diffused MOS)
Power MOSFET
RF CMOS (radio frequency CMOS)
Optical networking
Fiber-optic communication
Laser
Optical fiber
Telecommunications network
Modem
Telecommunication circuit
Wireless network
Base station
Cellular network
RF power amplifier
Router
Transceiver
By region
Internet in Africa
By country
Internet in Afghanistan
Internet in Australia
Internet in Azerbaijan
Internet in China
Internet in Egypt
Internet in Myanmar
Internet in New Zealand
Internet in the Philippines
Internet in South Africa
Internet in the United Kingdom
Internet in the United States
See also
Outline of information technology
Further reading
Yeo, ShinJoung (2023). Behind the Search Box: Google and the Global Internet Industry. U of Illinois Press. ISBN 0252087127.
Internet
Internet | Outline of the Internet | [
"Technology"
] | 1,940 | [
"Computing-related lists",
"Internet",
"Transport systems",
"Internet-related lists"
] |
8,736,383 | https://en.wikipedia.org/wiki/Castor%20et%20Pollux | Castor et Pollux (Castor and Pollux) is an opera by Jean-Philippe Rameau, first performed on 24 October 1737 by the Académie royale de musique at its theatre in the Palais-Royal in Paris. The librettist was Pierre-Joseph-Justin Bernard, whose reputation as a salon poet it made. This was the third opera by Rameau and his second in the form of the tragédie en musique (if the lost Samson is discounted). Rameau made substantial cuts, alterations and added new material to the opera for its revival in 1754. Experts still dispute which of the two versions is superior. Whatever the case, Castor et Pollux has always been regarded as one of Rameau's finest works.
Composition history
Charles Dill proposes that Rameau composed the 1737 opera just after working with Voltaire on the never-completed opera Samson, and that Castor et Pollux implements Voltaire's aesthetics. For example, Voltaire sought the presentation of static tableaus that expressed emotion, as in the first act of the 1737 version, which begins at the scene of Castor's tomb with a chorus of Spartans singing "Que tout gemisse", followed by a recitative between Telaira and Phoebe in which the former grieves the loss of her lover Castor, culminating in Telaira's lament aria "Tristes apprêts". Dill notes that in contrast, the 1754 version begins with much more background to the story of Telaira's love for Castor and depicts his death at the end; the events of Act I of the 1737 version appear in Act II of the 1754 version. Dill claims that Voltaire was more interested in music than action in opera. Moreover, Dill notes a difference in the plots of the two versions. In the 1737 version, the main concern is the moral dilemma between love and duty that Pollux faces: should he pursue his love of Telaira or rescue his brother? Of course, he chooses the latter. In the 1754 version, Dill remarks that the plot is more concerned with the tests Pollux must face: he must kill Lynceus, persuade Jupiter not to oppose his journey into the Underworld, and persuade Castor not to accept the gift of immortality.
While some scholars (such as Cuthbert Girdlestone, Paul-Marie Masson, and Graham Sadler) have assumed that the 1754 version is superior, Dill argues that Rameau made the changes of 1754 at a different point in his career. In 1737 he was testing the limits of the tragédie lyrique, whereas by 1754 he had done more work in ballet-oriented genres, in which he included striking musical compositions that delighted audiences. Thus, Dill proposes that there may have been commercial concerns behind the change in aesthetic in 1754, as the revised version conformed more closely to the traditional Lullian aesthetic. He comments that while many see the revision as more innovative, in actuality the 1737 version was the more daring.
Performance history and reception
Castor et Pollux appeared in 1737 while the controversy ignited by Rameau's first opera Hippolyte et Aricie was still raging. Conservative critics held the works of the "father of French opera", Jean-Baptiste Lully, to be unsurpassable. They saw Rameau's radical musical innovations as an attack on all they held dear and a war of words broke out between these Lullistes and the supporters of the new composer, the so-called Rameauneurs (or Ramistes). This controversy ensured that the premiere of Castor would be a noteworthy event.
Rameau had not altered the dramatic structure of Lully's tragédie lyrique genre: he retained the same five-act format with the same types of musical numbers (overture, recitative, air, chorus, and dance suites). He had simply expanded the musical resources available to French opera composers. While some welcomed Rameau's new idiom, more conservative listeners found it unappealing. On the one hand, Rameau's supporter Diderot (who later turned his loyalty elsewhere) remarked: "Old Lulli is simple, natural, even, too even sometimes, and this is a defect. Young Rameau is singular, brilliant, complex, learned, too learned sometimes; but this is perhaps a defect on the listeners." On the other hand, the complaint of the Lullistes was that Rameau's musical idiom was far more expressive than Lully's, and they went so far as to call it distastefully "Italianate" (by French standards). For example, where Lully's musical expression is contained, Rameau's recitative style includes much wider melodic leaps, in contrast to Lully's more declamatory style. This can be heard clearly, for example, in the opening recitative between Phoebe and Cleone (Phoebe's servant) in Act I, scene 1 of the 1754 revised version. Additionally, he added a richer harmonic vocabulary that included ninth chords. Rameau's more demanding vocal style led to the remark (thought to have been made by Rameau himself) that while Lully's operas required actors, his required singers. Over time, these changes became more and more acceptable to the French audience.
As it turned out, the opera was a success. It received twenty performances in late 1737 but did not reappear until the substantially revised version took to the stage in 1754. This time there were thirty performances and ten in 1755. Graham Sadler writes that "It was ... Castor et Pollux that was regarded as Rameau's crowning achievement, at least from the time of its first revival (1754) onwards."
Revivals followed in 1764, 1765, 1772, 1773, 1778, 1779 and 1780. The taste for Rameau's operas did not long outlive the French Revolution but extracts from Castor et Pollux were still being performed in Paris as late as 1792. During the nineteenth century, the work did not appear on the French stage, though its fame survived the general obscurity into which Rameau's works had sunk; Hector Berlioz admiringly mentioned the aria Tristes apprêts.
The first modern revival took place at the Schola Cantorum in Paris in 1903; among the audience was Claude Debussy. The first UK performance, organised by Ronald Crichton, was given by the Oxford University Opera Club at Magdalen College in November 1934.
Roles
Synopsis
The synopsis is based on the 1737 version.
Prologue
The allegorical prologue is unrelated to the main story. It celebrates the end of the War of the Polish Succession, in which France had been involved. In the prologue, Venus, goddess of love, subdues Mars, god of war, with the help of Minerva. In the 1754 revision, the prologue was eliminated.
Act 1
Background note: Castor and Pollux are famous heroes. Despite being twin brothers, one of them (Pollux) is immortal and the other (Castor) is mortal. They are both in love with the princess Telaira (Télaïre), but she loves only Castor. The twins have fought a war against an enemy king, Lynceus (Lyncée) which has resulted in disaster: Castor has been slain. The opera opens with his funeral rites. Telaira expresses her grief to her friend Phoebe (Phébé) in Tristes apprêts, one of Rameau's most famous arias. Pollux and his band of Spartan warriors interrupt the mourning bringing the dead body of Lynceus who has been killed in revenge. Pollux confesses his love for Telaira. She avoids giving a reply, instead asking him to go and plead with his father Jupiter, king of the gods, to restore Castor to life.
Act 2
Pollux expresses his conflicting emotions in the aria Nature, amour, qui partagez mon coeur. If he does what Telaira says and manages to persuade Jupiter to restore his brother to life, he knows he will lose the chance to marry her. But he finally yields to her pleas. Jupiter descends from above and Pollux begs him to bring Castor back to life. Jupiter replies he is powerless to alter the laws of fate. The only way to save Castor is for Pollux to take his place among the dead. Pollux, despairing that he will never win Telaira, decides to go to the Underworld. Jupiter tries to dissuade him with a ballet of the Celestial Pleasures led by Hebe, goddess of youth, but Pollux is resolute.
Act 3
The stage shows the entrance to the Underworld, guarded by monsters and demons. Phoebe gathers the Spartans to prevent Pollux from entering the gate of the Underworld. Pollux refuses to be dissuaded, even though Phoebe declares her love for him. When Telaira arrives and she sees Pollux's true love for her, Phoebe realises her love will be unrequited. She urges the demons of the Underworld to stop him entering (Sortez, sortez d'esclavage/Combattez, Démons furieux). Pollux fights the demons with the help of the god Mercury and descends into Hades.
Act 4
The scene shows the Elysian fields in the Underworld. Castor sings the aria Séjours de l'éternelle paix: the beautiful surroundings cannot comfort him for the loss of Telaira, neither can a Chorus of Happy Spirits. He is amazed to see his brother Pollux, who tells him of his sacrifice. Castor says he will only take the opportunity to revisit the land of the living for one day so he can see Telaira for the last time.
Act 5
Castor returns to Sparta. When Phoebe sees him, she thinks Pollux is dead for good and commits suicide so she can join him in the Underworld. But Castor tells Telaira he only plans to remain alive with her for a single day. Telaira bitterly accuses him of never having loved her. Jupiter descends in a storm as a deus ex machina to resolve the dilemma. He declares that Castor and Pollux can both share immortality. The opera ends with the fête de l'univers ("Festival of the Universe") in which the stars, planets and sun celebrate the god's decision and the twin brothers are received into the Zodiac as the constellation of Gemini.
Musical analysis
Act 1
In the 1737 version, the first act opens with a tomb scene in which a chorus of Spartans mourns the death of their fallen king Castor, slain by Lynceus. The music in F minor features a descending tetrachord motive associated with lamentation since Claudio Monteverdi's Nymph's Lament (in this case chromatic: F-E-Eb-D-Db-C). Although Telaira's Tristes apprêts in scene 3 does not feature the descending tetrachord, Cuthbert Girdlestone still calls it a lament. The air is in da capo form, and its B section has a recitative-like quality; it features a bassoon obbligato part and a high-register outburst on the word "Non!" that marks its high point. The march music for the entrance of Pollux and the Spartans is martial in character. With Lynceus's corpse at his feet, Pollux proclaims his brother avenged; the chorus of Spartans then sings and dances in celebration: "Let Hell applaud this new turn! Let a mournful shade rejoice in it! The cry of revenge is the song of Hell." The second air of the Spartans is in C major, which allows for a trumpet obbligato part with all of its military associations (before valved instruments, the trumpet keys were C and D major). The act concludes with a lengthy recitative in which Pollux professes his love for Telaira.
The 1754 revisions
The prologue was completely cut; it was no longer politically relevant and the fashion for operas having prologues had died out. The opera no longer begins with Castor's funeral; a wholly new Act One was created explaining the background to the story: Telaira is in love with Castor but she is betrothed to Pollux, who is prepared to give her up to his brother when he finds out. Unfortunately the wedding celebrations are violently interrupted by Lynceus and a battle breaks out in which Castor is killed. Acts Three and Four were merged and the work as a whole shortened by cutting a great deal of recitative.
Recordings
Castor et Pollux (1737 version) Concentus Musicus Wien, Harnoncourt (Teldec, 1972)
Castor et Pollux (1737 version) Les Arts Florissants, William Christie (Harmonia Mundi, 1993)
Castor et Pollux (1754 version) English Bach Festival Singers and Orchestra, Farncombe (Erato, 1982)
Castor et Pollux (1754 version) Aradia Ensemble; Opera in Concert Chorus, Kevin Mallon (Naxos, 2004)
Castor et Pollux (1754 version) Les Talens Lyriques, Chorus of De Nederlandse Opera, Christophe Rousset (Opus Arte, 2008)
Castor et Pollux (1754 version) Ensemble Pygmalion, Raphaël Pichon (Harmonia Mundi, 2015)
References
Notes
Sources
Bouissou, Sylvie, Booklet notes accompanying the Christie recording
Girdlestone, Cuthbert, Jean-Philippe Rameau: His Life and Work Cassell & Company Ltd, 1962; Dover paperback, 1990
Holden, Amanda (Ed.), The New Penguin Opera Guide, New York: Penguin Putnam, 2001.
Sadler, Graham (Ed.), The New Grove French Baroque Masters, New York: W. W. Norton & Company, 1997
External links
Le magazine de l'opéra baroque by Jean-Claude Brenac (in French)
Castor et Pollux synopsis - 1754 version
Tragédies en musique
Operas by Jean-Philippe Rameau
French-language operas
1737 operas
Operas
Operas based on classical mythology
Opera world premieres at the Paris Opera
Fiction about twins
Castor and Pollux | Castor et Pollux | [
"Astronomy"
] | 2,973 | [
"Castor and Pollux",
"Astronomical myths"
] |
8,736,435 | https://en.wikipedia.org/wiki/The%20dose%20makes%20the%20poison | "The dose makes the poison" ( 'only the dose makes the poison') is an adage intended to indicate a basic principle of toxicology. It is credited to Paracelsus who expressed the classic toxicology maxim "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison." This is often condensed to: "The dose makes the poison" or in Latin, . It means that a substance can produce the harmful effect associated with its toxic properties only if it reaches a susceptible biological system within the body in a high enough concentration (i.e., dose).
The principle relies on the finding that all chemicals—even water and oxygen—can be toxic if too much is eaten, drunk, or absorbed. "The toxicity of any particular chemical depends on many factors, including the extent to which it enters an individual’s body." This finding also provides the basis for public health standards, which specify maximum acceptable concentrations of various contaminants in food, public drinking water, and the environment.
The idea also describes the phenomenon in which poisonous substances can be medicinal in small doses.
See also
Notes
Toxicology
Paracelsus
Phrases | The dose makes the poison | [
"Environmental_science"
] | 246 | [
"Toxicology"
] |
2,195,504 | https://en.wikipedia.org/wiki/Oxygenate | In the liquid fuel industry, oxygenates are hydrocarbon-derived fuel additives containing at least one oxygen atom to promote complete combustion. Absent oxygenates, fuel combustion is usually incomplete, and the exhaust stream pollutes the air with carbon monoxide, soot particles, aromatic and polyaromatic hydrocarbons, and nitrated polyaromatic hydrocarbons.
The most common oxygenates are either alcohols or ethers, but ketones and aldehydes are also included in this distinction. Carboxylic acids and esters can be grouped with oxygenates in the simple definition that they contain at least one oxygen atom. However, they are usually unwanted in oils, and therefore in the fuels derived from them, due to their environmental toxicity and their tendency to cause catalyst poisoning and corrosion during oil production and refining.
Alcohols:
Methanol (MeOH)
Ethanol (EtOH); see also Common ethanol fuel mixtures
Isopropyl alcohol (IPA)
n-Butanol (BuOH)
Gasoline grade tert-butanol (GTBA)
Ethers:
Methyl tert-butyl ether (MTBE)
tert-Amyl methyl ether (TAME)
tert-Hexyl methyl ether (THEME)
Ethyl tert-butyl ether (ETBE)
tert-Amyl ethyl ether (TAEE)
Diisopropyl ether (DIPE)
In the United States
In the United States, the Environmental Protection Agency (EPA) had authority to mandate that minimum proportions of oxygenates be added to automotive gasoline on a regional and seasonal basis from 1992 until 2006 in an attempt to reduce air pollution, in particular ground-level ozone and smog. As of 2023, the EPA continues to require the use of oxygenated gasoline in certain areas during winter to regulate carbon monoxide emissions; however, the programs to fulfill its conditions are implemented by the states. In addition, from 2006 onwards North American automakers promoted a blend of 85% ethanol and 15% gasoline, marketed as E85, and their flex-fuel vehicles, e.g. GM's Live Green, Go Yellow campaign. US Corporate Average Fuel Economy (CAFE) standards give an artificial 54% fuel-efficiency bonus to vehicles capable of running on 85% alcohol blends over vehicles not adapted to run on them. Alcohols also burn intrinsically more cleanly, but because of ethanol's lower energy density a gallon of the blend does not produce as much energy as a gallon of gasoline. Much gasoline sold in the United States is blended with up to 10% of an oxygenating agent. This is known as oxygenated fuel and often (but not entirely correctly, as there are reformulated gasolines without oxygenate) as reformulated gasoline. Methyl tert-butyl ether (MTBE) was the most common fuel additive in the United States prior to the government-mandated use of ethanol. Typically, gasoline with added MTBE is called reformulated gasoline, while gasoline with ethanol is called oxygenated gasoline.
References
External links
EPA Definition of Oxygenates
USGS Definition of Oxygenates
Petroleum products
Fuels | Oxygenate | [
"Chemistry"
] | 628 | [
"Petroleum",
"Petroleum products",
"Fuels",
"Chemical energy sources"
] |
2,196,146 | https://en.wikipedia.org/wiki/California%20Senate%20Bill%201386%20%282002%29 | California S.B. 1386 was a bill passed by the California legislature that amended the California law regulating the privacy of personal information: civil codes 1798.29, 1798.82 and 1798.84. An early example of the many U.S. and international security breach notification laws that followed, it was introduced by California State Senator Steve Peace on February 12, 2002, and became operative on July 1, 2003.
Sections
Enactment of a requirement for notification to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person. This requires an agency, person or business that conducts business in California and owns or licenses computerized 'personal information,' to disclose any breach of security (to any resident whose unencrypted data is believed to have been disclosed).
The bill mandates various mechanisms and procedures with respect to many aspects of this scenario, subject also to other defined provisions.
Any agency that owns or licenses computerized data that includes personal information shall disclose any breach of the security of the system following discovery or notification of the breach in the security of the data to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person. An out-of-state corporation that has personal information relating to a California resident would fall under this statute. A question on minimum contacts would then ensue as to whether an action may be brought in California to enforce the California resident's rights under the statute.
Corporations with no physical locations in California are not subject to California law. That SB 1386 would affect an out-of-state corporation is based on the notion of 'quasi in rem' jurisdiction, a notion that the Supreme Court invalidated in Shaffer v. Heitner.
Corporations can determine whether they are subject to this statute by reviewing the following questions:
Does their data include "personal information" as defined by the statute?
Does that "personal information" relate to a California resident?
Was the "personal information" unencrypted?
Was there a "breach of the security" of the data as defined by the statute?
Was the "personal information" acquired, or is reasonably believed to have been acquired, by an unauthorized person?
A corporation that answers yes to all five of these questions must report.
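Since the duty to report attaches exactly when all five answers are "yes", the test is a simple conjunction. A minimal illustrative sketch (hypothetical function and parameter names; not legal advice):

```python
def must_notify(is_personal_information: bool,
                relates_to_california_resident: bool,
                was_unencrypted: bool,
                security_was_breached: bool,
                acquired_by_unauthorized_person: bool) -> bool:
    """True only when every one of the five statutory questions is 'yes'."""
    return all((is_personal_information,
                relates_to_california_resident,
                was_unencrypted,
                security_was_breached,
                acquired_by_unauthorized_person))

# Encrypted data fails the third question, so no duty to notify arises.
print(must_notify(True, True, False, True, True))   # False
```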
The statute does not apply to "encrypted" information. Thus one way to avoid reporting is to encrypt all "personal information." A corporation can also avoid reporting if its data does not contain "personal information" relating to a California resident.
"Personal information" means an individual's first name or first initial and last name in combination
with any one or more of the following data elements, when either the name or the data elements are not encrypted:
Social security number.
Driver's license number or California Identification Card number.
Account number, credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual's financial account.
"Personal information" does not include publicly available information that is lawfully made available to the general public from federal, state, or local government records.
References
External links
Text of SB1386
The SB 1386 Management Toolkit
Computing legislation
Information privacy
SB 1386 | California Senate Bill 1386 (2002) | [
"Engineering"
] | 679 | [
"Cybersecurity engineering",
"Information privacy"
] |
2,196,648 | https://en.wikipedia.org/wiki/Lithium%20hydride | Lithium hydride is an inorganic compound with the formula LiH. This alkali metal hydride is a colorless solid, although commercial samples are grey. Characteristic of a salt-like (ionic) hydride, it has a high melting point, and it is insoluble in, but reactive with, all protic organic solvents. It is soluble in, and unreactive with, certain molten salts such as lithium fluoride, lithium borohydride, and sodium hydride. With a molar mass of 7.95 g/mol, it is the lightest ionic compound.
Physical properties
LiH is diamagnetic and an ionic conductor, with a conductivity gradually increasing from a much lower value at 443 °C to 0.18 Ω−1cm−1 at 754 °C; there is no discontinuity in this increase through the melting point. The dielectric constant of LiH decreases from 13.0 (static, low frequencies) to 3.6 (visible-light frequencies). LiH is a soft material with a Mohs hardness of 3.5. Its compressive creep (per 100 hours) rapidly increases from < 1% at 350 °C to > 100% at 475 °C, meaning that LiH cannot provide mechanical support when heated.
The thermal conductivity of LiH decreases with temperature and depends on morphology: the corresponding values are 0.125 W/(cm·K) for crystals and 0.0695 W/(cm·K) for compacts at 50 °C, and 0.036 W/(cm·K) for crystals and 0.0432 W/(cm·K) for compacts at 500 °C. The linear thermal expansion coefficient is 4.2/°C at room temperature.
Synthesis and processing
LiH is produced by treating lithium metal with hydrogen gas:

2 Li + H2 → 2 LiH
This reaction is especially rapid at temperatures above 600 °C. Addition of 0.001–0.003% carbon, and/or increasing temperature/pressure, increases the yield up to 98% at 2-hour residence time. However, the reaction proceeds at temperatures as low as 29 °C. The yield is 60% at 99 °C and 85% at 125 °C, and the rate depends significantly on the surface condition of LiH.
Less common ways of LiH synthesis include thermal decomposition of lithium aluminium hydride (200 °C), lithium borohydride (300 °C), n-butyllithium (150 °C), or ethyllithium (120 °C), as well as several reactions involving lithium compounds of low stability and available hydrogen content.
Chemical reactions yield LiH in the form of lumped powder, which can be compressed into pellets without a binder. More complex shapes can be produced by casting from the melt. Large single crystals (about 80 mm long and 16 mm in diameter) can then be grown from molten LiH powder in a hydrogen atmosphere by the Bridgman–Stockbarger technique. They often have a bluish color owing to the presence of colloidal Li. This color can be removed by post-growth annealing at lower temperatures (~550 °C) and lower thermal gradients. Major impurities in these crystals are Na (20–200 ppm), O (10–100 ppm), Mg (0.5–6 ppm), Fe (0.5-2 ppm) and Cu (0.5-2 ppm).
Bulk cold-pressed LiH parts can be easily machined using standard techniques and tools to micrometer precision. However, cast LiH is brittle and easily cracks during processing.
A more energy efficient route to form lithium hydride powder is by ball milling lithium metal under high hydrogen pressure. A problem with this method is the cold welding of lithium metal due to the high ductility. By adding small amounts of lithium hydride powder the cold welding can be avoided.
Reactions
LiH powder reacts rapidly with air of low humidity, forming LiOH, Li2O and Li2CO3. In moist air the powder ignites spontaneously, forming a mixture of products including some nitrogenous compounds. The lump material reacts with humid air, forming a superficial coating, which is a viscous fluid. This inhibits further reaction, although the appearance of a film of "tarnish" is quite evident. Little or no nitride is formed on exposure to humid air. The lump material, contained in a metal dish, may be heated in air to slightly below 200 °C without igniting, although it ignites readily when touched by an open flame. The surface condition of LiH, presence of oxides on the metal dish, etc., have a considerable effect on the ignition temperature. Dry oxygen does not react with crystalline LiH unless heated strongly, when an almost explosive combustion occurs.
LiH is highly reactive towards water and other protic reagents:

LiH + H2O → LiOH + H2
LiH is less reactive with water than Li and thus is a much less powerful reducing agent for water, alcohols, and other media containing reducible solutes. This is true for all the binary saline hydrides.
LiH pellets slowly expand in moist air, forming LiOH; however, the expansion rate is below 10% within 24 hours in a pressure of 2 Torr of water vapor. If moist air contains carbon dioxide, then the product is lithium carbonate. LiH reacts with ammonia, slowly at room temperature, but the reaction accelerates significantly above 300 °C. LiH reacts slowly with higher alcohols and phenols, but vigorously with lower alcohols.
LiH reacts with sulfur dioxide to give the dithionite:

2 LiH + 2 SO2 → Li2S2O4 + H2
though above 50 °C the product is lithium sulfide instead.
LiH reacts with acetylene to form lithium carbide and hydrogen. With anhydrous organic acids, phenols and acid anhydrides, LiH reacts slowly, producing hydrogen gas and the lithium salt of the acid. With water-containing acids, LiH reacts faster than with water. Many reactions of LiH with oxygen-containing species yield LiOH, which in turn irreversibly reacts with LiH at temperatures above 300 °C:

LiH + LiOH → Li2O + H2
Lithium hydride is rather unreactive at moderate temperatures with O2 or Cl2. It is, therefore, used in the synthesis of other useful hydrides, as described below.
Applications
Hydrogen storage and fuel
With a hydrogen content in proportion to its mass three times that of NaH, LiH has the highest hydrogen content of any hydride. LiH is periodically of interest for hydrogen storage, but applications have been thwarted by its stability to decomposition. Thus removal of H2 requires temperatures above the 700 °C used for its synthesis, and such temperatures are expensive to create and maintain. The compound was once tested as a fuel component in a model rocket.
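The three-fold comparison with NaH is simple arithmetic; a quick check, assuming approximate standard atomic masses:

```python
M_H, M_Li, M_Na = 1.008, 6.94, 22.99    # approximate atomic masses, g/mol

lih = M_H / (M_H + M_Li)                # hydrogen mass fraction of LiH
nah = M_H / (M_H + M_Na)                # hydrogen mass fraction of NaH

# Roughly 12.7% vs 4.2%: LiH carries about three times as much hydrogen
# per unit mass as NaH, as stated above.
print(f"LiH: {lih:.1%}  NaH: {nah:.1%}  ratio: {lih / nah:.2f}")
```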
Precursor to complex metal hydrides
LiH is not usually a hydride-reducing agent, except in the synthesis of hydrides of certain metalloids. For example, silane is produced in the reaction of lithium hydride and silicon tetrachloride by the Sundermeyer process:

4 LiH + SiCl4 → 4 LiCl + SiH4
Lithium hydride is used in the production of a variety of reagents for organic synthesis, such as lithium aluminium hydride (LiAlH4) and lithium borohydride (LiBH4). Triethylborane reacts to give superhydride (LiEt3BH).
In nuclear chemistry and physics
Lithium hydride (LiH), sometimes enriched in the isotope lithium-6 (6Li), is a desirable material for the shielding of nuclear reactors, and it can be fabricated by casting.
Lithium deuteride
Lithium deuteride, in the form of lithium-7 deuteride (7LiD), is a good moderator for nuclear reactors, because deuterium (2H or D) has a lower neutron absorption cross-section than ordinary hydrogen or protium (1H) does, and the cross-section for 7Li is also low, decreasing the absorption of neutrons in a reactor. 7Li is preferred for a moderator because it has a lower neutron capture cross-section, and it also forms less tritium (3H or T) under bombardment with neutrons.
The corresponding lithium-6 deuteride (6LiD) is the primary fusion fuel in thermonuclear weapons. In hydrogen warheads of the Teller–Ulam design, a nuclear fission trigger explodes to heat and compress the lithium-6 deuteride, and to bombard the 6LiD with neutrons to produce tritium in an exothermic reaction:

6Li + n → 4He + 3H
The deuterium and tritium then fuse to produce helium, one neutron, and 17.59 MeV of free energy in the form of gamma rays, kinetic energy, etc. Tritium has a favorable reaction cross section. The helium is an inert byproduct.
2H + 3H → 4He + n.
Before the Castle Bravo nuclear weapons test in 1954, it was thought that only the less common isotope 6Li would breed tritium when struck with fast neutrons. The Castle Bravo test showed (accidentally) that the more plentiful 7Li also does so under extreme conditions, albeit by an endothermic reaction.
Safety
LiH reacts violently with water to give hydrogen gas and LiOH, which is caustic. Consequently, LiH dust can explode in humid air, or even in dry air due to static electricity. At high concentrations in air the dust is extremely irritating to the mucous membranes and skin and may cause an allergic reaction. Because of the irritation, LiH is normally rejected rather than accumulated by the body.
Some lithium salts, which can be produced in LiH reactions, are toxic. LiH fire should not be extinguished using carbon dioxide, carbon tetrachloride, or aqueous fire extinguishers; it should be smothered by covering with a metal object or graphite or dolomite powder. Sand is less suitable, as it can explode when mixed with burning LiH, especially if not dry. LiH is normally transported in oil, using containers made of ceramic, certain plastics or steel, and is handled in an atmosphere of dry argon or helium. Nitrogen can be used, but not at elevated temperatures, as it reacts with lithium. LiH normally contains some metallic lithium, which corrodes steel or silica containers at elevated temperatures.
References
External links
University of Southampton, Mountbatten Centre for International Studies, Nuclear History Working Paper No5.
CDC - NIOSH Pocket Guide to Chemical Hazards
Lithium compounds
Metal hydrides
Nuclear materials
Nuclear fusion fuels
Superbases
Rock salt crystal structure | Lithium hydride | [
"Physics",
"Chemistry"
] | 2,160 | [
"Superbases",
"Inorganic compounds",
"Reducing agents",
"Materials",
"Nuclear materials",
"Metal hydrides",
"Bases (chemistry)",
"Matter"
] |
2,196,665 | https://en.wikipedia.org/wiki/Sangerman%27s%20Bombers | Sangerman's Bombers were a criminal group of bombers based in Chicago during the 1920s.
The successors of Sweeney's Bombers, the gang was formed by Joseph Sangerman in the early-1920s, shortly after the arrests of the Sweeney gang in 1921. Hired out primarily by Chicago politicians and organized crime groups (such as Al Capone's Chicago Outfit), the group was the first to use its services for labor unions. As an officer of the Chicago barbers union, as well as a leading manufacturer of barber supplies, Sangerman began using the gang to bomb barber shops which refused to agree to union regulations. With the gang's early success, Sangerman began accepting jobs from outside trade unions. By the time of Sangerman's arrest in 1925, the gang, by Sangerman's own admission, included a well-organized group of six members which was hired from $50 to $700. George Matrisciano, a leading member of the gang, was considered one of the best bomb makers in Chicago history. After receiving several indictments against him as a result of Sangerman's arrest, Matrisciano was killed before his testimony. A later investigation by the Illinois Crime Survey suspected several members of the barbers union; however, no charges were filed. Another famous member of the Bombers was Cornelius "Con" Shea.
The gang dissolved shortly after the indictment of Sangerman (who died while still awaiting trial, on February 12, 1926, following emergency intestinal surgery) and Matrisciano's death. Bombing as a means of intimidation had also fallen out of favor: the negative press coverage it attracted, particularly during the Aldermen's Wars of 1916-1921 and the 1928 Republican primary known as the "Pineapple Primary", drew too much attention and public outcry, and by the end of the decade the Chicago underworld had returned to more discreet methods of intimidation.
References
Nash, Jay Robert. World Encyclopedia of Organized Crime. Chicago: Da Capo Press, 1993.
Sifakis, Carl. Encyclopedia of American Crime. New York: Facts on File Inc., 1982.
Former gangs in Chicago
Bombing | Sangerman's Bombers | [
"Chemistry"
] | 437 | [
"Bombing",
"Explosions"
] |
2,196,795 | https://en.wikipedia.org/wiki/Apparent%20death | Apparent death is a behavior in which animals take on the appearance of being dead. It is an immobile state most often triggered by a predatory attack and can be found in a wide range of animals from insects and crustaceans to mammals, birds, reptiles, amphibians, and fish. Apparent death is separate from the freezing behavior seen in some animals.
Apparent death is a form of animal deception considered to be an anti-predator strategy, but it can also be used as a form of aggressive mimicry. When induced by humans, the state is sometimes colloquially known as animal hypnosis. The earliest written record of "animal hypnosis" dates back to the year 1646 in a report by Athanasius Kircher, in which he subdued chickens.
Description
Tonic immobility (also known as the act of feigning death, or exhibiting thanatosis) is a behaviour in which some animals become apparently temporarily paralysed and unresponsive to external stimuli. Tonic immobility is most generally considered to be an anti-predator behavior because it occurs most often in response to an extreme threat such as being captured by a (perceived) predator. Some animals use it to attract prey or facilitate reproduction. For example, in sharks exhibiting the behaviour, some scientists relate it to mating, arguing that biting by the male immobilizes the female and thus facilitates mating.
Despite appearances, some animals remain conscious throughout tonic immobility. Evidence for this includes the occasional responsive movement, scanning of the environment and animals in tonic immobility often taking advantage of escape opportunities. Tonic immobility is preferred in the literature because it has neutral connotations compared to 'thanatosis' which has a strong association with death.
Difference from freezing
Tonic immobility is different from freezing behavior in animals. A deer in headlights and an opossum "playing dead" are common examples of an animal freezing and playing dead, respectively. Freezing occurs early during a predator-prey interaction when the prey detects and identifies the threat, but the predator has not yet seen the prey. Because freezing occurs before detection and is used to better camouflage the prey and prevent the predator from attacking, it is considered a primary defense mechanism.
Tonic immobility occurs after the predator has detected and or made contact with the prey, and is likely used to prevent further attack by the predator or consumption of the prey. Because tonic immobility occurs later in the predator attack sequence, it is considered a secondary defense mechanism and is therefore distinct from freezing. Although freezing animals become rigid, they often stay upright and do not change their posture while frozen whereas during tonic immobility, animals often become rigid and change their posture.
Freezing behavior and tonic immobility are similar in that both may induce bradycardia (slowing of the heart rate), but the freezing response may instead be accompanied by rapid or increased breathing rate, increased heart rate, increased blood pressure and inhibition of digestion, depending on whether the sympathetic or parasympathetic nervous system is engaged. In contrast, along with bradycardia, vertebrates in tonic immobility often reduce their breathing rate or protrude their tongue, further distinguishing this behavior from the freezing response.
Defensive
For defensive purposes, thanatosis hinges on the pursuer's becoming unresponsive to its victim, as most predators only catch live prey.
In beetles, artificial selection experiments have shown that there is heritable variation for length of death-feigning. Those selected for longer death-feigning durations are at a selective advantage to those at shorter durations when a predator is introduced, which suggests that thanatosis is indeed adaptive.
In the hog-nosed snake, a threatened individual rolls onto its back and appears to be dead when threatened by a predator, while a foul-smelling, volatile fluid oozes from its body. Predators, such as cats, then lose interest in the snake, which both looks and smells dead. One reason for their loss of interest is that rotten-smelling animals are instinctively avoided as a precaution against infectious disease, so the snake's adaptions exploit that reaction. Newly hatched young also instinctively show this behaviour when rats try to eat them.
In mammals, the Virginia opossum (commonly known simply as possums) is perhaps the best known example of defensive thanatosis. "Playing possum" is an idiomatic phrase which means "pretending to be dead". It comes from a characteristic of the Virginia opossum, which is famous for reacting with a death-like posture when threatened. This instinct does not always pay off in the modern world; for example, opossums scavenging roadkill may react with the death-like posture to the threat posed by oncoming traffic, and subsequently end up as roadkill themselves. "Playing possum" can also mean simply pretending to be injured, unconscious, asleep, or otherwise vulnerable, often to lure an opponent into a vulnerable position.
The usual advice for humans attempting to survive an attack by a brown bear is to lie face down, cover the face with one's hands/arms/elbows, and 'play dead'.
Thanatosis has also been observed in many invertebrates such as the wasp Nasonia vitripennis, and the cricket, Gryllus bimaculatus.
Reproductive
In the spider species Pisaura mirabilis, male spiders often stage elaborate rituals of gift-giving and thanatosis to avoid getting eaten by female spiders during mating. Studies have shown higher chances of success in mating with females for males who exhibit death-feigning more frequently than for males who do it less.
Predatory
Nimbochromis (sleeper cichlids), endemic to Lake Malawi in East Africa, are large predatory fish for whom thanatosis is a form of aggressive mimicry. This fish will lie down on its side on the bottom sediments and assume a blotchy coloration. Scavengers, attracted to what seems like a dead fish, will approach the predator to investigate. N. livingstoni then abandons the thanatosis, righting itself again and quickly eating any scavenger unfortunate enough to come too close. A similar strategy has also been observed in the African cichlid Lamprologus lemairii from Lake Tanganyika and in the Central American yellowjacket cichlid Parachromis friedrichsthalii.
Examples
Invertebrates
Within the invertebrates, tonic immobility is widespread throughout phylum Arthropoda and has been demonstrated to occur in beetles, moths, mantids, cicadas, crickets, spiders, wasps, bees, and ants.
Wasps
Tonic immobility has been observed in several species of parasitoid wasp and is considered to be an antipredator behavior in these insects. In wasps, tonic immobility can be induced by tapping their antennae, tapping the abdomen repeatedly, or squeezing their abdomen. A study in 2020 found that the frequency and duration of tonic immobility was affected by the sex of the wasp and the temperature of the environment, but not the color of the background the wasp was on. These results were consistent with a study in 2006 that found no effect of background color on tonic immobility in a different wasp species, Nasonia vitripennis.
Fire ants
In fire ant colonies, tonic immobility is used by young workers to avoid conflict with competing ants. In the fire ant species Solenopsis invicta, the tendency to exhibit thanatosis decreases with age, with older ants choosing to fight with any workers from neighboring colonies. By using tonic immobility to evade conflict, the researchers found that the young ants were four times more likely to survive an attack compared to their older counterparts, despite being more vulnerable due to their softer exoskeletons.
Spiders
In the nuptial gift-giving spider, thanatosis is incorporated into their mating display. A study in 2008 demonstrated that male Pisaura mirabilis spiders who displayed thanatosis were more likely to copulate with females and copulated longer.
Green lacewings
Larvae of Chrysoperla plorabunda engage in tonic immobility when they come into close proximity with a predator. Usage of tonic immobility as an antipredator strategy has been shown to vary with energy availability and within-population genetic variation, with lacewings under energetic stress being more likely to engage in tonic immobility.
Vertebrates
Tonic immobility has been observed in a large number of vertebrate taxa, including sharks, fish, amphibians, reptiles, birds, and mammals.
Sharks
Some sharks can be induced into tonic immobility by inverting them and restraining them by hand, e.g. dogfish sharks, lemon sharks, whitetip reef sharks. For tiger sharks (measuring 3–4 metres in length), tonic immobility can be induced by humans placing their hands lightly on the sides of the animal's snout in the area surrounding the eyes. During tonic immobility in sharks, the dorsal fins straighten, and both breathing and muscle contractions become more steady and relaxed. This state persists for an average of 15 minutes before recovery and the resumption of active behaviour. Scientists have exploited this response to study shark behaviour; chemical shark repellent has been studied to test its effectiveness and to more accurately estimate dose sizes, concentrations and time to recovery. Tonic immobility can also be used as a form of mild anesthesia during experimental manipulations of sharks.
Scientists also believe that tonic immobility can be a stressful experience for sharks. By measuring blood chemistry samples when the shark is immobile, it has been suggested that tonic immobility can actually put stress on the shark, and reduce breathing efficiency. Others think sharks have a series of compensatory mechanisms that work to increase respiration rates and lower stress.
It has been observed that orcas can exploit sharks' tonic immobility to prey on large sharks. Some orcas ram sharks from the side to stun them, then flip the sharks to induce tonic immobility and keep them in such state for sustained time. For some sharks, this prevents water from flowing through their gills and the result can be fatal.
Teleost fishes
Goldfish, trout, rudd, tench, brown bullhead, medaka, paradise fish, and topminnow have been reported to go limp when they are restrained on their backs. Oscars seem to go into shock when they are stressed (when their aquarium is being cleaned, for example): they lie on their side, stop moving their fins, start to breathe more slowly and deeply, and lose colour. A similar behavior has been reported for convict tangs in the field.
Amphibians and reptiles
Tonic immobility can be found in several families of anurans (frogs and toads). In anurans, tonic immobility is demonstrated most often with open eyes and the limbs sprawled and easy to move, but some species keep their eyes closed. Some species also protrude their tongue.
Tonic immobility has also been observed in several species of lizards and snakes. The most common example of tonic immobility in the latter is the North American hog-nose snake, but it has also been observed in grass snakes. Tonic immobility can be reliably induced in iguanas by a combination of inversion, restraint and moderate pressure. During tonic immobility, there are obvious changes in respiration including a decline in respiration rate, the rhythm becomes sporadic, and the magnitude irregular. The prolonged period of tonic immobility does not seem to be consistent with the fear hypothesis, but could be the result of a period of cortical depression due to increased brain stem activity.
Tonic immobility can also be induced in the Carolina anole. The characteristics of this tonic immobility vary as a function of the duration and condition of captivity. Tonic immobility is also observed in sea turtles.
Chickens
Tonic immobility can be induced in chickens, but the behavior is more colloquially referred to as hypnosis.
Tonic immobility can be induced in chickens through several means, including by gently restraining them on their side, stomach, or back for a short period of time, or by using chalk to draw a line on the ground away from the chicken's beak while restraining them with their head down. Chickens have been used in several studies to elucidate the genetic basis of tonic immobility. While early studies focused on determining whether tonic immobility was influenced by genetics, a study in 2019 identified five genes that potentially control tonic immobility in white leghorn chickens and red junglefowl.
Ducks
Tonic immobility has been observed in several species of ducks as an effective anti-predatory response. A study by Sargeant and Eberhardt (1975) determined that ducks who feigned death had a better chance at surviving a fox attack than those who resisted and struggled. Despite being immobile the ducks remained conscious and were aware of opportunities for escape. Although the researchers concluded that tonic immobility was an effective anti-predator response, they conceded that it would not be useful against predators that kill or fatally injure prey immediately after capture.
Rabbits
Tonic immobility occurs in both domestic and wild species of rabbit and can be induced by placing and restraining the animal for a short period of time. As in other prey animals, tonic immobility is considered to be an antipredator behavior in rabbits. Studies on tonic immobility in rabbits focus on the European rabbit Oryctolagus cuniculus, but other species of rabbit have been studied.
A laboratory experiment by Ewell, Cullen, and Woodruff (1981) provided support to the hypothesis that European rabbits use tonic immobility as an anti-predator response. The study found that how quickly the rabbits "righted" themselves (i.e. how quickly they came out of tonic immobility) depended on how far a predator was away from the rabbit, and how close the rabbit was to their home cage. Rabbits that were closer to their home cage righted themselves more quickly than those that were farther from their home cage. Conversely, when predators were closer to the rabbits, they took longer to right themselves. These results were consistent with those found in studies on chickens, lizards, and blue crabs at the time, and provided support that rabbits use tonic immobility as an antipredator response.
A more recent study on European rabbits monitored their heart rate during tonic immobility and found several physiological changes to the cardiovascular system during this state, including a decrease in heart rate.
Humans
Tonic immobility has been hypothesized to occur in humans undergoing intense trauma, including sexual assault.
There is also an increasing body of evidence that points to a positive contribution of tonic immobility in human functioning. Thus, defensive immobilization is hypothesized to have played a crucial role in the evolution of human parent-child attachment, sustained attention and suggestibility, REM sleep and theory of mind.
Induction
Tonic immobility is considered to be a fear-potentiated response induced by physical restraint and characterised by reduced responsiveness to external stimulation. It has been used as a measure in the assessment of animal welfare, particularly hens, since 1970. The rationale for the tonic immobility test is that the experimenter simulates a predator, thereby eliciting the anti-predator response. The precept is that the prey animal 'pretends' to be dead to be able to escape when/if the predator relaxes its concentration. Death-feigning birds often take advantage of escape opportunities; tonic immobility in quail reduces the probability of the birds being predated by cats.
To induce tonic immobility, the animal is gently restrained on its side or back for a period of time, e.g. 15 seconds. This is done either on a firm, flat surface or sometimes in a purpose-built V- or U-shaped restraining cradle. In rodents, the response is sometimes induced by additionally pinching or attaching a clamp to the skin at the nape of the neck. Scientists record behaviours such as the number of inductions (15-second restraining periods) required for the animal to remain still, the latency to the first major movements (often cycling motions of the legs), latency to first head or eye movements and the duration of immobility, sometimes called the 'righting time'.
Tonic immobility has been used to show that hens in cages are more fearful than those in pens, hens on the top tier of tiered battery cages are more fearful than those on the lower levels, hens carried by hand are more fearful than hens carried on a mechanical conveyor, and hens undergoing longer transportation times are more fearful than those undergoing transport of a shorter duration.
Tonic immobility as a scientific tool has also been used with mice, gerbils, guinea pigs, rats, rabbits and pigs.
See also
Tetrodotoxin, which inhibits the flow of sodium into cells, causing paralysis of muscles
Explanatory notes
References
External links
Antipredator adaptations
Biological defense mechanisms
Death
Deception
English phrases
Ethology
Signalling theory
Thanatos
Unconscious | Apparent death | [
"Biology"
] | 3,531 | [
"Behavior",
"Biological interactions",
"Biological defense mechanisms",
"Behavioural sciences",
"Antipredator adaptations",
"Ethology"
] |
2,196,799 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Kuzmin%20distribution | In mathematics, the Gauss–Kuzmin distribution is a discrete probability distribution that arises as the limit probability distribution of the coefficients in the continued fraction expansion of a random variable uniformly distributed in (0, 1). The distribution is named after Carl Friedrich Gauss, who derived it around 1800, and Rodion Kuzmin, who gave a bound on the rate of convergence in 1929. It is given by the probability mass function

p(k) = −log2(1 − 1/(k + 1)^2),  k = 1, 2, 3, ...
Gauss–Kuzmin theorem
Let

x = 1/(k1 + 1/(k2 + 1/(k3 + ...)))

be the continued fraction expansion of a random number x uniformly distributed in (0, 1). Then

lim (n → ∞) Pr(kn = k) = −log2(1 − 1/(k + 1)^2).

Equivalently, let xn denote the n-th tail of the expansion, so that x = 1/(k1 + x1) and xn = 1/(kn+1 + xn+1), and set

Δn(s) = Pr(xn ≤ s) − log2(1 + s);

then Δn(s) tends to zero as n tends to infinity.
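The theorem is easy to probe numerically. The sketch below (helper names are ours) draws uniform samples, truncates each continued fraction expansion early, since double-precision floats only yield a limited number of trustworthy coefficients, and compares the empirical frequencies with the limiting probabilities:

```python
import random
from math import log2

def cf_digits(x, n=15):
    """First n continued-fraction coefficients k1, k2, ... of x in (0, 1)."""
    digits = []
    for _ in range(n):
        if x == 0.0:            # rational tail exhausted
            break
        x = 1.0 / x
        k = int(x)
        digits.append(k)
        x -= k
    return digits

def gauss_kuzmin(k):
    """Limiting probability that a coefficient equals k."""
    return -log2(1.0 - 1.0 / (k + 1) ** 2)

random.seed(0)
sample = [k for _ in range(20000) for k in cf_digits(random.random())]
for k in range(1, 6):
    print(k, round(sample.count(k) / len(sample), 4), round(gauss_kuzmin(k), 4))
```

About 41.5% of the coefficients come out equal to 1, matching p(1) = −log2(3/4).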
Rate of convergence
In 1928, Kuzmin gave the bound

|Δn(s)| ≤ C exp(−α √n).

In 1929, Paul Lévy improved it to

|Δn(s)| ≤ C (0.7)^n.

Later, Eduard Wirsing showed that, for λ = 0.30366... (the Gauss–Kuzmin–Wirsing constant), the limit

Ψ(s) = lim (n → ∞) Δn(s)/(−λ)^n

exists for every s in [0, 1], and the function Ψ(s) is analytic and satisfies Ψ(0) = Ψ(1) = 0. Further bounds were proved by K. I. Babenko.
See also
Khinchin's constant
Lévy's constant
References
Continued fractions
Discrete distributions | Gauss–Kuzmin distribution | [
"Mathematics"
] | 253 | [
"Continued fractions",
"Number theory"
] |
2,196,827 | https://en.wikipedia.org/wiki/GABAB%20receptor |
GABAB receptors (GABABR) are G-protein coupled receptors for gamma-aminobutyric acid (GABA), and are therefore metabotropic receptors; they are linked via G-proteins to potassium channels. The changing potassium concentrations hyperpolarize the cell at the end of an action potential. The reversal potential of the GABAB-mediated IPSP (inhibitory postsynaptic potential) is −100 mV, which is much more hyperpolarized than the GABAA IPSP. GABAB receptors are found in the central nervous system and the autonomic division of the peripheral nervous system.
The receptors were first named in 1981, when their distribution in the CNS was mapped by Norman Bowery and his team using radioactively labelled baclofen.
Functions
GABABRs stimulate the opening of K+ channels, specifically GIRKs, which brings the neuron closer to the equilibrium potential of K+. This reduces the frequency of action potentials which reduces neurotransmitter release. Thus GABAB receptors are inhibitory receptors.
GABAB receptors also reduce the activity of adenylyl cyclase and Ca2+ channels by using G-proteins with Gi/Go α subunits.
GABAB receptors are involved in behavioral actions of ethanol, gamma-hydroxybutyric acid (GHB), and possibly in pain. Recent research suggests that these receptors may play an important developmental role.
Structure
GABAB Receptors are similar in structure to and in the same receptor family with metabotropic glutamate receptors. There are two subunits of the receptor, GABAB1 and GABAB2, and these appear to assemble as obligate heterodimers in neuronal membranes by linking up by their intracellular C termini. In the mammalian brain, two predominant, differentially expressed isoforms of the GABAB1 are transcribed from the Gabbr1 gene, GABAB(1a) and GABAB(1b), which are conserved in different species including humans. This might potentially offer more complexity in terms of the function due to different composition of the receptor. Cryo-electron microscopy structures of the full length GABAB receptor in different conformational states from inactive apo to fully active have been obtained. Unlike Class A and B GPCRs, phospholipids bind within the transmembrane bundles and allosteric modulators bind at the interface of GABAB1 and GABAB2 subunits.
Ligands
Agonists
GABA
Baclofen is a GABA analogue which acts as a selective agonist of GABAB receptors, and is used as a muscle relaxant. However, it can aggravate absence seizures, and so is not used in epilepsy.
gamma-Hydroxybutyrate (GHB)
Phenibut
4-Fluorophenibut
Isovaline
3-Aminopropylphosphinic acid
Lesogaberan
SKF-97541: 3-Aminopropyl(methyl)phosphinic acid, 10× more potent than baclofen as GABAB agonist, but also GABAA-ρ antagonist
Taurine
CGP-44532
Positive allosteric modulators
CGP-7930
BHFF
Fendiline
BHF-177
BSPP
GS-39783
Antagonists
Homotaurine
Ginsenosides
2-OH-saclofen
Saclofen
Phaclofen
SCH-50911
2-Phenethylamine
CGP-35348
CGP-52432: 3-[[(3,4-Dichlorophenyl)methyl]amino]propyl(diethoxymethyl)phosphinic acid, CAS# 139667-74-6
CGP-55845: (2S)-3-[[(1S)-1-(3,4-Dichlorophenyl)ethyl]amino]-2-hydroxypropyl(phenylmethyl)phosphinic acid, CAS# 149184-22-5
SGS-742
See also
GABA receptor
GABAA receptor
References
External links
G protein-coupled receptors
GABA | GABAB receptor | [
"Chemistry"
] | 913 | [
"G protein-coupled receptors",
"Signal transduction"
] |
2,196,957 | https://en.wikipedia.org/wiki/Tiabendazole | Tiabendazole (INN, BAN), also known as thiabendazole (AAN, USAN) or TBZ and the trade names Mintezol, Tresaderm, and Arbotect, is a preservative, an antifungal agent, and an antiparasitic agent.
Uses
Preservative
Tiabendazole is used primarily to control mold, blight, and other fungal diseases in fruits (e.g. oranges) and vegetables; it is also used as a prophylactic treatment for Dutch elm disease.
Tiabendazole is also used as a food additive, a preservative with E number E233 (INS number 233). For example, it is applied to bananas to ensure freshness, and is a common ingredient in the waxes applied to the skins of citrus fruits. It is not approved as a food additive in the EU, Australia and New Zealand.
Use in treatment of aspergillosis has been reported.
It is also used in anti-fungal wallboards as a mixture with azoxystrobin.
Parasiticide
As an antiparasitic, tiabendazole is able to control roundworms (such as those causing strongyloidiasis), hookworms, and other helminth species which infect wild animals, livestock, and humans. The drug was first approved for use in sheep in 1961 and in horses in 1962; resistance to it was first found in Haemonchus contortus in 1964, and then in the two other major small ruminant nematode parasites, Teladorsagia circumcincta and Trichostrongylus colubriformis.
Fungicide
Tiabendazole acts as a fungicide through binding fungal tubulin. Resistant Aspergillus nidulans specimens were found to have a mutation in the gene coding for β-tubulin, which was reversible by a mutation in the gene for α-tubulin. This showed that thiabendazole binds to both α- and β-tubulin.
This chemical is also used as a pesticide, including to treat Beech Leaf Disease.
Other
In dogs and cats, tiabendazole is used to treat ear infections.
Tiabendazole is also a chelating agent, which means it is used medicinally to bind metals in cases of metal poisoning, such as lead, mercury, or antimony poisoning.
Research
Genes responsible for the maintenance of cell walls in yeast have been shown to be responsible for angiogenesis in vertebrates. Tiabendazole serves to block angiogenesis in both frog embryos and human cells. It has also been shown to serve as a vascular disrupting agent to reduce newly established blood vessels. Tiabendazole has been shown to effectively do this in certain cancer cells.
Pharmacodynamics
Tiabendazole works by inhibition of the mitochondrial, helminth-specific enzyme, fumarate reductase, with possible interaction with endogenous quinone.
Safety
The substance appears to have a slight toxicity in higher doses, with effects such as liver and intestinal disorders at high exposure in test animals (just below lethal levels). Some reproductive disorders and decreasing weaning weight have been observed, also at high exposure. Effects on humans from use as a drug include nausea, vomiting, loss of appetite, diarrhea, dizziness, drowsiness, or headache; very rarely also ringing in the ears, vision changes, stomach pain, yellowing eyes and skin, dark urine, fever, fatigue, increased thirst and change in the amount of urine occur. Carcinogenic effects have been shown at higher doses.
Synthesis
Intermediate aryl amidine (2) is prepared by aluminium trichloride-catalyzed addition of aniline to the nitrile of 4-cyanothiazole (1). The amidine (2) is then converted to its N-chloro derivative 3 with sodium hypochlorite (NaOCl). Upon treatment with base, this undergoes a nitrene insertion reaction (4) to produce tiabendazole (5).
An alternative synthesis involves reacting 4-thiazolecarboxamide with o-phenylenediamine in polyphosphoric acid.
Derivatives
A number of derivatives of tiabendazole are also pharmaceutical drugs, including
albendazole, cambendazole, fenbendazole, oxfendazole, mebendazole, and flubendazole.
See also
Fungicide use in the United States
List of fungicides
References
External links
Thiabendazole, Extension Toxicology Network
Medicinenet: Thiabendazole – Oral
Antiparasitic agents
Benzimidazoles
Fungicides
Preservatives
Thiazoles | Tiabendazole | [
"Biology"
] | 1,013 | [
"Fungicides",
"Biocides",
"Antiparasitic agents"
] |
2,197,070 | https://en.wikipedia.org/wiki/IP%20%28complexity%29 | In computational complexity theory, the class IP (which stands for interactive proof) is the class of problems solvable by an interactive proof system. It is equal to the class PSPACE. The result was established in a series of papers: the first by Lund, Karloff, Fortnow, and Nisan showed that co-NP had multiple prover interactive proofs; and the second, by Shamir, employed their technique to establish that IP=PSPACE. The result is a famous example where the proof does not relativize.
The concept of an interactive proof system was first introduced by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in 1985. An interactive proof system consists of two machines, a prover, P, which presents a proof that a given string n is a member of some language, and a verifier, V, that checks that the presented proof is correct. The prover is assumed to be infinite in computation and storage, while the verifier is a probabilistic polynomial-time machine with access to a random bit string whose length is polynomial on the size of n. These two machines exchange a polynomial number, p(n), of messages and once the interaction is completed, the verifier must decide whether or not n is in the language, with only a 1/3 chance of error. (So any language in BPP is in IP, since then the verifier could simply ignore the prover and make the decision on its own.)
Definition
A language L belongs to IP if there exist a verifier V and a prover P such that for all provers Q and all strings w:

w ∈ L implies Pr[V↔P accepts w] ≥ 2/3 (completeness), and
w ∉ L implies Pr[V↔Q accepts w] ≤ 1/3 (soundness),

where V↔P denotes the outcome of V interacting with the prover P.
The Arthur–Merlin protocol, introduced by László Babai, is similar in nature, except that the number of rounds of interaction is bounded by a constant rather than a polynomial.
Goldwasser et al. have shown that public-coin protocols, where the random numbers used by the verifier are provided to the prover along with the challenges, are no less powerful than private-coin protocols. At most two additional rounds of interaction are required to replicate the effect of a private-coin protocol. The opposite inclusion is straightforward, because the verifier can always send to the prover the results of their private coin tosses, which proves that the two types of protocols are equivalent.
In the following section we prove that IP = PSPACE, an important theorem in computational complexity, which demonstrates that an interactive proof system can be used to decide whether a string is a member of a language in polynomial time, even though the traditional PSPACE proof may be exponentially long.
Proof of IP = PSPACE
The proof can be divided in two parts, we show that IP ⊆ PSPACE and PSPACE ⊆ IP.
IP ⊆ PSPACE
In order to demonstrate that IP ⊆ PSPACE, we present a simulation of an interactive proof system by a polynomial space machine. Now, we can define:

Pr[V accepts w] = max over provers P of Pr[V↔P accepts w]

and for every 0 ≤ j ≤ p and every message history Mj, we inductively define the function NMj:

if j = p: NMp = 1 if mp = accept, and NMp = 0 if mp = reject;
if j < p and j is odd (the prover sends the next message): NMj = max over mj+1 of NMj+1;
if j < p and j is even (the verifier sends the next message): NMj = Σ over mj+1 of Prr[V(w, r, Mj) = mj+1] · NMj+1,

where Prr is the probability taken over the random string r of length p. This expression is the average of NMj+1, weighted by the probability that the verifier sent message mj+1.
Take M0 to be the empty message sequence. Here we will show that NM0 can be computed in polynomial space, and that NM0 = Pr[V accepts w]. First, to compute NM0, an algorithm can recursively calculate the values NMj for every j and Mj. Since the depth of the recursion is p, only polynomial space is necessary. The second requirement is that we need NM0 = Pr[V accepts w], the value needed to determine whether w is in A. We use induction to prove this as follows.
We must show that for every 0 ≤ j ≤ p and every Mj, NMj = Pr[V accepts w starting at Mj], and we will do this using induction on j. The base case is to prove for j = p. Then we will use induction to go from p down to 0.
The base case of j = p is fairly simple. Since mp is either accept or reject, if mp is accept, NMp is defined to be 1 and Pr[V accepts w starting at Mj] = 1 since the message stream indicates acceptance, thus the claim is true. If mp is reject, the argument is very similar.
For the inductive hypothesis, we assume that for some j+1 ≤ p and any message sequence Mj+1, NMj+1 = Pr[V accepts w starting at Mj+1] and then prove the hypothesis for j and any message sequence Mj.
If j is even, mj+1 is a message from V to P. By the definition of NMj,

NMj = Σ over mj+1 of Prr[V(w, r, Mj) = mj+1] · NMj+1.

Then, by the inductive hypothesis, we can say this is equal to

Σ over mj+1 of Prr[V(w, r, Mj) = mj+1] · Pr[V accepts w starting at Mj+1].

Finally, by definition, we can see that this is equal to Pr[V accepts w starting at Mj].

If j is odd, mj+1 is a message from P to V. By definition,

NMj = max over mj+1 of NMj+1.

Then, by the inductive hypothesis, this equals

max over mj+1 of Pr[V accepts w starting at Mj+1].

This is equal to Pr[V accepts w starting at Mj] since:

Pr[V accepts w starting at Mj] ≤ max over mj+1 of Pr[V accepts w starting at Mj+1],

because the prover on the right-hand side could send the message mj+1 to maximize the expression on the left-hand side. And:

Pr[V accepts w starting at Mj] ≥ max over mj+1 of Pr[V accepts w starting at Mj+1],

since the same prover cannot do any better than send that same message. Thus, this holds whether j is even or odd, and the proof that IP ⊆ PSPACE is complete.
Here we have constructed a polynomial space machine that uses the best prover P for a particular string w in language A. We use this best prover in place of a prover with random input bits because we are able to try every set of random input bits in polynomial space. Since we have simulated an interactive proof system with a polynomial space machine, we have shown that IP ⊆ PSPACE, as desired.
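The recursion at the heart of this simulation is a game-tree evaluation: maximize over the prover's possible messages, average over the verifier's, with space proportional to the recursion depth. The toy sketch below shows that shape only; the two-message model (verifier_distribution, prover_moves) is a hypothetical stand-in, not the construction from the proof:

```python
def value(history, p):
    """N_M for message history `history`, out of p messages in total."""
    j = len(history)
    if j == p:                                    # base case: last message
        return 1.0 if history[-1] == "accept" else 0.0
    if j % 2 == 0:                                # verifier speaks: average
        dist = verifier_distribution(history)     # {message: probability}
        return sum(q * value(history + (m,), p) for m, q in dist.items())
    else:                                         # prover speaks: maximize
        return max(value(history + (m,), p) for m in prover_moves(history))

def verifier_distribution(history):
    """Toy verifier: sends a fair coin flip, then accepts iff P echoed it."""
    if len(history) == 0:
        return {"0": 0.5, "1": 0.5}
    return {("accept" if history[-1] == history[-2] else "reject"): 1.0}

def prover_moves(history):
    return ["0", "1"]

# With p = 3 (challenge, response, verdict), the optimal prover echoes the
# challenge, so the acceptance value is 1.0; space usage is linear in p.
print(value((), 3))
```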
PSPACE ⊆ IP
In order to illustrate the technique that will be used to prove PSPACE ⊆ IP, we will first prove a weaker theorem, which was proven by Lund, et al.: #SAT ∈ IP. Then using the concepts from this proof we will extend it to show that TQBF ∈ IP. Since TQBF is PSPACE-complete and TQBF ∈ IP, it follows that PSPACE ⊆ IP.
#SAT is a member of IP
We begin by showing that #SAT is in IP, where:

#SAT = {⟨φ, k⟩ : φ is a CNF formula with exactly k satisfying assignments}.
Note that this is different from the normal definition of #SAT, in that it is a decision problem, rather than a function.
First we use arithmetization to map the boolean formula with n variables, φ(b1, ..., bn), to a polynomial pφ(x1, ..., xn), where pφ mimics φ in that pφ is 1 if φ is true and 0 otherwise, provided that the variables of pφ are assigned Boolean values. The Boolean operations ∨, ∧ and ¬ used in φ are simulated in pφ by replacing the operators in φ as follows:

a ∧ b → ab
a ∨ b → a ∗ b := 1 − (1 − a)(1 − b)
¬a → 1 − a
As an example, x1 ∨ (x2 ∧ ¬x3) would be converted into a polynomial as follows:

pφ = x1 ∗ (x2(1 − x3)) = 1 − (1 − x1)(1 − x2(1 − x3))
The operations ab and a ∗ b each result in a polynomial with a degree bounded by the sum of the degrees of the polynomials for a and b and hence, the degree of any variable is at most the length of φ.
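The replacement rules are mechanical, so it is easy to verify on a small example (of our own choosing) that the polynomial reproduces the formula's truth table on Boolean inputs:

```python
from itertools import product

AND = lambda a, b: a * b                    # a AND b  ->  ab
OR  = lambda a, b: 1 - (1 - a) * (1 - b)    # a OR b   ->  a * b (the text's ∗)
NOT = lambda a: 1 - a                       # NOT a    ->  1 - a

poly = lambda x1, x2, x3: AND(x1, OR(NOT(x2), x3))       # arithmetized
phi  = lambda x1, x2, x3: x1 and ((not x2) or x3)        # the formula itself

assert all(poly(*v) == int(phi(*v)) for v in product((0, 1), repeat=3))
print("polynomial agrees with phi on all 8 Boolean inputs")
```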
Now let F be a finite field with order q > 2^n; also demand that q be at least 1000. For each 0 ≤ i ≤ n, define the function fi taking parameters a1, ..., ai in F:

fi(a1, ..., ai) = Σ over ai+1, ..., an ∈ {0, 1} of pφ(a1, ..., ai, ai+1, ..., an).

Note that the value of f0 is the number of satisfying assignments of φ. f0 is a void function, with no variables.
Now the protocol for #SAT works as follows:
Phase 0: The prover P chooses a prime q > 2^n and computes f0; it then sends q and f0 to the verifier V. V checks that q is a prime greater than max(1000, 2^n) and that f0() = k.
Phase 1: P sends the coefficients of f1(z) as a polynomial in z. V verifies that the degree of f1 is less than n and that f0 = f1(0) + f1(1). (If not V rejects). V now sends a random number r1 from F to P.
Phase i: P sends the coefficients of fi(r1, ..., ri−1, z) as a polynomial in z. V verifies that the degree of fi is less than n and that fi−1(r1, ..., ri−1) = fi(r1, ..., ri−1, 0) + fi(r1, ..., ri−1, 1). (If not V rejects). V now sends a random number ri from F to P.

Phase n+1: V evaluates pφ(r1, ..., rn) and compares it to the value fn(r1, ..., rn). If they are equal V accepts, otherwise V rejects.
Note that this is a public-coin algorithm.
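For concreteness, the following is a small end-to-end simulation of the protocol for a three-variable formula, with an honest prover and the degree checks omitted for brevity; the field size, example formula and helper names are our own choices:

```python
import random
from itertools import product

Q = 10_007                 # a prime q with q > max(1000, 2^n) for n = 3
N = 3                      # number of variables

def p_phi(x1, x2, x3):
    """Arithmetization of phi = (x1 OR x2) AND (NOT x2 OR x3) over F_q."""
    c1 = (1 - (1 - x1) * (1 - x2)) % Q      # x1 OR x2
    c2 = (1 - x2 * (1 - x3)) % Q            # (NOT x2) OR x3
    return (c1 * c2) % Q

def f(prefix):
    """f_i at a tuple of field elements: sum of p_phi over Boolean tails."""
    return sum(p_phi(*prefix, *tail)
               for tail in product((0, 1), repeat=N - len(prefix))) % Q

def run_protocol(k):
    """V's side of the interaction against an honest prover, for claim k."""
    claim = k % Q                           # phase 0: P sends f0 = k
    rs = ()
    for _ in range(N):                      # phases 1..n
        g = lambda z: f(rs + (z,))          # P sends f_i(r1..r_{i-1}, z)
        if (g(0) + g(1)) % Q != claim:      # V's consistency check
            return False
        r = random.randrange(Q)             # public coin
        claim, rs = g(r), rs + (r,)
    return p_phi(*rs) == claim              # phase n+1: direct evaluation

random.seed(1)
print(run_protocol(4))   # phi has exactly 4 satisfying assignments -> True
print(run_protocol(5))   # a wrong count already fails the phase-1 check
```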
If φ has k satisfying assignments, clearly V will accept. If φ does not have k satisfying assignments we assume there is a lying prover P̃ that tries to convince V that φ does have k satisfying assignments. We show that this can only be done with low probability.

To prevent V from rejecting in phase 0, P̃ has to send an incorrect value f̃0 ≠ f0 to V. Then, in phase 1, P̃ must send an incorrect polynomial f̃1 with the property that f̃1(0) + f̃1(1) = f̃0. When V chooses a random r1 to send to P,

Pr over r1 in F of [f̃1(r1) = f1(r1)] ≤ n/|F|.

This is because a polynomial in a single variable of degree at most d can have no more than d roots (unless it always evaluates to 0). So, any two polynomials in a single variable of degree at most d can be equal only in d places. Since |F| > 2^n, the chance of r1 being one of these values is at most n/2^n ≤ n/n^3 if n > 10, or at most n/1000 ≤ n/n^3 if n ≤ 10 (since q ≥ 1000).
Generalizing this idea for the other phases we have, for each 1 ≤ i ≤ n: if

f̃i(r1, ..., ri−1, z) ≠ fi(r1, ..., ri−1, z) as polynomials in z,

then for ri chosen randomly from F,

Pr over ri of [f̃i(r1, ..., ri) = fi(r1, ..., ri)] ≤ n/n^3 = 1/n^2.
There are n phases, so the probability that P̃ is lucky because V selects at some stage a convenient ri is at most n · (1/n^2) = 1/n. So, no prover can make the verifier accept with probability greater than 1/n. We can also see from the definition that the verifier V operates in probabilistic polynomial time. Thus, #SAT ∈ IP.
TQBF is a member of IP
In order to show that PSPACE is a subset of IP, we need to choose a PSPACE-complete problem and show that it is in IP. Once we show this, it is clear that PSPACE ⊆ IP. The proof technique demonstrated here is credited to Adi Shamir.
We know that TQBF is PSPACE-complete. So let ψ be a quantified boolean expression:

ψ = Q1x1 Q2x2 ... Qnxn φ(x1, ..., xn),
where φ is a CNF formula. Then Qi is a quantifier, either ∃ or ∀. Now fi is the same as in the previous proof, but now it also includes quantifiers.
For Boolean values a1, ..., ai, define fi(a1, ..., ai) to be the truth value of Qi+1xi+1 ... Qnxn φ(a1, ..., ai, xi+1, ..., xn). Here, φ(a1, ..., ai) is φ with a1 to ai substituted for x1 to xi. Thus f0 is the truth value of ψ. In order to arithmetize ψ we must use the following rules:

for Qi = ∀: fi−1(a1, ..., ai−1) = fi(a1, ..., ai−1, 0) · fi(a1, ..., ai−1, 1)
for Qi = ∃: fi−1(a1, ..., ai−1) = fi(a1, ..., ai−1, 0) ∗ fi(a1, ..., ai−1, 1)
where as before we define x ∗ y = 1 − (1 − x)(1 − y).
By using the method described in #SAT, we must face a problem that for any fi the degree of the resulting polynomial may double with each quantifier. In order to prevent this, we must introduce a new reduction operator R which will reduce the degrees of the polynomial without changing their behavior on Boolean inputs.
So now before we arithmetize we introduce a new expression ψ′, in which the reduction operator is interleaved with the quantifiers:

ψ′ = Q1x1 Rx1 Q2x2 Rx1Rx2 Q3x3 Rx1Rx2Rx3 ... Qnxn Rx1 ... Rxn φ,

or put another way: after each quantifier Qixi, the operator R is applied to every variable quantified so far, so that the polynomial stays of low degree in each variable when the next operator is arithmetized.

Now for every i ≤ k, where k is the number of quantifiers and R operators in ψ′, we define the function fi. We also define fk+1 to be the polynomial p(x1, ..., xm) which is obtained by arithmetizing φ. Now in order to keep the degree of the polynomial low, we define fi in terms of fi+1:

if the i-th operator is ∀x: fi(...) = fi+1(..., 0) · fi+1(..., 1)
if the i-th operator is ∃x: fi(...) = fi+1(..., 0) ∗ fi+1(..., 1)
if the i-th operator is Rx: fi(..., x, ...) = x · fi+1(..., 1, ...) + (1 − x) · fi+1(..., 0, ...)
Now we can see that the reduction operation R doesn't increase the degree of the polynomial in the other variables. Also, it is important to see that the Rx operation doesn't change the value of the function on boolean inputs. So f0 is still the truth value of ψ, but the Rx value produces a result that is linear in x. Also, after any Qixi we add Rx1 ... Rxi in ψ′ in order to reduce the degree of each variable down to 1 after arithmetizing Qixi.
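The effect of R on a concrete polynomial can be checked with a short computer-algebra sketch (using sympy; the sample polynomial is arbitrary). Rx replaces p by x·p|x=1 + (1 − x)·p|x=0, which is linear in x yet agrees with p at x = 0 and x = 1:

```python
import sympy as sp

x, y = sp.symbols("x y")

def R(p, var):
    """Degree reduction: var * p[var=1] + (1 - var) * p[var=0]."""
    return sp.expand(var * p.subs(var, 1) + (1 - var) * p.subs(var, 0))

p = x**3 * y + x**2 + 7                 # degree 3 in x
q = R(p, x)                             # x*y + x + 7: now degree 1 in x

print(sp.degree(p, x), sp.degree(q, x))                      # prints: 3 1
assert all(sp.simplify(p.subs(x, b) - q.subs(x, b)) == 0 for b in (0, 1))
print("R preserved the values at x = 0 and x = 1")
```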
Now let's describe the protocol. All arithmetic operations in the protocol are over a field of size at least n^4, where n is the length of ψ.
Phase 0: P → V: P sends f0 to V. V checks that f0= 1 and rejects if not.
Phase 1: P → V: P sends f1(z) to V. V uses the coefficients to evaluate f1(0) and f1(1). Then it checks that the polynomial's degree is at most n and that the following identity holds, according to the first operator:

for ∀: f0 = f1(0) · f1(1)
for ∃: f0 = f1(0) ∗ f1(1)

If either check fails then reject.
Phase i: P → V: P sends fi(r1, ..., z) as a polynomial in z, where r1, ... denote the previously chosen random values.

V uses the coefficients to evaluate fi(r1, ..., 0) and fi(r1, ..., 1). Then it checks that the polynomial degree is at most n and that the following identity holds, according to the i-th operator:

for ∀: fi−1(r1, ...) = fi(r1, ..., 0) · fi(r1, ..., 1)
for ∃: fi−1(r1, ...) = fi(r1, ..., 0) ∗ fi(r1, ..., 1)
for Rx: fi−1(r1, ..., r, ...) = (1 − r) · fi(r1, ..., 0, ...) + r · fi(r1, ..., 1, ...), where r is the random value previously chosen for x

If either check fails then reject.

V → P: V picks a random r in F and sends it to P. (If the i-th operator is Rx, then this r replaces the previous random value chosen for x.)
Go to phase i + 1, where P must persuade V that the claimed value of fi at the chosen random point is correct.
Phase k + 1: V evaluates p(r1, ..., rm) directly. Then it checks whether this value equals the last polynomial value claimed by P. If they are equal then V accepts, otherwise V rejects.
This is the end of the protocol description.
If ψ is true then V will accept when P follows the protocol. Likewise if P̃ is a malicious prover which lies, and if ψ is false, then P̃ will need to lie at phase 0 and send some false value for f0. If at phase i, V has an incorrect value for fi−1, then the values claimed for fi(..., 0) and fi(..., 1) will likely also be incorrect, and so forth. The probability for P̃ to get lucky on some random r is at most the degree of the polynomial divided by the field size: n/n^4 = 1/n^3. The protocol runs through O(n^2) phases, so the probability that P̃ gets lucky at some phase is ≤ 1/n. If P̃ is never lucky, then V will reject at phase k+1.
Since we have now shown that both IP ⊆ PSPACE and PSPACE ⊆ IP, we can conclude that IP = PSPACE as desired. Moreover, we have shown that any IP algorithm may be taken to be public-coin, since the reduction from PSPACE to IP has this property.
Variants
There are a number of variants of IP which slightly modify the definition of the interactive proof system. We summarize some of the better-known ones here.
dIP
A subset of IP is the deterministic interactive proof class (dIP), which is similar to IP but has a deterministic verifier (i.e., one with no randomness).
This class is equal to NP.
Perfect completeness
An equivalent definition of IP replaces the condition that the interaction succeeds with high probability on strings in the language with the requirement that it always succeeds: if the string is in the language, the honest verifier accepts an honest prover's proof with probability 1.
This seemingly stronger criterion of "perfect completeness" does not change the complexity class IP, since any language with an interactive proof system may be given an interactive proof system with perfect completeness.
MIP
In 1988, Goldwasser et al. created an even more powerful interactive proof system based on IP called MIP in which there are two independent provers. The two provers cannot communicate once the verifier has begun sending messages to them. Just as it's easier to tell if a criminal is lying if he and his partner are interrogated in separate rooms, it's considerably easier to detect a malicious prover trying to trick the verifier if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that MIP = NEXPTIME, the class of all problems solvable by a nondeterministic machine in exponential time, a very large class. Moreover, all languages in NP have zero-knowledge proofs in an MIP system, without any additional assumptions; this is only known for IP assuming the existence of one-way functions.
IPP
IPP (unbounded IP) is a variant of IP where we replace the BPP verifier by a PP verifier. More precisely, we modify the completeness and soundness conditions as follows:
Completeness: if a string is in the language, the honest verifier will be convinced of this fact by an honest prover with probability at least 1/2.
Soundness: if the string is not in the language, no prover can convince the honest verifier that it is in the language, except with probability less than 1/2.
Although IPP also equals PSPACE, IPP protocols behave quite differently from IP with respect to oracles: IPP = PSPACE with respect to all oracles, while IP ≠ PSPACE with respect to almost all oracles.
QIP
QIP is a version of IP replacing the BPP verifier by a BQP verifier, where BQP is the class of problems solvable by quantum computers in polynomial time. The messages are composed of qubits. In 2009, Jain, Ji, Upadhyay, and Watrous proved that QIP also equals PSPACE, implying that this change gives no additional power to the protocol. This subsumes an earlier result of Kitaev and Watrous that QIP is contained in EXPTIME; they also showed that QIP = QIP[3], so that more than three rounds are never necessary.
compIP
Whereas IPP and QIP give more power to the verifier, a compIP system (competitive IP proof system) weakens the completeness condition in a way that weakens the prover:
Completeness: if a string is in the language L, the honest verifier will be convinced of this fact by an honest prover with probability at least 2/3. Moreover, the prover will do so in probabilistic polynomial time given access to an oracle for the language L.
Essentially, this makes the prover a BPP machine with access to an oracle for the language, but only in the completeness case, not the soundness case. The concept is that if a language is in compIP, then interactively proving it is in some sense as easy as deciding it. With the oracle, the prover can easily solve the problem, but its limited power makes it much more difficult to convince the verifier of anything. In fact, compIP isn't even known or believed to contain NP.
On the other hand, such a system can solve some problems believed to be hard. Somewhat paradoxically, though such a system is not believed to be able to solve all of NP, it can easily solve all NP-complete problems due to self-reducibility. This stems from the fact that if the language L is not NP-hard, the prover is substantially limited in power (as it can no longer decide all NP problems with its oracle).
Additionally, the graph nonisomorphism problem (which is a classical problem in IP) is also in compIP, since the only hard operation the prover has to do is isomorphism testing, which it can use the oracle to solve. Quadratic non-residuosity and graph isomorphism are also in compIP. Note that quadratic non-residuosity (QNR) is likely an easier problem than graph isomorphism, as QNR is in UP ∩ co-UP.
Notes
References
Babai, L. Trading group theory for randomness. In Proceedings of the 17th ACM Symposium on the Theory of Computation. ACM, New York, 1985, pp. 421–429.
Shafi Goldwasser, Silvio Micali, and Charles Rackoff. The Knowledge complexity of interactive proof-systems. Proceedings of 17th ACM Symposium on the Theory of Computation, Providence, Rhode Island. 1985, pp. 291–304. Extended abstract
Shafi Goldwasser and Michael Sipser. Private coins versus public coins in interactive proof systems. Proceedings of the 18th Annual ACM Symposium on Theory of Computation. ACM, New York, 1986, pp. 59–68.
Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay, John Watrous. QIP = PSPACE.
Lund, C., Fortnow, L., Karloff, H., Nisan, N. Algebraic methods for interactive proof systems. In Proceedings of 31st Symposium on the Foundations of Computer Science. IEEE, New York, 1990, pp. 2–10.
Adi Shamir. IP = PSPACE. Journal of the ACM, volume 39, issue 4, p. 869–877. October 1992.
Alexander Shen. IP=PSpace: Simplified Proof. J.ACM, v. 39(4), pp. 878–880, 1992.
Probabilistic complexity classes
Articles containing proofs | IP (complexity) | [
"Mathematics"
] | 4,404 | [
"Articles containing proofs"
] |
2,197,220 | https://en.wikipedia.org/wiki/SYSTAT%20%28statistics%20package%29 | SYSTAT is a statistics and statistical graphics software package, developed in the late 1970s by Leland Wilkinson, who was at the time an assistant professor of psychology at the University of Illinois at Chicago. Systat Software Inc. was incorporated in 1983 and grew to over 50 employees.
In 1995, SYSTAT was sold to SPSS Inc., who marketed the product to a scientific audience under the SPSS Science division. By 2002, SPSS had changed its focus to business analytics and decided to sell SYSTAT to Cranes Software in Bangalore, India. Cranes formed Systat Software, Inc. to market and distribute SYSTAT in the US, and a number of other divisions for global distribution. The headquarters are in Chicago, Illinois.
By 2005, SYSTAT was in its eleventh version, with a codebase completely rewritten from Fortran into C++. Version 13 came out in 2009, with improvements in the user interface and several new features.
See also
Comparison of statistical packages
PeakFit
TableCurve 2D
TableCurve 3D
References
External links
SYSTAT
The story of SYSTAT as told by Wilkinson
C++ software
Statistical software
Windows-only proprietary software | SYSTAT (statistics package) | [
"Mathematics"
] | 244 | [
"Statistical software",
"Mathematical software"
] |
2,197,855 | https://en.wikipedia.org/wiki/Dentinogenesis | In animal tooth development, dentinogenesis is the formation of dentin, a substance that forms the majority of teeth. Dentinogenesis is performed by odontoblasts, which are a special type of biological cell on the outer wall of dental pulps, and it begins at the late bell stage of a tooth development. The different stages of dentin formation after differentiation of the cell result in different types of dentin: mantle dentin, primary dentin, secondary dentin, and tertiary dentin.
Odontoblast differentiation
Odontoblasts differentiate from cells of the dental papilla. This differentiation occurs in response to signaling molecules and growth factors expressed by the inner enamel epithelium (IEE).
Formation of mantle dentin
The odontoblasts begin secreting an organic matrix around the area directly adjacent to the IEE, closest to the area of the future cusp of a tooth. The organic matrix contains collagen fibers with large diameters (0.1–0.2 μm in diameter). The odontoblasts begin to move toward the center of the tooth, forming an extension called the odontoblast process. Thus, dentin formation proceeds toward the inside of the tooth. The odontoblast process causes the secretion of hydroxyapatite crystals and mineralization of the matrix (mineralization occurs due to matrix vesicles). This area of mineralization is known as mantle dentin and is a layer usually about 20–150 μm thick.
Formation of primary dentin
Whereas mantle dentin forms from the preexisting ground substance of the dental papilla, primary dentin forms through a different process. Odontoblasts increase in size, eliminating the availability of any extracellular resources to contribute to an organic matrix for mineralization. Additionally, the larger odontoblasts cause collagen to be secreted in smaller amounts, which results in more tightly arranged, heterogeneous nucleation that is used for mineralization. Other materials (such as lipids, phosphoproteins, and phospholipids) are also secreted. There is some dispute about the control of mineralization during dentinogenesis.
The dentin in the root of a tooth forms only after the presence of Hertwig epithelial root sheath (HERS), near the cervical loop of the enamel organ. Root dentin is considered different from dentin found in the crown of the tooth (known as coronal dentin) because of the different orientation of collagen fibers, as well as the possible decrease of phosphophoryn levels and less mineralization.
Maturation of dentin, or mineralization of predentin, occurs soon after its apposition and takes place in two phases: primary and secondary. Initially, the calcium hydroxyapatite crystals form as globules, or calcospherules, in the collagen fibers of the predentin, which allows for both the expansion and fusion during the primary mineralization phase. Later, new areas of mineralization occur as globules form in the partially mineralized predentin during the secondary mineralization phase. These new areas of crystal formation are more or less regularly layered on the initial crystals, allowing them to expand, although they fuse incompletely.
In areas where both primary and secondary mineralization have occurred with complete crystalline fusion, these appear as lighter rounded areas on a stained section of dentin and are considered globular dentin. In contrast, the darker arclike areas in a stained section of dentin are considered interglobular dentin. In these areas, only primary mineralization has occurred within the predentin, and the globules of dentin do not fuse completely. Thus, interglobular dentin is slightly less mineralized than globular dentin. Interglobular dentin is especially evident in coronal dentin, near the DEJ, and in certain dental anomalies, such as in dentin dysplasia.
Formation of secondary dentin
Secondary dentin is formed after root formation is finished and occurs at a much slower rate. It is not formed at a uniform rate along the tooth, but instead forms faster along sections closer to the crown of a tooth. This development continues throughout life and accounts for the smaller areas of pulp found in older individuals.
Formation of tertiary dentin
Tertiary dentin is deposited at specific sites in response to injury by odontoblasts or replacement odontoblasts from the pulp depending on the severity of the injury. Tertiary dentin can be divided into reactionary or reparative dentin. Reactionary dentin is formed by odontoblasts when the injury does not damage the odontoblast layer. Reparative dentin is formed by replacement odontoblasts when the injury is so severe that it damages a part of the primary odontoblast layer. Thus a type of tertiary dentin forms in reaction to stimuli, such as attrition or dental caries.
See also
Odontoblasts
Dentin
Animal tooth development
Human tooth development
Dentinogenesis imperfecta
References
Cellular processes
Tooth development | Dentinogenesis | [
"Biology"
] | 1,065 | [
"Cellular processes"
] |
2,197,956 | https://en.wikipedia.org/wiki/Thorium-232 | Thorium-232 (²³²Th) is the main naturally occurring isotope of thorium, with a relative abundance of 99.98%. It has a half-life of 14.05 billion years, which makes it the longest-lived isotope of thorium. It decays by alpha decay to radium-228; its decay chain terminates at stable lead-208.
Thorium-232 is a fertile material; it can capture a neutron to form thorium-233, which subsequently undergoes two successive beta decays to uranium-233, which is fissile. As such, it has been used in the thorium fuel cycle in nuclear reactors; various prototype thorium-fueled reactors have been designed. However, as of 2024, thorium fuel has not been widely adopted for commercial-scale nuclear power.
Natural occurrence
The half-life of thorium-232 (14 billion years) is more than three times the age of the Earth; thorium-232 therefore occurs in nature as a primordial nuclide. Other thorium isotopes occur in nature in much smaller quantities as intermediate products in the decay chains of uranium-238, uranium-235, and thorium-232.
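As a back-of-the-envelope illustration of the primordial-survival argument (not part of the article), the surviving fraction of a nuclide after time t follows N(t)/N₀ = 2^(−t/T½). The half-life comes from the article; Earth's age of ~4.54 billion years is an outside, commonly cited figure.

```python
# Fraction of primordial thorium-232 surviving since the Earth formed,
# using N(t) / N0 = 2 ** (-t / half_life).
half_life_gyr = 14.05     # Th-232 half-life, billions of years (from the article)
earth_age_gyr = 4.54      # approximate age of the Earth, billions of years
fraction_left = 2 ** (-earth_age_gyr / half_life_gyr)
print(f"{fraction_left:.0%} of the original Th-232 remains")  # ~80%
```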
Some minerals that contain thorium include apatite, sphene, zircon, allanite, monazite, pyrochlore, thorite, and xenotime.
Decay
Thorium-232 has a half-life of 14 billion years and mainly decays by alpha decay to radium-228 with a decay energy of 4.0816 MeV. The decay chain follows the thorium series, which terminates at stable lead-208. The intermediates in the thorium-232 decay chain are all relatively short-lived; the longest-lived intermediate decay products are radium-228 and thorium-228, with half lives of 5.75 years and 1.91 years, respectively. All other intermediate decay products have half lives of less than four days.
The intermediate decay products in the thorium-232 decay chain, with decay mode and half-life, are: radium-228 (β−, 5.75 years) → actinium-228 (β−, 6.15 hours) → thorium-228 (α, 1.91 years) → radium-224 (α, 3.63 days) → radon-220 (α, 55.6 seconds) → polonium-216 (α, 0.145 seconds) → lead-212 (β−, 10.6 hours) → bismuth-212 (β− 64.1% / α 35.9%, 60.6 minutes) → polonium-212 (α, 0.30 μs) or thallium-208 (β−, 3.05 minutes) → lead-208 (stable).
Rare decay modes
Although thorium-232 mainly decays by alpha decay, it also undergoes spontaneous fission with a branching ratio of about 1.1×10⁻⁹%. In addition, it is capable of cluster decay, splitting into ytterbium-182, neon-24, and neon-26; the upper limit for the branching ratio of this decay mode is 2.78×10⁻¹⁰%. Double beta decay to uranium-232 is also theoretically possible, but has not been observed.
Use in nuclear power
Thorium-232 is not fissile; it therefore cannot be used directly as fuel in nuclear reactors. However, ²³²Th is fertile: it can capture a neutron to form ²³³Th, which undergoes beta decay with a half-life of 21.8 minutes to ²³³Pa. This nuclide subsequently undergoes beta decay with a half-life of 27 days to form fissile ²³³U.
One potential advantage of a thorium-based nuclear fuel cycle is that thorium is three times more abundant than uranium, the current fuel for commercial nuclear reactors. It is also more difficult to produce material suitable for nuclear weapons from the thorium fuel cycle compared to the uranium fuel cycle. Some proposed designs for thorium-fueled nuclear reactors include the molten salt reactor and a fast neutron reactor, among others. Although thorium-based nuclear reactors have been proposed since the 1960s and several prototype reactors have been built, there has been relatively little research on the thorium fuel cycle compared to the more established uranium fuel cycle; thorium-based nuclear power has not seen large-scale commercial use as of 2024. Nevertheless, some countries such as India have actively pursued thorium-based nuclear power.
References
Actinides
Isotopes of thorium
Fertile materials
IARC Group 1 carcinogens
Radionuclides used in radiometric dating | Thorium-232 | [
"Chemistry"
] | 773 | [
"Isotopes of thorium",
"Isotopes",
"Radionuclides used in radiometric dating"
] |
2,198,300 | https://en.wikipedia.org/wiki/Phosphaalkyne | In chemistry, a phosphaalkyne (IUPAC name: alkylidynephosphane) is an organophosphorus compound containing a triple bond between phosphorus and carbon with the general formula R-C≡P. Phosphaalkynes are the heavier congeners of nitriles, though, due to the similar electronegativities of phosphorus and carbon, they possess reactivity patterns reminiscent of alkynes. Due to their high reactivity, phosphaalkynes are not found naturally on earth, but the simplest phosphaalkyne, phosphaethyne (H-C≡P), has been observed in the interstellar medium.
Synthesis
From phosphine gas
The first preparation of a phosphaalkyne was achieved in 1961, when Thurman Gier produced phosphaethyne by passing phosphine gas at low pressure over an electric arc generated between two carbon electrodes. Condensation of the gaseous products in a –196 °C (–321 °F) trap revealed that the reaction had produced acetylene, ethylene, and phosphaethyne, the last of which was identified by infrared spectroscopy.
By elimination reactions
Elimination of hydrogen halides
Following the initial synthesis of phosphaethyne, it was realized that the same compound can be prepared more expeditiously via the flash pyrolysis of methyldichlorophosphine (CH3PCl2), resulting in the loss of two equivalents of hydrogen chloride. This methodology has been utilized to synthesize numerous substituted phosphaalkynes, including the methyl, vinyl, chloride, and fluoride derivatives. Fluoromethylidynephosphane (F-C≡P) can also be prepared via the potassium hydroxide promoted dehydrofluorination of trifluoromethylphosphine (CF3PH2). It is speculated that these reactions generally proceed via an intermediate phosphaethylene with general structure RClC=PH. This hypothesis has found experimental support in the observation of F2C=PH by 31P NMR spectroscopy during the synthesis of F-C≡P.
Elimination of chlorotrimethylsilane
The high strength of silicon–halogen bonds can be leveraged toward the synthesis of phosphaalkynes. Heating bis-trimethylsilylated methyldichlorophosphines ((SiMe3)2CRPCl2) under vacuum results in the expulsion of two equivalents of chlorotrimethylsilane and the ultimate formation of a new phosphaalkyne. This synthetic strategy has been applied in the synthesis of 2-phenylphosphaacetylene and 2-trimethylsilylphosphaacetylene. As in the case of synthetic routes reliant upon the elimination of a hydrogen halide, this route is suspected to involve an intermediate phosphaethylene species containing a C=P double bond, though such a species has not yet been observed.
Elimination of hexamethyldisiloxane
Like the preceding method, the most popular method for synthesizing phosphaalkynes is reliant upon the expulsion of products containing strong silicon-element bonds. Specifically, it is possible to synthesize phosphaalkynes via the elimination of hexamethyldisiloxane (HMDSO) from certain silylated phosphaalkenes with the general structure RO(SiMe3)C=PSiMe3. These phosphaalkenes are formed rapidly following the synthesis of the appropriate acyl bis-trimethylsilylphosphine, which undergoes a rapid [1,3]-silyl shift to produce the relevant phosphaalkene. This synthetic strategy is particularly appealing because the precursors (an acyl chloride and tris-trimethylsilylphosphine or bis-trimethylsilylphosphide) are either readily available or simple to synthesize.
This method has been utilized to produce a variety of kinetically stable phosphaalkynes, including aryl, tertiary alkyl, secondary alkyl, and even primary alkyl phosphaalkynes in good yields.
By rearrangement of a putative phospha-isocyanide
Dihalophosphaalkenes of the general form R-P=CX2, where X is Cl, Br, or I, undergo lithium-halogen exchange with organolithium reagents to yield intermediates of the form R-P=CXLi. These species then eject the corresponding lithium halide salt, LiX, to putatively give a phospha-isocyanide, which can rearrange, much in the same way as an isocyanide, to yield the corresponding phosphaalkyne. This rearrangement has been evaluated using the tools of computational chemistry, which has shown that this isomerization process should proceed very rapidly, in line with current experimental evidence showing that phosphaisonitriles are unobservable intermediates, even at –85 °C (–121 °F).
Other methods
It has been demonstrated by Cummins and coworkers that thermolysis of compounds of the general form C14H10PC(=PPh3)R leads to the extrusion of C14H10 (anthracene), triphenylphosphine, and the corresponding substituted phosphaacetylene: R-C≡P. Unlike the previous method, which derives the phosphaalkyne substituent from an acyl chloride, this method derives the substituent from a Wittig reagent.
Structure and bonding
The carbon-phosphorus triple bond in phosphaalkynes represents an exception to the so-called "double bond rule", which would suggest that phosphorus tends not to form multiple bonds to carbon, and the nature of bonding within phosphaalkynes has therefore attracted much interest from synthetic and theoretical chemists. For simple phosphaalkynes such as H-C≡P and Me-C≡P, the carbon-phosphorus bond length is known by microwave spectroscopy, and for certain more complex phosphaalkynes, these bond lengths are known from single-crystal X-ray diffraction experiments. These bond lengths can be compared to the theoretical bond length for a carbon-phosphorus triple bond predicted by Pekka Pyykkö of 1.54 Å. By bond length metrics, most structurally characterized alkyl and aryl substituted phosphaalkynes contain triple bonds between carbon and phosphorus, as their bond lengths are either equal to or less than the theoretical bond distance.
The carbon-phosphorus bond order in phosphaalkynes has also been the subject of computational inquiry, where quantum chemical calculations have been utilized to determine the nature of bonding in these molecules from first principles. In this context, natural bond orbital (NBO) theory has provided valuable insight into the bonding within these molecules. Lucas and coworkers have investigated the electronic structure of various substituted phosphaalkynes, including the cyaphide anion (C≡P–), using NBO, natural resonance theory (NRT), and quantum theory of atoms in molecules (QTAIM) in an attempt to better describe the bonding in these molecules. For the simplest systems, C≡P– and H-C≡P, NBO analysis suggests that the only relevant resonance structure is that in which there is a triple bond between carbon and phosphorus. For more complex molecules, such as Me-C≡P and (Me)3C-C≡P, the triple bonded resonance structure is still the most relevant, but accounts for only some of the overall electron density within the molecule (81.5% and 72.1%, respectively). This is due to interactions between the two carbon-phosphorus pi-bonds and the C-H or C-C sigma-bonds of the substituents, which can be visualized by inspecting the C-P pi-bonding molecular orbitals in these molecules.
Reactivity
Phosphaalkynes possess diverse reactivity profiles, and can be utilized in the synthesis of various phosphorus-containing saturated or unsaturated heterocyclic compounds.
Cycloaddition reactivity
One of the most developed areas of phosphaalkyne chemistry is that of cycloadditions. Like other multiply bonded molecular fragments, phosphaalkynes undergo myriad reactions such as [1+2] cycloadditions, [3+2] cycloadditions, and [4+2] cycloadditions. This reactivity is summarized in graphical format below, which includes some examples of 1,2-addition reactivity (which is not a form of cycloaddition).
Oligomerization
The pi-bonds of phosphaalkynes are weaker than most carbon-phosphorus sigma bonds, rendering phosphaalkynes reactive with respect to the formation of oligomeric species containing more sigma bonds. These oligomerization reactions are triggered thermally, or can be catalyzed by transition or main-group metals.
Uncatalyzed
Phosphaalkynes with small substituents (H, F, Me, Ph, etc.) undergo decomposition at or below room temperature by way of polymerization/oligomerization to yield mixtures of products which are challenging to characterize. The same is largely true of kinetically stable phosphaalkynes, which undergo oligomerization reactions at elevated temperature. In spite of the challenges associated with isolating and identifying the products of these oligomerizations, however, cuboidal tetramers of tert-butylphosphaalkyne and tert-pentylphosphaalkyne have been isolated (albeit in low yield) and identified following heating of the respective phosphaalkyne.
Computational chemistry has proved a valuable tool for studying these synthetically complex reactions, and it has been shown that while the formation of phosphaalkyne dimers is thermodynamically favorable, the formation of trimers, tetramers, and higher order oligomeric species tends to be more favorable, accounting for the generation of intractable mixtures upon inducing oligomerization of phosphaalkynes experimentally.
Metal-mediated
Unlike thermally initiated phosphaalkyne oligomerization reactions, transition metals and main group metals are capable of oligomerizing phosphaalkynes in a controlled manner, and have led to the isolation of phosphaalkyne dimers, trimers, tetramers, pentamers, and even hexamers. A nickel complex is capable of catalytically homocoupling tBu-C≡P to yield a diphosphatetrahedrane.
See also
Arsaalkyne
Cyaphide
References
Functional groups
Organophosphanes | Phosphaalkyne | [
"Chemistry"
] | 2,312 | [
"Functional groups"
] |
2,198,661 | https://en.wikipedia.org/wiki/Trypanosoma%20brucei | Trypanosoma brucei is a species of parasitic kinetoplastid belonging to the genus Trypanosoma that is present in sub-Saharan Africa. Unlike other protozoan parasites that normally infect blood and tissue cells, it is exclusively extracellular and inhabits the blood plasma and body fluids. It causes deadly vector-borne diseases: African trypanosomiasis or sleeping sickness in humans, and animal trypanosomiasis or nagana in cattle and horses. It is a species complex grouped into three subspecies: T. b. brucei, T. b. gambiense and T. b. rhodesiense. The first is a parasite of non-human mammals and causes nagana, while the latter two are zoonotic infecting both humans and animals and cause African trypanosomiasis.
T. brucei is transmitted between mammal hosts by an insect vector belonging to different species of tsetse fly (Glossina). Transmission occurs by biting during the insect's blood meal. The parasites undergo complex morphological changes as they move between insect and mammal over the course of their life cycle. The mammalian bloodstream forms are notable for their cell surface proteins, variant surface glycoproteins, which undergo remarkable antigenic variation, enabling persistent evasion of host adaptive immunity leading to chronic infection. T. brucei is one of only a few pathogens known to cross the blood brain barrier. There is an urgent need for the development of new drug therapies, as current treatments can have severe side effects and can prove fatal to the patient.
Whilst not historically regarded as T. brucei subspecies due to their different means of transmission, clinical presentation, and loss of kinetoplast DNA, genetic analyses reveal that T. equiperdum and T. evansi are evolved from parasites very similar to T. b. brucei, and are thought to be members of the brucei clade.
The parasite was discovered in 1894 by Sir David Bruce, after whom the scientific name was given in 1899.
History and discovery
Early records
Sleeping sickness in animals was described in ancient Egyptian writings. During the Middle Ages, Arabian traders noted the prevalence of sleeping sickness among Africans and their dogs. It was a major infectious disease in southern and eastern Africa in the 19th century. The Zulu Kingdom (now part of South Africa) was severely struck by the disease, which became known to the British as nagana, a Zulu word for to be low or depressed in spirit. In other parts of Africa, Europeans called it the "fly disease."
John Atkins, an English naval surgeon, gave the first medical description of human sleeping sickness in 1734. He attributed deaths in Guinea, which he called "sleepy distemper," to the infection. Another English physician, Thomas Masterman Winterbottom, gave a clearer description of the symptoms from Sierra Leone in 1803. Winterbottom described a key feature of the disease as swollen posterior cervical lymph nodes; slaves who developed such swellings were ruled unfit for trade. The symptom is eponymously known as "Winterbottom's sign."
Discovery of the parasite
In 1894, the Royal Army Medical Corps appointed David Bruce, who at the time was assistant professor of pathology at the Army Medical School in Netley with the rank of Captain, to investigate a disease known as nagana in South Africa. The disease caused severe problems among the local cattle and British Army horses. On 27 October 1894, Bruce and his microbiologist wife Mary Elizabeth Bruce (née Steele) moved to Ubombo Hill, where the disease was most prevalent.
On the sixth day of investigation, Bruce identified parasites in the blood of diseased cows. He initially noted them as a kind of filaria (tiny roundworms), but by the end of the year established that the parasites were "haematozoa" (protozoans) and were the cause of nagana. It was the discovery of Trypanosoma brucei. The scientific name was created by British zoologists Henry George Plimmer and John Rose Bradford in 1899, printed as Trypanosoma brucii due to a printer's error. The genus Trypanosoma had already been introduced by Hungarian physician David Gruby in his description of T. sanguinis, a species he discovered in frogs in 1843.
Outbreaks
In Uganda, the first case of human infection was reported in 1898. It was followed by an outbreak in 1900. By 1901, it became severe, with a death toll estimated at about 20,000. More than 250,000 people died in the epidemic that lasted for two decades. The disease was commonly popularised as "negro lethargy." It was not known whether human sleeping sickness and nagana were the same disease or two diseases caused by similar parasites. Even the observations of Forde and Dutton did not indicate that the trypanosome was related to sleeping sickness.
Sleeping Sickness Commission
The Royal Society constituted a three-member Sleeping Sickness Commission on 10 May 1902 to investigate the epidemic in Uganda. The Commission comprised George Carmichael Low from the London School of Hygiene and Tropical Medicine as the leader, his colleague Aldo Castellani and Cuthbert Christy, a medical officer on duty in Bombay, India. At the time, a debate remained on the etiology, some favoured bacterial infection while some believed as helminth infection. The first investigation focussed on Filaria perstans (later renamed Mansonella perstans), a small roundworm transmitted by flies, and bacteria as possible causes, only to discover that the epidemic was not related to these pathogens. The team was described as an "ill-assorted group" and a "queer lot", and the expedition "a failure." Low, whose conduct was described as "truculent and prone to take offence," left the Commission and Africa after three months.
In February 1902, the British War Office, following a request from the Royal Society, appointed David Bruce to lead the second Sleeping Sickness Commission. With David Nunes Nabarro (from the University College Hospital), Bruce and his wife joined Castellani and Christy on 16 March. In November 1902, Castellani had found trypanosomes in the cerebrospinal fluid of an infected person. He was convinced that the trypanosome was the causative parasite of sleeping sickness. Like Low's, his conduct has been criticised, and the Royal Society refused to publish his report. He was further infuriated when Bruce advised him not to make rash conclusions without further evidence, as there were many other parasites to consider. Castellani left Africa in April and published his report as "On the discovery of a species of Trypanosoma in the cerebrospinal fluid of cases of sleeping sickness" in The Lancet. By then the Royal Society had already published the report. By August 1903, Bruce and his team established that the disease was transmitted by the tsetse fly, Glossina palpalis. However, Bruce did not understand the trypanosome life cycle and believed that the parasites were simply transmitted from one person to another.
Around the same time, Germany sent an expeditionary team led by Robert Koch to investigate the epidemic in Togo and East Africa. In 1909, one of the team members, Friedrich Karl Kleine discovered that the parasite had developmental stages in the tsetse flies. Bruce, in the third Sleeping Sickness Commission (1908–1912) that included Albert Ernest Hamerton, H.R. Bateman and Frederick Percival Mackie, established the basic developmental cycle through which the trypanosome in tsetse fly must pass. An open question, noted by Bruce at this stage, was how the trypanosome finds its way to the salivary glands. Muriel Robertson, in experiments carried out between 1911 and 1912, established how ingested trypanosomes finally reach the salivary glands of the fly.
Discovery of human trypanosomes
British Colonial Surgeon Robert Michael Forde was the first to find the parasite in a human. He found it in an English steamboat captain who was admitted to a hospital at Bathurst, Gambia, in 1901. His report in 1902 indicates that he believed it to be a kind of filarial worm. From the same patient, Forde's colleague Joseph Everett Dutton identified it as a protozoan belonging to the genus Trypanosoma. Knowing its distinct features, Dutton proposed a new species name, Trypanosoma gambiense, in 1902.
Another human trypanosome (now called T. brucei rhodesiense) was discovered by British parasitologists John William Watson Stephens and Harold Benjamin Fantham. In 1910, Stephens noted in his experimental infection in rats that the trypanosome, obtained from an individual from Northern Rhodesia (later Zambia), was not the same as T. gambiense. The source of the parasite, an Englishman travelling in Rhodesia was found with the blood parasites in 1909, and was transported to and admitted at the Royal Southern Hospital in Liverpool under the care of Ronald Ross. Fantham described the parasite's morphology and found that it was a different trypanosome.
Species
T. brucei is a species complex that includes:
T. brucei gambiense, which causes slow-onset chronic trypanosomiasis in humans. It is most common in central and western Africa, where humans are thought to be the primary reservoir. In 1973, David Hurst Molyneux was the first to find infection of this strain in wildlife and domestic animals. Since 2002, there have been several reports showing that animals, including cattle, are also infected. It is responsible for 98% of all human African trypanosomiasis, and is roughly 100% fatal if untreated.
T. brucei rhodesiense, which causes fast-onset acute trypanosomiasis in humans. A highly zoonotic parasite, it is prevalent in southern and eastern Africa, where game animals and livestock are thought to be the primary reservoir.
T. brucei brucei, which causes animal trypanosomiasis, along with several other species of Trypanosoma. T. b. brucei is not infective to humans due to its susceptibility to lysis by trypanosome lytic factor-1 (TLF-1). However, it is closely related to, and shares fundamental features with, the human-infective subspecies. Only rarely can T. b. brucei infect a human.
The subspecies cannot be distinguished by their structure, as they are all identical under the microscope; geographical location is the main distinction. Molecular markers have been developed for individual identification. The serum resistance-associated (SRA) gene is used to differentiate T. b. rhodesiense from the other subspecies. The TgsGP gene, found only in type 1 T. b. gambiense, is also a specific marker for T. b. gambiense strains.
The subspecies lack many of the features commonly considered necessary to constitute monophyly. As such, Lukeš et al., 2022 propose reclassifying them by ecotype.
Etymology
The genus name is derived from two Greek words: τρυπανον (trypanon or trupanon), which means "borer" or "auger", referring to the corkscrew-like movement; and σῶμα (sôma), meaning "body." The specific name is after David Bruce, who discovered the parasites in 1894. The subspecies, the human strains, are named after the regions in Africa where they were first identified: T. brucei gambiense was described from an Englishman in Gambia in 1901; T. brucei rhodesiense was found from another Englishman in Northern Rhodesia in 1909.
Structure
T. brucei is a typical unicellular eukaryotic cell, and measures 8 to 50 μm in length. It has an elongated body with a streamlined, tapered shape. Its cell membrane (called the pellicle) encloses the cell organelles, including the nucleus, mitochondria, endoplasmic reticulum, Golgi apparatus, and ribosomes. In addition, there is an unusual organelle called the kinetoplast, which is a complex of thousands of interlinked circles of mitochondrial DNA known as mini- and maxicircles. The kinetoplast lies near the basal body, from which it is indistinguishable under the microscope. From the basal body arises a single flagellum that runs toward the anterior end. Along the body surface, the flagellum is attached to the cell membrane, forming an undulating membrane. Only the tip of the flagellum is free at the anterior end. The cell surface of the bloodstream form features a dense coat of variant surface glycoproteins (VSGs), which is replaced by an equally dense coat of procyclins when the parasite differentiates into the procyclic phase in the tsetse fly midgut.
Trypanosomatids show several different classes of cellular organisation of which two are adopted by T. brucei at different stages of the life cycle:
Epimastigote, which is found in tsetse fly. Its kinetoplast and basal body lie anterior to the nucleus, with a long flagellum attached along the cell body. The flagellum starts from the centre of the body.
Trypomastigote, which is found in mammalian hosts. The kinetoplast and basal body are posterior of nucleus. The flagellum arises from the posterior end of the body.
These names are derived from the Greek mastig- meaning whip, referring to the trypanosome's whip-like flagellum. The trypanosome flagellum has two main structures. It is made up of a typical flagellar axoneme, which lies parallel to the paraflagellar rod, a lattice structure of proteins unique to the kinetoplastids, euglenoids and dinoflagellates.
The microtubules of the flagellar axoneme lie in the normal 9+2 arrangement, orientated with the + at the anterior end and the − in the basal body. The cytoskeletal structure extends from the basal body to the kinetoplast. The flagellum is bound to the cytoskeleton of the main cell body by four specialised microtubules, which run parallel and in the same direction to the flagellar tubulin.
The flagellar function is twofold — locomotion via oscillations along the attached flagellum and cell body in human blood stream and tsetse fly gut, and attachment to the salivary gland epithelium of the fly during the epimastigote stage. The flagellum propels the body in such a way that the axoneme generates the oscillation and a flagellar wave is created along the undulating membrane. As a result, the body moves in a corkscrew pattern. In flagella of other organisms, the movement starts from the base towards the tip, while in T. brucei and other trypanosomatids, the beat originates from the tip and progresses towards the base, forcing the body to move towards the direction of the tip of the flagellum.
Life cycle
T. brucei completes its life cycle between tsetse fly (of the genus Glossina) and mammalian hosts, including humans, cattle, horses, and wild animals. In stressful environments, T. brucei produces exosomes containing the spliced leader RNA and uses the endosomal sorting complexes required for transport (ESCRT) system to secrete them as extracellular vesicles. When absorbed by other trypanosomes these EVs cause repulsive movement away from the area and so away from bad environments.
In mammalian host
Infection occurs when a vector tsetse fly bites a mammalian host. The fly injects metacyclic trypomastigotes into the skin tissue, from which the trypomastigotes enter the lymphatic system and then the bloodstream. The initial trypomastigotes are short and stumpy (SS). They are protected from the host's immune system by variant surface glycoproteins on their body surface, which undergo antigenic variation. Once inside the bloodstream, they grow into long and slender forms (LS) and multiply by binary fission. Some of the daughter cells then become short and stumpy again, while some remain as intermediate forms, representing a transitional stage between the long and short forms. The long slender forms are able to penetrate the blood vessel endothelium and invade extravascular tissues, including the central nervous system (CNS) and, in pregnant women, the placenta.
Sometimes, wild animals can be infected by the tsetse fly and act as reservoirs. In these animals, the parasites do not produce disease, but live parasites can be transmitted back to the normal hosts. Besides preparing the parasite to be taken up and vectored to another host by a tsetse fly, the transition from LS to SS in the mammal serves to prolong the host's lifespan: by controlling parasitemia, it increases the total duration over which any particular infected host can transmit the parasite.
In tsetse fly
Unlike anopheline mosquitos and sandflies that transmit other protozoan infections in which only females are involved, both sexes of tsetse flies are blood feeders and equally transmit trypanosomes. The short and stumpy trypomastigotes (SS) are taken up by tsetse flies during a blood meal. Survival in the tsetse midgut is one reason for the particular adaptations of the SS stage. The trypomastigotes enter the midgut of the fly where they become procyclic trypomastigotes as they replace their VSG with other protein coats called procyclins. Because the fly faces digestive damage from immune factors in the bloodmeal, it produces serpins to suppress the infection. The serpins including GmmSRPN3, GmmSRPN5, GmmSRPN9, and especially GmmSRPN10 are then hijacked by the parasite to aid its own midgut infection, using them to inactivate bloodmeal trypanolytic factors which would otherwise make the fly host inhospitable.
The procyclic trypomastigotes cross the peritrophic matrix, undergo slight elongation, and migrate to the anterior part of the midgut as non-proliferative long mesocyclic trypomastigotes. As they reach the proventriculus, they become thinner and undergo cytoplasmic rearrangement to give rise to proliferative epimastigotes. The epimastigotes divide asymmetrically to produce long and short epimastigotes. The long epimastigotes cannot move to other places and simply die off by apoptosis. The short epimastigotes migrate from the proventriculus via the foregut and proboscis to the salivary glands, where they attach to the salivary gland epithelium. Not even all the short forms succeed in completing the migration to the salivary glands, as most of them perish on the way; only up to five may survive.
In the salivary glands, the survivors undergo phases of reproduction. The first cycle is an equal mitosis by which a mother cell produces two similar daughter epimastigotes, which remain attached to the epithelium. This phase is the main mode of reproduction in first-stage infection and ensures a sufficient number of parasites in the salivary gland. The second cycle, which usually occurs in late-stage infection, involves unequal mitosis that produces two different daughter cells from the mother epimastigote. One daughter is an epimastigote that remains non-infective, and the other is a trypomastigote. The trypomastigote detaches from the epithelium and transforms into a short and stumpy trypomastigote. The surface procyclins are replaced with VSGs, producing the infective metacyclic trypomastigotes. Complete development in the fly takes about 20 days. The metacyclic trypomastigotes are injected into the mammalian host along with the saliva on biting; for this reason the parasites are known as salivarian.
In the case of T. b. brucei infecting Glossina palpalis gambiensis, the parasite changes the proteome contents of the fly's head and causes behavioral changes such as unnecessarily increased feeding frequency, which increases transmission opportunities. This is related to altered glucose metabolism that causes a perceived need for more calories; the metabolic change, in turn, is due to the complete absence of glucose-6-phosphate 1-dehydrogenase in infected flies. Monoamine neurotransmitter synthesis is also altered: production of aromatic L-amino acid decarboxylase, which is involved in dopamine and serotonin synthesis, and of the α-methyldopa hypersensitive protein was induced. This is similar to the alterations in the head proteomes of other dipteran vectors infected by other eukaryotic parasites of mammals.
Reproduction
Binary fission
The reproduction of T. brucei is unusual compared to most eukaryotes. The nuclear membrane remains intact and the chromosomes do not condense during mitosis. The basal body, unlike the centrosome of most eukaryotic cells, does not play a role in the organisation of the spindle and instead is involved in division of the kinetoplast. The events of reproduction are:
The basal body duplicates and both remain associated with the kinetoplast. Each basal body forms a separate flagellum.
Kinetoplast DNA undergoes synthesis then the kinetoplast divides coupled with separation of the two basal bodies.
Nuclear DNA undergoes synthesis while a new flagellum extends from the younger, more posterior, basal body.
The nucleus undergoes mitosis.
Cytokinesis progresses from the anterior to posterior.
Division completes with abscission.
Meiosis
In the 1980s, DNA analyses of the developmental stages of T. brucei started to indicate that the trypomastigote in the tsetse fly undergoes meiosis, i.e., a sexual reproduction stage. But it is not always necessary for a complete life cycle. The existence of meiosis-specific proteins was reported in 2011. The haploid gametes (daughter cells produced after meiosis) were discovered in 2014. The haploid trypomastigote-like gametes can interact with each other via their flagella and undergo cell fusion (the process is called syngamy). Thus, in addition to binary fission, T. brucei can multiply by sexual reproduction. Trypanosomes belong to the supergroup Excavata and are one of the earliest diverging lineages among eukaryotes. The discovery of sexual reproduction in T. brucei supports the hypothesis that meiosis and sexual reproduction are ancestral and ubiquitous features of eukaryotes.
Infection and pathogenicity
The insect vectors for T. brucei are different species of tsetse fly (genus Glossina). The major vectors of T. b. gambiense, causing West African sleeping sickness, are G. palpalis, G. tachinoides, and G. fuscipes. While the principal vectors of T. b. rhodesiense, causing East African sleeping sickness, are G. morsitans, G. pallidipes, and G. swynnertoni. Animal trypanosomiasis is transmitted by a dozen species of Glossina.
In later stages of a T. brucei infection of a mammalian host, the parasite may migrate from the bloodstream to also infect the lymph and cerebrospinal fluid. It is with this tissue invasion that the parasites produce the sleeping sickness.
In addition to the major form of transmission via the tsetse fly, T. brucei may be transferred between mammals via bodily fluid exchange, such as by blood transfusion or sexual contact, although this is thought to be rare. Newborn babies can be infected (vertical or congenital transmission) from infected mothers.
Chemotherapy
There are four drugs generally recommended for the first-line treatment of African trypanosomiasis: suramin, developed in 1921; pentamidine, developed in 1941; melarsoprol, developed in 1949; and eflornithine, developed in 1990. These drugs are not fully effective and are toxic to humans. In addition, drug resistance has developed in the parasites against all the drugs. The drugs are of limited application since they are effective only against specific strains of T. brucei and specific life cycle stages of the parasites. Suramin is used only for first-stage infection of T. b. rhodesiense, pentamidine for first-stage infection of T. b. gambiense, and eflornithine for second-stage infection of T. b. gambiense. Melarsoprol is the only drug effective against the two types of parasite in both infection stages, but is highly toxic, such that 5% of treated individuals die of brain damage (reactive encephalopathy). Another drug, nifurtimox, recommended for Chagas disease (American trypanosomiasis), is itself a weak drug, but in combination with eflornithine it is used as the first-line medication against second-stage infection of T. b. gambiense.
Historically, arsenic and mercuric compounds were introduced in the early 20th century, with success particularly in animal infections. German physician Paul Ehrlich and his Japanese associate Kiyoshi Shiga developed the first specific trypanocidal drug in 1904 from a dye, trypan red, which they named Trypanroth. These chemical preparations were effective only at high and toxic dosages, and were not suitable for clinical use.
Animal trypanosomiasis is treated with six drugs: diminazene aceturate, homidium (homidium bromide and homidium chloride), isometamidium chloride, melarsomine, quinapyramine, and suramin. They are all highly toxic to animals, and drug resistance is prevalent. Homidium was the first prescription anti-trypanosomal drug. It was developed as a modified compound of phenanthridine, which was found in 1938 to have trypanocidal activity against the bovine parasite T. congolense. Among its products, dimidium bromide and its derivatives were first used in 1948 in animal cases in Africa, and became known as homidium (or as ethidium bromide in molecular biology).
Drug development
The major challenge against the human disease has been to find drugs that readily pass the blood-brain barrier. The latest drug that has come into clinical use is fexinidazole, but promising results have also been obtained with the benzoxaborole drug acoziborole (SCYX-7158). This drug is currently under evaluation as a single-dose oral treatment, which is a great advantage compared to currently used drugs. Another research field that has been extensively studied in Trypanosoma brucei is targeting its nucleotide metabolism. The nucleotide metabolism studies have led both to the development of adenosine analogues that look promising in animal studies, and to the finding that downregulation of the P2 adenosine transporter is a common way to acquire partial drug resistance against the melaminophenyl arsenical and diamidine drug families (containing melarsoprol and pentamidine, respectively). This is particularly a problem with the veterinary drug diminazene aceturate. Drug uptake and degradation are two major issues to consider to avoid drug resistance development. In the case of nucleoside analogues, they need to be taken up by the P1 nucleoside transporter (instead of P2), and they also need to be resistant against cleavage in the parasite.
Phytochemicals. Some phytochemicals have shown research promise against the T. b. brucei strain. Aderbauer et al., 2008 and Umar et al., 2010 find Khaya senegalensis effective in vitro, and Ibrahim et al., 2013 and 2008 find it effective in vivo (in rats). Ibrahim et al., 2013 find that a lower dose reduces parasitemia by this subspecies and that a higher dose is curative and prevents injury.
Distribution
T. brucei is found where its tsetse fly vectors are prevalent in continental Africa, that is to say, in tropical rainforest (Af), tropical monsoon (Am), and tropical savannah (Aw) areas. Hence, the equatorial region of Africa is called the "sleeping sickness" belt. However, the specific type of the trypanosome differs according to geography. T. b. rhodesiense is found primarily in East Africa (Botswana, Democratic Republic of the Congo, Ethiopia, Kenya, Malawi, Tanzania, Uganda and Zimbabwe), while T. b. gambiense is found in Central and West Africa.
Impact
T. brucei is a major cause of livestock disease in sub-Saharan Africa. It is thus of tremendous veterinary concern and one of the greatest limitations on agriculture in Africa and the economic life of sub-Saharan Africa.
Evolution
Trypanosoma brucei gambiense evolved from a single progenitor ~10,000 years ago. It is evolving asexually and its genome shows the Meselson effect.
Genetics
There are two subpopulations of T. b. gambiense, two distinct groups that differ in genotype and phenotype. Group 2 is more akin to T. b. brucei than is group 1.
All T. b. gambiense are resistant to killing by a serum component, trypanosome lytic factor (TLF), of which there are two types: TLF-1 and TLF-2. Group 1 T. b. gambiense parasites avoid uptake of the TLF particles, while those of group 2 are able to either neutralize or compensate for the effects of TLF.
In contrast, resistance in T. b. rhodesiense is dependent upon the expression of a serum resistance associated (SRA) gene. This gene is not found in T. b. gambiense.
Genome
The genome of T. brucei is made up of:
11 pairs of large chromosomes of 1 to 6 megabase pairs.
3–5 intermediate chromosomes of 200 to 500 kilobase pairs.
Around 100 minichromosomes of around 50 to 100 kilobase pairs. These may be present in multiple copies per haploid genome.
Most genes are held on the large chromosomes, with the minichromosomes carrying only VSG genes. The genome has been sequenced and is available on GeneDB.
The mitochondrial genome is found condensed into the kinetoplast, an unusual feature unique to the kinetoplastid protozoans. The kinetoplast and the basal body of the flagellum are strongly associated via a cytoskeletal structure known as the tripartite attachment complex.
In 1993, a new base, β-D-glucopyranosyloxymethyluracil (base J), was identified in the nuclear DNA of T. brucei.
VSG coat
The surface of T. brucei and other species of trypanosomes is covered by a dense external coat called variant surface glycoprotein (VSG). VSGs are 60-kDa proteins which are densely packed (~5 × 10⁶ molecules) to form a 12–15 nm surface coat. VSG dimers make up about 90% of all cell surface proteins in trypanosomes, and ~10% of total cell protein.
This VSG coat enables an infecting T. brucei population to persistently evade the host's immune system, allowing chronic infection. VSG is highly immunogenic, and an immune response raised against a specific VSG coat rapidly kills trypanosomes expressing this variant. Antibody-mediated trypanosome killing can also be observed in vitro by a complement-mediated lysis assay. However, with each cell division there is a possibility that one or both of the progeny will switch expression to change the VSG that is being expressed. The frequency of VSG switching has been measured to be approximately 0.1% per division. As T. brucei populations can peak at a size of 10¹¹ within a host, this rapid rate of switching ensures that the parasite population is typically highly diverse. Because host immunity against a specific VSG does not develop immediately, some parasites will have switched to an antigenically distinct VSG variant, and can go on to multiply and continue the infection. The clinical effect of this cycle is successive 'waves' of parasitemia (trypanosomes in the blood).
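The "waves of parasitemia" dynamic can be illustrated with a deliberately simplified toy simulation in Python (not a published model): the 0.1% per-division switching probability comes from the text above, while the starting population, generation count, and the immune clearance delay LAG are invented for illustration.

```python
import random

SWITCH = 0.001          # ~0.1% VSG switching probability per division (from text)
GENERATIONS = 60
LAG = 5                 # hypothetical delay before immunity to a seen VSG clears it

pop = {0: 1000}         # variant id -> parasite count
first_seen = {0: 0}     # generation at which each variant first appeared

for gen in range(GENERATIONS):
    nxt = {}
    for vsg, n in pop.items():
        if gen - first_seen[vsg] >= LAG:
            continue                          # immune response clears this variant
        for _ in range(n):                    # each parasite divides in two
            for _child in range(2):
                v = vsg
                if random.random() < SWITCH:
                    v = max(first_seen) + 1   # switch to a fresh, unseen VSG
                    first_seen[v] = gen
                nxt[v] = nxt.get(v, 0) + 1
    pop = nxt
    print(gen, sum(pop.values()), len(pop))   # total count rises and crashes in waves
```

Each dominant variant grows exponentially until the (delayed) immune response clears it, after which the rare switched variants seed the next wave, mirroring the successive peaks of trypanosomes in the blood described above.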
Expression of VSG genes occurs through a number of mechanisms yet to be fully understood. The expressed VSG can be switched either by activating a different expression site (and thus changing to express the VSG in that site), or by changing the VSG gene in the active site to a different variant. The genome contains many hundreds if not thousands of VSG genes, both on minichromosomes and in repeated sections ('arrays') in the interior of the chromosomes. These are transcriptionally silent, typically with omitted sections or premature stop codons, but are important in the evolution of new VSG genes. It is estimated up to 10% of the T. brucei genome may be made up of VSG genes or pseudogenes. It is thought that any of these genes can be moved into the active site by recombination for expression. VSG silencing is largely due to the effects of histone variants H3.V and H4.V. These histones cause changes in the three-dimensional structure of the T. brucei genome that result in a lack of expression. VSG genes are typically located in the subtelomeric regions of the chromosomes, which makes it easier for them to be silenced when they are not being used. It remains unproven whether the regulation of VSG switching is purely stochastic or whether environmental stimuli affect switching frequency. Switching is linked to two factors: variation in activation of individual VSG genes, and differentiation to the "short stumpy" stage (triggered by conditions of high population density), which is the nonreproductive, interhost transmission stage. It also remains unexplained how this transition is timed and how the next surface protein gene is chosen. These questions of antigenic variation in T. brucei and other parasites are among the most interesting in the field of infection.
Killing by human serum and resistance to human serum killing
Trypanosoma brucei brucei (as well as related species T. equiperdum and T. evansi) is not human infective because it is susceptible to innate immune system 'trypanolytic' factors present in the serum of some primates, including humans. These trypanolytic factors have been identified as two serum complexes, designated trypanolytic factors TLF-1 and TLF-2, both of which contain haptoglobin-related protein (HPR) and apolipoprotein L1 (ApoL1). TLF-1 is a member of the high-density lipoprotein family of particles while TLF-2 is a related high molecular weight serum protein-binding complex. The protein components of TLF-1 are haptoglobin-related protein (HPR), apolipoprotein L-1 (apoL-1) and apolipoprotein A-1 (apoA-1). These three proteins are colocalized within spherical particles containing phospholipids and cholesterol. The protein components of TLF-2 include IgM and apolipoprotein A-I.
Trypanolytic factors are found only in a few species, including humans, gorillas, mandrills, baboons and sooty mangabeys. This appears to be because haptoglobin-related protein and apolipoprotein L-1 are unique to primates, suggesting these genes originated in the primate genome.
Human infective subspecies T. b. gambiense and T. b. rhodesiense have evolved mechanisms of resisting the trypanolytic factors, described below.
ApoL1
ApoL1 is a member of a six-gene family, ApoL1–6, which arose by tandem duplication. These proteins are normally involved in host apoptosis or autophagic death and possess a Bcl-2 homology domain 3. ApoL1 has been identified as the toxic component involved in trypanolysis. ApoLs have been subject to recent selective evolution, possibly related to resistance to pathogens.
The gene encoding ApoL1 is found on the long arm of chromosome 22 (22q12.3). Variants of this gene, termed G1 and G2, provide protection against T. b. rhodesiense. These benefits are not without their downside as a specific ApoL1 glomerulopathy has been identified. This glomerulopathy may help to explain the greater prevalence of hypertension in African populations.
The gene encodes a protein of 383 residues, including a typical signal peptide of 12 amino acids. The plasma protein is a single chain polypeptide with an apparent molecular mass of 42 kilodaltons. ApoL1 has a membrane pore forming domain functionally similar to that of bacterial colicins. This domain is flanked by the membrane addressing domain and both these domains are required for parasite killing.
Within the kidney, ApoL1 is found in the podocytes in the glomeruli, the proximal tubular epithelium and the arteriolar endothelium. It has a high affinity for phosphatidic acid and cardiolipin and can be induced by interferon gamma and tumor necrosis factor alpha.
Hpr
Hpr is 91% identical to haptoglobin (Hp), an abundant acute-phase serum protein which possesses a high affinity for hemoglobin (Hb). When Hb is released from erythrocytes undergoing intravascular hemolysis, Hp forms a complex with the Hb, and these are removed from circulation by the CD163 scavenger receptor. In contrast to Hp–Hb, the Hpr–Hb complex does not bind CD163, and the Hpr serum concentration appears to be unaffected by hemolysis.
Killing mechanism
The association of HPR with hemoglobin allows TLF-1 binding and uptake via the trypanosome haptoglobin-hemoglobin receptor (TbHpHbR). TLF-2 enters trypanosomes independently of TbHpHbR. TLF-1 uptake increases when haptoglobin levels are low, as TLF-1 then outcompetes haptoglobin for free hemoglobin in the serum. However, the complete absence of haptoglobin is associated with a decreased killing rate by serum.
The trypanosome haptoglobin-hemoglobin receptor is an elongated three α-helical bundle with a small membrane-distal head. This protein extends above the variant surface glycoprotein layer that surrounds the parasite.
The first step in the killing mechanism is the binding of TLF to high-affinity receptors (the haptoglobin-hemoglobin receptors) located in the flagellar pocket of the parasite. The bound TLF is endocytosed via coated vesicles and then trafficked to the parasite lysosomes. ApoL1 is the main lethal factor in the TLFs and kills trypanosomes after insertion into endosomal/lysosomal membranes. After ingestion by the parasite, the TLF-1 particle is trafficked to the lysosome, wherein ApoL1 is activated by a pH-mediated conformational change. After fusion with the lysosome, the pH drops from ~7 to ~5. This induces a conformational change in the ApoL1 membrane-addressing domain, which in turn causes a salt-bridge-linked hinge to open. This releases ApoL1 from the HDL particle to insert in the lysosomal membrane. The ApoL1 protein then creates anionic pores in the membrane, which leads to depolarization of the membrane, a continuous influx of chloride, and subsequent osmotic swelling of the lysosome. This influx in turn leads to rupture of the lysosome and the subsequent death of the parasite.
Resistance mechanisms: T. b. gambiense
Trypanosoma brucei gambiense causes 97% of human cases of sleeping sickness. Resistance to ApoL1 is principally mediated by the hydrophobic β-sheet of the T. b. gambiense specific glycoprotein. Other factors involved in resistance appear to be a change in the cysteine protease activity and TbHpHbR inactivation due to a leucine to serine substitution (L210S) at codon 210. This is due to a thymidine to cytosine mutation at the second codon position.
These mutations may have evolved due to the coexistence of malaria where this parasite is found. Haptoglobin levels are low in malaria because of the hemolysis that occurs with the release of the merozoites into the blood. The rupture of the erythrocytes results in the release of free haem into the blood where it is bound by haptoglobin. The haem is then removed along with the bound haptoglobin from the blood by the reticuloendothelial system.
Resistance mechanisms: T. b. rhodesiense
Trypanosoma brucei rhodesiense relies on a different mechanism of resistance: the serum resistance-associated protein (SRA). The SRA gene is a truncated version of the major and variable surface antigen of the parasite, the variant surface glycoprotein. However, it has little similarity (low sequence homology) with the VSG genes (<25%). SRA is an expression site-associated gene in T. b. rhodesiense and is located upstream of the VSGs in the active telomeric expression site. The protein is largely localized to small cytoplasmic vesicles between the flagellar pocket and the nucleus. In T. b. rhodesiense the TLF is directed to SRA-containing endosomes, while some dispute remains as to its presence in the lysosome. SRA binds to ApoL1 using a coiled-coil interaction at the ApoL1 SRA-interacting domain while within the trypanosome lysosome. This interaction prevents the release of the ApoL1 protein and the subsequent lysis of the lysosome and death of the parasite.
Baboons are known to be resistant to T. b. rhodesiense. The baboon version of the ApoL1 gene differs from the human gene in a number of respects including two critical lysines near the C terminus that are necessary and sufficient to prevent baboon ApoL1 binding to SRA. Experimental mutations allowing ApoL1 to be protected from neutralization by SRA have been shown capable of conferring trypanolytic activity on T. b. rhodesiense. These mutations resemble those found in baboons, but also resemble natural mutations conferring protection of humans against T. b. rhodesiense which are linked to kidney disease.
See also
List of parasites (human)
Simon Gaskell, professor of chemistry and current principal of Queen Mary, University of London, researches various forms of mass spectrometry to determine the quantity and longevity of these proteins.
Tryptophol, a chemical compound produced by the T. brucei which induces sleep in humans
References
External links
African trypanosomiasis
Parasites of humans
Parasites of mammals
Parasitic excavates
Trypanosomatida
Protists described in 1899
Euglenozoa species | Trypanosoma brucei | [
"Biology"
] | 9,255 | [
"Parasites of humans",
"Humans and other species"
] |
2,198,678 | https://en.wikipedia.org/wiki/Wet%20wipe | A wet wipe, also known as a wet towel, wet one, moist towelette, disposable wipe, disinfecting wipe, or a baby wipe (in specific circumstances), is a small to medium-sized moistened piece of plastic or cloth that either comes folded and individually wrapped for convenience or, in the case of dispensers, as a large roll with individual wipes that can be torn off. Wet wipes are used for cleaning purposes like personal hygiene and household cleaning; each is a separate product depending on the chemicals added, and medical or office cleaning wipes are not intended for skin hygiene.
In 2013, owing to increasing sales of the product in affluent countries, Consumer Reports reported that efforts to make the wipes "flushable" down the toilet had not entirely succeeded, according to their test.
Invention
American Arthur Julius is seen as the inventor of wet wipes. Julius worked in the cosmetics industry and in 1957, adjusted a soap portioning machine, putting it in a loft in Manhattan. Julius trademarked the name Wet-Nap in 1958, a name for the product that is still being used. After fine tuning his new hand-cleaning aid together with a mechanic, he unveiled his invention at the 1960 National Restaurant Show in Chicago and in 1963 started selling Wet-Nap products to Colonel Harland Sanders to be distributed to customers of Kentucky Fried Chicken.
Production
Ninety percent of wet wipes on the market are produced from nonwoven fabrics made of polyester or polypropylene.
The material is moistened with water or other liquids (e.g., isopropyl alcohol) depending on the applications. The material may be treated with softeners, lotions, or perfume to adjust the tactile and olfactory properties. Preservatives such as methylisothiazolinone are used to prevent bacterial or fungal growth in the package. The finished wet wipes are folded and put in pocket size package or a box dispenser.
Uses
Wet wipes can serve a number of personal and household purposes. Although marketed primarily for wiping infants' bottoms in diaper changing, it is not uncommon for consumers to also use the product to clean floors, toilet seats, and other surfaces around the home. Parents also use wet wipes, or as they are called for baby care, baby wipes, for wiping up baby vomit and to clean babies' hands and faces.
Baby wipes
Baby wipes are wet wipes used to cleanse the sensitive skin of infants. These are saturated with solutions ranging from gentle cleansing ingredients to alcohol-based "cleaners". Baby wipes are typically sold in packs of varying counts (up to 80 or more sheets per pack) and come with dispensing mechanisms. The origin of baby wipes most likely came in the mid-1950s as more people were travelling and needed a way to clean up on the go. One of the first companies to produce these was Nice-Pak. They made napkin-sized paper cloth saturated with a scented skin cleanser.
The first wet-wipe products specifically marketed as baby wipes, such as Kimberly-Clark's Huggies wipes and Procter & Gamble's Pampers wipes, appeared on the market in 1990. As the technology to produce wipes matured and became more affordable, smaller brands began to appear. By the 1990s, most superstores like Kmart and Wal-Mart had their own private-label brand of wipes made by other manufacturers. After this period there was a boom in the industry, and many local brands started manufacturing because of low entry barriers.
In December 2018, a New Zealand company launched the country's first ever wet and baby wipe alternative, the BDÉT Foam Wash.
Toilet wet wipes
Toilet wet wipes are sometimes preferred to standard toilet paper. Many brands sell toilet wet wipes, claiming they are "flushable". However, they do not decompose in septic tanks as they are made of polyester or polypropylene. In 2013 a Consumer Reports article said that none of the leading brands could pass their test.
Personal hygiene
Wet wipes are often included as part of a standard sealed cutlery package offered in restaurants or along with airline meals.
Wet wipes began to be marketed as a luxury alternative to toilet paper by 2005 by companies such as Kimberly-Clark and Procter & Gamble. They are dispensed in the toilets of restaurants, service stations, doctors' offices, and other places with public use.
Wet wipes have also found a use among visitors to outdoor music festivals, particularly those who camp, as an alternative to communal showers.
Cleansing pads
Cleansing pads are fiber sponges which have been previously soaked with water, alcohol and other active ingredients for a specific intended use. They are ready to use hygiene products and they are simple and convenient solutions to dispose of dirt or other undesirable elements.
There are different type of cleansing pads offered by the beauty industry: make-up removing pads, anti-spot treatments and anti-acne pads that usually contain salicylic acid, vitamins, menthol and other treatments.
Cleansing pads for preventing infection are usually saturated with alcohol and bundled in sterile packages. Hands and instruments may be disinfected with these pads while treating wounds. Disinfecting cleansing pads are often included in first aid kits for this purpose. Since the outbreak of H1N1, sales of individual impregnated wet wipes and gels in sachets and flowpacks have dramatically increased in the UK, following the Government's advice to keep hands and surfaces clean to prevent the spread of germs.
Industrial wipes
Pre-impregnated industrial-strength cleaning wipes carry a powerful cleaning fluid that cuts through dirt while the high-performance fabric absorbs the residue. Industrial wipes can clean a vast range of tough substances from hands, tools and surfaces, including: grime, grease, oil- and water-based paints and coatings, adhesives, silicone and acrylic sealants, poly foam, epoxy, oil, tar and more.
Pain relief
There are pain relief pads soaked with alcohol and benzocaine. These pads are good for treating minor scrapes, burns, and insect bites. They disinfect the injury and also ease pain and itching.
Pet care
Wet wipes are produced specifically for pet care, for example eye, ear, or dental cleansing pads (with boric acid, potassium chloride, zinc sulfate, sodium borate) for dogs, cats, horses, and birds.
Healthcare
Medical wet wipes are available for various applications. These include alcohol wet wipes, chlorhexidine wipes (for disinfection of surfaces and noninvasive medical devices), and sporicidal wipes. Medical wipes can be used to prevent the spread of pathogens such as norovirus and Clostridioides difficile.
Effect on sewage systems
Water management companies ask people not to flush wet wipes down toilets, as their failure to break apart or dissolve in water can cause sewer blockages known as fatbergs.
Since the mid-2000s, wet wipes such as baby wipes have become more common for use as an alternative to toilet paper in affluent countries, including the United States and the United Kingdom. This usage has in some cases been encouraged by manufacturers, who have labelled some wet wipe brands as "flushable". Wet wipes, when flushed down the toilet, have been reported to clog internal plumbing, septic systems and public sewer systems. The tendency for fat and wet wipes to cling together allegedly encourages the growth of the problematic obstructions in sewers known as "fatbergs". In addition, some brands of wipes contain alcohol, which can kill bacteria and denature enzymes responsible for breaking down solid waste in septic tanks. In the late 2010s, other alternatives such as gel wipe had also come on to the market.
In 2014, a class action suit was filed in the U.S. District Court for the Northern District of Ohio against Target Corporation, and Nice-Pak Products Inc. on behalf of consumers in Ohio who purchased Target-brand flushable wipes. The lawsuit alleged the retailer misled consumers by marking the packaging on its Up & Up brand wipes as flushable and safe for sewer and septic systems. The lawsuit also alleged that the products were a public health hazard because they clogged pumps at municipal waste-treatment facilities. Target and Nice-Pak agreed to settle the case in 2018.
In 2015, the city of Wyoming, Minnesota, launched a class action suit against six companies, including Procter & Gamble, Kimberly-Clark, and Nice-Pak, alleging they were fraudulently promoting their products as "flushable". The city dropped the lawsuit in 2018 after concluding that the city had not experienced damage to its sewer systems or a rise in maintenance costs. Upon announcement of the withdrawal of the suit, an industry trade group representing the manufacturers of the wipes released a statement that disputed the claims that the products are harmful to sewer systems.
In 2019, the industry body Water UK announced a new standard for flushable wet wipes. Wipes will need to pass rigorous testing in order to gain a new and approved "Fine to Flush" logo. As of January 2019, only one product had been confirmed to meet the standard, although there were about seven others in the process of being tested.
See also
Oshibori – reusable Japanese wet hand towel
Washlet – a mechanical alternative to wet wipes
References
Personal hygiene products
Disinfectants
Paper products
Toilets
Babycare
Disposable products | Wet wipe | [
"Biology"
] | 1,975 | [
"Excretion",
"Toilets"
] |
2,198,731 | https://en.wikipedia.org/wiki/Latch | A latch or catch (called sneck in Northern England and Scotland) is a type of mechanical fastener that joins two or more objects or surfaces while allowing for their regular separation. A latch typically engages another piece of hardware on the other mounting surface. Depending upon the type and design of the latch, this engaged bit of hardware may be known as a keeper or strike.
A latch is not the same as the locking mechanism of a door or window, although often they are found together in the same product.
Latches range in complexity from flexible one-piece flat springs of metal or plastic, such as are used to keep blow molded plastic power tool cases closed, to multi-point cammed latches used to keep large doors closed.
Common types
Deadbolt latch
A deadbolt latch is a single-throw bolt. The bolt can be engaged in its strike plate only after the door is closed. The locking mechanism typically prevents the bolt from being retracted by force.
Spring latches
A latch bolt is an extremely common latch type, typically part of a lockset. It is a spring-loaded bolt with an angled edge. When the door is pushed closed, the angled edge of the latch bolt engages with the lip of the strike plate; a spring allows the bolt to retract. Once the door is fully closed, the bolt automatically extends into the strike plate, holding the door closed. The latch bolt is disengaged (retracted) typically when the user turns the door handle, which via the lockset's mechanism, manually retracts the latch bolt, allowing the door to open.
A deadlocking latch bolt (deadlatch) is an elaboration on the latch bolt which includes a guardbolt to prevent "shimming" or "jimmying" of the latch bolt. When the door is closed, the latch bolt and guardbolt are retracted together, and the door closes normally, with the latch bolt entering the strike plate. The strike plate, however, holds the guardbolt in its depressed position: a mechanism within the lockset holds the latch bolt in the projected position. This arrangement prevents the latch bolt from being depressed through the use of a credit card or some other tool, which would lead to unauthorized entry.
A draw latch is a two-part latch where one side has an arm that can clasp to the other half, and as it closes the clasp pulls the two parts together. It is frequently used on tool boxes, chests, crates, and windows and does not need to be fully closed to secure both halves.
A spring bolt lock (or night latch) is a locking mechanism used with a latch bolt.
Slam latch
A slam latch uses a spring and is activated by the shutting or slamming of a door. Like all latches, a slam latch is a mechanism to hold a door closed. The slam latch derives its name from its ability to slam doors and drawers shut without damaging the latch. A slam latch is rugged and ideal for industrial, agricultural and construction applications.
Cam lock
A cam lock is a type of latch consisting of a base and a cam. The base is where the key or tool is used to rotate the cam, which is what does the latching. Cams can be straight or offset; offset cams are reversible. Cam locks are commonly found on garage cabinets, file cabinets, tool chests, and other locations where privacy and security are needed.
Electronic cam lock
Electronic cam locks are an alternative to mechanical cam locks. The appearance of the electronic cam lock is similar to the mechanical cam lock, but it is different in the lock cylinder.
The keyhole of a mechanical cam lock is usually the same as an ordinary padlock. A physical key is used to unlock the lock. The physical key has a notch or slot corresponding to the obstacle in the cam lock, allowing it to rotate freely in the lock.
Different from mechanical cam locks, electronic cam locks use an electronic key to unlock. The key must be programmed with the user, the unlocking date, and the permitted time period. The electronic cam lock has no mechanical keyhole; only three metal contacts are retained. When unlocking, the three contacts on the head end of the electronic key touch the three contacts on the electronic cam lock. The key then supplies power to the lock and reads the lock's ID number for verification; if the match succeeds, the lock can be opened.
The emergence of electronic cam locks aims to improve the safety and functionality of traditional mechanical cam locks.
Suffolk latch
A Suffolk latch is a type of latch incorporating a simple thumb-actuated lever and commonly used to hold wooden gates and doors closed.
The Suffolk latch originated in the English county of Suffolk in the 16th century and stayed in common use until the 19th century. They have recently come back into favour, particularly in traditional homes and country cottages. They were common from the 17th century to around 1825, and their lack of a back plate made them different from the later, and neighbouring Norfolk latch (introduced 1800–1820). Both the Suffolk latch and Norfolk latch are thought to have been named by architectural draughtsman William Twopenny (1797–1873). Many of these plates found their way into America and other parts of the world.
Norfolk latch
A Norfolk latch is a type of latch incorporating a simple thumb-actuated lever and commonly used to hold wooden gates and doors closed. In a Norfolk latch, the handle is fitted to a backplate independently of the thumb piece. Introduced around 1800–1820, Norfolk latches, originating in the English county of the same name, differ from the older Suffolk latch, which lacked a back plate to which the thumbpiece is attached.
Crossbar
A crossbar, sometimes called a bolt or draw bolt, is a historically common and simple means of barring a door. In its most primitive form it employs a plank or beam held by or placed onto open cleats on a door, which is shifted to be held fast by a corresponding cleat on an adjacent jamb.
A crossbar for double doors employs the same principle, but, in most cases, must be manually set in place and removed due to its width being greater than both doors.
A crossbar for a single jamb may be "captured" on the door by U-shaped bails, or anchored by a bolt on its inboard end and pivoted up and down into open cleats, making it a form of latch.
A "draw bolt" style closure adds a handle for sliding its bolt - the source of the term "bolting a door". A variant with a slot in the handle for dropping it over a hasp to secure it with a lock is known as an aldrop. Most modern draw bolts are made of metal, and may be used to secure a door from the outside or the in.
Cabin hook
A cabin hook is a hooked bar that engages into a staple. The bar is usually attached permanently to a ring or staple that is fixed with screws or nails to woodwork or a wall at the same level as the eye screw. The eye screw is usually screwed into the adjacent wall or onto the door itself. Used to hold a cupboard, door or gate open or shut.
A cabin hook is used in many situations to hold a door open, like on ships to prevent doors from swinging and banging against other woodwork as the ship moves due to wave action. This usage spread also to other domains, where a door was required to be held open or a self-closing device is used to close the door.
Many buildings are built with fire-resistant doors to separate different parts of buildings and to allow people to be protected from fire and smoke. When using a cabin hook in such a situation, one should keep in mind that a fire-resistant door is an expensive and heavy item, and it only works as a fire door if it is closed during a fire. To hold an often heavy fire door open simply, electromagnetic door holders are used that release when a building's fire alarm system is activated. As cabin hooks must be released manually, they are impractical for fire doors.
Toggle latch
Also named draw latch or draw catch. It has a claw or a loop that catches the strike plate (named catch plate in this case) when reaching a certain position.
Pawl
A pawl is a latch that will allow movement in one direction, but prevents return motion. It is commonly used in combination with a ratchet wheel.
Applications
Architecture
A latch of some type is typically fitted to a door or window.
Weaponry
Many types of weaponry incorporate latches with designs unique to the weapon.
Firearms
Firearms require specialized latches used during loading and firing of the weapon.
A break-action firearm is one whose barrels are hinged and a latch is operated to release the two parts of the weapon to expose the breech and allow loading and unloading of ammunition. It is then closed and re-latched prior to firing. A separate operation may be required for the cocking and latching-open of a hammer to fire the new round. Break open actions are universal in double-barrelled shotguns, double-barrelled rifles and combination guns, and are also common in single shot rifles, pistols, and shotguns, and can also be found in flare guns, grenade launchers, air guns and some older revolver designs.
Several latch designs have been used for loading revolvers. In a top-break revolver, the frame is hinged at the bottom front of the cylinder. The frame is in two parts, held together by a latch on the top rear of the cylinder. For a swing-out cylinder, the cylinder is mounted on a pivot that is coaxial with the chambers, and the cylinder swings out and down. Some designs, such as the Ruger Super Redhawk or the Taurus Raging Bull, use latches at the front and rear of the cylinder to provide a secure bond between cylinder and frame.
To fire a revolver, generally the hammer is first manually cocked and latched into place. The trigger, when pulled, releases the hammer, which fires the round in the chamber.
Knives
Various types of knives with folding or retractable blades rely on latches for their function. A switchblade uses an internal spring to deploy the blade, which is held in place by a button-activated latch. Likewise, a ballistic knife uses a strong latch to restrain a powerful spring from firing the blade as a projectile until triggered by opening the latch. A gravity knife relies on a latch to hold the folding blade in an open position once released. A butterfly knife uses a single latch to hold the folding blade both open and closed, depending on the position of the handles; by rotating 180 degrees, the same latch can be used in either configuration. Butterfly knife latches have numerous variations, including magnetic variants and some which can be opened via a spring when the handles are squeezed together.
Utility knives also often use a latch to hold a folding knife both open and closed. This allows it to be locked in orientation to the handle when in use, but also safely stowed otherwise. To open a knife of this type may require significantly more force than the weapons variety as an added safety feature.
Other
Crossbows incorporate a type of latch to hold the drawn bowstring prior to firing.
Automobiles
Automobiles incorporate numerous special-purpose latches as components of the doors, hood/bonnet, trunk/boot door, seat belts, etc.
On passenger cars, a hood may be held down by a concealed latch. On race cars or cars with aftermarket hoods (that do not use the factory latch system) the hood may be held down by hood pins.
The term Nader bolt is a nickname for the bolt on vehicles that allows a hinged door to remain safely latched and closed. It is named after consumer rights advocate and politician Ralph Nader, who in 1965 released the book Unsafe at Any Speed which claimed that American cars were fundamentally flawed with respect to operator safety.
Latches in seatbelts typically fasten the belt which constrains the occupant to the body of the car. Particularly in rear seats slightly different latches may be used for each seat in order to prevent adjacent seatbelts from being attached to the wrong point. Inertial seatbelt release is a potential circumstance where, in a collision, the seatbelt latch can unintentionally come loose leading to potential injury of the passenger. An additional risk of seatbelt latches is that in some cases the occupant may believe the latch is secure (e.g., by hearing a characteristic click) when in fact it is not.
A parking pawl is a device that latches the transmission on automatic vehicles when put in 'park'.
Bakeware
A spring latch (in this case an over-center-latch) is used to hold the walls of a springform pan in place.
See also
Door chain
Electric strike
Single-point locking
Snib
References
Norfolk
Fasteners
Door furniture
"Engineering"
] | 2,643 | [
"Construction",
"Fasteners"
] |
2,198,839 | https://en.wikipedia.org/wiki/Unified%20Science | "Unified Science" can refer to any of three related strands in contemporary thought.
Belief in the unity of science was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all of the theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common method.
The writings of Edward Haskell and a few associates, seeking to rework science into a single discipline employing a common artificial language. This work culminated in the 1972 publication of Full Circle: The Moral Force of Unified Science. The vast part of the work of Haskell and his contemporaries remains unpublished, however. Timothy Wilken and Anthony Judge have recently revived and extended the insights of Haskell and his coworkers.
Unified Science has been a consistent thread since the 1940s in Howard T. Odum's systems ecology and the associated Emergy Synthesis, modeling the "ecosystem": the geochemical, biochemical, and thermodynamic processes of the lithosphere and biosphere. Modeling such earthly processes in this manner requires a science uniting geology, physics, biology, and chemistry (H.T. Odum 1995). With this in mind, Odum developed a common language of science based on electronic schematics, with applications to ecological and economic systems in mind (H.T. Odum 1994).
See also
Consilience — the unification of knowledge, e.g. science and the humanities
Tree of knowledge system
References
Odum, H.T. 1994. Ecological and General Systems: An Introduction to Systems Ecology. Colorado University Press, Colorado.
Odum, H.T. 1995. 'Energy Systems and the Unification of Science', in Hall, C.S. (ed.) Maximum Power: The Ideas and Applications of H.T. Odum. Colorado University Press, Colorado: 365-372.
External links
Future Positive Timothy Wilken's website, including a lot of material and diagrams on Edward Haskell's Unified Science
Cardioid Attractor Fundamental to Sustainability - 8 transactional games forming the heart of sustainable relationship Anthony Judge's further development of these ideas
Logical positivism
Metatheory of science
Science studies | Unified Science | [
"Mathematics"
] | 521 | [
"Mathematical logic",
"Logical positivism"
] |
2,198,949 | https://en.wikipedia.org/wiki/MakeIndex | MakeIndex is a computer program which provides a sorted index from unsorted raw data. MakeIndex can process raw data output by various programs, however, it is generally used with LaTeX and troff.
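As a concrete illustration of the usual LaTeX workflow, here is a minimal sketch; the file name minimal.tex and the sample index entries are invented for the example, and the standard makeidx package is assumed.

```latex
% minimal.tex -- indexing a LaTeX document with MakeIndex
\documentclass{article}
\usepackage{makeidx}
\makeindex  % tell LaTeX to write raw, unsorted entries to minimal.idx
\begin{document}
An entry is recorded where the term appears.\index{entry}
Subentries are separated with an exclamation mark.\index{entry!subentry}
\printindex % typesets the sorted index from minimal.ind, once it exists
\end{document}
```

A typical build runs latex minimal (writing the unsorted entries to minimal.idx), then makeindex minimal (sorting them into minimal.ind), then latex minimal again to typeset the document with the index included.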
MakeIndex was written around the year 1986 by Pehong Chen in the C programming language and is free software. Six pages of documentation titled "MakeIndex: An Index Processor for LaTeX" by Leslie Lamport are available on the web and dated "17 February 1987."
See also
xindy
References
Wikibooks: LaTeX/Indexing
Pehong Chen and Michael A. Harrison: Index preparation and processing (distributed with MakeIndex)
Leslie Lamport: MakeIndex: an index processor for LaTeX
Frank Mittelbach et al., The LaTeX Companion, Addison-Wesley Professional, 2nd edition, 2004.
Free software programmed in C
Free TeX software
Troff
Index (publishing) | MakeIndex | [
"Mathematics"
] | 189 | [
"Troff",
"Mathematical markup languages"
] |
2,198,965 | https://en.wikipedia.org/wiki/Cross-covariance | In probability and statistics, given two stochastic processes $\{X_t\}$ and $\{Y_t\}$, the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points. With the usual notation $\operatorname{E}$ for the expectation operator, if the processes have the mean functions $\mu_X(t) = \operatorname{E}[X_t]$ and $\mu_Y(t) = \operatorname{E}[Y_t]$, then the cross-covariance is given by

$K_{XY}(t_1, t_2) = \operatorname{cov}(X_{t_1}, Y_{t_2}) = \operatorname{E}[(X_{t_1} - \mu_X(t_1))(Y_{t_2} - \mu_Y(t_2))] = \operatorname{E}[X_{t_1} Y_{t_2}] - \mu_X(t_1)\,\mu_Y(t_2).$
Cross-covariance is related to the more commonly used cross-correlation of the processes in question.
In the case of two random vectors $X = (X_1, \ldots, X_m)^{\mathsf{T}}$ and $Y = (Y_1, \ldots, Y_n)^{\mathsf{T}}$, the cross-covariance would be an $m \times n$ matrix (often denoted $\operatorname{cov}(X, Y)$ or $K_{XY}$) with entries $K_{XY}(i, j) = \operatorname{cov}(X_i, Y_j)$. Thus the term cross-covariance is used in order to distinguish this concept from the covariance of a random vector $X$, which is understood to be the matrix of covariances between the scalar components of $X$ itself.
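As a small illustration of the random-vector case, the cross-covariance matrix can be estimated from samples. A minimal sketch in NumPy (the dimensions, sample count, and coupling between X and Y are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# 10,000 joint samples of a 3-dimensional X and a 2-dimensional Y,
# where Y equals the first two components of X plus a little noise.
X = rng.normal(size=(10_000, 3))
Y = X[:, :2] + 0.1 * rng.normal(size=(10_000, 2))

Xc = X - X.mean(axis=0)          # center each component
Yc = Y - Y.mean(axis=0)

K_XY = Xc.T @ Yc / (len(X) - 1)  # 3x2 matrix with entries cov(X_i, Y_j)
print(K_XY.round(2))             # approximately [[1, 0], [0, 1], [0, 0]]
```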
In signal processing, the cross-covariance is often called cross-correlation and is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. It is a function of the relative time between the signals, is sometimes called the sliding dot product, and has applications in pattern recognition and cryptanalysis.
Cross-covariance of random vectors
Cross-covariance of stochastic processes
The definition of cross-covariance of random vectors may be generalized to stochastic processes as follows:
Definition
Let $\{X_t\}$ and $\{Y_t\}$ denote stochastic processes. Then the cross-covariance function of the processes is defined by:

$K_{XY}(t_1, t_2) = \operatorname{cov}(X_{t_1}, Y_{t_2}) = \operatorname{E}[(X_{t_1} - \mu_X(t_1))(Y_{t_2} - \mu_Y(t_2))],$

where $\mu_X(t) = \operatorname{E}[X_t]$ and $\mu_Y(t) = \operatorname{E}[Y_t]$.
If the processes are complex-valued stochastic processes, the second factor needs to be complex conjugated:

$K_{XY}(t_1, t_2) = \operatorname{E}\left[(X_{t_1} - \mu_X(t_1))\overline{(Y_{t_2} - \mu_Y(t_2))}\right].$
Definition for jointly WSS processes
If $\{X_t\}$ and $\{Y_t\}$ are jointly wide-sense stationary, then the following are true:

$\mu_X(t_1) = \mu_X(t_2) \equiv \mu_X$ for all $t_1, t_2$,

$\mu_Y(t_1) = \mu_Y(t_2) \equiv \mu_Y$ for all $t_1, t_2$,

and

$K_{XY}(t_1, t_2) = K_{XY}(t_2 - t_1, 0)$ for all $t_1, t_2$.

By setting $\tau = t_2 - t_1$ (the time lag, or the amount of time by which the signal has been shifted), we may define

$K_{XY}(\tau) = K_{XY}(t_2 - t_1, 0).$

The cross-covariance function of two jointly WSS processes is therefore given by:

$K_{XY}(\tau) = \operatorname{cov}(X_t, Y_{t-\tau}) = \operatorname{E}[(X_t - \mu_X)(Y_{t-\tau} - \mu_Y)],$

which is equivalent to

$K_{XY}(\tau) = \operatorname{E}[(X_{t+\tau} - \mu_X)(Y_t - \mu_Y)] = \operatorname{E}[X_{t+\tau} Y_t] - \mu_X\,\mu_Y.$
Uncorrelatedness
Two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are called uncorrelated if their cross-covariance is zero for all times. Formally:

$K_{XY}(t_1, t_2) = \operatorname{cov}(X_{t_1}, Y_{t_2}) = 0$ for all $t_1, t_2$.
Cross-covariance of deterministic signals
The cross-covariance is also relevant in signal processing where the cross-covariance between two wide-sense stationary random processes can be estimated by averaging the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true covariance.
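A minimal sketch of this sample-average estimator in NumPy; the 3-sample delay, the noise level, and the lag convention $K_{XY}(\tau) = \operatorname{E}[(X_{t+\tau} - \mu_X)(Y_t - \mu_Y)]$ (matching the jointly WSS form above) are choices made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                        # zero-mean white signal
y = np.roll(x, 3) + 0.5 * rng.normal(size=n)  # X delayed by 3, plus noise

def cross_cov(x, y, tau):
    """Sample estimate of K_XY(tau) = E[(X_{t+tau} - mu_X)(Y_t - mu_Y)]."""
    xc, yc = x - x.mean(), y - y.mean()
    if tau >= 0:
        return np.mean(xc[tau:] * yc[:len(yc) - tau])
    return np.mean(xc[:len(xc) + tau] * yc[-tau:])

for tau in range(-5, 3):
    print(f"K_XY({tau:+d}) = {cross_cov(x, y, tau):+.3f}")
# The estimate peaks near 1 at tau = -3, reflecting that Y lags X by
# 3 samples under this lag convention; it is close to 0 elsewhere.
```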
Cross-covariance may also refer to a "deterministic" cross-covariance between two signals. This consists of summing over all time indices. For example, for discrete-time signals $f[k]$ and $g[k]$ the cross-covariance is defined as

$(f \star g)[n] \triangleq \sum_{k} \overline{f[k]}\, g[n+k] = \sum_{k} \overline{f[k-n]}\, g[k],$
where the line indicates that the complex conjugate is taken when the signals are complex-valued.
For continuous functions $f(x)$ and $g(x)$ the (deterministic) cross-covariance is defined as

$(f \star g)(x) \triangleq \int \overline{f(t)}\, g(x+t)\, dt = \int \overline{f(t-x)}\, g(t)\, dt.$
Properties
The (deterministic) cross-covariance of two continuous signals is related to the convolution by

$(f \star g)(t) = \left(\overline{f(-\cdot)} * g\right)(t),$

and the (deterministic) cross-covariance of two discrete-time signals is related to the discrete convolution by

$(f \star g)[n] = \left(\overline{f[-\cdot]} * g\right)[n].$
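The discrete relation can be sanity-checked numerically: NumPy's correlate computes the conjugate-product sum used above, and reversing the conjugated first signal turns the computation into an ordinary convolution. A sketch with arbitrary complex test vectors:

```python
import numpy as np

f = np.array([1 + 2j, 0.5 - 1j, 2.0 + 0j, -1 + 0.5j])
g = np.array([0.3 + 0j, 1 - 1j, -0.7 + 2j, 0.25 + 0j, 1.5 + 0j])

# np.correlate(g, f) computes sum_k conj(f[k]) * g[n + k], i.e. (f star g)[n].
corr = np.correlate(g, f, mode="full")

# The same values via convolution with the time-reversed conjugate of f.
conv = np.convolve(g, np.conj(f)[::-1], mode="full")

assert np.allclose(corr, conv)
print("cross-covariance matches convolution with the conj-reversed signal")
```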
See also
Autocovariance
Autocorrelation
Correlation
Convolution
Cross-correlation
References
External links
Cross Correlation from Mathworld
http://scribblethink.org/Work/nvisionInterface/nip.html
http://www.phys.ufl.edu/LIGO/stochastic/sign05.pdf
http://www.staff.ncl.ac.uk/oliver.hinton/eee305/Chapter6.pdf
Covariance and correlation
Time domain analysis
Signal processing | Cross-covariance | [
"Technology",
"Engineering"
] | 819 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
2,198,995 | https://en.wikipedia.org/wiki/Conjugal%20visit | A conjugal visit is a scheduled period in which an inmate of a prison or jail is permitted to spend several hours or days in private with a visitor. The visitor is usually their legal partner. The generally recognized basis for permitting such visits in modern times is to preserve family bonds and increase the chances of success for a prisoner's eventual return to ordinary life after release from prison. They also provide an incentive for inmates to comply with the various day-to-day rules and regulations of the prison.
Conjugal visits usually take place in designated rooms or a structure provided for that purpose, such as a trailer or a small cabin. Supplies such as soap, condoms, lubricant, bed linens, and towels may be provided.
Country
Australia
In Australia, conjugal visits are permitted in the Australian Capital Territory and Victoria. Other jurisdictions, including Western Australia and Queensland, do not permit conjugal visits.
Brazil
In Brazil, male prisoners are eligible to be granted conjugal visits for both heterosexual and homosexual relationships, while women's conjugal visits are tightly regulated, if granted at all.
Canada
In Canada, all inmates in federal correctional facilities, with the exception of those on disciplinary restrictions or at risk for family violence, are permitted "private family visits" of up to 72 hours' duration once every two months. Eligible visitors, who may not themselves be prison inmates, are: spouse, or common-law partner of at least six months; children; parents; foster parents; siblings; grandparents; and "persons with whom, in the opinion of the institutional head, the inmate has a close familial bond". Food is provided by the institution but paid for by the inmates and visitors, who are also responsible for cleaning the unit after the visit. Prison staff have regular contact with the inmate and visitors during a visit.
Czech Republic
In the Czech Republic, a prison warden has the authority to allow an inmate "a visit without visual and auditory supervision of the employees of the Prison Service". Inmate's medical check and mental health check is required before such visit is permitted.
Denmark
In Denmark, conjugal visits are permissible. The State Prison of East Jutland has apartments for couples, where inmates who have been sentenced to more than eight years in prison can have visitation for 47 hours per visit.
Estonia
Conjugal visits of up to 72 hours with spouses (including de facto spouses), registered partners, or relatives are permitted at least once every half-year. This is permitted assuming no safety issues with the inmate or lack of confidence in the reputability of the visitor. The visits last 24 hours by default, but may be extended to 72 hours as a reward for inmates' good behaviour. Visits take place in designated rooms on prison grounds without supervision.
France
In France, inmates are permitted conjugal visits. Visits last up to 72 hours and take place in mini-apartments consisting of two small rooms, a kitchen and a dining area.
Germany
Germany allows prisoners and their spouses or partners to apply for conjugal visits. Those who are approved are allowed unsupervised visits so that prisoners can preserve intimate bonds with their partners. Prisoners are to be searched before being allowed a visit. In 2010, an inmate murdered his girlfriend and attempted suicide during a visit, leading to additional criticism of the lax security in German prisons.
Hong Kong
Hong Kong does not permit conjugal visits.
India
In 2015, the Punjab and Haryana High Court held that the right of married convicts and jail inmates to have conjugal visits or artificial insemination for pregnancy was a fundamental right. In January 2018, Madras High court allowed a two week conjugal visit to an inmate serving life term in Tamil Nadu prison for the "purpose of procreation".
In October 2022, Punjab became the first state in India to allow conjugal visits to prisoners. According to a senior official, this decision was taken to keep the stress levels of inmates under control and ensure their re-entry into society, as well as to fulfil a basic biological need. Under this scheme, prisoners who exhibited good conduct would be allowed to spend two hours in private with their spouses every two months. Some categories of prisoners are kept out of this program, including high-risk prisoners, terrorists, gangsters and those imprisoned for domestic violence, child abuse and sexual crimes. Moreover, both spouses must be free from infectious diseases like HIV, other STDs and tuberculosis to qualify for the program.
Ireland
Ireland does not allow conjugal visits. Marie and Noel Murray, an anarchist married couple imprisoned for a 1976 murder, lost a 1991 appeal for conjugal rights. The Supreme Court ruled that the Constitutional right to beget children within marriage was suspended while a spouse was lawfully imprisoned.
Israel
The Israel Prison Service (IPS) allows standard conjugal visits to inmates who are married or are in a common-law relationship or if their partner has been visiting them frequently for at least two years, and have a record of good behavior. Inmates who receive prison furloughs are not eligible for conjugal visits. Conjugal visits can be withheld on security grounds or as a means of punishment for misbehavior. IPS guidelines were clarified in July 2013 to allow conjugal visits of same-sex partners.
Israel only extends this right to citizens of the state, while Palestinians and Gazans imprisoned in Israeli jails are denied conjugal visits.
Japan
In Japan, conjugal visits are not allowed.
Mexico
Conjugal visits are a universal practice in Mexico, independent of a prisoner's marital status; in some correctional facilities entire families are allowed to live in prisons with their imprisoned relative for extended periods. In Mexico City specifically, the prison system began in July 2007 to allow gay prisoners to have conjugal visits from their partners, on the basis of a 2003 law which bans discrimination based on sexual orientation.
Netherlands
The Netherlands allows for one unsupervised visit (Bezoek zonder Toezicht) per month, provided the imprisonment period is at least six months and there is a close and durable relation between the partners. This does not apply to maximum security penitentiaries.
New Zealand
New Zealand does not permit conjugal visits.
Pakistan
In Pakistan, conjugal visits prior to 2009 were permitted only under special circumstances. In August 2009, the Federal Shariat Court ruled that married prisoners should be allowed conjugal visits at designated facilities within the jail complex and, alternatively, should be granted a short parole to visit their spouses. Following the ruling, the Province of Sindh was the first to adopt legislation providing conjugal visits for married prisoners within jail premises. The Human Rights Book 2010 reports that conjugal visits are now available for prisoners in all provinces and federal territories if they are male and married. Since homosexuality is considered a criminal offense in Pakistan and same-sex marriage is not recognized by law, this privilege applies only to heterosexual couples.
Russia
In the Russian penal system, since a campaign of prison reform that began in 2001, well-behaved prisoners are granted an eighteen-day holiday furlough from incarceration to see loved ones. Prisoners also get extended on-site family visits, approximately once per month.
Spain
In Spain, prisoners are allowed conjugal visits every four to eight weeks. They are held in private rooms and can last up to three hours. Couples are provided with condoms, shower facilities, and clean towels.
Turkey
Since April 2013, the Turkish General Directorate of Prisons and Detention offers conjugal visits as a reward to well-behaved prisoners.
United Kingdom
The English, Welsh, Scottish and Northern Irish prison systems do not allow conjugal visits. However, home visits, with a greater emphasis on building other links with the outside world to which the prisoner will be returned, are allowed. These home visits are usually only granted to prisoners who have a few weeks to a few months remaining of a long sentence. Furthermore, home visits are more likely to be granted if the prisoner is deemed to have a low risk of absconding (i.e. prisoners being held in open prisons have a better chance of being granted home visits than prisoners being held in closed conditions).
United States
The first state to implement conjugal visits was Mississippi in the Mississippi State Penitentiary (Parchman). It was enacted to convince black male prisoners to work harder in their manual labor. This was done unofficially at first, but had become official policy at Parchman Penitentiary by the 1950s.
In Lyons v. Gilligan (1974), the United States District Court for the Northern District of Ohio held that prisoners have no federal constitutional right to conjugal visits with their spouses during sentences.
As of 2008, conjugal visitation programs are now known as the extended-family visits or family-reunion visits because mothers, fathers, and other family members may attend these visits. The focus is on family ties and rehabilitation.
Federal prisons
The United States Federal Bureau of Prisons does not allow conjugal visits for prisoners in federal custody.
State prisons
For prisoners in state custody, the availability of conjugal visits is governed by the law of the particular state. The four states that currently allow conjugal visits are California, Connecticut, New York, and Washington.
Where conjugal visits are allowed, inmates must meet certain requirements to qualify for this privilege: The visitor may be required to undergo a background check, and the inmate must also be free of any sexually transmitted diseases. As a matter of procedure, both visitor and inmate are searched before and after the visit, to ensure that the visitor has not attempted to smuggle any items into or out of the facility.
Jorja Leap, a professor of social welfare at the Luskin School of Public Affairs at the University of California, Los Angeles stated that criminologists believe allowing conjugal visits would build family ties and reduce recidivism. Over the last 40 years, most new prisons included special buildings specifically designed for conjugal visits.
By the early 1990s, 17 states had conjugal programs. According to Leap, conjugal visits declined after an increase in attitudes that prison should be a place for punishment and that conjugal visits were not appropriate for people being punished, and also because academic literature in the 1980s and 1990s argued that it was not possible to rehabilitate some criminals. Many states that once allowed conjugal visits have since eliminated the programs. In April 2011, New York adopted legislation to allow family visits for married partners. In January 2014, the head of the Mississippi Department of Corrections, Chris Epps, terminated the state conjugal program. New Mexico announced it was also ending its program in May 2014.
In June 2007, the California Department of Corrections announced it would allow same-sex conjugal visits. The policy was enacted to comply with a 2005 state law requiring state agencies to give the same rights to domestic partners that heterosexual couples receive. The new rules allow for visits only by registered married same sex couples or domestic partners who are not themselves incarcerated. Further, the same-sex marriage or domestic partnership must have been established before the prisoner was incarcerated.
See also
Same-sex conjugal visit, in the article LGBTQ people in prison
Relationships for incarcerated individuals
References
Further reading
Human sexuality
Penal imprisonment
Prison sexuality
Imprisonment and detention | Conjugal visit | [
"Biology"
] | 2,305 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
2,199,027 | https://en.wikipedia.org/wiki/Finnish%20Armed%20Forces%20radio%20alphabet | The Finnish Defence Forces switched over to the NATO phonetic alphabet in 2005, but the Finnish spelling alphabet is still used for Å, Ä, Ö and digits. International operations use only the NATO alphabet.
On the Finnish rail network, the Finnish Armed Forces spelling alphabet was used until May 31, 2020; starting on July 1, the railways switched to the NATO phonetic alphabet, but retained the Finnish spelling words for Å, Ä, Ö and numbers.
See also
Radio alphabet
Swedish Armed Forces' phonetic alphabet
References
Spelling alphabets
Military communications
Radio alphabet | Finnish Armed Forces radio alphabet | [
"Engineering"
] | 103 | [
"Military communications",
"Telecommunications engineering"
] |
2,199,064 | https://en.wikipedia.org/wiki/Makeup%20sex | Makeup sex is an informal term for sexual intercourse which may be experienced after conflict in an intimate personal relationship. These conflicts may range from minor arguments to major arguments. Sex under these circumstances may be more gratifying and invested with additional emotional significance. It is sometimes conceived as a physical expression of reconciliation and rediscovery of one's partner following the cathartic experience of a fight and may resolve underlying conflicts.
Makeup sex has been attributed to increased sexual desires stemming from romantic conflict. After conflict during a relationship, arousal transfer may occur which shifts anger into arousal. Experts disagree on the outlook of makeup sex, some believe makeup sex is unhealthy as it rewards "fighting, drama, and generally bad behavior". Sexologist and television personality Jessica O'Reilly describes makeup sex as positive, settling conflicts that can only be resolved through sex. Makeup sex may be more intense, as it may assist in releasing underlying emotions.
Additionally, makeup sex can be used as a form of manipulation in PUA (Pickup Artist) tactics. Some people may believe that resolving any conflict through initiating sex will appease their partners. Over time, this can lead to being easily controlled or manipulated by their partners.
References
Human sexuality | Makeup sex | [
"Biology"
] | 246 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
2,199,220 | https://en.wikipedia.org/wiki/Aromatization | Aromatization is a chemical reaction in which an aromatic system is formed from a single nonaromatic precursor. Typically aromatization is achieved by dehydrogenation of existing cyclic compounds, illustrated by the conversion of cyclohexane into benzene. Aromatization includes the formation of heterocyclic systems.
Industrial practice
Although not practiced under the name, aromatization is a cornerstone of oil refining. One of the major reforming reactions is the dehydrogenation of paraffins and naphthenes into aromatics.
The process, which is catalyzed by platinum supported on aluminium oxide, is exemplified by the conversion of methylcyclohexane (a naphthene) into toluene (an aromatic). Dehydrocyclization converts paraffins (acyclic hydrocarbons) into aromatics. A related aromatization process is the dehydroisomerization of methylcyclopentane to benzene:
As for alkanes, they first dehydrogenate to olefins, then form rings at the position of the double bond, becoming cycloalkanes, and finally gradually lose hydrogen to become aromatic hydrocarbons.
For cyclohexane, cyclohexene, and cyclohexadiene, dehydrogenation is the conceptually simplest pathway for aromatization. The activation barrier decreases with the degree of unsaturation. Thus, cyclohexadienes are especially prone to aromatization. Formally, dehydrogenation is a redox process. Dehydrogenative aromatization is the reverse of arene hydrogenation. As such, hydrogenation catalysts are effective for the reverse reaction. Platinum-catalyzed dehydrogenations of cyclohexanes and related feedstocks are the largest scale applications of this reaction (see above).
Biochemical processes
Aromatases are enzymes that aromatize rings within steroids. The specific conversions are testosterone to estradiol and androstenedione to estrone. Each of these aromatizations involves the oxidation of the C-19 methyl group to allow for the elimination of formic acid concomitant with aromatization. Such conversions are relevant to estrogen tumorogenesis in the development of breast cancer and ovarian cancer in postmenopausal women and gynecomastia in men. Aromatase inhibitors like exemestane (which forms a permanent and deactivating bond with the aromatase enzyme) and anastrozole and letrozole (which compete for the enzyme) have been shown to be more effective than anti-estrogen medications such as tamoxifen likely because they prevent the formation of estradiol.
Laboratory methods
Although practiced on a very small scale compared to the petrochemical routes, diverse methods have been developed for fine chemical syntheses.
Oxidative dehydrogenation
2,3-Dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) is often the reagent of choice. DDQ and an acid catalyst have been used to synthesise a steroid with a phenanthrene core by oxidation accompanied by a double methyl migration. In the process, DDQ is itself reduced to an aromatic hydroquinone product.
Sulfur and selenium are traditionally used in aromatization, the leaving group being hydrogen sulfide.
Soluble transition metal complexes can induce oxidative aromatization concomitant with complexation. α-Phellandrene (2-methyl-5-iso-propyl-1,3-cyclohexadiene) is oxidised to p-iso-propyltoluene with the reduction of ruthenium trichloride.
Oxidative dehydrogenation of dihydropyridine results in aromatization, giving pyridine.
Dehydration
Non-aromatic rings can be aromatized in many ways. Dehydration allows the Semmler-Wolff reaction
of 2-cyclohexenone oxime to aniline under acidic conditions.
Tautomerization
The isomerization of cyclohexadienones gives the aromatic tautomer phenol. Isomerization of 1,4-naphthalenediol at 200 °C produces a 2:1 mixture with its keto form, 1,4-dioxotetralin.
Hydride and proton abstraction
Classically, aromatization reactions involve changing the C:H ratio of a substrate. When applied to cyclopentadiene, proton removal gives the aromatic conjugate base cyclopentadienyl anion, isolable as sodium cyclopentadienide:
2 Na + 2 C₅H₆ → 2 NaC₅H₅ + H₂
Aromatization can entail removal of hydride. Tropylium, arises by the aromatization reaction of cycloheptatriene with hydride acceptors.
+ → + +
From acyclic precursors
The aromatization of acyclic precursors is rarer in organic synthesis, although it is a significant component of the BTX production in refineries.
Among acyclic precursors, alkynes are relatively prone to aromatizations since they are partially dehydrogenated. The Bergman cyclization converts an enediyne to a dehydrobenzene intermediate diradical, which abstracts hydrogen to aromatize. The enediyne moiety can be included within an existing ring, allowing access to a bicyclic system under mild conditions as a consequence of the ring strain in the reactant. Cyclodeca-3-en-1,5-diyne reacts with 1,3-cyclohexadiene to produce benzene and tetralin at 37 °C, the reaction being highly favorable owing to the formation of two new aromatic rings:
See also
Aromatase
Aromatic hydrocarbon
References
Hydrogen
Oil refining
Organic redox reactions | Aromatization | [
"Chemistry"
] | 1,268 | [
"Organic redox reactions",
"Petroleum technology",
"Oil refining",
"Organic reactions"
] |
2,199,238 | https://en.wikipedia.org/wiki/Saxifraga%20%C3%97%20urbium | Saxifraga × urbium, London pride, is an evergreen perennial garden flowering plant. Alternative names for it include St. Patrick's cabbage, whimsey, prattling Parnell, and look up and kiss me. Before 1700 the “London pride” appellation was given to the Sweet William (Dianthus barbatus).
In 1846, Theresa Cornwallis West made a journey to Ireland. Near Dunloe in County Kerry "heareabouts grew quantities of our London Pride, and upon my expressing a wish for some roots to carry home, Sullivan [the driver] sprang down and tore up a large tuft. 'Ah, then,' said [our guide Spillane], 'that's too much entirely; why wouldn't ye leave some for the next comer?'" (A Summer Visit to the West of Ireland in 1846, p. 99).
Taxonomy
The true London pride is a hybrid between Saxifraga umbrosa, native to the Spanish Pyrenees, and Saxifraga spathularis (the plant to which the name St Patrick's cabbage more correctly belongs, from western Ireland). The hybrid has been known at least since the 17th century.
The name is sometimes applied to any of several closely related plants of the saxifrage genus. The section Gymnopera is collectively referred to as "London Pride saxifrages", and others of them have "London pride" in their common names, for example the lesser London pride, S. cuneifolia, and the miniature London pride, S. umbrosa var. primuloides.
Description
London pride is tolerant of dry, shady conditions. It grows low, providing rapid ground cover without being aggressively invasive, and in late spring produces a mass of small pale pink flowers carried on succulent stems above the leaf rosettes. It will grow well in neglected or unfavourable urban spaces where few other flowers flourish, and is a common garden escapee.
This plant has gained the Royal Horticultural Society's Award of Garden Merit.
Symbolism
Bishop Walsham How (1823–1897) wrote a poem to the flower rebuking it for having the sin of pride. When told the flower had the name because Londoners were proud of it he wrote another poem apologising to it.
Tradition holds that Saxifraga × urbium rapidly colonised the bombed sites left by the London Blitz of the early 1940s. As such it is symbolic of the resilience of London and ordinary Londoners, and of the futility of seeking to bomb them into submission. A song by Noël Coward, celebrating London and the flower, achieved great popularity during the World War II years.
In the language of flowers, London Pride is held to stand for frivolity, and its day is 27 July.
References
External links
Words of the song by Noel Coward
Garden plants of Europe
Hybrid plants
urbium
United Kingdom home front during World War II | Saxifraga × urbium | [
"Biology"
] | 608 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
2,199,445 | https://en.wikipedia.org/wiki/Gravity%20train | A gravity train is a theoretical means of transportation for purposes of commuting between two points on the surface of a sphere, by following a straight tunnel connecting the two points through the interior of the sphere.
In a large body such as a planet, this train could be left to accelerate using just the force of gravity, since during the first half of the trip (from the point of departure until the middle), the downward pull towards the center of gravity would speed it towards the destination. During the second half of the trip, gravity would decelerate it, but, ignoring the effects of friction, the deceleration would exactly mirror the earlier acceleration, so the train's speed would reach zero at approximately the moment it arrived at its destination.
Origin of the concept
In the 17th century, British scientist Robert Hooke presented the idea of an object accelerating inside a planet in a letter to Isaac Newton. A gravity train project was seriously presented to the French Academy of Sciences in the 19th century. The same idea was proposed, without calculation, by Lewis Carroll in 1893 in Sylvie and Bruno Concluded. The idea was rediscovered in the 1960s when physicist Paul Cooper published a paper in the American Journal of Physics suggesting that gravity trains be considered for a future transportation project.
Mathematical considerations
Under the assumption of a spherical planet with uniform density, and ignoring relativistic effects as well as friction, a gravity train has the following properties:
The duration of a trip depends only on the density of the planet and the gravitational constant, but not on the diameter of the planet.
The maximum speed is reached at the middle point of the trajectory.
For gravity trains between points which are not the antipodes of each other, the following hold:
The shortest time tunnel through a homogeneous earth is a hypocycloid; in the special case of two antipodal points, the hypocycloid degenerates to a straight line.
All straight-line gravity trains on a given planet take exactly the same amount of time to complete a journey (that is, no matter where on the surface the two endpoints of its trajectory are located).
On the planet Earth specifically, since a gravity train's movement is the projection of a very-low-orbit satellite's movement onto a line, it has the following parameters:
The travel time equals 2530.30 seconds (nearly 42.2 minutes, half the period of a low Earth orbit satellite), assuming Earth were a perfect sphere of uniform density.
By taking into account the realistic density distribution inside the Earth, as known from the preliminary reference Earth model, the expected fall-through time is reduced from 42 to 38 minutes.
To put some numbers in perspective, the deepest current bore hole is the Kola Superdeep Borehole, with a true depth of 12,262 metres; covering the distance between London and Paris (350 km) via a hypocycloidal path would require a hole 111,408 metres deep. Not only is such a depth nine times as great, but a tunnel at that depth would also have to pass through the Earth's mantle.
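As a rough numeric check of these figures, the sketch below assumes a spherical Earth of radius 6371 km and uses the standard rolling-circle result that a hypocycloid tunnel between surface points a surface distance s apart reaches a maximum depth of s/π; the variable names and the 350 km figure are the example's own assumptions.

```python
# Check the quoted depths for a London-Paris gravity tunnel, assuming a
# spherical Earth. For the time-optimal hypocycloid, max depth = s / pi;
# a straight chord between the same points stays far shallower.
import math

R = 6371.0e3          # Earth radius, m (assumed reference value)
s = 350.0e3           # London-Paris surface distance, m

depth_hypocycloid = s / math.pi                 # ~111,408 m, as quoted above
half_angle = s / (2 * R)                        # half the subtended central angle
depth_chord = R * (1 - math.cos(half_angle))    # sagitta of the straight chord

print(f"hypocycloid max depth: {depth_hypocycloid:,.0f} m")
print(f"straight-chord depth : {depth_chord:,.0f} m")   # only ~2,400 m
```

Running this reproduces the 111,408 m figure and shows that a straight (non-time-optimal) London-Paris tunnel would only need to be about 2.4 km deep.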
Mathematical derivation
Using the approximations that the Earth is perfectly spherical and of uniform density ρ, and the fact that within a uniform hollow sphere there is no gravity, the gravitational acceleration experienced by a body within the Earth is proportional to the ratio r/R of the distance r from the center to the Earth's radius R. This is because being underground at distance r from the center is like being on the surface of a planet of radius r, sitting within a hollow spherical shell which contributes nothing.
On the surface, r = R, so the gravitational acceleration is g(R) = g. Hence, the gravitational acceleration at radius r is g(r) = (r/R)g.
Diametric path to antipodes
In the case of a straight line through the center of the Earth, the acceleration of the body is equal to that of gravity: it is falling freely straight down. Writing r(t) for its distance from the center, we start falling at the surface, so at time t = 0 (treating acceleration and velocity as positive downwards): r(0) = R, dr/dt(0) = 0.
Since the downward acceleration at radius r is (r/R)g, the equation of motion is
d²r/dt² = −(g/R)r = −ω²r,
where ω² = g/R. This class of problems, where there is a restoring force proportional to the displacement away from zero, has general solutions of the form r(t) = A cos(ωt) + B sin(ωt), and describes simple harmonic motion such as in a spring or pendulum.
In this case the initial conditions give A = R and B = 0, so that r(t) = R cos(ωt): we begin at the surface at time zero, and oscillate back and forth forever.
The travel time to the antipodes is half of one cycle of this oscillator, that is, the time for the argument ωt to sweep out π radians. Using the approximations g ≈ 9.81 m/s² and R ≈ 6371 km, that time is T = π/ω = π√(R/g) ≈ 2530 s, about 42 minutes.
Straight path between two arbitrary points
For the more general case of the straight line path between any two points on the surface of a sphere we calculate the acceleration of the body as it moves frictionlessly along its straight path.
The body travels along AOB, O being the midpoint of the path and the closest point to the center of the Earth on this path. At distance x along this path from O, the force of gravity depends on the distance r to the center of the Earth as above. Using the shorthand b for the length OC, where C is the center of the Earth, r² = b² + x².
The resulting acceleration on the body, because it is on a frictionless inclined surface, is the component of g(r) along the path:
a = −(g r/R) sin θ.
But sin θ = x/r, so r sin θ is x, and substituting:
a = −(g/R)x,
which is exactly the same, for this new x measured along AOB away from O, as for the r in the diametric case along ACD. So the remaining analysis is the same; accommodating the initial condition that the maximal x is √(R² − b²), the complete equation of motion is
x(t) = √(R² − b²) cos(ωt).
The time constant ω = √(g/R) is the same as in the diametric case, so the journey time is still 42 minutes; it is just that all the distances and speeds are scaled by the constant √(R² − b²)/R.
Dependence on radius of planet
The time constant ω depends only on g/R, so if we expand that using g = GM/R² = (4π/3)GρR, we get
ω² = g/R = (4π/3)Gρ,
and the half-period is T = π/ω = √(3π/(4Gρ)), which depends only on the gravitational constant G and ρ, the density of the planet. The size of the planet is immaterial; the journey time is the same if the density is the same.
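A minimal numeric check of the two equivalent forms of the half-period; the constants below are standard reference values assumed for the example, not figures given in this article:

```python
# Gravity-train travel time for a uniform-density spherical Earth,
# computed two ways: from R and g, and from the density alone.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6       # mean Earth radius, m
g = 9.81          # surface gravity, m/s^2
rho = 5515.0      # mean Earth density, kg/m^3

T_from_g = math.pi * math.sqrt(R / g)                 # T = pi * sqrt(R/g)
T_from_rho = math.sqrt(3 * math.pi / (4 * G * rho))   # T = sqrt(3*pi/(4*G*rho))

print(f"T from R and g : {T_from_g:7.1f} s = {T_from_g/60:.1f} min")
print(f"T from density : {T_from_rho:7.1f} s = {T_from_rho/60:.1f} min")
```

Both forms give about 2530 s, the 42.2 minutes quoted earlier, confirming that the planet's size drops out.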
In fiction
In the 2012 movie Total Recall, a gravity train called "The Fall" goes through the center of the Earth to commute between Western Europe and Australia.
See also
Brachistochrone curve
Funicular
Hyperloop
Rail energy storage
Schuler tuning
Colonization of the asteroid belt
Space elevator
References
Description of the concept Gravity train and mathematical solution (Alexandre Eremenko web page at Purdue University).
External links
A simulation of this motion; includes tunnels that do not pass through the center of the earth. Also shows a satellite with same period.
The Gravity Express
To Everywhere in 42 Minutes
Mechanics
Fictional technology
Hypothetical technology
High-speed rail
Train
Differential equations
Travel to the Earth's center | Gravity train | [
"Physics",
"Mathematics",
"Engineering"
] | 1,346 | [
"Mathematical objects",
"Differential equations",
"Equations",
"Mechanics",
"Mechanical engineering"
] |
2,199,482 | https://en.wikipedia.org/wiki/Methoxsalen | Methoxsalen (or Xanthotoxin, 8-methoxypsoralen) sold under the brand name Oxsoralen among others, is a medication used to treat psoriasis, eczema, vitiligo, and some cutaneous lymphomas in conjunction with exposing the skin to ultraviolet (UVA) light from lamps or sunlight. Methoxsalen modifies the way skin cells receive the UVA radiation, allegedly clearing up the disease. Levels of individual patient PUVA exposure were originally determined using the Fitzpatrick scale. The scale was developed after patients demonstrated symptoms of phototoxicity after oral ingestion of methoxsalen followed by PUVA therapy.
Chemically, methoxsalen is a derivative of psoralen and belongs to a class of organic natural molecules known as furanocoumarins. They consist of coumarin annulated with furan. It can also be injected and used topically.
Natural sources
In 1947, methoxsalen was isolated (under the name "ammoidin") from the plant Ammi majus, bishop's weed.
In 1970, Nielsen extracted 8-methoxypsoralen from four species of the genus Heracleum in the carrot family Apiaceae, including Heracleum mantegazzianum and Heracleum sphondylium. An additional 32 species of the genus Heracleum were found to contain 5-methoxypsoralen (bergapten) or other furanocoumarins.
Biosynthesis
The biosynthetic pathway is a combination of the shikimate pathway, which produces umbelliferone, and the mevalonate pathway.
Synthesis of umbelliferone
Umbelliferone is a phenylpropanoid and as such is synthesized from L-phenylalanine, which in turn is produced via the shikimate pathway. Phenylalanine is deaminated by phenylalanine ammonia lyase to give cinnamic acid, followed by hydroxylation by cinnamate 4-hydroxylase to yield 4-coumaric acid. The 4-coumaric acid is again hydroxylated by cinnamate/coumarate 2-hydroxylase to yield 2,4-dihydroxy-cinnamic acid (umbellic acid), followed by a bond rotation of the unsaturated bond adjacent to the carboxylic acid group. Finally, an intramolecular attack by the C2' hydroxyl group on the carboxylic acid group closes the ring and forms the lactone umbelliferone.
Synthesis of methoxsalen
The biosynthetic route then continues with the activation of dimethylallyl pyrophosphate (DMAPP), produced via the mevalonate pathway, to form a carbocation via cleavage of the diphosphate. Once activated, the enzyme umbelliferone 6-prenyltransferase catalyzes a C-alkylation between DMAPP and umbelliferone at the activated position ortho to the phenol, yielding demethylsuberosin. This is then followed by a hydroxylation catalyzed by the enzyme marmesin synthase to yield marmesin. Another hydroxylation is catalyzed by psoralen synthase to yield psoralen. A third hydroxylation by the enzyme psoralen 8-monooxygenase yields xanthotoxol, which is followed by a methylation via the enzyme xanthotoxol O-methyltransferase and S-adenosyl methionine to yield methoxsalen.
Risks and side effects
Patients with high blood pressure or a history of liver problems are at risk for inflammation and irreparable damage to both liver and skin. The eyes must be protected from UVA radiation. Side effects include nausea, headaches, dizziness, and in rare cases insomnia.
Methoxsalen has also been classified as an IARC Group 1 carcinogen (known to cause cancer), but it is carcinogenic only in combination with UVA radiation.
Society and culture
Author John Howard Griffin (1920–1980) used the chemical to darken his skin in order to investigate racial segregation in the American South. He wrote the book Black Like Me (1961) about his experiences.
References
CYP1A2 inhibitors
CYP3A4 inhibitors
Photosensitizing agents
IARC Group 1 carcinogens
Furanocoumarins
O-methylated coumarins
Catechol ethers
General cytochrome P450 inhibitors
Orphan drugs
Plant toxins | Methoxsalen | [
"Chemistry"
] | 974 | [
"Chemical ecology",
"Plant toxins"
] |
2,200,251 | https://en.wikipedia.org/wiki/Nag%20champa | Nag champa is a commercial fragrance of Indian origin. It is made from a combination of sandalwood and either champak or frangipani. When frangipani is used, the fragrance is usually referred to simply as champa.
Nag champa is commonly used in incense, soap, perfume oil, candles, wax melts, and personal toiletries. It is a popular and recognizable incense fragrance.
Composition
A number of flower species in India are known as champa or champak:
Magnolia champaca, formerly classified as Michelia champaca (swarna champa or yellow champa)
Plumeria rubra (frangipani)
Mesua ferrea (nagkeshar or nagchampa)
Of these, Magnolia champaca is mostly used to prepare the nag champa scent, while Plumeria or Mesua ferrea may be used for scents termed champa and sometimes nag champa.
Nag champa perfume ingredients vary with the manufacturer, though generally they include sandalwood and magnolia, which, as the plant is related to star anise, gives the scent a little spice. Other ingredients will depend on the finished product. Perfume-dipped incenses and soaps would use essential oils or scents, while masala incenses would use finely ground fragrant ingredients as well as essential oils.
References
External links
Incense material
Incense in India
Perfume ingredients | Nag champa | [
"Physics"
] | 286 | [
"Incense material",
"Materials",
"Matter"
] |
2,200,436 | https://en.wikipedia.org/wiki/Front%20velocity | In physics, front velocity is the speed at which the first rise of a pulse above zero moves forward.
In mathematics, it is used to describe the velocity of a propagating front in the solution of a hyperbolic partial differential equation.
Various velocities
Associated with the propagation of a disturbance are several different velocities. For definiteness, consider an amplitude-modulated electromagnetic carrier wave. The phase velocity is the speed of the underlying carrier wave. The group velocity is the speed of the modulation or envelope. Initially it was thought that the group velocity coincided with the speed at which information traveled. However, it turns out that this speed can exceed the speed of light in some circumstances, causing confusion by an apparent conflict with the theory of relativity. That observation led to consideration of what constitutes a signal.
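As a small worked example of these distinctions, the sketch below uses the cold-plasma dispersion relation ω² = ωp² + c²k², an assumed illustrative model rather than one named in this article: above the cutoff frequency, the phase velocity exceeds c while the group velocity stays below it, which is precisely the situation that motivated the notion of a front.

```python
# Phase vs. group velocity for the cold-plasma dispersion w^2 = wp^2 + c^2 k^2.
# For this relation v_phase > c, v_group < c, and v_phase * v_group = c^2.
import math

c = 2.998e8               # speed of light, m/s
wp = 2 * math.pi * 9e9    # plasma (cutoff) frequency, rad/s (assumed value)
w = 2 * math.pi * 12e9    # carrier frequency above cutoff, rad/s (assumed)

k = math.sqrt(w**2 - wp**2) / c   # wavenumber from the dispersion relation
v_phase = w / k                   # ~1.51 c: faster than light, carries no news
v_group = c**2 * k / w            # ~0.66 c: dw/dk for this dispersion relation

print(f"v_phase = {v_phase/c:.3f} c, v_group = {v_group/c:.3f} c")
print(f"product check: v_phase*v_group/c^2 = {v_phase * v_group / c**2:.3f}")
```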
By definition, a signal involves new information or an element of 'surprise' that cannot be predicted from the wave motion at an earlier time. One possible form for a signal (at the point of emission) is:
where u(t) is the Heaviside step function. Using such a form for a signal, it can be shown, subject to the (expected) condition that the refractive index of any medium tends to one as the frequency tends to infinity, that the wave discontinuity, called the front, propagates at a speed less than or equal to the speed of light c in any medium. In fact, the earliest appearance of the front of an electromagnetic disturbance (the precursor) travels at the front velocity, which is c, no matter what the medium. However, the process always starts from zero amplitude and builds up.
References
Wave mechanics | Front velocity | [
"Physics"
] | 338 | [
"Physical phenomena",
"Classical mechanics stubs",
"Classical mechanics",
"Waves",
"Wave mechanics"
] |
2,200,492 | https://en.wikipedia.org/wiki/Sephadex | Sephadex is a cross-linked dextran gel used for gel filtration. It was launched by Pharmacia in 1959, after development work by Jerker Porath and Per Flodin. The name is derived from separation Pharmacia dextran. It is normally manufactured in a bead form and most commonly used for gel filtration columns. By varying the degree of cross-linking, the fractionation properties of the gel can be altered.
These highly specialized gel filtration and chromatographic media are composed of macroscopic beads synthetically derived from the polysaccharide dextran. The organic chains are cross-linked to give a three-dimensional network having functional ionic groups attached by ether linkages to glucose units of the polysaccharide chains.
Available forms include anion and cation exchangers, as well as gel filtration resins, with varying degrees of porosity; bead sizes fall in discrete ranges between 20 and 300 μm.
Sephadex is also used for ion-exchange chromatography.
Sephadex is crosslinked with epichlorohydrin.
Applications
Sephadex is used to separate molecules by molecular weight. Sephadex is a faster alternative to dialysis (de-salting), requiring a low dilution factor (as little as 1.4:1), with high activity recoveries. Sephadex is also used for buffer exchange and the removal of small molecules during the preparation of large biomolecules, such as ampholytes, detergents, radioactive or fluorescent labels, and phenol (during DNA purification).
A special hydroxypropylated form of Sephadex resin, named Sephadex LH-20, is used for the separation and purification of small organic molecules such as steroids, terpenoids, and lipids. An example of its use is the purification of cholesterol.
Fractionation
In exclusion chromatography, each Sephadex grade is characterized by its fractionation range for globular proteins and for dextrans (in Da).
Ion-exchange chromatography.
Sephadex ion exchangers are produced by introducing functional groups onto the cross-linked dextran matrix. These groups are attached to glucose units in the matrix by stable ether linkages.
See also
PEGylation
Size exclusion chromatography
Superose
Sepharose
References
Biochemistry methods
Chromatography
Swedish brands | Sephadex | [
"Chemistry",
"Biology"
] | 498 | [
"Biochemistry methods",
"Biochemistry",
"Chromatography",
"Separation processes"
] |
2,200,530 | https://en.wikipedia.org/wiki/Sorghum%20%C3%97%20drummondii | Sorghum × drummondii (Sudan grass), is a hybrid-derived species of grass raised for forage and grain, native to tropical and subtropical regions of Eastern Africa. It may also be known as Sorghum bicolor × Sorghum arundinaceum after its parents. Some authorities consider all three species to be subspecies under S. bicolor.
The plant is cultivated in Southern Europe, South America, Central America, North America and Southern Asia, for forage or as a cover crop. When treated as a weed, it is known as shattercane. It is distinguished from the grain sorghum (Sorghum bicolor) by the grain (caryopsis) not being exposed at maturity.
References
Biogas substrates
drummondii
Tropical agriculture
Hybrid plants | Sorghum × drummondii | [
"Biology"
] | 158 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
2,200,715 | https://en.wikipedia.org/wiki/Soil%20crust | Soil crusts are soil surface layers that are distinct from the rest of the bulk soil, often hardened with a platy surface. Depending on the manner of formation, soil crusts can be biological or physical. Biological soil crusts are formed by communities of microorganisms that live on the soil surface whereas physical crusts are formed by physical impact such as that of raindrops.
Biological soil crusts
Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles, including carbon fixation, nitrogen fixation, and soil stabilization; they alter soil albedo and water relations, and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing, and other disturbances, and can require long time periods to recover their composition and function. Biological soil crusts are also known as cryptogamic, microbiotic, microphytic, or cryptobiotic soils.
Physical soil crusts
Physical (as opposed to biological) soil crusts result from raindrop or trampling impacts. They are often hardened relative to uncrusted soil due to the accumulation of salts and silica. These can coexist with biological soil crusts, but have a different ecological impact due to their differences in formation and composition. Physical soil crusts often reduce water infiltration, can inhibit plant establishment, and when disrupted can erode rapidly.
References
External links
Cryptobiotic soils by the USGS
Crust, soil
Soil physics
Lichenology | Soil crust | [
"Physics",
"Biology"
] | 345 | [
"Lichenology",
"Applied and interdisciplinary physics",
"Soil biology",
"Soil physics"
] |
2,200,848 | https://en.wikipedia.org/wiki/The%20PracTeX%20Journal | The PracTeX Journal, or simply PracTeX, also known as TPJ, was an online journal focussing on practical use of the TeX typesetting system. The first issue appeared in March 2005. It was published by the TeX Users Group and intended to be a complement to their primary print journal, TUGboat. The PracTeX Journal was last published in October 2012.
Topics covered in PracTeX included:
publishing projects or activities accomplished through the use of TeX
problems that were resolved through the use of TeX or problems with TeX that were resolved
how to use certain LaTeX packages
questions & answers
introductions for beginners
The editorial board included many long-time and well-known TeX developers, including Lance Carnes, Arthur Ogawa, and Hans Hagen.
References
External links
Journal home page
Online magazines published in the United States
Defunct computer magazines published in the United States
Magazines established in 2005
Magazines disestablished in 2012
TeX
Typesetting | The PracTeX Journal | [
"Mathematics",
"Technology"
] | 192 | [
"Mathematical markup languages",
"Computer magazine stubs",
"Digital typography stubs",
"Computing stubs",
"TeX"
] |
2,200,949 | https://en.wikipedia.org/wiki/Uranium-233 | Uranium-233 ( or U-233) is a fissile isotope of uranium that is bred from thorium-232 as part of the thorium fuel cycle. Uranium-233 was investigated for use in nuclear weapons and as a reactor fuel. It has been used successfully in experimental nuclear reactors and has been proposed for much wider use as a nuclear fuel. It has a half-life of 160,000 years.
Uranium-233 is produced by the neutron irradiation of thorium-232. When thorium-232 absorbs a neutron, it becomes thorium-233, which has a half-life of only 22 minutes. Thorium-233 decays into protactinium-233 through beta decay. Protactinium-233 has a half-life of 27 days and beta decays into uranium-233; some proposed molten salt reactor designs attempt to physically isolate the protactinium from further neutron capture before beta decay can occur, to maintain the neutron economy (if it misses the 233U window, the next fissile target is 235U, meaning a total of 4 neutrons needed to trigger fission).
233U usually fissions on neutron absorption, but sometimes retains the neutron, becoming uranium-234. For both thermal neutrons and fast neutrons, the capture-to-fission ratio of uranium-233 is smaller than those of the other two major fissile fuels, uranium-235 and plutonium-239.
Fissile material
In 1946, the public first became informed of uranium-233 bred from thorium as "a third available source of nuclear energy and atom bombs" (in addition to uranium-235 and plutonium-239), following a United Nations report and a speech by Glenn T. Seaborg.
The United States produced, over the course of the Cold War, approximately 2 metric tons of uranium-233, in varying levels of chemical and isotopic purity. These were produced at the Hanford Site and Savannah River Site in reactors that were designed for the production of plutonium-239.
Nuclear fuel
Uranium-233 has been used as a fuel in several different reactor types, and is proposed as a fuel for several new designs (see thorium fuel cycle), all of which breed it from thorium. Uranium-233 can be bred in either fast reactors or thermal reactors, unlike the uranium-238-based fuel cycles which require the superior neutron economy of a fast reactor in order to breed plutonium, that is, to produce more fissile material than is consumed.
The long-term strategy of the nuclear power program of India, which has substantial thorium reserves, is to move to a nuclear program breeding uranium-233 from thorium feedstock.
Energy released
The fission of one atom of uranium-233 generates 197.9 MeV = 3.171·10−11 J (i.e. 19.09 TJ/mol = 81.95 TJ/kg = 22,764 MWh/kg), which is about 1.8 million times more energy than from the same mass of diesel.
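These unit conversions can be reproduced in a few lines of Python; the physical constants are standard values, and the diesel heating value of 45.6 MJ/kg is an assumed typical figure used only for the rough comparison:

```python
# Reproduce the energy figures above from the quoted 197.9 MeV per fission.
MEV_TO_J = 1.602176634e-13   # joules per MeV
N_A = 6.02214076e23          # Avogadro constant, 1/mol
M_U233 = 233.0e-3            # molar mass of U-233, kg/mol (approximate)

E_atom = 197.9 * MEV_TO_J    # ~3.171e-11 J per fission
E_mol = E_atom * N_A         # ~1.909e13 J/mol = 19.09 TJ/mol
E_kg = E_mol / M_U233        # ~8.195e13 J/kg = 81.95 TJ/kg
E_MWh = E_kg / 3.6e9         # ~22,764 MWh/kg

diesel = 45.6e6              # J per kg, assumed typical heating value
print(f"{E_atom:.3e} J/fission, {E_mol/1e12:.2f} TJ/mol, "
      f"{E_kg/1e12:.2f} TJ/kg, {E_MWh:,.0f} MWh/kg")
print(f"ratio to diesel: {E_kg/diesel:.2e}")   # ~1.8e6
```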
Weapon material
As a potential weapon material, pure uranium-233 is more similar to plutonium-239 than uranium-235 in terms of source (bred vs natural), half-life and critical mass (both 4–5 kg in beryllium-reflected sphere). Unlike reactor-bred plutonium, it has a very low spontaneous fission rate, which combined with its low critical mass made it initially attractive for compact gun-type weapons, such as small-diameter artillery shells.
A declassified 1966 memo from the US nuclear program stated that uranium-233 has been shown to be highly satisfactory as a weapons material, though it was only superior to plutonium in rare circumstances. It was claimed that if the existing weapons were based on uranium-233 instead of plutonium-239, Livermore would not be interested in switching to plutonium.
The co-presence of uranium-232 can complicate the manufacture and use of uranium-233, though the Livermore memo indicates a likelihood that this complication can be worked around.
While it is thus possible to use uranium-233 as the fissile material of a nuclear weapon, speculation aside, there is scant publicly available information on this isotope actually having been weaponized:
The United States detonated an experimental device in the 1955 Operation Teapot "MET" test which used a plutonium/233U composite pit; its design was based on the plutonium/235U pit from the TX-7E, a prototype Mark 7 nuclear bomb design used in the 1951 Operation Buster-Jangle "Easy" test. Although not an outright fizzle, MET's actual yield of 22 kilotons was sufficiently below the predicted 33 kt that the information gathered was of limited value.
The Soviet Union detonated its first hydrogen bomb the same year, the RDS-37, which contained a fissile core of 235U and 233U.
In 1998, as part of its Pokhran-II tests, India detonated an experimental 233U device of low-yield (0.2 kt) called Shakti V.
The B Reactor and others at the Hanford Site optimized for the production of weapons-grade material have been used to manufacture 233U.
Overall the United States is thought to have produced two tons of 233U, of various levels of purity, some with 232U impurity content as low as 6 ppm.
232U impurity
Production of 233U (through the irradiation of thorium-232) invariably produces small amounts of uranium-232 as an impurity, because of parasitic (n,2n) reactions on uranium-233 itself, or on protactinium-233, or on thorium-232:
232Th (n,γ) → 233Th (β−) → 233Pa (β−) → 233U (n,2n) → 232U
232Th (n,γ) → 233Th (β−) → 233Pa (n,2n) → 232Pa (β−)→ 232U
232Th (n,2n) → 231Th (β−) → 231Pa (n,γ) → 232Pa (β−) → 232U
Another channel involves neutron capture reaction on small amounts of thorium-230, which is a tiny fraction of natural thorium present due to the decay of uranium-238:
230Th (n,γ) → 231Th (β−) → 231Pa (n,γ) → 232Pa (β−) → 232U
The decay chain of 232U quickly yields strong gamma radiation emitters. Thallium-208 is the strongest of these, at 2.6 MeV:
232U (α, 68.9 y)
228Th (α, 1.9 y)
224Ra (α, 5.44 MeV, 3.6 d, with a γ of 0.24 MeV)
220Rn (α, 6.29 MeV, 56 s, with a γ of 0.54 MeV)
216Po (α, 0.15 s)
212Pb (β−, 10.64 h)
212Bi (α, 61 min, 0.78 MeV)
208Tl (β−, 1.8 MeV, 3 min, with a γ of 2.6 MeV)
208Pb (stable)
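To see why handling is easiest shortly after chemical separation (as noted below), one can integrate the rate-limiting steps of this chain. The sketch below keeps only 232U → 228Th and lumps the fast daughters (seconds to hours) into the 228Th activity, a reasonable approximation on a timescale of years; the initial atom count is an arbitrary assumption, and the 36% branching of 212Bi toward 208Tl is a standard nuclear-data value, not a figure from this article.

```python
# Rough buildup of the 2.6 MeV 208Tl gamma activity in freshly separated
# material containing 232U, using the two-member Bateman solution for
# 232U (68.9 y) -> 228Th (1.9 y); short-lived daughters follow 228Th.
import math

LN2 = math.log(2.0)
YEAR = 365.25 * 86400
lam_U = LN2 / (68.9 * YEAR)    # 232U decay constant, 1/s
lam_Th = LN2 / (1.9 * YEAR)    # 228Th decay constant, 1/s

N_U0 = 1.0e18   # initial 232U atoms (e.g. a ppm-level impurity), assumed

for years in (0.1, 0.5, 1, 2, 5, 10, 20):
    t = years * YEAR
    # Bateman solution for the daughter of a two-member decay chain
    N_Th = N_U0 * lam_U / (lam_Th - lam_U) * (
        math.exp(-lam_U * t) - math.exp(-lam_Th * t))
    A_Tl = 0.36 * lam_Th * N_Th   # ~36% of 212Bi decays reach 208Tl, Bq
    print(f"{years:5.1f} y: 208Tl gamma activity ~ {A_Tl:.3e} Bq")
```

The activity is small at first and approaches equilibrium only after several 228Th half-lives, which is why a freshly separated batch offers a brief window of easier handling.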
This makes manual handling in a glove box with only light shielding (as commonly done with plutonium) too hazardous, except possibly in a short period immediately following chemical separation of the uranium from its decay products, and instead requires complex remote manipulation for fuel fabrication.
The hazards are significant even at 5 parts per million. Implosion nuclear weapons require 232U levels below 50 ppm, above which the 233U is considered "low grade"; compare standard weapon-grade plutonium, which requires a 240Pu content of no more than 6.5% (65,000 ppm), while the analogous 238Pu was present at levels of 0.5% (5,000 ppm) or less. Gun-type fission weapons additionally need low levels (of the order of 1 ppm) of light impurities, to keep neutron generation low.
The production of "clean" 233U, low in 232U, requires a few factors: 1) obtaining a relatively pure 232Th source, low in 230Th (which also transmutes to 232U); 2) moderating the incident neutrons to an energy no higher than 6 MeV (higher-energy neutrons cause the 232Th (n,2n) → 231Th reaction); and 3) removing the thorium sample from the neutron flux before the 233U concentration builds up to too high a level, in order to avoid fissioning the 233U itself (which would produce energetic neutrons).
The Molten-Salt Reactor Experiment (MSRE) used 233U, bred in light water reactors such as Indian Point Energy Center, that was about 220 ppm 232U.
Further information
Thorium, from which 233U is bred, is roughly three to four times more abundant in the Earth's crust than uranium.
The decay chain of 233U itself is part of the neptunium series, the decay chain of its grandparent 237Np.
Uses for uranium-233 include the production of the medical isotopes actinium-225 and bismuth-213 which are among its daughters, low-mass nuclear reactors for space travel applications, use as an isotopic tracer, nuclear weapons research, and reactor fuel research including the thorium fuel cycle.
The radioisotope bismuth-213 is a decay product of uranium-233; it has promise for the treatment of certain types of cancer, including acute myeloid leukemia and cancers of the pancreas, kidneys and other organs.
See also
Breeder reactor
Liquid fluoride thorium reactor
Notes
Actinides
Isotopes of uranium
Fissile materials
Special nuclear materials | Uranium-233 | [
"Chemistry"
] | 2,035 | [
"Explosive chemicals",
"Fissile materials",
"Isotopes of uranium",
"Isotopes"
] |
2,200,954 | https://en.wikipedia.org/wiki/Magnetic%20force%20microscope | Magnetic force microscopy (MFM) is a variety of atomic force microscopy, in which a sharp magnetized tip scans a magnetic sample; the tip-sample magnetic interactions are detected and used to reconstruct the magnetic structure of the sample surface. Many kinds of magnetic interactions are measured by MFM, including magnetic dipole–dipole interaction. MFM scanning often uses non-contact atomic force microscopy (NC-AFM) and is considered to be non-destructive with respect to the test sample. In MFM, the test sample(s) do not need to be electrically conductive to be imaged.
Overview
In MFM measurements, the magnetic force between the test sample and the tip can be expressed as
F = μ0 (m·∇)H,
where m is the magnetic moment of the tip (approximated as a point dipole), H is the magnetic stray field from the sample surface, and μ0 is the magnetic permeability of free space.
Because the stray magnetic field from the sample can affect the magnetic state of the tip, and vice versa, interpretation of the MFM measurement is not straightforward. For instance, the geometry of the tip magnetization must be known for quantitative analysis.
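As an illustration of the point-dipole approximation above, the sketch below evaluates the force as μ0∇(m·H), which for a curl-free stray field is equivalent to μ0(m·∇)H, using central finite differences; the tip and sample moments and the 50 nm lift height are illustrative assumptions only, not values from any cited measurement.

```python
# Point-dipole model of the MFM tip-sample force: the tip moment m_tip sits
# in the stray field H of a sample dipole m_s; F = mu0 * grad(m_tip . H),
# evaluated numerically. All geometry and moment values are assumed.
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, T m / A

def H_dipole(r, m):
    """Stray field H (A/m) of a point dipole m (A m^2) at displacement r (m)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3 * rhat * (m @ rhat) - m) / (4 * np.pi * rn**3)

def force_on_tip(pos, m_tip, m_s, h=1e-10):
    """F = mu0 * grad(m_tip . H), by central differences (N)."""
    F = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        F[i] = MU0 * (m_tip @ H_dipole(pos + dp, m_s)
                      - m_tip @ H_dipole(pos - dp, m_s)) / (2 * h)
    return F

m_s = np.array([0.0, 0.0, 1e-18])      # sample moment, A m^2 (assumed)
m_tip = np.array([0.0, 0.0, 1e-16])    # tip moment, A m^2 (assumed)
tip_pos = np.array([0.0, 0.0, 50e-9])  # 50 nm lift height (assumed)

print(force_on_tip(tip_pos, m_tip, m_s))  # Fz ~ -1e-11 N: ~10 pN, attractive
```

The piconewton-scale result is consistent with the typical forces quoted later in this article.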
A typical resolution of 30 nm can be achieved, and resolutions as fine as 10 to 20 nm are attainable.
Important dates
A boost in the interest to MFM resulted from the following inventions:
Scanning tunneling microscope (STM), 1982: the tunneling current between the tip and sample is used as the signal. Both the tip and sample must be electrically conductive.
Atomic force microscopy (AFM), 1986: forces (atomic/electrostatic) between the tip and sample are sensed from the deflections of a flexible lever (cantilever). The cantilever tip flies above the sample at a typical distance of tens of nanometers.
Magnetic force microscopy (MFM), 1987: derives from AFM. The magnetic forces between the tip and sample are sensed. An image of the magnetic stray field is obtained by scanning the magnetized tip over the sample surface in a raster scan.
MFM components
The main components of an MFM system are:
Piezoelectric scanning
Moves the sample in the x, y, and z directions.
Voltage is applied to separate electrodes for different directions. Typically, a 1 volt potential results in 1 to 10 nm displacement.
Image is put together by slowly scanning sample surface in a raster fashion.
Scan areas range from a few to 200 micrometers.
Imaging times range from a few minutes to 30 minutes.
Restoring force constants on the cantilever range from 0.01 to 100 N/m depending on the material of the cantilever.
Magnetized tip at one end of a flexible lever (cantilever); generally an AFM probe with a magnetic coating.
In the past, tips were made of etched magnetic metals such as nickel.
Nowadays, tips are batch fabricated (tip-cantilever) using a combination of micromachining and photolithography. As a result, smaller tips are possible, and better mechanical control of the tip-cantilever is obtained.
Cantilever: can be made of single-crystalline silicon, silicon dioxide (SiO2), or silicon nitride (Si3N4). The Si3N4 cantilever-tip modules are usually more durable and have smaller restoring force constants (k).
Tips are coated with a thin (< 50 nm) magnetic film (such as Ni or Co), usually of high coercivity, so that the tip magnetic state (or magnetization M) does not change during the imaging.
The tip-cantilever module is driven close to the resonance frequency by a piezoelectric crystal with typical frequencies ranging from 10 kHz to 1 MHz.
Scanning procedure
Often, MFM is operated with the so-called "lift height" method. When the tip scans the surface of a sample at close distances (< 10 nm), not only magnetic forces are sensed, but also atomic and electrostatic forces. The lift height method helps to enhance the magnetic contrast through the following:
First, the topographic profile of each scan line is measured. That is, the tip is brought into a close proximity of the sample to take AFM measurements.
The magnetized tip is then lifted further away from the sample.
On the second pass, the magnetic signal is extracted.
Modes of operation
Static (DC) mode
The stray field from the sample exerts a force on the magnetic tip. The force is detected by measuring the displacement of the cantilever by reflecting a laser beam from it. The cantilever end is deflected either away from or towards the sample surface by a distance Δz = Fz/k (perpendicular to the surface).
Static mode corresponds to measurements of the cantilever deflection. Forces in the range of tens of piconewtons are normally measured.
Dynamic (AC) mode
For small deflections, the tip-cantilever can be modeled as a damped harmonic oscillator with an effective mass (m) in [kg], an ideal spring constant (k) in [N/m], and a damper (D) in [N·s/m].
If an external oscillating force Fz = F0 cos(ωt) is applied to the cantilever, then the tip will be displaced by an amount z. The displacement also oscillates harmonically, but with a phase shift θ between the applied force and the displacement, z(t) = A cos(ωt − θ),
where the amplitude and phase shift are given by:
A = (F0/m)/√((ω0² − ω²)² + (ωω0/Q)²) and tan θ = ωω0/(Q(ω0² − ω²)).
Here the quality factor of resonance, resonance angular frequency, and damping factor are:
Q = ω0 m/D, ω0 = √(k/m), and δ = D/(2m).
Dynamic mode of operation refers to measurements of the shifts in the resonance frequency.
The cantilever is driven to its resonance frequency and frequency shifts are detected.
Assuming small vibration amplitudes (which is generally true in MFM measurements), to a first-order approximation, the resonance frequency can be related to the natural frequency and the force gradient. That is, the shift in the resonance frequency is a result of changes in the spring constant due to the (repelling and attraction) forces acting on the tip.
The change in the natural resonance frequency is given by
Δf ≈ −(f0/(2k)) ∂Fz/∂z,
where f0 is the natural resonance frequency, k the spring constant of the cantilever, and ∂Fz/∂z the gradient of the tip-sample force along z.
For instance, the coordinate system is such that positive z is away from or perpendicular to the sample surface, so that an attractive force would be in the negative direction (F<0), and thus the gradient is positive. Consequently, for attractive forces, the resonance frequency of the cantilever decreases (as described by the equation). The image is encoded in such a way that attractive forces are generally depicted in black color, while repelling forces are coded white.
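A short numeric illustration of the relation Δf ≈ −(f0/(2k)) ∂Fz/∂z, with cantilever parameters chosen from the typical ranges listed earlier in this article (the specific values are assumptions of the example):

```python
# Dynamic-mode frequency shift for an attractive tip-sample interaction.
k = 3.0        # cantilever spring constant, N/m (within the 0.01-100 range)
f0 = 75e3      # natural resonance frequency, Hz (within 10 kHz - 1 MHz)
dF_dz = 1e-5   # tip-sample force gradient, N/m (attractive: dF/dz > 0)

delta_f = -(f0 / (2 * k)) * dF_dz
print(f"frequency shift: {delta_f:.3f} Hz")  # negative shift, ~ -0.125 Hz
```

The negative shift for an attractive force is exactly the sign convention described above, and such sub-hertz shifts are readily detectable at typical cantilever quality factors.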
Image formation
Calculating forces acting on magnetic tips
Theoretically, the magneto-static energy (U) of the tip-sample system can be calculated in one of two ways:
One can either compute the magnetization M of the tip in the presence of the applied magnetic field H of the sample, or compute the magnetization of the sample in the presence of the applied magnetic field of the tip (whichever is easier).
Then, integrate the (dot) product of the magnetization and stray field over the interaction volume V as
U = −μ0 ∫V M·H dV,
and compute the gradient of the energy over distance to obtain the force F = −∇U. Assuming that the cantilever deflects along the z-axis, and the tip is magnetized along a certain direction (e.g. the z-axis), the equations can be simplified to
Fz = μ0 ∫V Mz (∂Hz/∂z) dV.
Since the tip is magnetized along a specific direction, it will be sensitive to the component of the magnetic stray field of the sample which is aligned to the same direction.
Imaging samples
The MFM can be used to image various magnetic structures including domain walls (Bloch and Néel), closure domains, recorded magnetic bits, etc. Furthermore, domain wall motion can be studied in an external magnetic field. MFM images of various materials can be seen in books and journal publications on thin films, nanoparticles, nanowires, permalloy disks, and recording media.
Advantages
The popularity of MFM originates from several reasons, which include:
The sample does not need to be electrically conductive.
Measurement can be performed at ambient temperature, in ultra high vacuum (UHV), in liquid environment, at different temperatures, and in the presence of variable external magnetic fields.
Measurement is nondestructive to the crystal lattice or the test sample's material matrix.
Long-range magnetic interactions are not sensitive to surface contamination.
No special surface preparation or coating is required.
Deposition of thin non-magnetic layers on the sample does not alter the results.
Detectable magnetic field intensity, H, is in the range of 10 A/m
Detectable magnetic field, B, is in the range of 0.1 gauss (10 microteslas).
Typical measured forces are as low as 10−14 N, with the spatial resolutions as low as 20 nm.
MFM can be combined with other scanning methods like STM.
Limitations
There are some shortcomings or difficulties when working with an MFM:
The recorded image depends on the type of the tip and its magnetic coating, due to tip-sample interactions.
The magnetic fields of the tip and sample can change each other's magnetization, M, which can result in nonlinear interactions. This hinders image interpretation.
The lateral scanning range is relatively short (on the order of hundreds of micrometers).
The scanning (lift) height affects the image.
Housing of the MFM system is important for shielding against electromagnetic noise (Faraday cage), acoustic noise (anti-vibration tables), air flow (air isolation), and static charge on the sample.
Advances
There have been several attempts to overcome the limitations mentioned above and to improve the resolution limits of MFM. For example, the limitation from air flow has been overcome by MFMs that operate in vacuum. Tip-sample effects have been understood and addressed by several approaches. Wu et al. have used a tip with antiferromagnetically coupled magnetic layers in an attempt to produce a dipole only at the apex.
References
External links
Magnetic measurements application notes
Ultra MFM project
Scanning probe microscopy | Magnetic force microscope | [
"Chemistry",
"Materials_science"
] | 2,074 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
2,201,217 | https://en.wikipedia.org/wiki/Magic%20item | A magic item is any object that has magical powers inherent in it. These may act on their own or be the tools of the person or being whose hands they fall into. Magic items are commonly found in both folklore and modern fantasy. Their fictional appearance is as old as the Iliad in which Aphrodite's magical girdle is used by Hera as a love charm.
Magic items often act as a plot device to grant magical abilities. They may give magical abilities to a person lacking in them, or enhance the power of a wizard. For instance, in J.R.R. Tolkien's The Hobbit, the magic ring allows Bilbo Baggins to be instrumental in the quest, exceeding the abilities of the dwarves.
Magic items are often, also, used as MacGuffins. The characters in a story must collect an arbitrary number of magical items, and when they have the full set, the magic is sufficient to resolve the plot. In video games, these types of items are usually collected in fetch quests.
Fairy tales
Certain kinds of fairy tales have their plots dominated by the magic items they contain. One such is the tale where the hero has a magic item that brings success, loses the item either accidentally (The Tinder Box) or through an enemy's actions (The Bronze Ring), and must regain it to regain his success. Another is the magic item that runs out of control when the character knows how to start it but not to stop it: the mill in Why the Sea Is Salt or the pot in Sweet Porridge. A third is the tale in which a hero has two rewards stolen from him, and a third reward attacks the thief.
Types of magic items
Many works of folklore and fantasy include very similar items, that can be grouped into types. These include:
Magic swords
Sentient weapons
Magic rings
Cloaks of invisibility
Potions
Magic carpets
Seven-league boots
Fairy ointments
Wands
Artifacts
In role-playing games and fantasy literature, an artifact is a magical object with great power. Often, this power is so great that it cannot be duplicated by any known art allowed by the premises of the fantasy world, and often cannot be destroyed by ordinary means. Artifacts often serve as MacGuffins, the central focus of quests to locate, capture, or destroy them. The One Ring of The Lord of the Rings is a typical artifact: it was alarmingly powerful, of ancient and obscure origin, and nearly indestructible.
In fiction
In Dungeons & Dragons
In Dungeons & Dragons, artifacts are magic items that either cannot be created by players or whose secrets of creation are not given. In any event, artifacts have no market price and have no hit points (that is, they are indestructible by normal spells). Artifacts typically have no inherent limit on the use of their powers. Under strict rules, any artifact can theoretically be destroyed by the sorcerer/wizard spell Mordenkainen's Disjunction, but for the purposes of a campaign centered on destroying an artifact, a plot-related means of destruction is generally substituted. Artifacts in D&D are split into two categories: minor artifacts are common, but they can no longer be created, whereas major artifacts are unique; only one of each item exists.
In Harry Potter
In the Harry Potter series by J. K. Rowling, several magical objects exist for the use of the characters. Some of them play a crucial role in the main plot. There are objects for different purposes such as communication, transportation, games, storage, as well as legendary artifacts and items with dark properties.
References
Fantasy tropes
Fictional objects | Magic item | [
"Physics"
] | 746 | [
"Magic items",
"Physical objects",
"Matter"
] |