id | url | text | source | categories | token_count | subcategories
|---|---|---|---|---|---|---|
2,942,153 | https://en.wikipedia.org/wiki/Maladaptation | In evolution, a maladaptation is a trait that is (or has become) more harmful than helpful, in contrast with an adaptation, which is more helpful than harmful. All organisms, from bacteria to humans, display maladaptive and adaptive traits. In animals (including humans), adaptive behaviors contrast with maladaptive ones. Like adaptation, maladaptation may be viewed as occurring over geological time, or within the lifetime of one individual or a group.
It can also signify an adaptation that, whilst reasonable when it arose, has become less and less suitable and more of a problem or hindrance in its own right as time goes on. This is because it is possible for an adaptation to be poorly selected or to become more of a dysfunction than a positive adaptation over time.
The concept of maladaptation, as initially discussed in a late 19th-century context, is based on a flawed view of evolutionary theory. It was believed that an inherent tendency for an organism's adaptations to degenerate would translate into maladaptations that would soon become crippling if not "weeded out" (see also eugenics). In reality, the advantages conferred by any one adaptation are rarely decisive for survival on their own, but are rather balanced against other synergistic and antagonistic adaptations, which consequently cannot change without affecting others.
In other words, it is usually impossible to gain an advantageous adaptation without incurring "maladaptations". Consider a seemingly trivial example: it is apparently extremely hard for an animal to evolve the ability to breathe well in air and in water. Better adapting to one means being less able to do the other.
Examples
Neuroplasticity is defined as "the brain's ability to reorganize itself by forming new neural connections throughout life". Neuroplasticity is seen as an adaptation that helps humans adjust to new stimuli, especially through motor functions in musically inclined people, as well as several other hand-eye coordination activities. An example of maladaptation in neuroplasticity within the evolution of the brain is phantom pain in individuals who have lost limbs. While the brain is exceptionally good at responding to stimuli and reorganizing itself so that it can respond better and faster in the future, it is sometimes unable to cope with the loss of a limb, even after the neurological connections to it are lost. According to the findings of one study, "Adaptation and Maladaptation", in some cases the changes that had previously helped the human brain suit an environment can also become maladaptive. In this case, with the loss of a limb, the brain perceives pain even though there are no nerves or signals from the now-missing limb to give the brain that perception.
See also
Black robin
Ecological traps
Evolutionary mismatch
Maladaptive coping
Evolutionary suicide
Fisherian runaway
References
Evolutionary biology
Selection | Maladaptation | [
"Biology"
] | 609 | [
"Evolutionary biology",
"Evolutionary processes",
"Selection"
] |
2,942,638 | https://en.wikipedia.org/wiki/Gloss%20%28optics%29 | Gloss is an optical property which indicates how well a surface reflects light in a specular (mirror-like) direction. It is one of the important parameters used to describe the visual appearance of an object. Other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia, an ordering system with three variables that includes gloss among the aspects involved. The factors that affect gloss are the refractive index of the material, the angle of incident light and the surface topography.
Apparent gloss depends on the amount of specular reflection – light reflected from the surface at an angle equal and symmetric to that of the incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions.
Theory
When light illuminates an object, it interacts with it in a number of ways:
Absorbed within it (largely responsible for colour)
Transmitted through it (dependent on the surface transparency and opacity)
Scattered from or within it (diffuse reflection, haze and transmission)
Specularly reflected from it (gloss)
Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye because a large amount of light is reflected in the specular direction, whereas rough surfaces reflect no specular light: the light is scattered into other directions and the surface therefore appears dull. The image-forming qualities of these surfaces are much lower, making any reflections appear blurred and distorted.
Substrate material type also influences the gloss of a surface. Non-metallic materials, such as plastics, produce a higher level of reflected light when illuminated at a larger angle of incidence, because at smaller angles more of the light is absorbed into the material or diffusely scattered, depending on the colour of the material. Metals do not suffer from this effect, producing high amounts of reflection at any angle.
The Fresnel formula gives the specular reflectance of an unpolarized beam as a function of the angle of incidence and the refractive index of the surface specimen, relating the intensity of the incident light to the intensity of the specularly reflected beam.
The Fresnel equation is given as follows:
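A reconstructed standard form of that equation is given below; the notation is an assumption introduced here, since the article's own symbols were lost: $I_0$ is the incident intensity, $I_r$ the specularly reflected intensity, $i$ the angle of incidence, $n$ the refractive index of the specimen, and the surrounding medium is taken to be air.

$$ R_s = \frac{I_r}{I_0} = \frac{1}{2}\left[ \frac{\left(\cos i - \sqrt{n^2 - \sin^2 i}\,\right)^2}{\left(\cos i + \sqrt{n^2 - \sin^2 i}\,\right)^2} + \frac{\left(n^2\cos i - \sqrt{n^2 - \sin^2 i}\,\right)^2}{\left(n^2\cos i + \sqrt{n^2 - \sin^2 i}\,\right)^2} \right] $$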
Surface roughness
Surface roughness influences the specular reflectance levels; in the visible frequencies, the surface finish in the micrometre range is most relevant. The diagram on the right depicts reflection at an angle of incidence θ on a rough surface with a characteristic roughness height variation h. Rays reflected from the top and bottom of the surface bumps acquire a path difference, which for light of wavelength λ corresponds to a phase difference Δφ (the relations are reconstructed below).
If Δφ is small, the two beams (see Figure 1) are nearly in phase, resulting in constructive interference; therefore, the specimen surface can be considered smooth. But when the beams are out of phase, they cancel each other through destructive interference. A low intensity of specularly reflected light means the surface is rough and scatters the light into other directions. If the middle phase value, Δφ = π/2, is taken as the criterion for a smooth surface, substitution into the phase relation yields the roughness limit given below.
This smooth surface condition is known as the Rayleigh roughness criterion.
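A reconstruction of the relations referred to above, under the assumption (the original convention was lost in extraction) that h is the characteristic roughness height, λ the wavelength, and θ the angle of incidence measured from the surface normal:

$$ \Delta r = 2h\cos\theta, \qquad \Delta\phi = \frac{2\pi}{\lambda}\,\Delta r = \frac{4\pi h\cos\theta}{\lambda}, $$

and taking $\Delta\phi = \pi/2$ as the limiting value gives the smooth-surface condition

$$ h < \frac{\lambda}{8\cos\theta}. $$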
History
The earliest studies of gloss perception are attributed to Leonard R. Ingersoll, who in 1914 examined the effect of gloss on paper. Among the first to measure gloss quantitatively using instrumentation, Ingersoll based his research on the theory that light is polarised in specular reflection whereas diffusely reflected light is non-polarized. The Ingersoll "glarimeter" had a specular geometry with incident and viewing angles at 57.5°. Using this configuration, gloss was measured with a contrast method which subtracted the specular component from the total reflectance using a polarizing filter.
In the 1930s, work by A. H. Pfund suggested that although specular shininess is the basic (objective) evidence of gloss, actual surface glossy appearance (subjective) relates to the contrast between specular shininess and the diffuse light of the surrounding surface area (now called "contrast gloss" or "luster").
If black and white surfaces of the same shininess are visually compared, the black surface will always appear glossier because of the greater contrast between the specular highlight and the black surroundings, as compared with the white surface and its surroundings. Pfund was also the first to suggest that more than one method was needed to analyze gloss correctly.
In 1937 R. S. Hunter, as part of his research paper on gloss, described six different visual criteria attributed to apparent gloss. The following diagrams show the relationships between an incident beam of light, I, a specularly reflected beam, S, a diffusely reflected beam, D, and a near-specularly reflected beam, B.
Specular gloss – the perceived brightness and the brilliance of highlights
Defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface.
Sheen – the perceived shininess at low grazing angles
Defined as the gloss at grazing angles of incidence and viewing
Contrast gloss – the perceived brightness of specularly and diffusely reflecting areas
Defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface;
Absence of bloom – the perceived cloudiness in reflections near the specular direction
Defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light: haze is the inverse of absence-of-bloom
Distinctness of image gloss – identified by the distinctness of images reflected in surfaces
Defined as the sharpness of the specularly reflected light
Surface texture gloss – identified by the lack of surface texture and surface blemishes
Defined as the uniformity of the surface in terms of visible texture and defects (orange peel, scratches, inclusions etc.)
A surface can therefore appear very shiny if it has a well-defined specular reflectance at the specular angle. The perception of an image reflected in the surface can be degraded by appearing unsharp, or by appearing to be of low contrast. The former is characterised by the measurement of the distinctness-of-image and the latter by the haze or contrast gloss.
In his paper Hunter also noted the importance of three main factors in the measurement of gloss:
The amount of light reflected in the specular direction
The amount and way in which the light is spread around the specular direction
The change in specular reflection as the specular angle changes
For his research he used a glossmeter with a specular angle of 45°, as did most of the first photoelectric methods of that type. Later studies by Hunter and D. B. Judd in 1939, on a larger number of painted samples, however, concluded that the 60-degree geometry was the best angle to use, as it provided the closest correlation to visual observation.
Standard gloss measurement
Standardisation in gloss measurement was led by Hunter and ASTM (American Society for Testing and Materials) who produced ASTM D523 Standard test method for specular gloss in 1939. This incorporated a method for measuring gloss at a specular angle of 60°. Later editions of the Standard (1951) included methods for measuring at 20° for evaluating high gloss finishes, developed at the DuPont Company (Horning and Morse, 1947) and 85° (matte, or low, gloss).
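As an illustrative sketch only (not part of the source text), the reading of a 60° glossmeter on a smooth dielectric can be estimated from the unpolarized Fresnel reflectance given earlier, expressed relative to the polished black-glass reference (refractive index about 1.567) that conventionally defines 100 gloss units; the function names and sample values below are assumptions for the example.

```python
import numpy as np

def fresnel_unpolarized(n, angle_deg):
    """Reflectance of unpolarized light at a smooth surface of refractive index n (in air)."""
    i = np.radians(angle_deg)
    root = np.sqrt(n**2 - np.sin(i)**2)
    r_s = (np.cos(i) - root)**2 / (np.cos(i) + root)**2                 # s-polarized component
    r_p = (n**2 * np.cos(i) - root)**2 / (n**2 * np.cos(i) + root)**2   # p-polarized component
    return 0.5 * (r_s + r_p)

def gloss_units(n_sample, angle_deg=60.0, n_reference=1.567):
    """Specular gloss relative to a black-glass standard assigned 100 GU at the same angle."""
    return 100.0 * fresnel_unpolarized(n_sample, angle_deg) / fresnel_unpolarized(n_reference, angle_deg)

print(f"60° reflectance of the reference glass: {fresnel_unpolarized(1.567, 60.0):.3f}")   # ~0.10
print(f"60° gloss of a smooth n = 1.5 surface:  {gloss_units(1.5):.1f} GU")
```

This models only an ideally smooth surface; real gloss readings also depend on surface texture, as described in the theory section above.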
ASTM has a number of other gloss-related standards designed for application in specific industries, including the old 45° method, which is now used primarily for glazed ceramics, polyethylene and other plastic films.
In 1937, the paper industry adopted a 75° specular-gloss method because the angle gave the best separation of coated book papers. This method was adopted in 1951 by the Technical Association of Pulp and Paper Industries as TAPPI Method T480.
In the paint industry, measurements of the specular gloss are made according to International Standard ISO 2813 (BS 3900, Part 5, UK; DIN 67530, Germany; NFT 30-064, France; AS 1580, Australia; JIS Z8741, Japan, are also equivalent). This standard is essentially the same as ASTM D523 although differently drafted.
Studies of polished metal surfaces and anodised aluminium automotive trim in the 1960s by Tingle, Potter and George led to the standardisation of gloss measurement of high-gloss surfaces by goniophotometry under the designation ASTM E430. This standard also defines methods for the measurement of distinctness-of-image gloss and reflection haze.
See also
List of optical topics
Distinctness of image
References
Sources
External links
PCI Magazine article: What is the Level of Confidence in Measuring Gloss?
NPL: Good practice guide for the measurement of Gloss
Optics
Physical properties | Gloss (optics) | [
"Physics",
"Chemistry"
] | 1,780 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Optics",
"Atomic, molecular, and optical physics",
"Physical properties"
] |
2,942,788 | https://en.wikipedia.org/wiki/HTML%2BTIME | HTML+TIME (Timed Interactive Multimedia Extensions) was the name of a W3C submission from Microsoft, Compaq/DEC and Macromedia that proposed an integration of SMIL semantics with HTML and CSS. The specifics of the integration were modified considerably by W3C working groups, and eventually emerged as the W3C Note XHTML+SMIL. The submission also proposed new animation and timing features that were adopted (with revisions) in SMIL 2.0.
Microsoft modified their implementation in IE 5.5 to (mostly) match the W3C Note, but continues to use the HTML+TIME moniker to refer to the associated feature set.
See also
SMIL
XHTML+SMIL
Microsoft Vizact
External links
Original HTML+TIME submission
XHTML+SMIL W3C Note
Introduction to HTML+TIME
HTML+TIME Overviews and Tutorials
HTML+TIME demos and some how-to's
Markup languages | HTML+TIME | [
"Technology"
] | 194 | [
"Computing stubs",
"World Wide Web stubs"
] |
1,483,778 | https://en.wikipedia.org/wiki/Yarrow%20oil | Yarrow essential oil is a volatile oil including the chemical proazulene. The dark blue essential oil is extracted by steam distillation of the flowers of yarrow (Achillea millefolium).
It kills the larvae of the mosquito Aedes albopictus.
References
Essential oils
Further reading
Supercritical CO2 extraction of essential oil from yarrow
Production of Yarrow (Achillea millefolium L.) in Norway: Essential Oil Content and Quality
Physicochemical Characteristics and Fatty Acid Profile of Yarrow (Achillea tenuifolia) Seed Oil
Essential oil composition of three polyploids in the Achillea millefolium ‘complex’
Phytochemical analysis of the essential oil of Achillea millefolium L. from various European Countries
Essential oil composition of two yarrow taxonomic forms | Yarrow oil | [
"Chemistry"
] | 175 | [
"Essential oils",
"Natural products"
] |
1,483,799 | https://en.wikipedia.org/wiki/Geometrical%20frustration | In condensed matter physics, geometrical frustration (or in short, frustration) is a phenomenon where the combination of conflicting inter-atomic forces leads to complex structures. Frustration can imply a plenitude of distinct ground states at zero temperature, and usual thermal ordering may be suppressed at higher temperatures. Much-studied examples include amorphous materials, glasses, and dilute magnets.
The term frustration, in the context of magnetic systems, was introduced by Gerard Toulouse in 1977. Frustrated magnetic systems had been studied even before that. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically, by G. H. Wannier, published in 1950. Related features occur in magnets with competing interactions, where both ferromagnetic and antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case incommensurability, such as helical spin arrangements, may result, as had been discussed originally, especially, by A. Yoshimori, T. A. Kaplan, R. J. Elliott, and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non-stoichiometric magnetic alloys. Carefully analyzed spin models with frustration include the Sherrington–Kirkpatrick model, describing spin glasses, and the ANNNI model, describing commensurate and incommensurate magnetic superstructures. Recently, the concept of frustration has been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain.
Magnetic ordering
Geometrical frustration is an important feature in magnetism, where it stems from the relative arrangement of spins. A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to its neighbors. Once the first two spins align antiparallel, the third one is frustrated because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate. Only the two states in which all spins point the same way, all up or all down, have higher energy.
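A minimal sketch (introduced here for illustration; the Ising spins s_i = ±1 and the unit antiferromagnetic coupling J are assumptions, not taken from the text) that enumerates the eight states of such a triangle and confirms the sixfold-degenerate ground state:

```python
from itertools import product

J = 1.0  # antiferromagnetic coupling (J > 0 penalises parallel neighbours)
states = list(product([+1, -1], repeat=3))
energy = {s: J * (s[0]*s[1] + s[1]*s[2] + s[0]*s[2]) for s in states}

e_min = min(energy.values())
ground_states = [s for s, e in energy.items() if e == e_min]
excited_states = [s for s, e in energy.items() if e > e_min]

print("ground-state energy:", e_min)                    # -1.0: one bond is always unsatisfied
print("ground-state degeneracy:", len(ground_states))   # 6
print("higher-energy states:", excited_states)          # only (+1,+1,+1) and (-1,-1,-1)
```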
Similarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated.
Geometrical frustration is also possible if the spins are arranged in a non-collinear way. If we consider a tetrahedron with a spin on each vertex pointing along the easy axis (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate.
Mathematical definition
The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics): one considers, for example, expressions ("total energies" or "Hamiltonians") of the form
$$ H = -\sum_{\langle i,k\rangle \in G} J_{i,k}\, \mathbf{S}_i \cdot \mathbf{S}_k, $$
where G is the graph considered, the quantities $J_{i,k}$ are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values ±1 (mathematically, this is a signed graph), and the $\mathbf{S}_i \cdot \mathbf{S}_k$ are inner products of scalar or vectorial spins or pseudo-spins. If the graph G has quadratic or triangular faces P, the so-called "plaquette variables" $P_W$, "loop-products" of the following kind, appear:
$$ P_W = J_{1,2}\, J_{2,3}\, J_{3,4}\, J_{4,1} \quad \text{and} \quad P_W = J_{1,2}\, J_{2,3}\, J_{3,1}, $$
respectively, which are also called "frustration products". One has to perform a sum over these products, summed over all plaquettes. The result for a single plaquette is either +1 or −1. In the latter case the plaquette is "geometrically frustrated".
It can be shown that the result has a simple gauge invariance: it does not change – nor do other measurable quantities, e.g. the "total energy" – even if locally the exchange integrals and the spins are simultaneously modified as follows:
$$ J_{i,k} \to \varepsilon_i\,\varepsilon_k\, J_{i,k}, \qquad \mathbf{S}_i \to \varepsilon_i\,\mathbf{S}_i, \qquad \mathbf{S}_k \to \varepsilon_k\,\mathbf{S}_k. $$
Here the numbers $\varepsilon_i$ and $\varepsilon_k$ are arbitrary signs, i.e. +1 or −1, so that the modified structure may look totally random.
Water ice
Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice. In 1936 Giauque and Stout published The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K, reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high-temperature gas phase. The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low-temperature measurements were extrapolated to zero using Debye's then recently derived formula. The resulting entropy, S1 = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from statistical mechanics of an ideal gas, S2 = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by S0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This result was then explained, to an excellent approximation, by Linus Pauling, who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol), or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.
In the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O–O bond length 2.76 Å (276 pm), while the O–H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen ions (black) and each hydrogen ion is surrounded by 2 oxygen ions, as shown in Figure 5. Maintaining the internal H2O molecule structure, the minimum energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O–O bond, a far and a near position. Thus a rule leads to the frustration of positions of the proton for a ground state configuration: for each oxygen two of the neighboring protons must reside in the far position and two of them in the near position, so-called ‘ice rules’. Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules.
Pauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of N O2− ions and 2N protons. Each O–O bond has two positions for a proton, leading to 2^(2N) possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H2O molecule constraint. An upper bound on the number of configurations the ground state can take is then estimated as Ω < 2^(2N)·(6/16)^N = (3/2)^N. Correspondingly the configurational entropy S0 = kB·ln(Ω) = N·kB·ln(3/2) = 0.81 cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout.
Although Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the Wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy.
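A quick numerical check of Pauling's estimate (a sketch introduced here; it assumes only the gas constant R = N_A·kB and the conversion 1 cal = 4.184 J):

```python
import math

R = 8.314          # gas constant, J/(mol·K)
CAL = 4.184        # joules per thermochemical calorie

s0 = R * math.log(3 / 2)   # Pauling's residual entropy per mole: S0 = N kB ln(3/2) = R ln(3/2)
print(f"S0 ≈ {s0:.2f} J/(mol·K) ≈ {s0 / CAL:.2f} cal/(K·mol)")
# ≈ 3.37 J/(mol·K) ≈ 0.81 cal/(K·mol), matching the missing entropy quoted above.
```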
Spin ice
A mathematically analogous situation to the degeneracy in water ice is found in the spin ices. A common spin ice structure is shown in Figure 6 in the cubic pyrochlore structure with one magnetic atom or ion residing on each of the four corners. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along the local quantization axis, the <111> cubic axes, which coincide with the lines connecting each tetrahedral vertex to the center. Every tetrahedral cell must have two spins pointing in and two pointing out in order to minimize the energy. Currently the spin ice model has been approximately realized by real materials, most notably the rare earth pyrochlores Ho2Ti2O7, Dy2Ti2O7, and Ho2Sn2O7. These materials all show nonzero residual entropy at low temperature.
Extension of Pauling’s model: General frustration
The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energy between its components. In general frustration is caused either by competing interactions due to site disorder (see also the Villain model) or by lattice structure such as in the triangular, face-centered cubic (fcc), hexagonal-close-packed, tetrahedron, pyrochlore and kagome lattices with antiferromagnetic interaction. So frustration is divided into two categories: the first corresponds to the spin glass, which has both disorder in structure and frustration in spin; the second is the geometrical frustration with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or anti-ferromagnetic, is dependent on the distance of the two magnetic ions. Due to the lattice disorder in the spin glass, one spin of interest and its nearest neighbors could be at different distances and have a different interaction property, which thus leads to different preferred alignment of the spin.
Artificial geometrically frustrated ferromagnets
With the help of lithography techniques, it is possible to fabricate sub-micrometer size magnetic islands whose geometric arrangement reproduces the frustration found in naturally occurring spin ice materials. Recently R. F. Wang et al. reported the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are manually arranged to create a two-dimensional analog to spin ice. The magnetic moments of the ordered ‘spin’ islands were imaged with magnetic force microscopy (MFM) and the local accommodation of frustration was then thoroughly studied. In their previous work on a square lattice of frustrated magnets, they observed both ice-like short-range correlations and the absence of long-range correlations, just like in spin ice at low temperature. These results establish new ground on which the real physics of frustration can be visualized and modeled with artificial geometrically frustrated magnets, and they inspire further research activity.
These artificially frustrated ferromagnets can exhibit unique magnetic properties when studying their global response to an external field using Magneto-Optical Kerr Effect. In particular, a non-monotonic angular dependence of the square lattice coercivity is found to be related to disorder in the artificial spin ice system.
Geometric frustration without lattice
Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid.
It is sometimes possible to establish some local rules, of chemical nature, which lead to low energy configurations and therefore govern structural and chemical order. This is not generally the case and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in fields of condensed matter, ranging from clusters and amorphous solids to complex fluids.
The general method of approach to resolve these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated, structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three dimensional Euclidean space. The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role.
Simple two-dimensional examples
Two-dimensional examples are helpful in order to get some understanding about the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical discs (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between discs is isotropic and locally tends to arrange the discs in the densest way possible. The best arrangement for three discs is trivially an equilateral triangle with the disc centers located at the triangle vertices. The study of the long-range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well known solution is provided by the triangular tiling, with total compatibility between the local and global rules: the system is said to be "unfrustrated".
But now, the interaction energy is supposed to be at a minimum when atoms sit on the vertices of a regular pentagon. Trying to propagate a packing of these pentagons, sharing edges (atomic bonds) and vertices (atoms), over long ranges is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide 2π. Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an "ideal" (defect-free) model for the considered structure.
Dense structures and tetrahedral packings
The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding that keeps only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple metal structures are often either close-packed face-centered cubic (fcc) or hexagonal close-packed (hcp) lattices. To some extent amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order.
A regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped onto the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls in order to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron, and tries to add new spheres while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the fcc structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces). With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (fcc and hcp). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bi-pyramid. However, we are now facing a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with 2π; consequently, a hole remains between two faces of neighboring tetrahedra. As a consequence, a perfect tiling of the Euclidean space R3 is impossible with regular tetrahedra. The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted, if we impose that a constant number of tetrahedra (here five) share a common edge.
The next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space, in order for the local configurations to propagate identically and without defects throughout the whole space.
Regular packing of tetrahedra: the polytope {3,3,5}
Twenty irregular tetrahedra pack with a common vertex in such a way that the twelve outer vertices form a regular icosahedron. Indeed, the icosahedron edge length l is slightly longer than the circumsphere radius r (l ≈ 1.05r). There is a solution with regular tetrahedra if the space is not Euclidean, but spherical. It is the polytope {3,3,5}, using the Schläfli notation, also known as the 600-cell.
There are one hundred and twenty vertices which all belong to the hypersphere S3 with radius equal to the golden ratio (φ = (1 + √5)/2) if the edges are of unit length. The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex. This structure is called a polytope (see Coxeter), which is the general name in higher dimensions in the series containing polygons and polyhedra. Even though this structure is embedded in four dimensions, it has been considered as a three-dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three-dimensional curved templates. They look locally like three-dimensional Euclidean models. So, the {3,3,5} polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that it is at the price of successive idealizations.
Literature
References
Condensed matter physics
Thermodynamic entropy
Magnetic ordering | Geometrical frustration | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,199 | [
"Physical quantities",
"Phases of matter",
"Electric and magnetic fields in matter",
"Thermodynamic entropy",
"Materials science",
"Magnetic ordering",
"Entropy",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
1,483,960 | https://en.wikipedia.org/wiki/Charge%20invariance | Charge invariance refers to the fixed value of the electric charge of a particle regardless of its motion. Like mass, total spin and magnetic moment, a particle's charge quantum number remains unchanged between two reference frames in relative motion. For example, an electron has a specific charge e, total spin 1/2, and invariant mass me. Accelerate that electron, and the charge, spin and mass assigned to it in all physical laws in the frame at rest and the moving frame remain the same – e, 1/2, me. In contrast, the particle's total relativistic energy or de Broglie wavelength change values between the reference frames.
The origin of charge invariance, and all relativistic invariants, is presently unclear. There may be some hints proposed by string/M-theory. It is possible the concept of charge invariance may provide a key to unlocking the mystery of unification in physics – the single theory of gravity, electromagnetism, the strong, and weak nuclear forces.
The property of charge invariance is embedded in the charge density–current density four-vector J^μ, whose vanishing divergence then signifies charge conservation.
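In a common notation (assumed here, since the article's own symbols were not preserved), the four-current and the conservation law implied by its vanishing divergence read:

$$ J^{\mu} = (c\rho,\ \mathbf{j}), \qquad \partial_{\mu} J^{\mu} = \frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0. $$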
See also
Charge conservation
Pohlmeyer charge
References
Particle physics | Charge invariance | [
"Physics"
] | 246 | [
"Particle physics"
] |
1,484,098 | https://en.wikipedia.org/wiki/Titanium%20carbide | Titanium carbide, TiC, is an extremely hard (Mohs 9–9.5) refractory ceramic material, similar to tungsten carbide. It has the appearance of black powder with the sodium chloride (face-centered cubic) crystal structure.
It occurs in nature as a form of the very rare mineral khamrabaevite, (Ti,V,Fe)C. It was discovered in 1984 on Mount Arashan in the Chatkal District, USSR (modern Kyrgyzstan), near the Uzbek border. The mineral was named after Ibragim Khamrabaevich Khamrabaev, director of Geology and Geophysics of Tashkent, Uzbekistan. Its crystals as found in nature range in size from 0.1 to 0.3 mm.
Physical properties
Titanium carbide has an elastic modulus of approximately 400 GPa and a shear modulus of 188 GPa.
Titanium carbide is soluble in solid titanium oxide, with a range of compositions which are collectively named "titanium oxycarbide" and created by carbothermic reduction of the oxide.
Manufacturing and machining
Tool bits without tungsten content can be made of titanium carbide in nickel-cobalt matrix cermet, enhancing the cutting speed, precision, and smoothness of the workpiece.
The resistance to wear, corrosion, and oxidation of a tungsten carbide–cobalt material can be increased by adding 6–30% of titanium carbide to tungsten carbide. This forms a solid solution that is more brittle and susceptible to breakage.
Titanium carbide can be etched with reactive-ion etching.
Applications
Titanium carbide is used in preparation of cermets, which are frequently used to machine steel materials at high cutting speed. It is also used as an abrasion-resistant surface coating on metal parts, such as tool bits and watch mechanisms. Titanium carbide is also used as a heat shield coating for atmospheric reentry of spacecraft.
7075 aluminium alloy (AA7075) is almost as strong as steel, but weighs one third as much. Using thin AA7075 rods with TiC nanoparticles allows larger alloys pieces to be welded without phase-segregation induced cracks.
See also
Metallocarbohedryne, a family of metal-carbon clusters including Ti8C12
References
Carbides
Ceramic materials
Refractory materials
Superhard materials
Titanium(IV) compounds
Rock salt crystal structure | Titanium carbide | [
"Physics",
"Engineering"
] | 499 | [
"Refractory materials",
"Materials",
"Superhard materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
1,484,228 | https://en.wikipedia.org/wiki/Montel%27s%20theorem | In complex analysis, an area of mathematics, Montel's theorem refers to one of two theorems about families of holomorphic functions. These are named after French mathematician Paul Montel, and give conditions under which a family of holomorphic functions is normal.
Locally uniformly bounded families are normal
The first, and simpler, version of the theorem states that a family of holomorphic functions defined on an open subset of the complex numbers is normal if and only if it is locally uniformly bounded.
This theorem has the following formally stronger corollary. Suppose that F is a family of meromorphic functions on an open set D. If z0 in D is such that F is not normal at z0, and U is a neighborhood of z0, then the union of the images f(U) over all f in F is dense in the complex plane.
Functions omitting two values
The stronger version of Montel's theorem (occasionally referred to as the Fundamental Normality Test) states that a family of holomorphic functions, all of which omit the same two values is normal.
Necessity
The conditions in the above theorems are sufficient, but not necessary for normality. Indeed,
the family is normal, but does not omit any complex value.
Proofs
The first version of Montel's theorem is a direct consequence of Marty's theorem (which
states that a family is normal if and only if the spherical derivatives are locally bounded)
and Cauchy's integral formula.
This theorem has also been called the Stieltjes–Osgood theorem, after Thomas Joannes Stieltjes and William Fogg Osgood.
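For reference, Marty's criterion invoked in this argument can be restated as follows (the notation is introduced here, not taken from the text): a family F of meromorphic functions on a domain D is normal if and only if the spherical derivatives

$$ f^{\#}(z) = \frac{|f'(z)|}{1 + |f(z)|^{2}}, \qquad f \in F, $$

are uniformly bounded on each compact subset of D.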
The corollary stated above is deduced as follows. Suppose that all the functions in F omit the same neighborhood of a point w0. By postcomposing with the map z ↦ 1/(z − w0) we obtain a uniformly bounded family, which is normal by the first version of the theorem.
The second version of Montel's theorem can be deduced from the first by using the fact that there exists a holomorphic universal covering from the unit disk to the twice punctured plane C ∖ {0, 1}. (Such a covering is given by the elliptic modular function).
This version of Montel's theorem can be also derived from Picard's theorem,
by using Zalcman's lemma.
Relationship to theorems for entire functions
A heuristic principle known as Bloch's principle (made precise by Zalcman's lemma) states that properties that imply that an entire function is constant correspond to properties that ensure that a family of holomorphic functions is normal.
For example, the first version of Montel's theorem stated above is the analog of Liouville's theorem, while the second version corresponds to Picard's theorem.
See also
Montel space
Fundamental normality test
Riemann mapping theorem
Notes
References
Compactness theorems
Theorems in complex analysis | Montel's theorem | [
"Mathematics"
] | 574 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"Theorems in topology"
] |
1,484,457 | https://en.wikipedia.org/wiki/Atmospheric%20focusing | Atmospheric focusing is a type of wave interaction causing shock waves to affect areas at a greater distance than otherwise expected. Variations in the atmosphere create distortions in the wavefront by refracting a segment, allowing it to converge at certain points and constructively interfere. In the case of destructive shock waves, this may result in areas of damage far beyond the theoretical extent of its blast effect. Examples of this are seen during supersonic booms, large extraterrestrial impacts from objects like meteors, and nuclear explosions.
Density variations in the atmosphere (e.g. due to temperature variations) or airspeed variations cause refraction along the shock wave, allowing the uniform wavefront to separate and eventually interfere, dispersing the wave at some points and focusing it at others. A similar effect occurs in water when a wave travels through a patch of fluid of different density, causing it to diverge over a large distance. For powerful shock waves this can cause damage farther than expected: at focal points the shock wave energy density exceeds the values expected from uniform geometric spreading (the falloff expected for weak shock or acoustic waves at large distances).
Types of atmospheric focusing
Supersonic booms
Atmospheric focusing from supersonic booms is a modern occurrence and a result of the actions of air forces across the world. When objects like planes travel faster than the speed of sound, they create sonic booms and pressure waves that can be focused. Atmospheric factors present when these waves are created can focus the waves and cause damage.
Aircraft can also create boom and blast waves that can be focused, so accounting for atmospheric focusing in flight planning is critical. The wind and altitude during a flight can create conditions for atmospheric focusing, which can be identified by reference to a focusing curve; when such conditions exist, supersonic flight may cause damage on the ground.
Meteor impacts
Meteors can also cause shock waves that can be focused. As a meteor enters Earth's atmosphere and reaches lower altitudes, it can create a shock wave. The shock wave is affected by the meteor's composition, temperature, and pressure. Because a meteor needs a large size and mass to do so, only a small percentage of meteors can create these shock waves. Radar and infrasonic methodologies are able to detect meteor shock waves. These tools are used to study these shock waves and can help create new methods of learning about them.
Nuclear explosions and bombs
Nuclear explosions and bombs can also lead to atmospheric focusing. The effects of focusing may be found hundreds of kilometers from the blast site. An example of this is the case of the Tsar Bomba test, where damage was caused up to approximately 1,000 km away. Atmospheric focusing can increase the damage caused by these explosions.
See also
Knudsen number
Nuclear weapons testing
Rankine–Hugoniot conditions
References
Shock waves
Nuclear weapons | Atmospheric focusing | [
"Physics"
] | 575 | [
"Waves",
"Physical phenomena",
"Shock waves"
] |
1,484,495 | https://en.wikipedia.org/wiki/Mobile%20privatization | Mobile privatization can be described as an individual's attachment to a mobile device. This leads to a feeling of being "at home" while connected to a device in a mobile setting. Using a mobile device, an individual can feel as though they could travel anywhere in the world while still feeling comfortable because of the connectivity of their mobile device. The connection creates a sense of familiarity, resulting in the individual's identity becoming attached to their mobile service provider. This concept leads to the idea that "home" does not need to be a domestic structure featuring walls and a roof, but that the mobile sense of connection provides a portable community similar to a home environment.
History of the concept
The term was first used by Raymond Williams in his 1974 book Television: Technology and Cultural Form (Routledge, 3rd ed., 2003, ). Williams described the main contradiction in modern society as the one between mobility and home-centered living. He considered that television can negotiate that contradiction by providing users privacy to view the world.
Paul du Gay, of the Copenhagen Business School, developed this theory in 2001. His main perspective was that home, for Williams, is a shrunken social space where isolated individuals gain vicariously increased mobility. Accordingly, he introduced the concept of “mobile privatised social relations”. Henrikson applied the concept of Technological Determinism to conclude that “Technologies can be designed, consciously or unconsciously, to open certain social options and close others”.
In 2005, Kenichi Fujimoto, Professor of Informatics and Mediology at Mukogawa Women's University, came up with a theory called "Nagara Mobilism". Nagara means that people can handle different processes, such as text, video and sound, at the same time. He reaffirmed the contradiction between the physical and virtual home, and explained that increased privacy of public space can make the contradiction stronger. In 2007, the term glocalization was introduced. It means that when individuals utilize mobile technology, their social networks expand while also bringing them much closer to the local community.
Hans Geser, a professor at the University of Zürich, has isolated four main features of mobile technology that weaken societal development:
By increasing the pervasiveness of primary, particularistic social bonds.
By reducing the need for time based scheduling and coordination.
By undermining institutional controls and replacing location-based communication systems with person-based.
By providing support for anachronistic “pervasive roles”.
Sources
Mass media technology | Mobile privatization | [
"Technology"
] | 505 | [
"Information and communications technology",
"Mass media technology"
] |
1,484,527 | https://en.wikipedia.org/wiki/Frass | Frass refers loosely to the more or less solid excreta of insects, and to certain other related matter.
Definition and etymology
Frass is an informal term and accordingly it is variously used and variously defined. It is derived from the German word Fraß, which means the food intake of an animal. The English usage applies to excreted residues of anything that insects have eaten and, similarly, to other chewed or mined refuse that insects leave behind. It does not generally refer to fluids such as honeydew, but the point does not generally arise, and is largely ignored in this article.
Such usage in English originated in the mid-nineteenth century at the latest. Modern technical English sources differ on the precise definition, though there is little direct contradiction on the practical realities. One glossary from the early twentieth century speaks of "...excrement; usually the excreted pellets of caterpillars." In some contexts frass refers primarily to fine, masticated material, often powdery, that phytophagous insects pass as indigestible waste after they have processed plant tissues as completely as their physiology would permit. Other common examples of frass types include the fecal material that larvae of codling moths leave as they feed inside fruit or seed, or that Terastia meticulosalis larvae leave as they bore in the pith of Erythrina twigs.
Various forms of frass may result from the nature of the food and the digestive systems of the species of insect that excreted the material. For example, many caterpillars, especially large, leaf-eating caterpillars in families such as Saturniidae, produce quite elaborately moulded pellets that may be conspicuous on the ground beneath plants in which they feed. In the tunnels they eat in the leaves, leaf miners commonly leave visible amorphous frass residues of the pulp of the mesophyll. Their frass commonly does not fill the tunnel.
In contrast, larvae of most powder post beetles (Lyctus) partly eject their finely granular frass from their tunnels when boring in the wood on which they feed, while the larvae of most dry-wood Cerambycidae leave their frass packed tightly into the tunnels behind them. Many other species of wood borers also leave the tunnels behind them tightly packed with dry frass, which may be either finely powdery or coarsely sawdusty. Possibly this is a defence against other borer larvae, many species of which are cannibalistic, or it might reduce attacks from some kinds of predatory mites or soak up fluids that a live tree might secrete into the tunnel.
Loose, fibrous frass of some moths in the family Cossidae, such as Coryphodema tristis, may be seen protruding from the mouths of their tunnels in tree trunks, especially shortly before they emerge as adult moths. In this respect, their frass differs from the powdery frass of powder post beetles such as Lyctus.
Borer tunnels may occur either in dry or rotting wood or under bark, in the comparatively soft, nutritious bast tissue, either dead or living.
Some boring insects do not digest the wood or other medium itself, but bore tunnels in which yeasts or other fungi grow, possibly stimulated by excretions and secretions of the insects. Such tunnels obviously cannot be permitted to become clogged, or the insects could not access their own pastures, so they must either eject at least part of their frass, or otherwise leave room for the edible growth. Examples of such boring-insect/fungal associations include ambrosia beetles with ambrosia fungi, the Sirex noctilio with its fungal partner Amylostereum areolatum, and more.
In a significantly different sense the term "frass" also may refer to excavated wood shavings that carpenter ants, carpenter bees and other insects with similar wood-boring habits eject from their galleries during the tunneling process. Such material differs from the frass residues of foods, because insects that tunnel to construct such nests do not eat the wood, so the material that they discard as they tunnel has not passed through their gut. Even professional entomologists might need suitable instruments and detailed examination to distinguish this from food-derived frass.
Ecological considerations
Contact with frass causes plants to secrete chitinase in response to its high chitin levels. Some frass, such as that of the fall armyworm, can also reduce plants' herbivory defenses. Frass is a microbial inoculant, in particular a soil inoculant, a source of desirable microbes, that promotes the formation of compost.
Many insect species, usually in their larval stages, accumulate their frass and cover themselves with it either to disguise their presence, or as a repugnatorial covering.
Gallery
See also
Feces
Guano
Chitosan
European spruce bark beetle
References
Citations
Further reading
Allaby, Michael (ed.) (2004). "frass." A Dictionary of Ecology. Oxford Paperback Reference.
Speight, Martin R., Mark D. Hunter and Allan D. Watt (1999). Ecology of Insects: concepts and applications. Wiley Blackwell.
External links
Everything You Ever Wanted to Know About Insect Poop: insects that put their poop to good use — About.com: Insects, by Debbie Hadley
Insect ecology
Feces | Frass | [
"Biology"
] | 1,120 | [
"Excretion",
"Feces",
"Animal waste products"
] |
1,484,541 | https://en.wikipedia.org/wiki/Wavefront | In physics, the wavefront of a time-varying wave field is the set (locus) of all points having the same phase. The term is generally meaningful only for fields that, at each point, vary sinusoidally in time with a single temporal frequency (otherwise the phase is not well defined).
Wavefronts usually move with time. For waves propagating in a unidimensional medium, the wavefronts are usually single points; they are curves in a two dimensional medium, and surfaces in a three-dimensional one.
For a sinusoidal plane wave, the wavefronts are planes perpendicular to the direction of propagation, that move in that direction together with the wave. For a sinusoidal spherical wave, the wavefronts are spherical surfaces that expand with it. If the speed of propagation is different at different points of a wavefront, the shape and/or orientation of the wavefronts may change by refraction. In particular, lenses can change the shape of optical wavefronts from planar to spherical, or vice versa.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple, closely spaced openings (e.g., a diffraction grating), a complex pattern of varying intensity can result.
Simple wavefronts and propagation
Optical systems can be described with Maxwell's equations, and linear propagating waves such as sound or electron beams have similar wave equations. However, given the above simplifications, Huygens' principle provides a quick method to predict the propagation of a wavefront through, for example, free space. The construction is as follows: Let every point on the wavefront be considered a new point source. By calculating the total effect from every point source, the resulting field at new points can be computed. Computational algorithms are often based on this approach. Specific cases for simple wavefronts can be computed directly. For example, a spherical wavefront will remain spherical as the energy of the wave is carried away equally in all directions. Such directions of energy flow, which are always perpendicular to the wavefront, are called rays creating multiple wavefronts.
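The following is a minimal numerical sketch of that construction (introduced here for illustration; the slit width, wavelength and screen geometry are arbitrary assumptions): every point across a narrow slit is treated as a secondary spherical-wavelet source, and the superposition of the wavelets on a distant screen reproduces a single-slit diffraction pattern.

```python
import numpy as np

wavelength = 500e-9              # green light (assumed)
k = 2 * np.pi / wavelength
slit_width = 5e-6                # slit a few wavelengths wide (assumed)
screen_distance = 0.1            # 10 cm from slit to screen (assumed)

sources = np.linspace(-slit_width / 2, slit_width / 2, 2000)   # Huygens wavelet sources in the slit
screen = np.linspace(-0.02, 0.02, 1001)                        # observation points on the screen

# Superpose spherical wavelets exp(i k r) / r from every source point at every screen point.
r = np.sqrt(screen_distance**2 + (screen[:, None] - sources[None, :])**2)
field = np.sum(np.exp(1j * k * r) / r, axis=1)
intensity = np.abs(field)**2
intensity /= intensity.max()

# A central maximum flanked by weaker side lobes, as in the single-slit pattern described above.
print("relative intensity at the screen centre:", round(float(intensity[len(screen) // 2]), 3))
```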
The simplest form of a wavefront is the plane wave, where the rays are parallel to one another. The light from this type of wave is referred to as collimated light. The plane wavefront is a good model for a surface-section of a very large spherical wavefront; for instance, sunlight strikes the earth with a spherical wavefront that has a radius of about 150 million kilometers (1 AU). For many purposes, such a wavefront can be considered planar over distances of the diameter of Earth.
In an isotropic medium wavefronts travel with the same speed in all directions.
Wavefront aberrations
Methods using wavefront measurements or predictions can be considered an advanced approach to lens optics, where a single focal distance may not exist due to lens thickness or imperfections. For manufacturing reasons, a perfect lens has a spherical (or toroidal) surface shape though, theoretically, the ideal surface would be aspheric. Shortcomings such as these in an optical system cause what are called optical aberrations. The best-known aberrations include spherical aberration and coma.
However, there may be more complex sources of aberrations such as in a large telescope due to spatial variations in the index of refraction of the atmosphere. The deviation of a wavefront in an optical system from a desired perfect planar wavefront is called the wavefront aberration. Wavefront aberrations are usually described as either a sampled image or a collection of two-dimensional polynomial terms. Minimization of these aberrations is considered desirable for many applications in optical systems.
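As an illustrative sketch (introduced here; the particular polynomial terms, their Noll-style normalisation, and the coefficient values in waves are all assumptions), a wavefront aberration map can be built from a few such two-dimensional polynomial terms and summarised by its RMS error over the pupil:

```python
import numpy as np

def zernike_terms(rho, theta):
    """A few low-order aberration polynomials over the unit pupil (Noll-normalised forms)."""
    return {
        "defocus":   np.sqrt(3.0) * (2 * rho**2 - 1),
        "coma_x":    np.sqrt(8.0) * (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": np.sqrt(5.0) * (6 * rho**4 - 6 * rho**2 + 1),
    }

# Sample the unit pupil on a grid.
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
pupil = rho <= 1.0

coefficients = {"defocus": 0.10, "coma_x": 0.05, "spherical": 0.02}   # in waves (assumed values)
terms = zernike_terms(rho, theta)
wavefront = sum(c * terms[name] for name, c in coefficients.items())

rms_error = np.sqrt(np.mean(wavefront[pupil]**2))   # RMS wavefront error over the pupil
print(f"RMS wavefront error: {rms_error:.3f} waves")
```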
Wavefront sensor and reconstruction techniques
A wavefront sensor is a device which measures the wavefront aberration in a coherent signal to describe the optical quality or lack thereof in an optical system. There are many applications that include adaptive optics, optical metrology and even the measurement of the aberrations in the eye itself. In this approach, a weak laser source is directed into the eye and the reflection off the retina is sampled and processed. Another application of software reconstruction of the phase is the control of telescopes through the use of adaptive optics.
Mathematical techniques like phase imaging or curvature sensing are also capable of providing wavefront estimations. These algorithms compute wavefront images from conventional brightfield images at different focal planes without the need for specialised wavefront optics. While Shack–Hartmann lenslet arrays are limited in lateral resolution to the size of the lenslet array, techniques such as these are only limited by the resolution of the digital images used to compute the wavefront measurements. That said, those wavefront sensors suffer from linearity issues and so are much less robust than the original Shack–Hartmann wavefront sensor (SHWFS) in terms of phase measurement.
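A Shack–Hartmann sensor ultimately delivers a grid of local wavefront slopes (the spot displacements behind each lenslet), and recovering the wavefront itself is then a least-squares integration problem. The following zonal-reconstruction sketch assumes a square, fully illuminated grid with uniform lenslet spacing; it is not the algorithm of any particular instrument.

import numpy as np

def reconstruct_from_slopes(sx, sy, spacing=1.0):
    """Least-squares (zonal) reconstruction of a wavefront from slope maps.

    sx, sy  : 2-D arrays of local x- and y-slopes, one value per lenslet
    spacing : lenslet pitch in the same length unit as the desired wavefront
    Returns the wavefront on the same grid, defined up to an arbitrary piston term.
    """
    n_rows, n_cols = sx.shape
    n_unknowns = n_rows * n_cols

    def idx(r, c):                      # flatten a (row, col) grid index
        return r * n_cols + c

    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # Finite differences in x: w[r, c+1] - w[r, c] = spacing * mean local x-slope
    for r in range(n_rows):
        for c in range(n_cols - 1):
            rows += [eq, eq]
            cols += [idx(r, c + 1), idx(r, c)]
            vals += [1.0, -1.0]
            rhs.append(spacing * 0.5 * (sx[r, c] + sx[r, c + 1]))
            eq += 1
    # Finite differences in y: w[r+1, c] - w[r, c] = spacing * mean local y-slope
    for r in range(n_rows - 1):
        for c in range(n_cols):
            rows += [eq, eq]
            cols += [idx(r + 1, c), idx(r, c)]
            vals += [1.0, -1.0]
            rhs.append(spacing * 0.5 * (sy[r, c] + sy[r + 1, c]))
            eq += 1

    a_matrix = np.zeros((eq, n_unknowns))
    a_matrix[rows, cols] = vals          # dense fill of the difference operator
    wavefront, *_ = np.linalg.lstsq(a_matrix, np.array(rhs), rcond=None)
    wavefront -= wavefront.mean()        # remove the undetermined piston offset
    return wavefront.reshape(n_rows, n_cols)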
There are several types of wavefront sensors, including:
Shack–Hartmann wavefront sensor: a very common method using a Shack–Hartmann lenslet array.
Phase-shifting Schlieren technique
Wavefront curvature sensor: also called the Roddier test. It yields good correction but needs an already good system as a starting point.
Pyramid wavefront sensor
Common-path interferometer
Foucault knife-edge test
Multilateral shearing interferometer
Ronchi tester
Shearing interferometer
Although an amplitude splitting interferometer such as the Michelson interferometer could be called a wavefront sensor, the term is normally applied to instruments that do not require an unaberrated reference beam to interfere with.
See also
Huygens-Fresnel principle
Wavefront sensor
Adaptive optics
Deformable mirror
Wave field synthesis
Hamilton–Jacobi equation
References
Further reading
Textbooks and books
Concepts of Modern Physics (4th Edition), A. Beiser, Physics, McGraw-Hill (International), 1987,
Physics with Modern Applications, L. H. Greenberg, Holt-Saunders International W. B. Saunders and Co, 1978,
Principles of Physics, J. B. Marion, W. F. Hornyak, Holt-Saunders International Saunders College, 1984,
Introduction to Electrodynamics (3rd Edition), D. J. Griffiths, Pearson Education, Dorling Kindersley, 2007,
Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y. B. Band, John Wiley & Sons, 2010,
The Light Fantastic – Introduction to Classic and Quantum Optics, I. R. Kenyon, Oxford University Press, 2008,
McGraw Hill Encyclopaedia of Physics (2nd Edition), C. B. Parker, 1994,
Journals
Wavefront tip/tilt estimation from defocused images
External links
LightPipes – Free Unix wavefront propagation software
AO Tutorial: Wave-front Sensors
Wavefront sensing: Establishments Research groups and companies with interests in wavefront sensing and adaptive optics.
Optics
Waves | Wavefront | [
"Physics",
"Chemistry"
] | 1,512 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Optics",
"Waves",
"Motion (physics)",
" molecular",
"Atomic",
" and optical physics"
] |
1,484,606 | https://en.wikipedia.org/wiki/Rail%20directions | Rail directions are used to describe train directions on rail systems. The terms used may be derived from such sources as compass directions, altitude directions, or other directions. These directions are often specific to system, country, or region.
Radial directions
Many rail systems use the concept of a centre (usually a major city) to define rail directions.
Up and down
In British practice, railway directions are usually described as "up" and "down", with "up" being towards a major location. This convention is applied not only to the trains and the tracks, but also to items of lineside equipment and to areas near a track. Since British trains run on the left, the "up" side of a line is usually on the left when proceeding in the "up" direction.
On most of the network, "up" is the direction towards London. In most of Scotland, with the exception of the West and East Coast Main Lines, and the Borders Railway, "up" is towards Edinburgh. The Valley Lines network around Cardiff has its own peculiar usage, relating to the literal meaning of travelling "up" and "down" the valley. On the former Midland Railway "up" was towards Derby. On the Northern Ireland Railways network, "up" generally means toward Belfast (the specific zero milepost varying from line to line); except for cross-border services to Dublin, where Belfast is "down". Mileposts normally increase in the "down" direction, but there are exceptions, such as the Trowbridge line between Bathampton Junction and Hawkeridge Junction, where mileage increases in the "up" direction.
Individual tracks will have their own names, such as Up Main or Down Loop. Trains running towards London are normally referred to as "up" trains, and those away from London as "down". Hence the down Night Riviera runs to and the up Flying Scotsman to London King's Cross. This distinction is less meaningful for trains not travelling towards or away from London; for instance a CrossCountry train from to uses "up" lines as far as and "down" lines thereafter.
In China, railway directions with terminus in Beijing are described as "up" (, shàngxíng) and "down" (, xiàxíng), with "up" towards Beijing; while trains leaving Beijing are "down". Trains run through Beijing may have two or more numbers, for example, the train from Harbin to Shanghai K58/55 uses two different numbers: on the Harbin–Tianjin section, the train runs toward Beijing, the train is known as K58, but on the Tianjin–Shanghai section, the train is known as K55; the opposite train from Shanghai to Harbin is known as K56/57, while K56 is used from Shanghai to Tianjin and K57 is used from Tianjin to Harbin. Generally even numbers denote trains heading towards Beijing while odd numbers are those heading away from the capital.
In Japan, railway directions are referred to as nobori ("up") and kudari ("down"), and these terms are widely employed in timetables as well as in station announcements and signage. For JR Group trains, trains heading towards Tokyo Station are considered "up" trains, while those heading away are "down" trains, with notable exceptions for the Yamanote and Osaka Loop lines, which are both loop lines operated by JR Group companies. There is also an exception for the Keihin–Tohoku Line and other similar services that run past Tokyo Station: officially the line is part of the Tohoku Line north of Tokyo Station and the Tokaido Line south of it, so these trains are referred to as northbound or southbound instead. For other, private railway operators, the designation of "up" and "down" (where used at all) usually depends on where the company is headquartered, that end being treated as "up".
In Hong Kong, most lines have their "down" direction towards the terminal closer to Central, with the exception of Disneyland Resort line, where the down line is towards Disneyland to be consistent with Tung Chung line where it branches from. On Tuen Ma line, the "down" end is Wu Kai Sha. The up/down direction was switched in the former Ma On Shan line such that it could be connected with the former West Rail line. The direction is signposted along the track, with the mileage increasing in the up direction, and also on the platform ends.
The railway systems of the Australian states have generally followed the practices of railways in the United Kingdom. Railway directions are usually described as "up" and "down", with "up" being towards the major location in most states, which is usually the capital city of the state. In New South Wales, trains running away from Sydney are "down" trains, while in Victoria, trains running away from Melbourne are "down" trains. An interstate train travelling from Sydney to Melbourne is a "down" train until it crosses the state border at Albury, where it changes its classification to an "up" train. Even in states that follow this practice, exceptions exist for individual lines. In the state of Queensland, "up" and "down" directions are individually defined for each line. Therefore, a train heading towards the main railway station in Brisbane (Roma Street station) would be classified as an "up" train on some lines but as a "down" train on other lines. In South Australia, there are two up/down origins: Port Augusta and Adelaide.
In Taiwan, trains travelling north towards Keelung on the Western Trunk Line and towards Badu on the Yilan Line are considered "up" trains. However, on other parts of the network, the terminology "clockwise" and "counter-clockwise" is used instead.
In Sweden, where trains run on the left (unlike roads which switched to running on the right in 1967), "up" (uppspår) refers to trains heading northbound, while "down" (nedspår) refers to trains heading southbound. Even numbers are always used for "up" trains while odd numbers are always used for "down" trains.
Inbound and outbound
In many commuter rail and rapid transit services in the United States, the rail directions are related to the location of the city centre. The term inbound is used for the direction leading in toward the city centre and outbound is used for the opposite direction leading out of the city centre.
City name directions
Some British rail directions commonly used are London and Country. The London end of a station platform or train is the end nearer to London. First class accommodation, where provided, is usually at this end. The country end is the opposite end. This usage is problematic where more than one route to London exists (e.g. at Exeter St Davids via Salisbury or Bristol, or Edinburgh Waverley).
Even and odd
In France, railway directions are usually described as Pair and Impair (meaning Even and Odd), corresponding to Up and Down in the British system. Pair means heading toward Paris, and Impair means heading away from Paris. This convention is applied not only to the trains and the tracks, but also to items of lineside equipment. Pair is also quasi-homophonic with Paris, so direction P is equivalent either with direction Pair or with direction Paris.
A similar system is in use in Italy, where directions can be Pari or Dispari (Even and Odd respectively). Pari (Even) trains conventionally travel north- and west-bound. The city of Paris is referenced in colloquial use (Parigi in Italian), with Pari trains virtually leading towards it (Paris being in a north-western direction from any point in Italy).
Polish railways also use parzysty and nieparzysty (even and odd) to designate line directions, with odd directions usually heading away from major cities (with historical exceptions in place) and thus functionally the equivalent of the British "down" direction. The odd direction is the direction of increasing mileage. With rail traffic in Poland operating on the right-hand side, down/odd tracks are usually on the right on double-track lines, and signalling equipment numbering follows this. Train numbers adhere to this directional principle to the extreme: trains entering a line in opposite direction of their previous line will change numbers accordingly (with numbering pairs: 0/1, 2/3, 4/5, 6/7, 8/9), and to give an example, 1300 and 1301 are the exact same train in Poland, with the even and odd numbers applying over different sections of its journey.
In Russia (and ex-USSR countries), the "even direction" is usually north- and eastbound, while the "odd direction" is south- and westbound. Trains travelling "even" and "odd" usually receive even and odd numbers as well as track and signal numbers, respectively.
Circumferential directions
In double track loop lines – such as those encircling a city – the tracks, trains and trackside equipment can be identified by their relative distance from the centre of the loop. Inner refers to the track and its trains that are closer to the topological centre. Outer refers to the track and its trains that are furthermost from the topological centre. One example is the City Circle line in the Sydney Trains system.
For circle routes, the directions may indicate clockwise or counterclockwise (anti-clockwise) bound trains. For example, on the Circle line of London Underground or the loop of the Central line, the directions are often referred to as "inner rail" (anti-clockwise) or "outer rail" (clockwise).
The same practice is used for circle routes in Japan, such as the Yamanote Line in Tokyo and the Osaka Loop Line, where directions are usually referred to as sotomawari ("outer loop") and uchimawari ("inner loop"), in a system where trains go clockwise on the outer track and counter-clockwise on the inner track.
Geographical directions
Cardinal directions
Most railroads in the United States use nominal cardinal directions for the directions of their lines, which often differ from actual compass directions. These directions are often referred to as "railroad" north, south, east, or west, to avoid confusion with the compass directions.
Typically an entire railroad system (the lines of a railroad or a related group of railroads) will describe all of its lines by only two directions, either east and west, or north and south. This greatly reduces the possibility of misunderstanding the direction in which a train is travelling as it traverses lines which may twist and turn or even reverse direction for a distance. These directions also have significance in resolving conflicts between trains running in opposite directions. For example, many railroads specify that trains of equal class running to the east are superior to those running west. This means that, if two trains are approaching a passing siding on a single-track line, the inferior westbound train must "take the siding" and wait there for the superior eastbound train to pass.
In the United States, most railroads use "east and west", and it is unusual for a railroad to designate "north and south" (the New York City Subway, the Chicago "L", and the Washington Metro are rare examples). Even-numbered trains (superior) travel east (or north). Odd-numbered trains (inferior) travel west (or south).
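The parity rule above is mechanical enough to express as a small helper function; the sketch below is purely illustrative of the convention as described, since real timetables and rulebooks define their own line-specific exceptions.

def nominal_direction(train_number, axis="east-west"):
    """Return (direction, superiority) under the common US convention described above:
    even-numbered trains run east/north and are superior to odd-numbered (west/south)
    trains of the same class. Illustrative only; not any carrier's actual rulebook.
    """
    even = train_number % 2 == 0
    if axis == "east-west":
        direction = "east" if even else "west"
    else:
        direction = "north" if even else "south"
    return direction, "superior" if even else "inferior"

# Example: trains 102 and 7 approaching the same single-track section.
print(nominal_direction(102))  # ('east', 'superior')
print(nominal_direction(7))    # ('west', 'inferior')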
On the London Underground, geographic direction naming generally prevails (e.g. eastbound, westbound) except for the Circle line where it is Outer Rail and Inner Rail.
Other names for north and south
In New York City, the terms uptown and downtown are used in the subway to refer to northbound and southbound respectively. The nominal railroad direction is determined by how the line will travel when it enters Manhattan.
For railways in China that are not connected with Beijing, north and west are used as "up", and east and south as "down". Odd numbered train codes are used for "down" trains, while even numbers are used for "up"; for example, train T27 from Beijing West to Lhasa is "down" (going away from Beijing) since 27 is odd.
Other
Germany
In Germany, the tracks outside of station limits are called "Regelgleis" (usual track) and "Gegengleis" (opposite track). As trains in Germany usually drive on the right side, the Regelgleis is typically the right-side track, with some exceptions. When the direction of travel changes, the tracks' names also change, so the names of the adjacent stations are added. For example, the usual track from A-town to B-ville would also be the opposite track from B-ville to A-town. If two or more lines run parallel (German railway lines can only have one or two tracks outside station limits by definition), the name of the railway line is also added (usually something like goods line, S-Bahn, long-distance tracks, regional tracks, etc.).
Before being called Regel- and Gegengleis, the tracks were referred to as "right" (as in correct) and "false" track, with the right track being on the right side. As the use of the word "false" implied that it was wrong to drive on it, Deutsche Bahn considered changing the names to "Right" and "Left" track. However, this would have led to some cases where the "Right" track would be on the left side of the line and vice versa.
References
Rail transport operations
Orientation (geometry) | Rail directions | [
"Physics",
"Mathematics"
] | 2,742 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
1,484,696 | https://en.wikipedia.org/wiki/Dependency%20injection | In software engineering, dependency injection is a programming technique in which an object or function receives other objects or functions that it requires, as opposed to creating them internally. Dependency injection aims to separate the concerns of constructing objects and using them, leading to loosely coupled programs. The pattern ensures that an object or function that wants to use a given service should not have to know how to construct those services. Instead, the receiving "client" (object or function) is provided with its dependencies by external code (an "injector"), which it is not aware of. Dependency injection makes implicit dependencies explicit and helps solve the following problems:
How can a class be independent from the creation of the objects it depends on?
How can an application, and the objects it uses, support different configurations?
Dependency injection is often used to keep code in-line with the dependency inversion principle.
In statically typed languages, using dependency injection means that a client only needs to declare the interfaces of the services it uses, rather than their concrete implementations, making it easier to change which services are used at runtime without recompiling.
Application frameworks often combine dependency injection with inversion of control. Under inversion of control, the framework first constructs an object (such as a controller), and then passes control flow to it. With dependency injection, the framework also instantiates the dependencies declared by the application object (often in the constructor method's parameters), and passes the dependencies into the object.
Dependency injection implements the idea of "inverting control over the implementations of dependencies", which is why certain Java frameworks generically name the concept "inversion of control" (not to be confused with inversion of control flow).
Roles
Dependency injection involves four roles: services, clients, interfaces and injectors.
Services and clients
A service is any class which contains useful functionality. In turn, a client is any class which uses services. The services that a client requires are the client's dependencies.
Any object can be a service or a client; the names relate only to the role the objects play in an injection. The same object may even be both a client (it uses injected services) and a service (it is injected into other objects). Upon injection, the service is made part of the client's state, available for use.
Interfaces
Clients should not know how their dependencies are implemented, only their names and API. A service which retrieves emails, for instance, may use the IMAP or POP3 protocols behind the scenes, but this detail is likely irrelevant to calling code that merely wants an email retrieved. By ignoring implementation details, clients do not need to change when their dependencies do.
Injectors
The injector, sometimes also called an assembler, container, provider or factory, introduces services to the client.
The role of injectors is to construct and connect complex object graphs, where objects may be both clients and services. The injector itself may be many objects working together, but must not be the client, as this would create a circular dependency.
Because dependency injection separates how objects are constructed from how they are used, it often diminishes the importance of the new keyword found in most object-oriented languages. Because the framework handles creating services, the programmer tends to only directly construct value objects which represent entities in the program's domain (such as an Employee object in a business app or an Order object in a shopping app).
Analogy
As an analogy, cars can be thought of as services which perform the useful work of transporting people from one place to another. Car engines can require gas, diesel or electricity, but this detail is unimportant to the client—a driver—who only cares if it can get them to their destination.
Cars present a uniform interface through their pedals, steering wheels and other controls. As such, which engine they were 'injected' with on the factory line ceases to matter and drivers can switch between any kind of car as needed.
Advantages and disadvantages
Advantages
A basic benefit of dependency injection is decreased coupling between classes and their dependencies.
By removing a client's knowledge of how its dependencies are implemented, programs become more reusable, testable and maintainable.
This also results in increased flexibility: a client may act on anything that supports the intrinsic interface the client expects.
More generally, dependency injection reduces boilerplate code, since all dependency creation is handled by a singular component.
Finally, dependency injection allows concurrent development. Two developers can independently develop classes that use each other, while only needing to know the interface the classes will communicate through. Plugins are often developed by third-parties that never even talk to developers of the original product.
Testing
Many of dependency injection's benefits are particularly relevant to unit-testing.
For example, dependency injection can be used to externalize a system's configuration details into configuration files, allowing the system to be reconfigured without recompilation. Separate configurations can be written for different situations that require different implementations of components.
Similarly, because dependency injection does not require any change in code behavior, it can be applied to legacy code as a refactoring. This makes clients more independent and easier to unit test in isolation, using stubs or mock objects that simulate other objects not under test.
This ease of testing is often the first benefit noticed when using dependency injection.
Disadvantages
Critics of dependency injection argue that it:
Creates clients that demand configuration details, which can be onerous when obvious defaults are available.
Makes code difficult to trace because it separates behavior from construction.
Is typically implemented with reflection or dynamic programming, hindering IDE automation.
Typically requires more upfront development effort.
Encourages dependence on a framework.
Types of dependency injection
There are four main ways in which a client can receive injected services:
Constructor injection, where dependencies are provided through a client's class constructor.
Method Injection, where dependencies are provided to a method only when required for specific functionality.
Setter injection, where the client exposes a setter method which accepts the dependency.
Interface injection, where the dependency's interface provides an injector method that will inject the dependency into any client passed to it.
In some frameworks, clients do not need to actively accept dependency injection at all. In Java, for example, reflection can make private attributes public when testing and inject services directly.
Without dependency injection
In the following Java example, the Client class contains a Service member variable initialized in the constructor. The client directly constructs and controls which service it uses, creating a hard-coded dependency.
public class Client {
private Service service;
Client() {
// The dependency is hard-coded.
this.service = new ExampleService();
}
}
Constructor injection
The most common form of dependency injection is for a class to request its dependencies through its constructor. This ensures the client is always in a valid state, since it cannot be instantiated without its necessary dependencies.
public class Client {
private Service service;
// The dependency is injected through a constructor.
Client(final Service service) {
if (service == null) {
throw new IllegalArgumentException("service must not be null");
}
this.service = service;
}
}
Method Injection
Dependencies are passed as arguments to a specific method, allowing them to be used only during that method's execution without maintaining a long-term reference. This approach is particularly useful for temporary dependencies or when different implementations are needed for various method calls.
public class Client {
public void performAction(Service service) {
if (service == null) {
throw new IllegalArgumentException("service must not be null");
}
service.execute();
}
}
Setter injection
By accepting dependencies through a setter method, rather than a constructor, clients can allow injectors to manipulate their dependencies at any time. This offers flexibility, but makes it difficult to ensure that all dependencies are injected and valid before the client is used.
public class Client {
private Service service;
// The dependency is injected through a setter method.
public void setService(final Service service) {
if (service == null) {
throw new IllegalArgumentException("service must not be null");
}
this.service = service;
}
}
Interface injection
With interface injection, dependencies are completely ignorant of their clients, yet still send and receive references to new clients.
In this way, the dependencies become injectors. The key is that the injecting method is provided through an interface.
An assembler is still needed to introduce the client and its dependencies. The assembler takes a reference to the client, casts it to the setter interface that sets that dependency, and passes it to that dependency object which in turn passes a reference to itself back to the client.
For interface injection to have value, the dependency must do something in addition to simply passing back a reference to itself. This could be acting as a factory or sub-assembler to resolve other dependencies, thus abstracting some details from the main assembler. It could be reference-counting so that the dependency knows how many clients are using it. If the dependency maintains a collection of clients, it could later inject them all with a different instance of itself.
public interface ServiceSetter {
void setService(Service service);
}
public class Client implements ServiceSetter {
private Service service;
@Override
public void setService(final Service service) {
if (service == null) {
throw new IllegalArgumentException("service must not be null");
}
this.service = service;
}
}
public class ServiceInjector {
private final Set<ServiceSetter> clients = new HashSet<>();
public void inject(final ServiceSetter client) {
this.clients.add(client);
client.setService(new ExampleService());
}
public void switchToAnotherService() {
for (final ServiceSetter client : this.clients) {
client.setService(new AnotherExampleService());
}
}
}
public class ExampleService implements Service {}
public class AnotherExampleService implements Service {}
Assembly
The simplest way of implementing dependency injection is to manually arrange services and clients, typically done at the program's root, where execution begins.
public class Program {
public static void main(final String[] args) {
// Build the service.
final Service service = new ExampleService();
// Inject the service into the client.
final Client client = new Client(service);
// Use the objects.
System.out.println(client.greet());
}
}
Manual construction may be more complex and involve builders, factories, or other construction patterns.
Frameworks
Manual dependency injection is often tedious and error-prone for larger projects, promoting the use of frameworks which automate the process. Manual dependency injection becomes a dependency injection framework once the constructing code is no longer custom to the application and is instead universal. While useful, these tools are not required in order to perform dependency injection.
Some frameworks, like Spring, can use external configuration files to plan program composition:
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class Injector {
public static void main(final String[] args) {
// Details about which concrete service to use are stored in configuration separate from the program itself.
final BeanFactory beanfactory = new ClassPathXmlApplicationContext("Beans.xml");
final Client client = (Client) beanfactory.getBean("client");
System.out.println(client.greet());
}
}
Even with a potentially long and complex object graph, the only class mentioned in code is the entry point, in this case Client. Client has not undergone any changes to work with Spring and remains a POJO. By keeping Spring-specific annotations and calls from spreading out among many classes, the system stays only loosely dependent on Spring.
Examples
AngularJS
The following example shows an AngularJS component receiving a greeting service through dependency injection.
function SomeClass(greeter) {
this.greeter = greeter;
}
SomeClass.prototype.doSomething = function(name) {
this.greeter.greet(name);
}
Each AngularJS application contains a service locator responsible for the construction and look-up of dependencies.
// Provide the wiring information in a module
var myModule = angular.module('myModule', []);
// Teach the injector how to build a greeter service.
// greeter is dependent on the $window service.
myModule.factory('greeter', function($window) {
return {
greet: function(text) {
$window.alert(text);
}
};
});
We can then create a new injector that provides components defined in the myModule module, including the greeter service.
var injector = angular.injector(['myModule', 'ng']);
var greeter = injector.get('greeter');
To avoid the service locator antipattern, AngularJS allows declarative notation in HTML templates which delegates creating components to the injector.
<div ng-controller="MyController">
<button ng-click="sayHello()">Hello</button>
</div>
function MyController($scope, greeter) {
$scope.sayHello = function() {
greeter.greet('Hello World');
};
}
The ng-controller directive triggers the injector to create an instance of the controller and its dependencies.
C#
This sample provides an example of constructor injection in C#.
using System;
namespace DependencyInjection;
// Our client will only know about this interface, not which specific gamepad it is using.
interface IGamepadFunctionality {
string GetGamepadName();
void SetVibrationPower(float power);
}
// The following services provide concrete implementations of the above interface.
class XboxGamepad : IGamepadFunctionality {
float vibrationPower = 1.0f;
public string GetGamepadName() => "Xbox controller";
public void SetVibrationPower(float power) => this.vibrationPower = Math.Clamp(power, 0.0f, 1.0f);
}
class PlaystationJoystick : IGamepadFunctionality {
float vibratingPower = 100.0f;
public string GetGamepadName() => "PlayStation controller";
public void SetVibrationPower(float power) => this.vibratingPower = Math.Clamp(power * 100.0f, 0.0f, 100.0f);
}
class SteamController : IGamepadFunctionality {
double vibrating = 1.0;
public string GetGamepadName() => "Steam controller";
public void SetVibrationPower(float power) => this.vibrating = Convert.ToDouble(Math.Clamp(power, 0.0f, 1.0f));
}
// This class is the client which receives a service.
class Gamepad {
IGamepadFunctionality gamepadFunctionality;
// The service is injected through the constructor and stored in the above field.
public Gamepad(IGamepadFunctionality gamepadFunctionality) => this.gamepadFunctionality = gamepadFunctionality;
public void Showcase() {
// The injected service is used.
var gamepadName = this.gamepadFunctionality.GetGamepadName();
var message = $"We're using the {gamepadName} right now, do you want to change the vibrating power?";
Console.WriteLine(message);
}
}
class Program {
static void Main() {
var steamController = new SteamController();
// We could have also passed in an XboxController, PlaystationJoystick, etc.
// The gamepad doesn't know what it's using and doesn't need to.
var gamepad = new Gamepad(steamController);
gamepad.Showcase();
}
}
Go
Go does not support classes, and dependency injection is usually either done by hand or abstracted by a dedicated library that utilizes reflection or generics (the latter being supported since Go 1.18). A simpler approach without dependency injection libraries is illustrated by the following example of an MVC web application.
First, pass the necessary dependencies to a router and then from the router to the controllers:
package router
import (
"database/sql"
"net/http"
"example/controllers/users"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/redis/go-redis/v9"
"github.com/rs/zerolog"
)
type RoutingHandler struct {
// passing the values by pointer further down the call stack
// means we won't create a new copy, saving memory
log *zerolog.Logger
db *sql.DB
cache *redis.Client
router chi.Router
}
// connection, logger and cache initialized usually in the main function
func NewRouter(
log *zerolog.Logger,
db *sql.DB,
cache *redis.Client,
) (r *RoutingHandler) {
rtr := chi.NewRouter()
return &RoutingHandler{
log: log,
db: db,
cache: cache,
router: rtr,
}
}
func (r *RoutingHandler) SetupUsersRoutes() {
uc := users.NewController(r.log, r.db, r.cache)
r.router.Get("/users/:name", func(w http.ResponseWriter, r *http.Request) {
uc.Get(w, r)
})
}
Then, you can access the private fields of the struct in any method whose pointer receiver is that struct, without violating encapsulation.
package users
import (
"database/sql"
"net/http"
"example/models"
"github.com/go-chi/chi/v5"
"github.com/redis/go-redis/v9"
"github.com/rs/zerolog"
)
type Controller struct {
log *zerolog.Logger
storage models.UserStorage
cache *redis.Client
}
func NewController(log *zerolog.Logger, db *sql.DB, cache *redis.Client) *Controller {
return &Controller{
log: log,
storage: models.NewUserStorage(db),
cache: cache,
}
}
func (uc *Controller) Get(w http.ResponseWriter, r *http.Request) {
// note that we can also wrap logging in a middleware, this is for demonstration purposes
uc.log.Info().Msg("Getting user")
userParam := chi.URLParam(r, "name")
var user *models.User
// get the user from the cache
err := uc.cache.Get(r.Context(), userParam).Scan(&user)
if err != nil {
uc.log.Error().Err(err).Msg("Error getting user from cache. Retrieving from SQL storage")
}
user, err = uc.storage.Get(userParam)
if err != nil {
uc.log.Error().Err(err).Msg("Error getting user from SQL storage")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
}
Finally you can use the database connection initialized in your main function at the data access layer:
package models
import (
"database/sql"
"time"
)
type (
UserStorage struct {
conn *sql.DB
}
User struct {
Name string 'json:"name" db:"name,primarykey"'
JoinedAt time.Time 'json:"joined_at" db:"joined_at"'
Email string 'json:"email" db:"email"'
}
)
func NewUserStorage(conn *sql.DB) *UserStorage {
return &UserStorage{
conn: conn,
}
}
func (us *UserStorage) Get(name string) (user *User, err error) {
// assuming 'name' is a unique key
query := "SELECT * FROM users WHERE name = $1"
if err := us.conn.QueryRow(query, name).Scan(&user); err != nil {
return nil, err
}
return user, nil
}
See also
Architecture description language
Factory pattern
Inversion of control
Plug-in (computing)
Strategy pattern
Service locator pattern
Parameter (computer programming)
Quaject
References
External links
Composition Root by Mark Seemann
A beginners guide to Dependency Injection
Dependency Injection & Testable Objects: Designing loosely coupled and testable objects - Jeremy Weiskotten; Dr. Dobb's Journal, May 2006.
Design Patterns: Dependency Injection -- MSDN Magazine, September 2005
Martin Fowler's original article that introduced the term Dependency Injection
P of EAA: Plugin
- Andrew McVeigh - A detailed history of dependency injection.
What is Dependency Injection? - An alternative explanation - Jakob Jenkov
Writing More Testable Code with Dependency Injection -- Developer.com, October 2006
Managed Extensibility Framework Overview -- MSDN
Old fashioned description of the Dependency Mechanism by Hunt 1998
Refactor Your Way to a Dependency Injection Container
Understanding DI in PHP
You Don't Need a Dependency Injection Container
Component-based software engineering
Software architecture
Software design patterns
Articles with example Java code | Dependency injection | [
"Technology"
] | 4,741 | [
"Component-based software engineering",
"Components"
] |
1,484,741 | https://en.wikipedia.org/wiki/Carcinisation | Carcinisation (American English: carcinization) is a form of convergent evolution in which non-crab crustaceans evolve a crab-like body plan. The term was introduced into evolutionary biology by L. A. Borradaile, who described it as "the many attempts of Nature to evolve a crab".
Definition of carcinised morphology
It was stated by Lancelot Alexander Borradaile in 1916 that:
Keiler et al., 2017 defines a carcinised morphology as follows:
"The carapace is flatter than it is broad and possesses lateral margins."
"The sternites are fused into a wide sternal plastron which possesses a distinct emargination on its posterior margin."
"The pleon is flattened and strongly bent, in dorsal view completely hiding the tergites of the fourth pleonal segment, and partially or completely covers the plastron."
An important and visually evident marker of difference between true crabs and carcinised Anomura is the number of leg pairs. While Brachyura (true) crabs have four pairs of legs used for locomotion, Anomura possess one much smaller set and therefore three sets of walking legs.
Examples
Carcinisation is believed to have occurred independently in at least five groups of decapod crustaceans:
Order Decapoda:
Infraorder Anomura:
King crabs, which most scientists believe evolved from hermit crab ancestors
First appearance: Late Cenozoic
Porcelain crabs, which are closely related to squat lobsters
First appearance: Late Jurassic
The hairy stone crab (Lomis hirta)
Hermit crabs:
The coconut crab (Birgus latro)
Patagurus rex
Infraorder Brachyura (true crabs) First appearance: Early Jurassic
The extinct probable crustacean order Cyclida are also noted to "strikingly resemble crabs," and probably had a similar ecology.
King crabs
The example of king crabs (family Lithodidae) evolving from hermit crabs has been particularly well studied, and evidence in their biology supports this theory. For example, most hermit crabs are asymmetrical, and fit well into spiral snail shells; the abdomens of king crabs, even though they do not use snail shells for shelter, are also asymmetrical.
Hypercarcinisation
An exceptional form of carcinisation, termed "hypercarcinisation", is seen in the porcelain crab Allopetrolisthes spinifrons. In addition to the shortened body form, A. spinifrons also shows similar sexual dimorphism to that seen in true crabs, where males have a shorter pleon than females.
Selective pressures and benefits
Independently arising from multiple ancestral crustacean taxa, the crab-like traits exhibited vary between individual species and taxa. However, all crabs and carcinised organisms are decapods. Correlations between the folding of the pleon tail and widening of the cephalothorax across disparate decapod species suggest similar evolutionary pressures. Some occurrences of carcinisation are derived from convergent but distinct developmental pathways, while others may be instances of homologous parallelism from shared ancestral body plans.
Most carcinised organisms are descended from the infraorder Anomura, which includes hermit crabs. Many carcinised Anomura evolved from ancestors with morphologically intermediate forms reminiscent of modern squat lobsters; the king crab is an exception, hypothesized by researchers to be descended directly from pagurid hermit crabs. There may be various advantages to adopting brachyuraform (true crab-like) traits.
The adoption of a crab-like body structure can convey a number of selective advantages for crustacean species. Carcinisation is associated with a lowered center of gravity, allowing these creatures to invest in sideways walking. This evasive adaptation is particularly useful in an ocean environment with forward-moving predators. The pleon is held tightly under the animal's cephalothorax with reduced musculature, which protects the pleon's organs from attack. The smaller and more balanced frame facilitates concealment within rocks and coral. The folding of the pleon below the carapace, the resulting reduction in the crustacean's exposed surface area, and the associated hardening of the pleonal cuticle are all thought to benefit the fitness of this body type.
Evolutionary tradeoffs
The caridoid escape reaction is an innate danger response in crustaceans such as lobsters and crayfish, in which rapid flexions of the abdomen send the crustacean shooting backward through the water. Brachyura and species which have undergone carcinisation have strongly bent and immobile tails, which prevent them from using this evasion strategy. The necessary muscles are no longer developed enough in these species to facilitate the necessary tail flipping. Crabs and false crabs are best suited to escape by ground pursuit, in comparison to the quick aquatic escape provided by the caridoid escape reaction.
Porcelain crabs' closest relatives are squat lobsters, taxa which occupy a morphological middle ground, described by Keiler et al. as "half-carcinized" due to their partially flexed pleons and carapaces that remain longer than they are wide. Many species do not become fully carcinised but only undergo, to varying degrees, the crab-like adaptations that are contextually beneficial.
Coconut crabs (Birgus latro)
While most instances of carcinisation occur in aquatic Anomura populations, it has also evolved in the planet's largest land-dwelling invertebrate, the coconut crab. A number of true crab-like features, such as a wide carapace and a low abdomen with strong supporting legs, allow these crustaceans to wield muscular claws and manipulate their terrestrial environments with greater ease. The lack of an extended pleon greatly benefits their mobility. In this case, brachyuraform traits accommodate comfortable terrestrial locomotion and are far more pronounced in maturity, after the larval and post-larval stages, which remain obligatorily aquatic. The repeated emergence of carcinised morphological structures suggests that selective pressures in various Anomura niches and habitats often favor carcinisation, though this may fluctuate and is sometimes reversed by the opposite process of decarcinisation.
Decarcinisation
Some crab-shaped species have evolved away from the crab form in a process called decarcinisation. Decarcinisation, or the loss of the crab-like body, has occurred multiple times in both Brachyura and Anomura. However, there are varying degrees of carcinisation and decarcinisation. Thus, not all species can necessarily be distinctly classified as "carcinised" or "decarcinised". Some examples include the coconut crab, as well as other hermit crabs, that have lost or reduced their outer casing, often referred to as "domiciles". While they retain their crab-like phenotype, their reduction in or lack of domicile necessitates a "semi-carcinised" label.
See also
List of examples of convergent evolution
Cretaceous crab revolution
Mesozoic marine revolution
Orthogenesis (comparable with convergent evolution but involving teleology)
References
Crustaceans
Convergent evolution | Carcinisation | [
"Biology"
] | 1,485 | [
"Convergent evolution",
"Evolutionary biology concepts"
] |
1,484,829 | https://en.wikipedia.org/wiki/Nimesulide | Nimesulide is a nonsteroidal anti-inflammatory drug (NSAID) with pain medication and fever reducing properties. Its approved indications are the treatment of acute pain, the symptomatic treatment of osteoarthritis, and primary dysmenorrhoea in adolescents and adults above 12 years old.
Side effects may include liver problems. It has a multifactorial mode of action and is characterized by a fast onset of action. It works by blocking the production of prostaglandins (a chemical associated with pain), thereby relieving pain and inflammation.
Medical uses
It may be used for pain, including period pains. Nimesulide is not recommended for long-term use, as in chronic conditions such as arthritis. This is due to its association with an increased risk of liver toxicity, including liver failure. Despite its risk of hepatotoxicity, a 2012 evaluation by the European Medicines Agency (EMA) concluded that the overall benefit/risk profile of nimesulide is favourable and in line with that of other NSAIDs such as diclofenac, ibuprofen, and naproxen, provided that the duration of use is limited to 15 days and the dose does not exceed 200 mg/day.
Children
Less than 10 days of nimesulide does not appear to increase the risk of hypothermia, gastrointestinal bleeding, epigastric pain, vomiting, diarrhea, or transient, asymptomatic elevation of liver enzymes compared to ketoprofen, paracetamol, mefenamic acid, aspirin, or ibuprofen in children. However, data does not speak to populations less than 6 months old.
Pregnancy and lactation
Nimesulide is contraindicated during pregnancy, and research suggests that it is also contraindicated in lactating women.
Available forms
Nimesulide is available in a variety of forms: tablets, powder for dissolution in water, suppositories, mouth dissolving tablets, and topical gel.
Contraindications
It should be avoided by children under 12 and people with liver problems.
Side effects
Due to concerns about the risk of liver toxicity, nimesulide has been withdrawn from the market in several countries (Mexico, Spain, Finland, Belgium, and Ireland). Liver problems have resulted in both deaths and the need for transplantation. The frequency of nimesulide-induced liver injury is estimated at around 1 in 50,000 patients; severe injury has occurred as soon as three days after starting the medication. Shorter (≤ 15 days) duration of therapy does not prevent serious nimesulide hepatotoxicity.
Continuous use of nimesulide (more than 15 days) may cause the following side effects:
Diarrhea
Vomiting
Skin rash
Itchiness
Dizziness
Bitterness in mouth
Pharmacology
Pharmacodynamics
Nimesulide is a nonsteroidal anti-inflammatory drug (NSAID), acting specifically as a relatively selective cyclooxygenase-2 inhibitor. However, the pharmacological profile of nimesulide is peculiar, and additional, unknown/yet-to-be-identified mechanisms appear to also be involved. One pathway that has been implicated in its actions is the ecto-5'-nucleotidase (e-5′NT/CD73)/adenosine A2A receptor pathway.
Pharmacokinetics
Nimesulide is absorbed rapidly following oral administration.
Nimesulide undergoes extensive biotransformation, mainly to 4-hydroxynimesulide (which also appears to be biologically active).
Food, sex, and advanced age have negligible effects on nimesulide pharmacokinetics.
Moderate chronic kidney disease does not necessitate dosage adjustment, while in patients with severe chronic kidney disease or liver disease, nimesulide is contraindicated.
Nimesulide has a relatively rapid onset of action, with meaningful reductions in pain and inflammation observed within 15 minutes from drug intake.
The therapeutic effects of nimesulide are the result of its complex mode of action, which targets a number of key mediators of the inflammatory process such as: COX-2 mediated prostaglandins, free radicals, proteolytic enzymes, and histamine.
Clinical evidence is available to support a particularly good profile in terms of gastrointestinal tolerability.
History
Nimesulide was launched in Italy for the first time as Aulin and Mesulid in 1985 and is available in more than 50 countries worldwide, including among others France, Portugal, Greece, Switzerland, Belgium, Russia, Thailand, and Brazil. Nimesulide has never been filed for Food and Drug Administration (FDA) evaluation in the United States, where it is not marketed.
Society and culture
Brand names
Nimesulide is available throughout the world as original product with the following trademarks: Sulide, Nimalox, Mesulid (Novartis, Brazil, Boehringer Ingelheim, Greece, Italy), Coxtal (Sanofi, China, Bulgaria), Sintalgin (Abbott, Brazil), Eskaflam (GSK, Brazil, Mexico), Octaprin, Nimside (Teva, Pakistan), Nise (Russia, Venezuela, Vietnam, Ukraine), Nilsid (Egypt); Aulin (Bulgaria, Czech Republic, Italy, Romania, Poland), Ainex, Drexel, Donulide, Edrigyl, Enetra, Heugan, Mesulid, Minapon, NeRelid, Nexen, Nidolon, Nilden (Mexico); Emsulide, Nimed, Nimedex, Nimesil (Czech Republic, Moldova, Latvia, Lithuania, Kazhakhstan, Georgia, Poland), Nimulid (Trinidad & Tobago), Nimutab, Nimdase, Nimopen-MP Nise, Nimuwin, Nisulid, Nodard Plus, Nicip, Nimcap, Nic-P, Nic-Spas, Nimupain (India); Mesulid, Novolid, Relmex (Ecuador); Remisid (Ukraine); Coxulid, Emulid, Frenag, Fuzo, Motival, Nimeksil, Nimelid, Nîmes, Nimesdin, Romasulid, Sulidin, Suljel, Thermo Sulidin (Turkey), Xilox (Hungary); Modact-IR (Pakistan); and ad Sulidene and Zolan for veterinary use. Many generic and copy-products also exist (Lusemin, Medicox, Nidol, Nimalox, Nimesil, Nimotas, Nimulid, Nizer, Sorini, Ventor, Vionim, Neolide, Willgo among others), new-aid, Nexulide (Syria), Nims, Nice, Nimulide (Nepal)
India
Several reports have been made of adverse drug reactions in India. On February 12, 2011, the Indian Express reported that the Union Ministry of Health and Family Welfare had finally decided to suspend the pediatric use of the analgesic nimesulide suspension. From 10 March 2011 onward, nimesulide formulations are not indicated for human use in children below 12 years of age.
On September 13, 2011 Madras High Court revoked a suspension on manufacture and sale of paediatric drugs nimesulide and phenylpropanolamine (PPA).
On December 30, 2024, Central government banned the manufacture, sale and distribution of nimesulide and its formulations for animal use.
EMA confirms the positive benefit/risk ratio
On September 21, 2007 the EMA released a press release on their review on the liver-related safety of nimesulide. The EMA has concluded that the benefits of these medicines outweigh their risks, but that there is a need to limit the duration of use to ensure that the risk of patients developing liver problems is kept to a minimum. Therefore, the EMA has limited the use of systemic formulations (tablets, solutions, suppositories) of nimesulide to 15 days.
Irish Medicines Board
The Irish Medicines Board has decided to suspend Nimesulide from the Irish market and refer it to the EU Committee for Human Medicinal Products (CHMP) for a review of its benefit/risk profile. The decision is due to the reporting of six cases of potentially-related liver failures to the IMB by the National Liver Transplant Unit, St. Vincent's University Hospital. These cases occurred in the period from 1999 to 2006.
Bribes
In May 2008, Italy's leading daily paper Corriere della Sera and other media outlets reported that a top-ranking official at Italy's medicines agency AIFA had been filmed by police while accepting bribes from employees of pharmaceutical companies. The money allegedly was being paid to ensure that certain drugs would be spared scrutiny from the drugs watchdog. The investigation had started in 2005 following suspicions that some AIFA drug tests had been faked. Eight arrests were made. Nimesulide can be bought only with a physician's prescription, which is kept as a record at the pharmacy, nominally allowing strong control over its sale.
The original manufacturer of nimesulide is Helsinn Healthcare SA, Switzerland, which acquired the rights for the drug in 1976. After the patent protection terminated in 2015, a number of other companies started production and marketing of Nimesulide.
References
Antipyretics
Drugs with unknown mechanisms of action
Hepatotoxins
Nitrobenzene derivatives
Nonsteroidal anti-inflammatory drugs
Phenol ethers
Withdrawn drugs | Nimesulide | [
"Chemistry"
] | 2,045 | [
"Drug safety",
"Withdrawn drugs"
] |
1,484,838 | https://en.wikipedia.org/wiki/Leaded%20glass | Leaded glass may refer to:
Lead glass, potassium silicate glass which has been impregnated with a small amount of lead oxide in its fabrication
Lead came glasswork, glass panels made by combining multiple small pieces of glass, which may be stained, textured or beveled, with cames or copper foil
Leadlight or leaded lights, decorative windows made of small sections of glass supported in lead cames
Flint glass, an optical glass that has relatively high refractive index and low Abbe number
Glass compositions | Leaded glass | [
"Chemistry"
] | 108 | [
"Glass compositions",
"Glass chemistry"
] |
1,484,885 | https://en.wikipedia.org/wiki/Bay%20window | A bay window is a window space projecting outward from the main walls of a building and forming a bay in a room. It typically consists of a central windowpane, called a fixed sash, flanked by two or more smaller windows, known as casement or double-hung windows. The arrangement creates a panoramic view of the outside, allows more natural light to enter the room, and provides additional space within the room. Bay windows are often designed to extend beyond the exterior wall, forming a small nook or seating area inside, which can be used for various purposes such as reading, display, or simply enjoying the view. They are commonly found in residential buildings, particularly in living rooms, dining areas, or bedrooms, but can also be seen in commercial or public structures.
Types
Bay window is a generic term for all protruding window constructions, regardless of whether they are curved or angular, or run over one or multiple storeys.
In plan, the most frequently used shapes are isosceles trapezoid (which may be referred to as a canted bay window) and rectangle.
Other polygonal shapes with more than two corners are also common, as are curved shapes. If a bay window is curved, it may alternatively be called a bow window. Bay windows in a triangular shape, with just one corner, exist but are relatively rare.
A bay window that does not reach the ground and is instead supported by a corbel, bracket or similar is called an oriel window.
"Rawashin" is a traditional and distinctive style of corbelled bay window in Jeddah, Saudi Arabia (e.g., as on the frontage of Nasseef House).
Uses
Most bay windows from the medieval period up to the Baroque era are oriel windows. They frequently appear as a highly ornamented addition to the building rather than an organic part of it. Particularly during the Gothic period they often serve as small house chapels, with the oriel window containing an altar and resembling the apse of a church. Especially in Nuremberg these are even called Chörlein ("little choir"), with the most famous example being the one on the parsonage of St. Sebaldus Church.
In Islamic architecture, oriel windows such as the Arabic mashrabiya are frequently made of wood and allow viewing out while restricting visibility from the outside. Especially in warmer climates, a bay window may be identical to a balcony with a privacy shield or screen.
Bay windows can make a room appear larger, and provide views of the outside which would be unavailable with an ordinary flat window. They are found in terraced houses, semis, and detached houses as well as in blocks of flats.
Based on British models, their use spread to other English-speaking countries like Ireland, the US, Canada, and Australia. Following the pioneering model of pre-modern commercial architecture at the Oriel Chambers in Liverpool, they feature on early Chicago School skyscrapers, where they often run the whole height of the building's upper storeys. They also feature in bay-and-gable houses commonly found in older portions of Toronto.
Bay windows were identified as a defining characteristic of San Francisco architecture in a 2012 study that had a machine learning algorithm examine a random sample of 25,000 photos of cities from Google Street View.
Gallery
See also
Bay window caboose
Bow window
Bretèche
Oriel window
References
External links
Architectural elements
Architecture in the San Francisco Bay Area
Windows | Bay window | [
"Technology",
"Engineering"
] | 691 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
1,484,904 | https://en.wikipedia.org/wiki/Cathedral%20glass | Cathedral glass is the name given commercially to monochromatic sheet glass. It is thin by comparison with 'slab glass', may be coloured, and is textured on one side. The name draws from the fact that windows of stained glass were a feature of medieval European cathedrals from the 10th century onward.
The term 'cathedral glass' is sometimes applied erroneously to the windows of cathedrals as an alternative to the term 'stained glass'. Stained glass is the material and the art form of making coloured windows of elaborate or pictorial design.
Manufacture
Traditional methods of making coloured glass
Very early architectural glass, like that sometimes found in excavations of Roman baths, was cast. The molten glass was poured into a mold of wood or stone to make a sheet of glass. The texture of the mold material would be picked up by the glass.
By the time stained glass was being made, the glassblowing pipe was in common use, so hand-blown (or mouth-blown) sheets were made by the cylinder glass and/or crown glass method.
Casting came back as a common technique when rolled glass began to be manufactured in the mid 1830s and as glass jewels (also used for architectural glass) became popular. Rolled glass is not as rich and translucent as hand-blown glass, but it is much cheaper and is made in a variety of colors and textures, making it a useful decorative material.
Modern methods of making cathedral glass
This type of rolled glass is produced by pouring molten glass onto a metal or graphite table and immediately rolling it into a sheet using a large metal cylinder, similar to rolling out a pie crust. The rolling can be done by hand or machine. Glass can be 'double rolled', which means it is passed through two cylinders at once to yield glass of a certain thickness (approximately 3/16" or 5 mm). Glass made this way is never fully transparent, but it does not necessarily have much texture. It can be pushed and tugged while molten to achieve certain effects. For more distinct textures, the metal cylinder is imprinted with a pattern that is pressed into the molten glass as it passes through the rollers. The glass is then annealed.
Rolled glass was first commercially produced around the 1830s and is widely used today. It is often called cathedral glass, but this has nothing to do with medieval cathedrals, where the glass used was hand-blown.
Cathedral glass comes in a wide variety of colours and surface textures including hammered, rippled, seedy, and marine textures. It is made in the US, England, Germany, and China.
Uses
Cathedral glass has been used extensively in churches (often for non-pictorial windows) and for decorative glass in domestic and commercial buildings, both leaded and not, often in conjunction with drawn sheet glass and sometimes with decorative sections of beveled glass. It lets in light while reducing visibility and is a less expensive but still decorative material. While it does not have the richness and versatility of hand-blown glass, it has been used successfully for the creation of modern stained-glass windows in which the texture of the glass is treated, with the colour, as a significant design element.
Gallery
References
Sarah Brown, Stained Glass, an Illustrated History, Bracken Books, 1995
Ben Sinclair, Plain Glazing, the Building Conservation Directory, 2001
Windows
Glass architecture
Glass art
Church architecture | Cathedral glass | [
"Materials_science",
"Engineering"
] | 677 | [
"Glass architecture",
"Glass engineering and science"
] |
1,484,951 | https://en.wikipedia.org/wiki/Selected-ion%20flow-tube%20mass%20spectrometry | Selected-ion flow-tube mass spectrometry (SIFT-MS) is a quantitative mass spectrometry technique for trace gas analysis which involves the chemical ionization of trace volatile compounds by selected positive precursor ions during a well-defined time period along a flow tube. Absolute concentrations of trace compounds present in air, breath or the headspace of bottled liquid samples can be calculated in real time from the ratio of the precursor and product ion signal ratios, without the need for sample preparation or calibration with standard mixtures. The detection limit of commercially available SIFT-MS instruments extends to the single digit pptv range.
The instrument is an extension of the selected ion flow tube, SIFT, technique, which was first described in 1976 by Adams and Smith. It is a fast flow tube/ion swarm method to react positive or negative ions with atoms and molecules under truly thermalised conditions over a wide range of temperatures. It has been used extensively to study ion-molecule reaction kinetics. Its application to ionospheric and interstellar ion chemistry over a 20-year period has been crucial to the advancement and understanding of these topics.
SIFT-MS was initially developed for use in human breath analysis, and has shown great promise as a non-invasive tool for physiological monitoring and disease diagnosis. It has since shown potential for use across a wide variety of fields, particularly in the life sciences, such as agriculture and animal husbandry, environmental research and food technology.
SIFT-MS has been popularised as a technology which is sold and marketed by Syft Technologies based in Christchurch, New Zealand.
The SIFT technique, which is the basis of SIFT-MS, was conceived and developed in the 1970s at the University of Birmingham, England, by Nigel Adams and David Smith.
Instrumentation
In the selected ion flow tube mass spectrometer, SIFT-MS, ions are generated in a microwave plasma ion source, usually from a mixture of laboratory air and water vapor. From the formed plasma, a single ionic species is selected using a quadrupole mass filter to act as "precursor ions" (also frequently referred to as primary or reagent ions in SIFT-MS and other processes involving chemical ionization). In SIFT-MS analyses, H3O+, NO+ and O2+ are used as precursor ions, and these have been chosen because they are known not to react significantly with the major components of air (nitrogen, oxygen, etc.), but can react with many of the very low level (trace) gases.
The selected precursor ions are injected into a flowing carrier gas (usually helium at a pressure of 1 Torr) via a Venturi orifice (~1 mm diameter) where they travel along the reaction flow tube by convection. Concurrently, the neutral analyte molecules of a sample vapor enter the flow tube, via a heated sampling tube, where they meet the precursor ions and may undergo chemical ionization, depending on their chemical properties, such as their proton affinity or ionization energy.
The newly formed "product ions" flow into the mass spectrometer chamber, which contains a second quadrupole mass filter, and an electron multiplier detector, which are used to separate the ions by their mass-to-charge ratios (m/z) and measure the count rates of the ions in the desired m/z range.
Analysis
The concentrations of individual compounds can be derived largely using the count rates of the precursor and product ions, and the reaction rate coefficients, k. Exothermic proton transfer reactions with H3O+ are assumed to proceed at the collisional rate (see Collision theory), the coefficient for which, kc, is calculable using the method described by Su and Chesnavich, providing the polarizability and dipole moment are known for the reactant molecule. NO+ and O2+ reactions proceed at kc less frequently, and thus the reaction rates of the reactant molecule with these precursor ions must often be derived experimentally by comparing the decline in the count rates of each of the NO+ and O2+ precursor ions to that of H3O+ as the sample flow of reactant molecules is increased. The product ions and rate coefficients have been derived in this way for well over 200 volatile compounds, which can be found in the scientific literature.
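To make the quantification step concrete, the following is a minimal Python sketch of the pseudo-first-order relation implied above, in which the analyte number density follows from the product-to-precursor count-rate ratio, the rate coefficient k and the reaction time. The function name and all numerical inputs are illustrative assumptions, not values from any instrument, and real SIFT-MS analysis applies further corrections (for example for differential ion diffusion and mass discrimination).

```python
# Minimal sketch of the SIFT-MS quantification relation described above,
# assuming pseudo-first-order kinetics (only a small fraction of the
# precursor ions react). All input numbers are illustrative, not real data.

def trace_gas_concentration(i_precursor, i_product, k, t_reaction):
    """Estimate analyte number density (molecules cm^-3).

    i_precursor : precursor-ion count rate (counts/s), e.g. H3O+
    i_product   : product-ion count rate (counts/s), e.g. the protonated analyte
    k           : reaction rate coefficient (cm^3 s^-1)
    t_reaction  : reaction time in the flow tube (s)
    """
    return i_product / (i_precursor * k * t_reaction)

# Hypothetical example: a collisional rate coefficient of ~3e-9 cm^3 s^-1
# and a ~5 ms reaction time.
n_m = trace_gas_concentration(i_precursor=1.0e6, i_product=2.0e3,
                              k=3.0e-9, t_reaction=5.0e-3)
print(f"analyte number density ~ {n_m:.2e} molecules cm^-3")
```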
The instrument can be programmed either to scan across a range of masses to produce a mass spectrum (Full Scan, FS, mode), or to rapidly switch between only the m/z values of interest (Multiple Ion Monitoring, MIM, mode). Due to the different chemical properties of the aforementioned precursor ions (H3O+, NO+, and O2+), different FS mode spectra can be produced for a vapor sample, and these can give different information relating to the composition of the sample. Using this information, it is often possible to identify the trace compound(s) that are present. The MIM mode, on the other hand will usually employ a much longer dwell time on each ion, and as a result, accurate quantification is possible to the parts per billion (ppb) level.
SIFT-MS utilises an extremely soft ionisation process which greatly simplifies the resulting spectra and thereby facilitates the analysis of complex mixtures of gases, such as human breath. Another very soft ionization technique is secondary electrospray ionization (SESI-MS). For example, even proton-transfer-reaction mass spectrometry (PTR-MS), another soft ionisation technology that uses the H3O+ reagent ion, has been shown to give considerably more product ion fragmentation than SIFT-MS.
Another key feature of SIFT-MS is the upstream mass quadrupole, which allows the use of multiple precursor ions. The ability to use three precursor ions, H3O+, NO+ and O2+, to obtain three different spectra is extremely valuable because it allows the operator to analyse a much wider variety of compounds. An example of this is methane, which cannot be analysed using H3O+ as a precursor ion (because it has a proton affinity of 543.5kJ/mol, somewhat less than that of H2O), but can be analysed using O2+. Furthermore, the parallel use of three precursor ions may allow the operator to distinguish between two or more compounds that react to produce ions of the same mass-to-charge ratio in certain spectra. For example, dimethyl sulfide (C2H6S, 62 amu) accepts a proton when it reacts with H3O+ to generate C2H7S+ product ions which appear at m/z 63 in the resulting spectrum. This may conflict with other product ions, such as the association product from the reaction with carbon dioxide, H3O+CO2, and the single hydrate of the protonated acetaldehyde ion, C2H5O+(H2O), which also appear at m/z 63, and so it may be unidentifiable in certain samples. However dimethyl sulfide reacts with NO+ by charge transfer, to produce the ion C2H6S+, which appears at m/z 62 in resulting spectra, whereas carbon dioxide does not react with NO+, and acetaldehyde donates a hydride ion, giving a single product ion at m/z 43, C2H3O+, and so dimethyl sulfide can be easily distinguished.
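The disambiguation logic described above can be illustrated with a short, hypothetical Python sketch. The tiny lookup table below encodes only the m/z values mentioned in this paragraph (it is not a real SIFT-MS compound library), and the function simply keeps the compounds consistent with every observed precursor-ion/product-ion pair.

```python
# Toy illustration of how parallel precursor ions resolve overlapping product
# masses, using only the m/z values mentioned in this paragraph. This is a
# hypothetical three-compound table, not a real SIFT-MS library.

LIBRARY = {
    "dimethyl sulfide": {"H3O+": 63, "NO+": 62},  # protonated DMS; charge transfer with NO+
    "acetaldehyde":     {"H3O+": 63, "NO+": 43},  # hydrate of protonated acetaldehyde; hydride transfer with NO+
    "carbon dioxide":   {"H3O+": 63},             # H3O+.CO2 association product; no NO+ reaction
}

def candidates(observed):
    """Return the compounds consistent with every (precursor ion -> m/z) observation."""
    return [compound for compound, peaks in LIBRARY.items()
            if all(peaks.get(precursor) == mz for precursor, mz in observed.items())]

print(candidates({"H3O+": 63}))             # ambiguous: all three compounds match
print(candidates({"H3O+": 63, "NO+": 62}))  # only dimethyl sulfide remains
```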
Over recent years, advances in SIFT-MS technology have vastly increased the sensitivity of these devices such that the limits of detection now extend down to the single-digit-ppt level.
References
Literature
"The selected ion flow tube (SIFT); A technique for studying ion-neutral reactions" Adams N.G., Smith D.; International Journal of Mass Spectrometry and Ion Physics 21 (1976) pp. 349–359.
"Parametrization of the ion-polar molecule collision rate constant by trajectory calculations" Su T., Chesnavich W.J.; Journal of Chemical Physics 76 (1982) pp. 5183–5186.
"Selected ion flow tube mass spectrometry (SIFT-MS) for on-line trace gas analysis" Smith D., Španěl P.; Mass Spectrometry Reviews 24 (2005) pp. 661–700.
"Quantification of methane in humid air and exhaled breath using selected ion flow tube mass spectrometry" Dryahina K., Smith D., Španěl P.; Rapid Communications in Mass Spectrometry 24 (2010) pp. 1296–1304.
Mass spectrometry | Selected-ion flow-tube mass spectrometry | [
"Physics",
"Chemistry"
] | 1,794 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
1,484,989 | https://en.wikipedia.org/wiki/Primer%20walking | Primer walking is a technique used to clone a gene (e.g., disease gene) from its known closest markers (e.g., known gene). As a result, it is employed in cloning and sequencing efforts in plants, fungi, and mammals with minor alterations. This technique, also known as "directed sequencing," employs a series of Sanger sequencing reactions to either confirm the reference sequence of a known plasmid or PCR product based on the reference sequence (sequence confirmation service) or to discover the unknown sequence of a full plasmid or PCR product by designing primers to sequence overlapping sections (sequence discovery service).
Primer walking: a DNA sequencing method
Primer walking is a method to determine the sequence of DNA in the 1.3–7.0 kb range, whereas chromosome walking is used to produce clones of already known sequences of a gene. Fragments that are too long cannot be sequenced in a single read using the chain termination method, so the method works by dividing the long sequence into several consecutive short ones. The DNA of interest may be a plasmid insert, a PCR product or a fragment representing a gap when sequencing a genome. The term "primer walking" is used where the main aim is to sequence the genome. The term "chromosome walking" is used instead when the sequence is known but there is no clone of a gene. For example, the gene for a disease may be located near a specific marker such as an RFLP on the sequence. Chromosome walking is a technique used to clone a gene (e.g., a disease gene) from its known closest markers (e.g., a known gene) and hence is used, with minor modifications, in cloning and sequencing projects in plants, fungi, and animals. To put it another way, it is used to find, isolate, and clone a specific sequence lying near the gene to be mapped. Libraries of large fragments, mainly bacterial artificial chromosome libraries, are mostly used in genomic projects. To identify the desired colony and to select a particular clone, the library is first screened with a suitable probe. After screening, the clone that overlaps the probe is identified and the overlapping fragments are mapped. These fragments are then used as a new probe (short DNA fragments obtained from the 3′ or 5′ ends of clones) to identify other clones. A library consists of approximately 96 clones, each containing a different insert. Probe one identifies λ1 and λ2 because it overlaps them; probe two, derived from the λ2 clone, is used to identify λ3, and so on. The orientation of the clones is determined by restriction mapping. In this way, new chromosomal regions in the vicinity of a gene can be identified. Chromosome walking is time-consuming, and chromosome landing is often the method of choice for gene identification; that method requires the discovery of a marker that is tightly linked to the mutant locus.
The fragment is first sequenced as if it were a shorter fragment. Sequencing is performed from each end using either universal primers or specifically designed ones. This should identify the first 1000 or so bases. In order to completely sequence the region of interest, design and synthesis of new primers (complementary to the final 20 bases of the known sequence) is necessary to obtain contiguous sequence information.
Primer walking versus shotgun sequencing
Primer walking is an example of directed sequencing because the primer is designed from a known region of DNA to guide the sequencing in a specific direction. In contrast to directed sequencing, shotgun sequencing of DNA is a more rapid sequencing strategy.
Both approaches date from the earlier era of genome sequencing. The underlying method is the Sanger chain termination method, which gives read lengths between 100 and 1,000 base pairs (depending on the instrument used). Longer DNA molecules therefore have to be broken down, cloned and subsequently sequenced. Two approaches are possible.
The first is called chromosome (or primer) walking and starts with sequencing the first piece. The next (contiguous) piece of the sequence is then sequenced using a primer complementary to the end of the first sequence read, and so on. This technique requires little assembly, but it needs many primers and is relatively slow.
To overcome this problem, the shotgun sequencing method was developed. Here the DNA is broken into different pieces (not all broken at the same place), cloned, and sequenced with primers specific for the vector used for cloning. This leads to overlapping sequences which then have to be assembled into one sequence on a computer. This method allows the sequencing to be parallelized (many sequencing reactions can be prepared and run at the same time), which makes the process much faster and also avoids the need for sequence-specific primers. The challenge is to organize the sequences into their correct order, as the overlaps are less clear. To resolve this problem, a first draft is made and then critical regions are resequenced using other techniques such as primer walking.
Process
The overall process is as follows:
A primer that matches the beginning of the DNA to be sequenced is used to synthesize a short DNA strand adjacent to the unknown sequence, starting from the primer (see PCR).
The new short DNA strand is sequenced using the chain termination method.
The end of the sequenced strand is used as a primer for the next part of the long DNA sequence, hence the term "walking".
The method can be used to sequence entire chromosomes (hence "chromosome walking"). Primer walking was also the basis for the development of shotgun sequencing, which uses random primers instead of specifically chosen ones.
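As an illustration of the walking step only, the following Python sketch takes the last 20 bases of each newly determined stretch as the primer for the next read. The read length, primer length and template are arbitrary assumptions; real primer design also has to account for melting temperature, GC content and sequence uniqueness.

```python
# Toy illustration of the "walking" idea: the last 20 bases of each newly
# determined stretch become the primer for the next sequencing reaction.
# Read length, primer length and the template are arbitrary assumptions.

def primer_walk(template, read_length=400, primer_length=20):
    """Simulate consecutive reads along a template, collecting the primers used."""
    determined = ""
    primers = []
    position = 0
    while position < len(template):
        read = template[position:position + read_length]  # one Sanger read
        determined += read
        position += len(read)
        primers.append(determined[-primer_length:])       # primer for the next step
    return determined, primers

template = "ATGCGT" * 300  # a made-up 1.8 kb sequence
sequence, primers = primer_walk(template)
assert sequence == template
print(f"{len(primers)} reads; first two follow-up primers: {primers[:2]}")
```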
See also
Chromosome jumping
Chromosome landing
Shotgun sequencing – an alternative method, using random, rather than consecutive, sub-strands.
References
DNA
Molecular biology | Primer walking | [
"Chemistry",
"Biology"
] | 1,191 | [
"Biochemistry",
"Molecular biology"
] |
1,485,054 | https://en.wikipedia.org/wiki/Sunless%20tanning | Sunless tanning, also known as UV-free tanning, self tanning, spray tanning (when applied topically), or fake tanning, refers to the effect of a suntan without exposure to the Sun. Sunless tanning involves the use of oral agents (carotenoids), or creams, lotions or sprays applied to the skin. Skin-applied products may be skin-reactive agents or temporary bronzers (colorants).
The popularity of sunless tanning has risen since the 1960s after health authorities confirmed links between UV exposure (from sunlight or indoor tanning) and the incidence of skin cancer.
The chemical compound dihydroxyacetone (DHA) is used in sunless tanning products in concentrations of 3–5%. The DHA concentration is adjusted to provide darker or lighter shades of tan. The reaction between DHA and the keratin protein present in the skin is responsible for producing the pigmentation.
Oral agents (carotenoids)
A safe and effective method of sunless tanning is consumption of certain carotenoids—antioxidants found in some fruits and vegetables such as carrots and tomatoes—which can result in changes to skin color when ingested chronically and/or in high amounts. Carotenoids are long-lasting. In addition, carotenoids have been linked to a more attractive skin tone (defined as a more golden skin color) than suntan. Carotenes also fulfil the function of melanin in absorbing UV radiation and protecting the skin. For example, they are concentrated in the macula of the eye to protect the retina from damage. They are used in plants both to protect chlorophyll from light damage and harvest light directly.
Carotenaemia (xanthaemia) is the presence in blood of the yellow pigment carotene from excessive intake of carrots or other vegetables containing the pigment resulting in increased serum carotenoids. It can lead to subsequent yellow-orange discoloration (xanthoderma or carotenoderma) and their subsequent deposition in the outermost layer of skin. Carotenemia, or carotenoderma, is in itself harmless, and does not require treatment. In primary carotenoderma, when the use of high quantities of carotene is discontinued the skin color will return to normal. It may take up to several months, however, for this to happen.
Lycopene
Lycopene is a key intermediate in the biosynthesis of beta-carotene and xanthophylls.
Lycopene may be the most powerful carotenoid quencher of singlet oxygen.
Due to its strong color and non-toxicity, lycopene is a useful food coloring (registered as E160d) and is approved for usage in the US, Australia and New Zealand (registered as 160d) and the EU.
Beta-carotene
Sunless-tanning pills often contain β-carotene. The American Cancer Society states that "Although the US Food and Drug Administration (FDA) has approved some of these additives for coloring food, they are not approved for use in tanning agents." Also that "They may be harmful at the high levels that are used in tanning pills."
Chronic, high doses of synthetic β-carotene supplements have been associated with increased rate of lung cancer among those who smoke.
Canthaxanthin
Canthaxanthin is most commonly used as a color additive in certain foods. Although the FDA has approved the use of canthaxanthin in food, it does not approve its use as a tanning agent and has issued warnings concerning its use. When used as a color additive, only very small amounts of canthaxanthin are needed. As a tanning agent, however, much larger quantities are used. After canthaxanthin is consumed, it is deposited throughout the body, including in the layer of fat below the skin, which turns an orange-brown color. These types of tanning pills have been linked to various side effects, including hepatitis and canthaxanthin retinopathy, a condition in which yellow deposits form in the retina of the eye. Other side effects including damage to the digestive system and skin surface have also been noted.
Skin-reactive agents
DHA-based products
DHA (dihydroxyacetone, also known as glycerone) is not a dye, stain or paint, but causes a chemical reaction with the amino acids in the dead layer on the skin surface. One of the pathways is a free radical-mediated Maillard reaction. The other pathway is the conventional Maillard reaction, a process well known to food chemists that causes the browning that occurs during food manufacturing and storage. It does not involve the underlying skin pigmentation, nor does it require exposure to ultraviolet light to initiate the color change. However, for the 24 hours after self-tanner is applied, the skin is especially susceptible to ultraviolet light, according to a 2007 study led by Katinka Jung of the Gematria Test Lab in Berlin. Forty minutes after the researchers treated skin samples with high levels of DHA, they found that more than 180 percent additional free radicals formed during sun exposure compared with untreated skin. Another self-tanner ingredient, erythrulose, produced a similar response at high levels. For a day after self-tanner application, excessive sun exposure should be avoided and sunscreen should be worn outdoors, they say; an antioxidant cream could also minimize free radical production. Although some self-tanners contain sunscreen, its effect will not last long after application, and a fake tan itself will not protect the skin from UV exposure. The study by Jung et al. further confirms earlier results demonstrating that dihydroxyacetone in combination with dimethylisosorbide enhances the process of (sun-based) tanning. This earlier study also found that dihydroxyacetone has an effect on the amino acids and nucleic acids that is damaging to the skin.
The free radicals are due to the action of UV light on AGEs (advanced glycation end-products) formed by the reaction of DHA with the skin, and on the intermediates, such as Amadori products (a type of AGE), that lead to them. AGEs are behind the damage to the skin that occurs with high blood sugar in diabetes, where similar glycation occurs. AGEs absorb UV and provide a little protection against some of its damaging effects (up to SPF 3). However, they do not have melanin's extended electronic structure that dissipates the energy, so part of it goes towards starting free-radical chain reactions instead, in which other AGEs participate readily. Overall, tanner enhances free-radical injury. Although some self-tanners contain sunscreen, its effect will not last as long as the tan; the stated SPF is only applicable for a few hours after application. Despite the darkening of the skin, an individual is still susceptible to UV rays, so overall sun protection is still necessary. There may also be some inhibition of vitamin D production in DHA-treated skin.
The color effect is temporary and fades gradually over 3 to 10 days. Some of these products also use erythrulose which works identically to DHA, but develops more slowly. Both DHA and erythrulose have been known to cause contact dermatitis.
Professional spray tan applications are available from spas, salons and gymnasiums, both with hand-held sprayers and in the form of sunless or UV-free spray booths. Spray tan products are also available through online retail channels and are widely available to purchase for home use. The enclosed booth, which resembles an enclosed shower stall, sprays the tanning solution over the entire body. The U.S. Food and Drug Administration (FDA) states that when using DHA-containing products as an all-over spray or mist in a commercial spray "tanning" booth, it may be difficult to avoid exposure in a manner for which DHA is not approved, including the area of the eyes, lips, or mucous membranes, or even internally. DHA is not approved by the FDA for inhalation.
An opinion issued by the European Commission's Scientific Committee on Consumer Safety, concluding spray tanning with DHA did not pose risk, has been heavily criticized by specialists. This is because the cosmetics industry in Europe chose the evidence to review, according to the commission itself. Thus, nearly every report the commission's eventual opinion referenced came from studies that were never published or peer-reviewed and, in the majority of cases, were performed by companies or industry groups linked to the manufacturing of DHA. The industry left out nearly all of the peer-reviewed studies published in publicly available scientific journals that identified DHA as a potential mutagen. A study by scientists from the Department of Dermatology, Bispebjerg Hospital, published in Mutation Research has concluded DHA 'induces DNA damage, cell-cycle block and apoptosis' in cultured cells.
SIK-inhibitors
A novel class of compounds has been found to stimulate melanogenesis in a mechanism that is independent from α-melanocyte-stimulating hormone (α-MSH) activation of the melanocortin 1 receptor (MC1 receptor). This is accomplished via small molecule inhibition of salt-inducible kinases (SIK). Inhibition of SIK increases transcription of MITF which is known to increase melanin production. Work published in June 2017 has demonstrated compounds that have efficacy when applied topically to human skin. These compounds are still however in pre-clinical stages of development. Future directions may include the incorporation of SIK-inhibitor compounds with traditional UV-blocking sunscreens to minimize UV-related DNA damage in the short term while providing longer term protection through endogenous melanin production.
Tyrosine-based products
Tanning accelerators—lotions or pills that usually contain the amino acid tyrosine—claim that they stimulate and increase melanin formation, thereby accelerating the tanning process. These are used in conjunction with UV exposure. At this time, there is no scientific data available to support these claims.
Melanotan peptide hormones
The role of alpha-melanocyte-stimulating hormone (α-MSH) in promoting melanin diffusion has been known since the 1960s. In the 1980s, scientists at University of Arizona began attempting to develop α-MSH and analogs as potential sunless tanning agents, and synthesized and tested several analogs, including afamelanotide, then called melanotan-I.
In the European Union and United States, afamelanotide is indicated for the prevention of phototoxicity in adults with erythropoietic protoporphyria. Afamelanotide is also being investigated as a method of photoprotection in the treatment of polymorphous light eruption, actinic keratosis and squamous cell carcinoma (a form of skin cancer). Bremelanotide is used for the treatment of generalized hypoactive sexual desire disorder (HSDD) in premenopausal women.
To pursue the tanning agent, melanotan-I was licensed by Competitive Technologies, a technology transfer company operating on behalf of University of Arizona, to an Australian startup called Epitan, which changed its name to Clinuvel in 2006.
A number of products are sold online and in gyms and beauty salons as "melanotan" or "melanotan-1", which reference afamelanotide in their marketing. The products are not legal in any jurisdiction and are dangerous. Starting in 2007, health agencies in various countries began issuing warnings against their use.
Other melanogenesis stimulants
Eicosanoids, retinoids, oestrogens, melanocyte-stimulating hormone, endothelins, psoralens, hydantoin, forskolin, cholera toxin, isobutylmethylxanthine, diacylglycerol analogues, and UV irradiation all trigger melanogenesis and, in turn, pigmentation.
Temporary bronzers (skin colorants)
Bronzers are a temporary sunless tanning or bronzing option. These come in powders, sprays, mousse, gels, lotions and moisturizers. Once applied, they create a tan that can easily be removed with soap and water. Like make-up, these products tint or stain a person's skin only until they are washed off.
They are often used for "one-day" only tans, or to complement a DHA-based sunless tan. Many formulations are available, and some have limited sweat or light water resistance. Walnut oil extract, jojoba extract, and caramel are ingredients frequently used in temporary bronzers. If bronzer is applied under clothing, or where fabric and skin edges meet, most will create some light but visible rub-off. Dark clothing prevents the rub-off from being noticeable. While these products are much safer than tanning beds, the color produced can sometimes look orangey and splotchy if applied incorrectly.
A recent trend is that of lotions or moisturizers containing a gradual tanning agent. A slight increase in color is usually observable after the first use, but color will continue to darken the more frequently the product is used.
Just as with the term "sunless tanner", the term "bronzer" is likewise not defined by law, or by regulations enforced by the FDA. What is defined and regulated is the color additive DHA, or dihydroxyacetone. (Note that the "color additive" dihydroxyacetone is itself colorless.)
Air brush tanning is a spray on tan performed by a professional. An air brush tan can last five to ten days and will fade when the skin is washed. It is used for special occasions or to get a quick dark tan. At-home airbrush tanning kits and aerosol mists are also available.
Risks
Tanners usually contain a sunscreen. However, when avobenzone is irradiated with UVA light, it generates a triplet excited state in the keto form which can either cause the avobenzone to degrade or transfer energy to biological targets and cause deleterious effects.
It has been shown to degrade significantly in light, resulting in less protection over time. The UV-A light in a day of sunlight in a temperate climate is sufficient to break down most of the compound. It is important to continue wearing sunscreen while self-tanning, as a self-tan is only a cosmetic, temporary tan and the skin remains sensitive to the sun.
If avobenzone-containing sunscreen is applied on top of tanner, the photosensitizer effect magnifies the free-radical damage promoted by DHA, as DHA may make the skin especially susceptible to free-radical damage from sunlight, according to a 2007 study led by Katinka Jung of the Gematria Test Lab in Berlin. Forty minutes after the researchers treated skin samples with 20% DHA they found that more than 180 percent additional free radicals formed during sun exposure compared with untreated skin.
A toxicologist and lung specialist at the University of Pennsylvania's Perelman School of Medicine (Dr. Rey Panettieri) has commented, "The reason I'm concerned is the deposition of the tanning agents into the lungs could really facilitate or aid systemic absorption -- that is, getting into the bloodstream. These compounds in some cells could actually promote the development of cancers or malignancies, and if that's the case then we need to be wary of them." A study by scientists from the Department of Dermatology, Bispebjerg Hospital, published in Mutation Research has concluded DHA 'induces DNA damage, cell-cycle block and apoptosis' in cultured cells.
Many self tanners use chemical fragrances which may cause skin allergies or may trigger asthma. Furthermore, some of them contain parabens. Parabens are preservatives that can affect the endocrine system.
See also
Indoor tanning
Indoor tanning lotion
Sun tanning
References
External links
FDA listing of approved colorants
American Academy of Dermatology on Self Tanners
Tanning (beauty treatment)
Toiletry | Sunless tanning | [
"Chemistry"
] | 3,418 | [
"Tanning (beauty treatment)",
"Ultraviolet radiation"
] |
1,485,104 | https://en.wikipedia.org/wiki/Chemical%20ionization | Chemical ionization (CI) is a soft ionization technique used in mass spectrometry. This was first introduced by Burnaby Munson and Frank H. Field in 1966. This technique is a branch of gaseous ion-molecule chemistry. Reagent gas molecules (often methane or ammonia) are ionized by electron ionization to form reagent ions, which subsequently react with analyte molecules in the gas phase to create analyte ions for analysis by mass spectrometry. Negative chemical ionization (NCI), charge-exchange chemical ionization, atmospheric-pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) are some of the common variants of the technique. CI mass spectrometry finds general application in the identification, structure elucidation and quantitation of organic compounds as well as some utility in biochemical analysis. Samples to be analyzed must be in vapour form, or else (in the case of liquids or solids), must be vapourized before introduction into the source.
Principles of operation
The chemical ionization process generally imparts less energy to an analyte molecule than does electron impact (EI) ionization, resulting in less fragmentation and usually a simpler spectrum. The amount of fragmentation, and therefore the amount of structural information produced by the process can be controlled to some degree by selection of the reagent ion. In addition to some characteristic fragment ion peaks, a CI spectrum usually has an identifiable protonated molecular ion peak [M+1]+, allowing determination of the molecular mass. CI is thus useful as an alternative technique in cases where EI produces excessive fragmentation of the analyte, causing the molecular-ion peak to be weak or completely absent.
Instrumentation
The CI source design for a mass spectrometer is very similar to that of the EI source. To facilitate the reactions between the ions and molecules, the chamber is kept relatively gas tight at a pressure of about 1 torr. Electrons are produced externally to the source volume (at a lower pressure of 10⁻⁴ torr or below) by heating a metal filament made of tungsten, rhenium, or iridium. The electrons are introduced through a small aperture in the source wall at energies of 200–1000 eV so that they penetrate to at least the centre of the box. In contrast to EI, the magnet and the electron trap are not needed for CI, since the electrons do not travel to the end of the chamber. Many modern sources are dual or combination EI/CI sources and can be switched from EI mode to CI mode and back in seconds.
Mechanism
A CI experiment involves the use of gas phase acid-base reactions in the chamber. Some common reagent gases include: methane, ammonia, water and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will mainly ionize the reagent gas because it is in large excess compared to the analyte. The primary reagent ions then undergo secondary ion/molecule reactions (as below) to produce more stable reagent ions which ultimately collide and react with the lower concentration analyte molecules to form product ions. The collisions between reagent ions and analyte molecules occur at close to thermal energies, so that the energy available to fragment the analyte ions is limited to the exothermicity of the ion-molecule reaction. For a proton transfer reaction, this is just the difference in proton affinity between the neutral reagent molecule and the neutral analyte molecule. This results in significantly less fragmentation than does 70 eV electron ionization (EI).
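The point about exothermicity can be illustrated with a small Python sketch that compares proton affinities. The proton affinity of methane (543.5 kJ/mol) is quoted elsewhere in this document; the values for water and ammonia, and the example analyte value, are approximate literature figures included here only for illustration.

```python
# Sketch of the exothermicity argument above: proton transfer from a protonated
# reagent gas BH+ to an analyte M releases roughly PA(M) - PA(B). The methane
# value is quoted elsewhere in this document; the water, ammonia and analyte
# values are approximate figures used only for illustration.

PROTON_AFFINITY_KJ_MOL = {
    "CH4": 543.5,   # methane
    "H2O": 691.0,   # water (reagent ion H3O+)
    "NH3": 853.6,   # ammonia (reagent ion NH4+)
}

def proton_transfer_exothermicity(reagent_neutral, analyte_pa):
    """Approximate energy (kJ/mol) released when BH+ protonates the analyte."""
    return analyte_pa - PROTON_AFFINITY_KJ_MOL[reagent_neutral]

# Hypothetical analyte with a proton affinity of ~800 kJ/mol.
for reagent in ("CH4", "H2O", "NH3"):
    d_e = proton_transfer_exothermicity(reagent, analyte_pa=800.0)
    outcome = "exothermic (proceeds)" if d_e > 0 else "endothermic (suppressed)"
    print(f"{reagent}-derived reagent ion: {d_e:+.1f} kJ/mol -> {outcome}")
```

The more exothermic the transfer, the more internal energy is available to fragment the analyte ion, which is why reagent gases of higher proton affinity, such as ammonia, generally give "softer" ionization than methane.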
The following reactions are possible with methane as the reagent gas.
Primary ion formation
CH4 + e− → CH4+• + 2e−
Secondary reagent ions
CH4 + CH4+• → CH5+ + CH3•
CH4 + CH3+ → C2H5+ + H2
Product ion formation
M + CH5+ → CH4 + [M + H]+ (protonation)
AH + CH3+ → CH4 + A+ (H− abstraction)
M + C2H5+ → [M + C2H5]+ (adduct formation)
A + CH4+ → CH4 + A+ (charge exchange)
If ammonia is the reagent gas,
NH3 + e− → NH3+• + 2e−
NH3 + NH3+• → NH4+ + NH2•
M + NH4+ → MH+ + NH3
For isobutane as the reagent gas,
C3H7+ + C4H10 → C4H9+ + C3H8
M + C4H9+ → MH+ + C4H8
Self chemical ionization is possible if the reagent ion is an ionized form of the analyte.
Advantages and limitations
One of the main advantages of CI over EI is the reduced fragmentation noted above, which, for more fragile molecules, results in a peak in the mass spectrum indicative of the molecular weight of the analyte. This proves to be a particular advantage for biological applications where EI often does not yield useful molecular ions in the spectrum. The spectra given by CI are simpler than EI spectra, and CI can be more sensitive than other ionization methods, at least in part due to the reduced fragmentation, which concentrates the ion signal in fewer and therefore more intense peaks. The extent of fragmentation can be somewhat controlled by proper selection of the reagent gas. Moreover, CI is often coupled to chromatographic separation techniques, thereby improving its usefulness in the identification of compounds. As with EI, the method is limited to compounds that can be vapourized in the ion source. The lower degree of fragmentation can be a disadvantage in that less structural information is provided. Additionally, the degree of fragmentation, and therefore the mass spectrum, can be sensitive to source conditions such as pressure, temperature, and the presence of impurities (such as water vapour) in the source. Because of this lack of reproducibility, libraries of CI spectra have not been generated for compound identification.
Applications
CI mass spectrometry is a useful tool in the structure elucidation of organic compounds. This is possible with CI because fragmentation of the [M+1]+ ion often proceeds by elimination of a stable neutral molecule, which can be used to infer the functional groups present. Besides that, CI facilitates detection of the molecular ion peak, owing to the less extensive fragmentation. Chemical ionization can also be used to identify and quantify an analyte present in a sample by coupling chromatographic separation techniques to CI, such as gas chromatography (GC), high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). This allows selective ionization of an analyte from a mixture of compounds, from which accurate and precise results can be obtained.
Variants
Negative chemical ionization
Chemical ionization for gas phase analysis is either positive or negative. Almost all neutral analytes can form positive ions through the reactions described above.
In order to see a response by negative chemical ionization (NCI, also NICI), the analyte must be capable of producing a negative ion (stabilizing a negative charge), for example by electron capture ionization. Because not all analytes can do this, using NCI provides a certain degree of selectivity that is not available with other, more universal ionization techniques (EI, PCI). NCI can be used for the analysis of compounds containing acidic groups or electronegative elements (especially halogens). Moreover, negative chemical ionization is more selective and demonstrates a higher sensitivity toward oxidizing agents and alkylating agents.
Because of the high electronegativity of halogen atoms, NCI is a common choice for their analysis. This includes many groups of compounds, such as PCBs, pesticides, and fire retardants. Most of these compounds are environmental contaminants, thus much of the NCI analysis that takes place is done under the auspices of environmental analysis. In cases where very low limits of detection are needed, environmental toxic substances such as halogenated species, oxidizing and alkylating agents are frequently analyzed using an electron capture detector coupled to a gas chromatograph.
Negative ions are formed by resonance capture of a near-thermal energy electron, dissociative capture of a low energy electron and via ion-molecular interactions such as proton transfer, charge transfer and hydride transfer. Compared to the other methods involving negative ion techniques, NCI is quite advantageous, as the reactivity of anions can be monitored in the absence of a solvent. Electron affinities and energies of low-lying valencies can be determined by this technique as well.
Charge-exchange chemical ionization
This is also similar to CI and the difference lies in the production of a radical cation with an odd number of electrons. The reagent gas molecules are bombarded with high energy electrons and the product reagent gas ions abstract electrons from the analyte to form radical cations. The common reagent gases used for this technique are toluene, benzene, NO, Xe, Ar and He.
Careful control over the selection of reagent gases and the consideration toward the difference between the resonance energy of the reagent gas radical cation and the ionization energy of the analyte can be used to control fragmentation. The reactions for charge-exchange chemical ionization are as follows.
He + e− → He+• + 2e−
He+• + M → He + M+•
Atmospheric-pressure chemical ionization
Chemical ionization in an atmospheric pressure electric discharge is called atmospheric pressure chemical ionization (APCI), which usually uses water as the reagent gas. An APCI source is composed of a liquid chromatography outlet, nebulizing the eluent, a heated vaporizer tube, a corona discharge needle and a pinhole entrance to 10−3 torr vacuum. The analyte is a gas or liquid spray and ionization is accomplished using an atmospheric pressure corona discharge. This ionization method is often coupled with high performance liquid chromatography where the mobile phase containing eluting analyte sprayed with high flow rates of nitrogen or helium and the aerosol spray is subjected to a corona discharge to create ions. It is applicable to relatively less polar and thermally less stable compounds. The difference between APCI and CI is that APCI functions under atmospheric pressure, where the frequency of collisions is higher. This enables the improvement in sensitivity and ionization efficiency.
See also
Electrospray ionization
Proton-transfer-reaction mass spectrometry
References
Bibliography
External links
Using Amines as Chemical Ionization Reagents and Building Custom Manifold
Ion source
Mass spectrometry
Scientific techniques | Chemical ionization | [
"Physics",
"Chemistry"
] | 2,329 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Ion source",
"Mass spectrometry",
"Matter"
] |
1,485,282 | https://en.wikipedia.org/wiki/James%20Edward%20Allen%20Gibbs | James Edward Allen Gibbs (1829–1902) was a farmer, inventor, and businessman from Rockbridge County in the Shenandoah Valley in Virginia. On June 2, 1857, he was awarded a patent for the first twisted chain-stitch single-thread sewing machine using a rotating hook. In partnership with James Willcox, Gibbs became a principal in the Willcox & Gibbs Sewing Machine Company.
The Willcox & Gibbs Sewing Machine Company, started in 1857 by James E. A. Gibbs and James Willcox, opened its London office in 1859 at 135 Regent Street. By around 1871 the European offices were at 150 Cheapside, London, and later 20 Fore Street, London. The company hired John Emory Powers to market its product. Powers pioneered many new marketing techniques, including full-page ads in the form of a story or play, free trials of a product, and installment purchasing plans. The marketing campaign created a demand for sewing machines in Great Britain that Willcox and Gibbs could not meet. The machine's circular design was so popular that it was produced well into the early 20th century, long after most machines were of the more conventional design. Willcox & Gibbs machines employed the Gibbs rotary twisted chain-stitch mechanism, which was less prone to coming undone.
Following his successful invention, he named his family's farm "Raphine." The name originated from an old Greek word "raphis" which means "to sew." The community of Raphine, Virginia, was named in his honor.
Gallery
References
External links
A typical Willcox and Gibbs machine from about 1930
1829 births
1902 deaths
19th-century American businesspeople
19th-century American engineers
19th-century American inventors
Businesspeople from Virginia
Engineers from Virginia
People from Rockbridge County, Virginia
Sewing machines
Farmers from Virginia
Inventors from Virginia | James Edward Allen Gibbs | [
"Physics",
"Technology"
] | 370 | [
"Physical systems",
"Machines",
"Sewing machines"
] |
1,485,502 | https://en.wikipedia.org/wiki/Ralph%20Henstock | Ralph Henstock (2 June 1923 – 17 January 2007) was an English mathematician and author. As an integration theorist, he is notable for the Henstock–Kurzweil integral. Henstock brought the theory to a highly developed stage without ever having encountered Jaroslav Kurzweil's 1957 paper on the subject.
Early life
Henstock was born in the coal-mining village of Newstead, Nottinghamshire, the only child of mineworker and former coalminer William Henstock and Mary Ellen Henstock (née Bancroft). On the Henstock side he was descended from 17th century Flemish immigrants called Hemstok.
Because of his early academic promise it was expected that Henstock would attend the University of Nottingham where his father and uncle had received technical education, but as it turned out he won scholarships which enabled him to study mathematics at St John's College, Cambridge from October 1941 until November 1943, when he was sent for war service to the Ministry of Supply's department of Statistical Method and Quality Control in London.
This work did not satisfy him, so he enrolled at Birkbeck College, London where he joined the weekly seminar of Professor Paul Dienes which was then a focus for mathematical activity in London. Henstock wanted to study divergent series but Dienes prevailed upon him to get involved in the theory of integration, thereby setting him on course for his life's work.
A devoted Methodist, he left a lasting impression of gentle sincerity and amiability. Henstock married Marjorie Jardine in 1949. Their son John was born 10 July 1952. Ralph Henstock died on 17 January 2007 after a short illness.
Work
Henstock was awarded the Cambridge B.A. in 1944 and began research for the PhD in Birkbeck College, London, under the supervision of Paul Dienes. His PhD thesis, entitled Interval Functions and their Integrals, was submitted in December 1948. His Ph.D. examiners were Burkill and H. Kestelman. In 1947 he returned briefly to Cambridge to complete the undergraduate mathematical studies which had been truncated by his Ministry of Supply work.
Most of Henstock's work was concerned with integration. From initial studies of the Burkill and Ward integrals he formulated an integration process whereby the domain of integration is suitably partitioned for Riemann sums to approximate the integral of a function. His methods led to an integral on the real line that was very similar in construction and simplicity to the Riemann integral but which included the Lebesgue integral and, in addition, allowed non-absolute convergence.
These ideas were developed from the late 1950s. Independently, Jaroslav Kurzweil developed a similar Riemann-type integral on the real line. The resulting integral is now known as the Henstock-Kurzweil integral. On the real line it is equivalent to the Denjoy-Perron integral, but has a simpler definition.
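For reference, the gauge formulation usually attributed to Henstock and Kurzweil can be stated as follows (a standard textbook phrasing rather than a quotation from Henstock's own work):

```latex
% Standard statement of the gauge (Henstock–Kurzweil) integral on [a,b].
A function $f\colon[a,b]\to\mathbb{R}$ is Henstock--Kurzweil integrable with
integral $A$ if for every $\varepsilon>0$ there is a gauge, i.e.\ a function
$\delta\colon[a,b]\to(0,\infty)$, such that for every tagged partition
$a=x_0<x_1<\dots<x_n=b$ with tags $t_i\in[x_{i-1},x_i]$ that is
$\delta$-fine, meaning $[x_{i-1},x_i]\subset\bigl(t_i-\delta(t_i),\,t_i+\delta(t_i)\bigr)$,
\[
  \left|\sum_{i=1}^{n} f(t_i)\,(x_i-x_{i-1}) - A\right| < \varepsilon .
\]
Taking $\delta$ constant recovers the ordinary Riemann integral.
```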
In the following decades, Henstock developed extensively the distinctive features of his theory, inventing the concepts of division spaces or integration bases to demonstrate in general settings the properties and characteristics of mathematical integration. His theory provides a unified approach to non-absolute integral, as different kinds of Henstock integral, choosing an appropriate integration basis (division space, in Henstock's own terminology). It has been used in differential and integral equations, harmonic analysis, probability theory and Feynman integration. Numerous monographs and texts have appeared since 1980 and there have been several conferences devoted to the theory. It has been taught in standard courses in mathematical analysis.
Henstock was author of 46 journal papers in the period 1946 to 2006. He published four books on analysis (Theory of Integration, 1963; Linear Analysis, 1967; Lectures on the Theory of Integration, 1988; and The General Theory of Integration, 1991). He wrote 171 reviews for MathSciNet. In 1994 he was awarded the Andy Prize of the XVIII Summer Symposium in Real Analysis. His academic career began as Assistant Lecturer, Bedford College for Women, 1947–48; then Assistant Lecturer at Birkbeck, 1948–51; Lecturer, Queen's University Belfast, 1951–56; Lecturer, Bristol University, 1956–60; Senior Lecturer and Reader, Queen's University Belfast, 1960–64; Reader, Lancaster University, 1964–70; Chair of Pure Mathematics, New University of Ulster, 1970–88; and Leverhulme Fellow 1988–91.
List of publications of Ralph Henstock
Much of Henstock's earliest work was published by the Journal of the London Mathematical Society. These were "On interval functions and their integrals" I (21, 1946) and II (23, 1948); "The efficiency of matrices for Taylor series" (22, 1947); "The efficiency of matrices for bounded sequences" (25, 1950); "The efficiency of convergence factors for functions of a continuous real variable" (30, 1955); "A new description of the Ward integral" (35 1960); and "The integrability of functions of interval functions" (39 1964).
His works, published in Proceedings of the London Mathematical Society, were "Density integration" (53, 1951); "On the measure of sum sets (I) The theorems of Brunn, Minkowski, and Lusternik, (with A.M. McBeath)" ([3] 3, 1953); "Linear functions with domain a real countably infinite dimensional space" ([3] 5, 1955); "Linear and bilinear functions with domain contained in a real countably infinite dimensional space" ([3] 6, 1956); "The use of convergence factors in Ward integration" ([3] 10, 1960); "The equivalence of generalized forms of the Ward, variational, Denjoy-Stieltjes, and Perron-Stieltjes integrals" ([3] 10, 1960); "N-variation and N-variational integrals of set functions" ([3] 11, 1961); "Definitions of Riemann type of the variational integrals" ([3] 11, 1961); "Difference-sets and the Banach–Steinhaus theorem" ([3] 13, 1963); "Generalized integrals of vector-valued functions ([3] 19 1969)
Additional publications:
Sets of uniqueness for trigonometric series and integrals, Proceedings of the Cambridge Philosophical Society 46 (1950) 538–548.
On Ward's Perron-Stieltjes integral, Canadian Journal of Mathematics 9 (1957) 96–109.
The summation by convergence factors of Laplace-Stieltjes integrals outside their half plane of convergence, Mathematische Zeitschrift 67 (1957) 10–31.
Theory of Integration, Butterworths, London, 1962.
Tauberian theorems for integrals, Canadian Journal of Mathematics 15 (1963) 433–439.
Majorants in variational integration, Canadian Journal of Mathematics 18 (1966) 49–74.
A Riemann-type integral of Lebesgue power, Canadian Journal of Mathematics 20 (1968) 79–87.
Linear Analysis, Butterworths, London, 1967.
Integration by parts, Aequationes Mathematicae 9 (1973) 1–18.
The N-variational integral and the Schwartz distributions III, Journal of the London Mathematical Society (2) 6 (1973) 693–700.
Integration in product spaces, including Wiener and Feynman integration, Proceedings of the London Mathematical Society (3) 27 (1973) 317–344.
Additivity and the Lebesgue limit theorems, The Greek Mathematical Society C. Carathéodory Symposium, 1973, 223–241 (Proceedings published 1974).
Integration, variation and differentiation in division spaces, Proceedings of the Royal Irish Academy, Series A (10) 78 (1978) 69–85.
The variation on the real line, Proceedings of the Royal Irish Academy, Series A (1) 79 (1979) 1–10.
Generalized Riemann integration and an intrinsic topology, Canadian Journal of Mathematics 32 (1980) 395–413.
Division spaces, vector-valued functions and backwards martingales, Proceedings of the Royal Irish Academy, Series A (2) 80 (1980) 217–232.
Density integration and Walsh functions, Bulletin of the Malaysian Mathematical Society (2) 5 (1982) 1–19.
A problem in two-dimensional integration, Journal of the Australian Mathematical Society, (Series A) 35 (1983) 386–404.
The Lebesgue syndrome, Real Analysis Exchange 9 (1983–84) 96–110.
The reversal of power and integration, Bulletin of the Institute of Mathematics and its Applications 22 (1986) 60–61.
Lectures on the Theory of Integration, World Scientific, Singapore, 1988.
A short history of integration theory, South East Asian Bulletin of Mathematics 12 (1988) 75–95.
Introduction to the new integrals, New integrals (Coleraine, 1988), 7–9, Lecture Notes in Mathematics, 1419, Springer-Verlag, Berlin, 1990.
Integration in infinite-dimensional spaces, New integrals (Coleraine, 1988), 54–65, Lecture Notes in Mathematics, 1419, Springer-Verlag, Berlin, 1990.
Stochastic and other functional integrals, Real Analysis Exchange 16 (1990/91) 460–470.
The General Theory of Integration, Oxford Mathematical Monographs, Clarendon Press, Oxford, 1991.
The integral over product spaces and Wiener's formula, Real Analysis Exchange 17 (1991/92) 737–744.
Infinite decimals, Mathematica Japonica 38 (1993) 203–209.
Measure spaces and division spaces, Real Analysis Exchange 19 (1993/94) 121–128.
The construction of path integrals, Mathematica Japonica 39 (1994) 15–18.
Gauge or Kurzweil-Henstock integration. Proceedings of the Prague Mathematical Conference 1996, 117–122, Icaris, Prague, 1997.
De La Vallée Poussin's contributions to integration theory, Charles-Jean de La Vallée Poussin Oeuvres Scientifiques, Volume II, Académie Royale de Belgique, Circolo Matematico di Palermo, 2001, 3–16.
Partitioning infinite-dimensional spaces for generalized Riemann integration, (with P. Muldowney and V.A. Skvortsov) Bulletin of the London Mathematical Society, 38 (2006) 795–803.
Review of Henstock's work
The journal Scientiae Mathematicae Japonicae published a special commemorative issue in Henstock’s honor, January 2008. The above article is copied, with permission, from Real Analysis Exchange and from Scientiae Mathematicae Japonicae. The latter contains the following review of Henstock's work:
1. Ralph Henstock, an obituary, by P. Bullen.
2. Ralph Henstock: research summary, by E. Talvila.
3. The integral à la Henstock, by Peng Yee Lee.
4. The natural integral on the real line, by B. Thomson.
5. Ralph Henstock's influence on integration theory, by W.F. Pfeffer.
6. Henstock on random variation, by P. Muldowney.
7. Henstock integral in harmonic analysis, by V.A. Skvortsov.
8. Convergences on the Henstock-Kurzweil integral, by S. Nakanishi.
See also
Partition of an interval
Integrable function
External links
The Calculus and Gauge Integrals, by Ralph Henstock
Lectures on Integration, by Ralph Henstock
Autobiographical notes, by Ralph Henstock
References
1923 births
2007 deaths
20th-century English mathematicians
21st-century English mathematicians
Mathematical analysts
Academics of Queen's University Belfast
Academics of the University of Bristol
Academics of Lancaster University
Academics of Ulster University
Alumni of Birkbeck, University of London
Alumni of St John's College, Cambridge
English Methodists | Ralph Henstock | [
"Mathematics"
] | 2,481 | [
"Mathematical analysis",
"Mathematical analysts"
] |
1,485,520 | https://en.wikipedia.org/wiki/Arnaud%20Denjoy | Arnaud Denjoy (; 5 January 1884 – 21 January 1974) was a French mathematician.
Biography
Denjoy was born in Auch, Gers. His contributions include work in harmonic analysis and differential equations. His integral was the first to be able to integrate all derivatives. Among his students is Gustave Choquet. He is also known for the more general broad Denjoy integral, or Khinchin integral.
Denjoy was an Invited Speaker of the ICM with talk Sur une classe d'ensembles parfaits en relation avec les fonctions admettant une dérivée seconde généralisée in 1920 at Strasbourg and with talk Les equations differentielles periodiques in 1950 at Cambridge, Massachusetts. In 1931 he was the president of the Société Mathématique de France. In 1942 he was elected a member of the Académie des sciences and was its president in 1962.
Denjoy married in 1923 and was the father of three sons. He died in Paris in 1974. He was an atheist with a strong interest in philosophy, psychology, and social issues.
The asteroid (19349) Denjoy is named in his honor.
Selected publications
Une extension de l'intégrale de Lebesgue, Académie des Sciences, pp. 859–862 (1912)
Les continus cycliques et la représentation conforme, Bulletin de la Société Mathématique de France, pp. 97-124 (1942)
Sur les fonctions dérivées sommables., Bulletin de la Société Mathématique de France, pp. 161-248 (1915)
Introduction à la théorie de fonctions de variables réelles, vol. 1, Hermann 1937
Aspects actuels de la pensée mathématique, Bulletin de la Société Mathématique de France, vol. 67, 1939, pp. 1–12 (supplément), numdam
Leçons sur le calcul des coefficients d'une série trigonométrique, 4 vols., 1941–1949
L'énumération transfinie, 4 vols., Gauthier-Villars, 1946–1954
Mémoire sur la dérivation et son calcul inverse, 1954, published by Éditions Jacques Gabay
Articles et Mémoires, 2 vols., 1955
Jubilé scientifique, 1956
Un demi-siècle de Notes académiques (1906–1956), 2 vols., Gauthier-Villars, 1957 (collection of Denjoy's essays)
Hommes, Formes et le Nombre, 1964
See also
Denjoy theorem (disambiguation)
Denjoy integral (disambiguation)
Denjoy–Luzin theorem
Denjoy–Luzin–Saks theorem
Denjoy–Riesz theorem
Denjoy–Young–Saks theorem
Denjoy–Carleman theorem
Denjoy–Carleman–Ahlfors theorem
Denjoy's theorem on rotation number
Denjoy–Koksma inequality
Denjoy–Wolff theorem
References
External links
1884 births
1974 deaths
People from Auch
French atheists
20th-century French mathematicians
École Normale Supérieure alumni
Members of the French Academy of Sciences
Foreign members of the USSR Academy of Sciences
Recipients of the Lomonosov Gold Medal | Arnaud Denjoy | [
"Technology"
] | 658 | [
"Science and technology awards",
"Recipients of the Lomonosov Gold Medal"
] |
1,485,612 | https://en.wikipedia.org/wiki/AutoHotkey | AutoHotkey is a free and open-source custom scripting language for Microsoft Windows, primarily designed to provide easy keyboard shortcuts or hotkeys, fast macro-creation and software automation to allow users of most computer skill levels to automate repetitive tasks in any Windows application. It can easily extend or modify user interfaces (for example, overriding the default Windows control key commands with their Emacs equivalents). The installation package includes an extensive help file; web-based documentation is also available.
Features
AutoHotkey scripts can be used to launch programs, open documents, and emulate keystrokes or mouse clicks and movements. They can also assign, retrieve, and manipulate variables, run loops, and manipulate windows, files, and folders. They can be triggered by a hotkey, such as a script that opens an internet browser when the user presses a chosen key combination on the keyboard. Keyboard keys can also be remapped and disabled—for example, so that pressing a designated key combination produces an em dash in the active window. AutoHotkey also allows "hotstrings" that automatically replace certain text as it is typed, such as assigning the string "btw" to produce the text "by the way", or the text "%o" to produce "percentage of". Scripts can also be set to run automatically at computer startup, with no keyboard action required—for example, for performing file management at a set interval.
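As a minimal sketch of such a remapping and hotkey (the particular keys used here, CapsLock and Ctrl+Alt+D, are illustrative assumptions rather than examples taken from the text):
CapsLock::Esc ; remap CapsLock so that it acts as the Escape key
^!d::Send "—" ; Ctrl+Alt+D sends an em dash (AutoHotkey v2 syntax)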
More complex tasks can be achieved with custom data entry forms (GUI windows), working with the system registry, or using the Windows API by calling functions from DLLs. The scripts can be compiled into standalone executable files that can be run on other computers without AutoHotkey installed. The C++ source code can be compiled with Visual Studio Express.
AutoHotkey allows memory access through pointers, as in C.
Some uses for AutoHotkey:
Remapping the keyboard, such as from QWERTY to Dvorak and other alternative keyboard layouts
Using shortcuts to type frequently-used filenames and other phrases
Typing punctuation not available on the keyboard, such as curved quotes (“…”)
Typing other non-keyboard characters, such as the prime sign (′) used for dimensional measurement (e.g. 10′×12′)
Controlling the mouse cursor with a keyboard or joystick
Opening programs, documents, and websites with simple keystrokes
Adding a signature to e-mail, message boards, etc.
Monitoring a system and automatically closing unwanted programs
Scheduling an automatic reminder, system scan or backup
Automating repetitive tasks
Filling out forms automatically
Prototyping applications before implementing them in other, more time-consuming programming languages
History
The first public beta of AutoHotkey was released on November 10, 2003, after author Chris Mallett's proposal to integrate hotkey support into AutoIt v2 failed to generate response from the AutoIt community. Mallett built a new program from scratch basing the syntax on AutoIt v2 and using AutoIt v3 for some commands and the compiler. Later, AutoIt v3 switched from GPL to closed source because of "other projects repeatedly taking AutoIt code" and "setting themselves up as competitors".
In 2010, AutoHotkey v1.1 (originally called AutoHotkey_L) became the platform for ongoing development of AutoHotkey. In late 2012, it became the official branch. Another port of the program is AutoHotkey.dll. A well known fork of the program is AutoHotkey_H, which has its own subforum on the main site.
Version 2
In July 2021, the first AutoHotkey v2 beta was released. The first release candidate was released on November 20, 2022, with the full release of v2.0.0 planned later in the year.
On December 20, 2022, version 2.0.0 was officially released. On January 22, 2023, AutoHotkey v2 became the official primary version. AutoHotkey v1.1 became legacy and no new features were implemented, but this version was still supported by the site. On March 16, 2024, the final update of AutoHotkey v1.1 was released. AutoHotkey v1.1 has now reached its end of life.
Examples
The following script searches for a particular word or phrase using Google. After the user copies text from any application to the clipboard, pressing the configurable hotkey opens the user's default web browser and performs the search.
#g::Run "https://www.google.com/search?q=" . A_Clipboard
The following script defines a hotstring that enables the user to type afaik in any program and, when followed by an ending character, automatically replace it with "as far as I know":
::afaik::as far as I know
User-contributed features
AutoHotKey extensions, interops and inline script libraries are available for use with and from other programming languages, including:
VB/C# (.NET)
Lua
Lisp
ECL
Embedded machine code
VBScript/JScript (Windows Scripting Host)
Other major plugins enable support for:
Aspect-oriented programming
Function hooks
COM wrappers
Console interaction
Dynamic code generation
HIDs
Internet Explorer automation
GUI creation
Synthetic programming
Web services
Windows event hooks
Malware
When AutoHotkey is used to make standalone software for distribution, that software must include the part of AutoHotkey itself that understands and executes AutoHotkey scripts, as it is an interpreted language. Inevitably, some malware has been written using AutoHotkey. When anti-malware products attempt to earmark items of malware that have been programmed using AutoHotkey, they sometimes falsely identify AutoHotkey as the culprit rather than the actual malware.
See also
AutoIt (for Windows)
AutoKey (for Linux)
Automator (for Macintosh)
Bookmarklet (for web browsers)
iMacros (for Firefox, Chrome, and Internet Explorer)
Keyboard Maestro (for Macintosh)
KiXtart (for Windows)
Macro Express (for Windows)
Winbatch (for Windows)
References
External links
AutoHotkey Foundation LLC
The Automator Community and Resources
Automation software
Free system software
Free software programmed in C++
Windows-only free software
Software using the GNU General Public License | AutoHotkey | [
"Engineering"
] | 1,307 | [
"Automation software",
"Automation"
] |
1,485,789 | https://en.wikipedia.org/wiki/Overscan | Overscan is a behaviour in certain television sets in which part of the input picture is cut off by the visible bounds of the screen. It exists because cathode-ray tube (CRT) television sets from the 1930s to the early 2000s were highly variable in how the video image was positioned within the borders of the screen. It then became common practice to have video signals with black edges around the picture, which the television was meant to discard in this way.
Origins
Early analog televisions varied in the displayed image because of manufacturing tolerance problems. There were also effects from the early design limitations of power supplies, whose DC voltage was not regulated as well as in later power supplies. This could cause the image size to change with normal variations in the AC line voltage, as well as a process called blooming, where the image size increased slightly when a brighter overall picture was displayed due to the increased electron beam current causing the CRT anode voltage to drop. Because of this, TV producers could not be certain where the visible edges of the image would be. In order to compensate, they defined the following areas:
Title safe: An area visible by all reasonably maintained sets, where text was certain not to be cut off.
Action safe: A larger area that represented where a "perfect" set (with high precision to allow less overscanning) would cut the image off.
Underscan: The full image area to the electronic edge of the signal with additional black borders which weren't part of the original image.
Fullscan: The full image area to the electronic edge of the signal (with the black borders of the image if they exist).
Observable fullscan: An overscan image area which dismisses only the additional black borders of the image (if they exist).
A significant number of people would still see some of the overscan area, so while nothing important in a scene would be placed there, it also had to be kept free of microphones, stage hands, and other distractions. Studio monitors and camera viewfinders were set to show this area, so that producers and directors could make certain it was clear of unwanted elements. When used, this mode is called underscan.
Despite the wide adoption of LCD TVs that do not require overscan since the size of their images remains the same irrespective of voltage variations, many LCD TVs still come with overscan enabled by default, but it can be disabled by the user using the TV's on-screen menus.
Modern video displays
Today's displays, being driven by digital signals (such as DVI, HDMI and DisplayPort), and based on newer fixed-pixel digital flat panel technology (such as liquid crystal displays), can safely assume that all pixels are visible to the viewer. On digital displays driven from a digital signal, therefore, no adjustment is necessary because all pixels in the signal are unequivocally mapped to physical pixels on the display. As overscan reduces picture quality, it is undesirable for digital flat panels; therefore, 1:1 pixel mapping is preferred. When driven by analog video signals such as VGA, however, displays are subject to timing variations and cannot achieve this level of precision.
CRTs made for computer display are set to underscan with an adjustable border, usually colored black. Some 1980s home computers such as the Apple IIGS could even change the border color. The border will change size and shape if required to allow for the tolerance of low precision (although later models allow for precise calibration to minimise or eliminate the border). As such, computer CRTs use less physical screen area than TVs, to allow all information to be shown at all times.
Computer CRT monitors usually have a black border (unless they are fine-tuned by a user to minimize it)—these can be seen in the video card timings, which have more lines than are used by the desktop. When a computer CRT is advertised as 17-inch (16-inch viewable), it will have a diagonal inch of the tube covered by the plastic cabinet; this black border will occupy this missing inch (or more) when its geometry calibrations are set to default (LCDs with analog input need to deliberately identify and ignore this part of the signal, from all four sides).
Video game systems have been designed to keep important game action in the title safe area. Older systems did this with borders for example, the Super Nintendo Entertainment System windowboxed the image with a black border, visible on some NTSC television sets and all PAL television sets. Newer systems frame content much as live action does, with the overscan area filled with extraneous details.
Within the wide diversity of home computers that arose during the 1980s and early 1990s, many machines such as the ZX Spectrum or Commodore 64 had borders around their screen, which worked as a frame for the display area. Some other computers such as the Amiga allowed the video signal timing to be changed to produce overscan. In the cases of the C64, Amstrad CPC, and Atari ST it has proved possible to remove apparently fixed borders with special coding tricks. This effect was called overscan or fullscreen within the 16-bit Atari demoscene and allowed the development of a CPU-saving scrolling technique called sync-scrolling a bit later.
Datacasting
Analog TV overscan can also be used for datacasting. The simplest form of this is closed captioning and teletext, both sent in the vertical blanking interval (VBI). Electronic program guides, such as TV Guide On Screen, are also sent in this manner. Microsoft's HOS uses the horizontal overscan instead of the vertical to transmit low-speed program-associated data at 6.4 kbit/s, which is slow enough to be recorded on a VCR without data corruption. In the U.S., National Datacast used PBS network stations for overscan and other datacasting, but they migrated to digital TV due to the digital television transition in 2009.
Overscan amounts
There is no hard technical specification for overscan amounts for the low definition formats. Some say 5%, some say 10%, and the figure can be doubled for title safe, which needs more margin compared to action safe. The overscan amounts are specified for the high definition formats as specified above.
Different video and broadcast television systems require differing amounts of overscan. Most figures serve as recommendations or typical summaries, as the nature of overscan is to overcome a variable limitation in older technologies such as cathode-ray tubes.
However the European Broadcasting Union has safe area recommendations regarding Television Production for 16:9 Widescreen.
The official BBC suggestions say 3.5% / 5% per side (see pp. 19 and 21).
Microsoft's Xbox game developer guidelines recommend using 85 percent of the screen width and height, or a title safe area of 7.5% per side.
Terminology
Title safe or safe title is an area that is far enough in from the edges to neatly show text without distortion. If you place text beyond the safe area, it might not display on some older CRT TV sets (in worst case).
Action-safe or safe action is the area in which you can expect the customer to see action. However, the transmitted image may extend to the edges of the MPEG frame 720x576. This presents a requirement unique to television, where an image with reasonable quality is expected to exist where some customers won't see it. This is the same concept as used in widescreen cropping.
TV-safe is a generic term for the above two, and could mean either one.
Analog to digital resolution issues
720 vs. 702 or 704
The sampling (digitising) of standard definition video was defined in Rec. 601 in 1982. In this standard, the existing analogue video signals are sampled at 13.5 MHz. Thus the number of active video pixels per line is equal to the sample rate multiplied by the active line duration (the part of each analogue video line that contains active video, that is to say that it does not contain sync pulses, blanking, etc.).
For 625-line 50 Hz video (usually, though incorrectly, called "PAL"), the active line duration is 52 μs, giving 702 pixels per line.
For 525-line 60 Hz video (usually, and correctly, called "NTSC"), the active line duration is 52.856 μs, giving ≈713.5 pixels per line.
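Spelling out the arithmetic behind those two figures (samples per line = sampling rate × active line duration): 13.5 MHz × 52 µs = 702 samples, and 13.5 MHz × 52.856 µs ≈ 713.5 samples.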
In order to accommodate both formats within the same line length, and to avoid cutting off parts of the active picture if the timing of the analogue video was at or beyond the tolerances set in the relevant standards, a total digital line length of 720 pixels was chosen. Hence the picture will have thin black bars down each side.
704 is the nearest mod(16) value to the actual analogue line lengths, and avoids having black bars down each side. The use of 704 can be further justified as follows:
625-line analogue video contains 575 active video lines (this includes two half lines). When the half lines are rounded up to whole lines for ease of digital representation, this gives 576 lines, which is also the nearest mod(16) value to 575. To maintain the same picture aspect ratio, the number of active pixels could be increased to 703.2, which can be rounded up to 704.
525-line analogue video contains 485 active video lines (this includes two half lines, though typically only 483 picture lines are present due to Closed Captions data taking up the first "active picture" line on each field). The nearest mod(16) value is 480. To maintain the same picture aspect ratio, the number of active pixels could be decreased to 706.2, which can be rounded down to 704 for mod(16).
The "standard" pixel aspect ratio data found in video editors, certain ITU standards, MPEG etc. is usually based on an approximation of the above, fudged to allow either 704 or 720 pixels to equate to the full 4x3 or 16x9 picture at the whim of the author.
Although standards-compliant video processing software should never fill all 720 pixels with active picture (only the center 704 pixels must contain the actual image, and the remaining 8 pixels on the sides of the image should constitute vertical black bars), recent digitally generated content (e.g. DVDs of recent movies) often disregards this rule. This makes it difficult to tell whether these pixels represent wider than 4x3 or 16x9 (as they would do if following Rec.601), or represent exactly 4x3 or 16x9 (as they would do if created using one of the fudged 720-referenced pixel aspect ratios).
The difference between 702/704 and 720 pixels/line is referred to as nominal analogue blanking.
625 / 525 or 576 / 480
In broadcasting, analogue system descriptions include the lines not used for the visible picture, whereas the digital systems only "number" and encode signals that contain something to see.
The 625 (PAL) and 525 (NTSC) frame areas therefore contain even more overscan, which can be seen when vertical hold is lost and the picture rolls.
A portion of this interval available in analogue, known as the vertical blanking interval, can be used for older forms of analogue datacasting such as Teletext services (like Ceefax and subtitling in the UK). The equivalent service on digital television does not use this method and instead often uses MHEG.
480 vs 486
The 525-line system originally contained 486 lines of picture, not 480. Digital foundations to most storage and transmission systems since the early 1990s have meant that analogue NTSC has only been expected to have 480 lines of picture – see SDTV, EDTV, and DVD-Video. How this affects the interpretation of "the 4:3 ratio" as equal to 704x480 or 720x486 is unclear, but the VGA standard of 640x480 has had a large impact.
See also
1:1 pixel mapping
HD ready 1080p
Bleed (printing)
Nominal analogue blanking
References
Television technology | Overscan | [
"Technology"
] | 2,525 | [
"Information and communications technology",
"Television technology"
] |
1,485,822 | https://en.wikipedia.org/wiki/Xiao%20Qiang | Xiao Qiang (, born November 19, 1961) is the Director and Research Scientist of the Counter-Power Lab, an interdisciplinary faculty-student research group focusing on digital rights and internet freedom, based in the School of Information, University of California, Berkeley and is funded by the US Department of State. He also serves as the director of the China Internet Project at Berkeley. Xiao is an adjunct professor at the School of Information and the Graduate School of Journalism at the University of California, Berkeley. He is also the founder and editor-in-chief of China Digital Times, a bilingual news website.
Xiao teaches classes Digital Activism, Internet Freedom and Blogging in China at both the School of Information and the Graduate School of Journalism, University of California at Berkeley. In fall 2003, Xiao launched China Digital Times to explore how to apply cutting edge technologies to aggregate, contextualize and translate online information from and about China. His current research focuses on state censorship, propaganda and disinformation, as well as mass surveillance in China.
Biography
A theoretical physicist by training, he studied at the University of Science and Technology of China and entered the PhD program (1986–1989) in astrophysics at the University of Notre Dame. He became a full-time human rights activist after the 1989 Tiananmen Square protests and massacre. Xiao was the executive director of the New York-based organization Human Rights in China from 1991 to 2002 and vice chairman of the steering committee of the World Movement for Democracy.
Recognition
Xiao is a recipient of the MacArthur Fellowship in 2001, and is profiled in the book "Soul Purpose: 40 People Who Are Changing the World for the Better" (Melcher Media, 2003). He was also a visiting fellow of the Santa Fe Institute in Spring, 2002.
In January 2015, Xiao has been named to Foreign Policy magazine's Pacific Power Index, a list of "50 people shaping the future of the U.S.-China relationship." He was named on the list "for taking on China's Great Firewall of censorship."
References
External links
Rock-n-Go, Xiao's personal blog
Q&A: Xiao Qiang on the anniversary of Tiananmen Square and the right to information in China, Columbia Journalism Review
Chinese dissidents
MacArthur Fellows
Chinese male journalists
Chinese male bloggers
Chinese emigrants to the United States
University of California, Berkeley School of Information faculty
Notre Dame College of Arts and Letters alumni
Living people
Writers from Santa Fe, New Mexico
Chinese human rights activists
Chinese physicists
Theoretical physicists
1961 births
People associated with WikiLeaks
University of Science and Technology of China alumni | Xiao Qiang | [
"Physics"
] | 530 | [
"Theoretical physics",
"Theoretical physicists"
] |
1,486,044 | https://en.wikipedia.org/wiki/Cow%20dung | Cow dung, also known as cow pats, cow pies, cow poop or cow manure, is the waste product (faeces) of bovine animal species. These species include domestic cattle ("cows"), bison ("buffalo"), yak, and water buffalo. Cow dung is the undigested residue of plant matter which has passed through the animal's gut. The resultant faecal matter is rich in minerals. Color ranges from greenish to blackish, often darkening soon after exposure to air.
Uses
Fuel
In many parts of the old world, and in the past in mountain regions of Europe, caked and dried cow dung is used as fuel. In India, it is dried into cake-like shapes and used as a replacement for firewood for cooking in the traditional kitchen stove.
Dung may also be collected and used to produce biogas to generate electricity and heat. The gas is rich in methane and is used in rural areas of India and Pakistan and elsewhere to provide a renewable and stable (but unsustainable) source of electricity.
Fertilizer
Cow dung, which is usually a dark brown color, is often used as manure (agricultural fertilizer). If not recycled into the soil by species such as earthworms and dung beetles, cow dung can dry out and remain on the pasture, creating an area of grazing land which is unpalatable to livestock.
Cow dung is nowadays used for making flower and plant pots. It is plastic-free, biodegradable and eco-friendly. Unlike plastic grow bags, which harm nature, cow dung pots dissolve naturally and become excellent manure for the plant. From 20 July 2020, the State Government of Chhattisgarh, India, started buying cow dung under the Godhan Nyay Yojana scheme. Cow dung procured under this scheme will be utilised for the production of vermicompost fertilizer.
Religious uses
Cow dung is used in Hindu yajna ritual as an important ingredient. Cow dung is also used in the making of pancha-gavya, for use in Hindu rituals. Several Hindu texts - including Yājñavalkya Smṛti and Manusmṛti - state that the pancha-gavya purifies many sins. The Mahabharata narrates a story about how Lakshmi, the goddess of prosperity, came to reside in cow dung. In the legend, Lakshmi asks cows to let her live in their bodies because they are pure and sinless. The cows refuse, describing her as unstable and fickle. Lakshmi begs them to accept her request, saying that others would ridicule her for being rejected by the cows, and agreeing to live in the most despised part of their body. The cows then allow her to live in their dung and urine.
The Tantric Buddhist ritual manuals Jayavatī-nāma-mahāvidyārāja-dhāraṇī and Mahāvairocanābhisaṃbodhi recommend use of cow dung to purify mandala altars.
Floor and wall coating
In several cultures, cow dung is traditionally used to coat floors and walls. In parts of Africa, floors of rural huts are smeared with cow dung: this is believed to improve interior hygiene and repel insects. This practice has various names, such as "ukusinda" in Xhosa, and "gwaya" in Ruruuli-Lunyala.
Similarly, in India, floors are traditionally smeared with cow dung to clean and smoothen them. Purananuru generally dated 150 BCE mentions women of Tamil Nadu smear cow dung on the floors at the 13th day after her husband's death to purify the house. Italian traveler Pietro Della Valle, who visited India in 1624, observed that the locals - including Christians - smeared floor with cow dung to purify it and repel insects. Tryambaka's Strī-dharma-paddhati (18th century), which narrates a modified version of the Mahabharata legend about how the goddess Lakshmi came to reside in cow dung, instructs women to make their homes pure and prosperous by coating them with cow-dung. Many among modern generations have challenged this practice as unclean.
In 2021, the Government of India's Khadi and Village Industries Commission launched the Khadi Prakritik paint, which has cow dung as its main ingredient, promoting it as an eco-friendly paint with anti-fungal and anti-bacterial properties.
Other uses
In East Africa, Maasai villages have burned cow dung inside to repel mosquitos. In cold places, cow dung is used to line the walls of rustic houses as a cheap thermal insulator. Villagers in India spray fresh cow dung mixed with water in front of the houses to repel insects.
In Rwanda, it is used in an art form called imigongo.
Cow dung is also an optional ingredient in the manufacture of adobe mud brick housing depending on the availability of materials at hand.
A deposit of cow dung is referred to in American English as a "cow pie" or less commonly "cow chip" (usually when dried) and in British English as a "cowpat". When dry, it is used in the practice of "cow chip throwing" popularized in Beaver, Oklahoma in 1970. On April 21, 2001, Robert Deevers of Elgin, Oklahoma, set the record for cow chip throwing.
Ecology
Cow dung provides food for a wide range of animal and fungus species, which break it down and recycle it into the food chain and into the soil.
In areas where cattle (or other mammals with similar dung) are not native, there are often also no native species which can break down their dung, and this can lead to infestations of pests such as flies and parasitic worms. In Australia, dung beetles from elsewhere have been introduced to help recycle the cattle dung back into the soil. (see the Australian Dung Beetle Project and Dr. George Bornemissza).
Cattle have a natural aversion to feeding around their own dung. This can lead to the formation of taller ungrazed patches of heavily fertilized sward. These habitat patches, termed "islets", can be beneficial for many grassland arthropods, including spiders (Araneae) and bugs (Hemiptera). They have an important function in maintaining biodiversity in heavily utilized pastures.
Variants
A buffalo chip, also called a meadow muffin, is the name for a large, flat, dried piece of dung deposited by the American bison. Well dried buffalo chips were among the few things that could be collected and burned on the prairie and were used by the Plains Indians, settlers and pioneers, and homesteaders as a source of cooking heat and warmth.
Bison dung is sometimes referred to by the name nik-nik. This word is a borrowing from the Sioux language (which probably originally borrowed it from a northern source). In modern Sioux, nik-nik can refer to the feces of any bovine, including domestic cattle. It has also come to be used, especially in Lakota, to refer to lies or broken promises, analogously to the vulgar English term "bullshit" as a figure of speech.
Gallery
See also
Biomass briquettes
Chicken manure
Coprophilous fungi
Dry animal dung fuel
Imigongo
Shit Museum
Sigri (stove) stove fueled with dried cow dung
References
External links
Animal physiology
Cattle products
Fuels
Feces
Manure | Cow dung | [
"Chemistry",
"Biology"
] | 1,521 | [
"Animals",
"Animal physiology",
"Chemical energy sources",
"Excretion",
"Animal waste products",
"Feces",
"Fuels"
] |
1,486,231 | https://en.wikipedia.org/wiki/Find%20%28Unix%29 | In Unix-like operating systems, find is a command-line utility that locates files based on some user-specified criteria and either prints the pathname of each matched object or, if another action is requested, performs that action on each matched object.
It initiates a search from a desired starting location and then recursively traverses the nodes (directories) of a hierarchical structure (typically a tree). find can traverse and search through different file systems of partitions belonging to one or more storage devices mounted under the starting directory.
The possible search criteria include a pattern to match against the filename or a time range to match against the modification time or access time of the file. By default, find returns a list of all files below the current working directory, although users can limit the search to any desired maximum number of levels under the starting directory.
The related locate programs use a database of indexed files obtained through find (updated at regular intervals, typically by cron job) to provide a faster method of searching the entire file system for files by name.
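For instance, a hedged sketch of the database-backed approach (assuming the locate database has already been built, for example by updatedb, and that the filename shown is only illustrative):
$ locate myfile.txt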
History
find appeared in Version 5 Unix as part of the Programmer's Workbench project, and was written by Dick Haight alongside cpio, which were designed to be used together.
The GNU find implementation was originally written by Eric Decker. It was later enhanced by David MacKenzie, Jay Plett, and Tim Wood.
The command has also been ported to the IBM i operating system.
Find syntax
$ find [-H|-L] path... [operand_expression...]
The two options control how the find command should treat symbolic links. The default behaviour is never to follow symbolic links. The -L flag will cause the find command to follow symbolic links. The -H flag will only follow symbolic links while processing the command line arguments. These flags are specified in the POSIX standard for find. A common extension is the -P flag, for explicitly disabling symlink following.
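For example, a sketch of an invocation that follows symbolic links during the entire traversal (the path and pattern here are only illustrative):
$ find -L /srv/www -name '*.conf'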
At least one path must precede the expression. find is capable of interpreting wildcards internally and commands must be quoted carefully in order to control shell globbing.
Expression elements are separated by the command-line argument boundary, usually represented as whitespace in shell syntax. They are evaluated from left to right. They can contain logical elements such as AND ( or ) and OR ( or ) as well as predicates (filters and actions).
GNU find has a large number of additional features not specified by POSIX.
Predicates
Commonly-used primaries include:
-name pattern: tests whether the file name matches the shell-glob pattern given.
-type type: tests whether the file is a given type. Unix file types accepted include:
b: block device (buffered);
c: character device (unbuffered);
d: directory;
f: regular file;
l: symbolic link;
p: named pipe;
s: socket;
D: door.
-print: always returns true; prints the name of the current file plus a newline to the stdout.
-print0: always returns true; prints the name of the current file plus a null character to the stdout. Not required by POSIX.
-exec program [arguments...] ;: runs program with the given arguments, and returns true if its exit status was 0, false otherwise. If program, or an argument, is {}, it will be replaced by the current path (if program is {}, find will try to run the current path as an executable). POSIX doesn't specify what should happen if multiple {} are specified. Most implementations will replace all {} with the current path, but that is not standard behavior.
-exec program [arguments...] {} +: always returns true; run program with the given arguments, followed by as many paths as possible (multiple commands will be run if the maximum command-line size is exceeded, like for xargs).
-ok program [arguments...] ;: for every path, prompts the user for confirmation; if the user confirms (typically by entering y or yes), it behaves like -exec program [arguments...] ;, otherwise the command is not run for the current path, and false is returned.
-maxdepth: Can be used to limit the directory depth to search through. For example, -maxdepth 1 limits search to the current directory.
If the expression uses none of -print0, -print, -exec, or -ok, find defaults to performing -print if the conditions test as true.
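As an illustrative sketch of the -maxdepth and -type primaries described above, the following restricts the search to regular files directly inside /var/log (the path and pattern are only examples):
$ find /var/log -maxdepth 1 -type f -name '*.log'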
Operators
Operators can be used to enhance the expressions of the find command. Operators are listed in order of decreasing precedence:
( expr ): forces precedence;
! expr: true if expr is false;
expr1 expr2 (or expr1 -a expr2): AND. expr2 is not evaluated if expr1 is false;
expr1 -o expr2: OR. expr2 is not evaluated if expr1 is true.
$ find . -name 'fileA_*' -o -name 'fileB_*'
This command searches the current working directory tree for files whose names start with fileA_ or fileB_. We quote the patterns so that the shell does not expand them.
$ find . -name 'foo.cpp' '!' -path '.svn'
This command searches the current working directory tree except the subdirectory tree ".svn" for files whose name is "foo.cpp". We quote the ! so that it's not interpreted by the shell as the history substitution character.
POSIX protection from infinite output
Real-world file systems often contain looped structures created through the use of hard or soft links. The POSIX standard requires that find detect such infinite loops (that is, entering a previously visited directory that is an ancestor of the last file encountered); when a loop is detected, find must write a diagnostic message to standard error and either recover its position in the hierarchy or terminate.
Examples
From the current working directory
$ find . -name 'my*'
This searches the current working directory tree for files whose names start with my. The single quotes avoid the shell expansion—without them the shell would replace my* with the list of files whose names begin with my in the current working directory. In newer versions of the program, the directory may be omitted, and it will imply the current working directory.
Regular files only
$ find . -name 'my*' -type f
This limits the results of the above search to only regular files, therefore excluding directories, special files, symbolic links, etc. my* is enclosed in single quotes (apostrophes) as otherwise the shell would replace it with the list of files in the current working directory starting with my...
Commands
The previous examples created listings of results because, by default, find executes the -print action. (Note that early versions of the find command had no default action at all; therefore the resulting list of files would be discarded, to the bewilderment of users.)
$ find . -name 'my*' -type f -ls
This prints extended file information.
Search all directories
$ find / -name myfile -type f -print
This searches every directory for a regular file whose name is myfile and prints it to the screen. It is generally not a good idea to look for files this way. This can take a considerable amount of time, so it is best to specify the directory more precisely. Some operating systems may mount dynamic file systems that are not congenial to find. More complex filenames including characters special to the shell may need to be enclosed in single quotes.
Search all but one subdirectory tree
$ find / -path excluded_path -prune -o -type f -name myfile -print
This searches every directory except the subdirectory tree excluded_path (full path including the leading /) that is pruned by the -prune action, for a regular file whose name is myfile.
Specify a directory
$ find /home/weedly -name myfile -type f -print
This searches the /home/weedly directory tree for regular files named myfile. You should always specify the directory to the deepest level you can remember.
Search several directories
$ find local /tmp -name mydir -type d -print
This searches the local subdirectory tree of the current working directory and the /tmp directory tree for directories named mydir.
Ignore errors
If you're doing this as a user other than root, you might want to ignore permission denied (and any other) errors. Since errors are printed to stderr, they can be suppressed by redirecting the output to /dev/null. The following example shows how to do this in the bash shell:
$ find / -name myfile -type f -print 2> /dev/null
If you are a csh or tcsh user, you cannot redirect stderr without redirecting stdout as well. You can use sh to run the find command to get around this:
$ sh -c "find / -name myfile -type f -print 2> /dev/null"
An alternate method when using csh or tcsh is to pipe the output from stdout and stderr into a grep command. This example shows how to suppress lines that contain permission denied errors.
$ find . -name myfile |& grep -v 'Permission denied'
Find any one of differently named files
$ find . \( -name '*jsp' -o -name '*java' \) -type f -ls
The -ls operator prints extended information, and the example finds any regular file whose name ends with either 'jsp' or 'java'. Note that the parentheses are required. In many shells the parentheses must be escaped with a backslash (\( and \)) to prevent them from being interpreted as special shell characters. The -ls operator is not available on all versions of find.
Execute an action
$ find /var/ftp/mp3 -name '*.mp3' -type f -exec chmod 644 {} \;
This command changes the permissions of all regular files whose names end with .mp3 in the directory tree /var/ftp/mp3. The action is carried out by specifying the statement -exec chmod 644 {} \; in the command. For every regular file whose name ends in .mp3, the command chmod 644 {} is executed replacing {} with the name of the file. The semicolon (backslashed to avoid the shell interpreting it as a command separator) indicates the end of the command. Permission 644, usually shown as rw-r--r--, gives the file owner full permission to read and write the file, while other users have read-only access. In some shells, the {} must be quoted. The trailing ";" is customarily quoted with a leading backslash ("\"), but it could just as effectively be enclosed in single quotes.
Note that the command itself should not be quoted; otherwise you get error messages like
find: echo "mv ./3bfn rel071204": No such file or directory
which means that find is trying to run a file called 'echo "mv ./3bfn rel071204"' and failing.
If you will be executing over many results, it is more efficient to use a variant of the exec primary that collects filenames up to the maximum command-line length and then executes COMMAND with a list of filenames.
$ find . -exec COMMAND {} +
This will ensure that filenames with whitespace are passed to the executed COMMAND without being split up by the shell.
Delete files and directories
The -delete action is a GNU extension, and using it turns on -depth. So, if you are testing a find command with -print instead of -delete in order to figure out what will happen before going for it, you need to use -depth -print.
Delete empty files and print the names (note that -empty is a vendor unique extension from GNU find that may not be available in all find implementations):
$ find . -empty -delete -print
Delete empty regular files:
$ find . -type f -empty -delete
Delete empty directories:
$ find . -type d -empty -delete
Delete empty files named 'bad':
$ find . -name bad -empty -delete
Warning. — The -delete action should be used with conditions such as -empty or -name:
$ find . -delete # this deletes all in .
Search for a string
This command will search all files from the /tmp directory tree for a string:
$ find /tmp -type f -exec grep 'search string' /dev/null '{}' \+
The /dev/null argument is used to show the name of the file before the text that is found. Without it, only the text found is printed. (Alternatively, some versions of grep support a -H flag that forces the file name to be printed.)
GNU grep can be used on its own to perform this task:
$ grep -r 'search string' /tmp
Example of search for "LOG" in jsmith's home directory tree:
$ find ~jsmith -exec grep LOG '{}' /dev/null \; -print
/home/jsmith/scripts/errpt.sh:cp $LOG $FIXEDLOGNAME
/home/jsmith/scripts/errpt.sh:cat $LOG
/home/jsmith/scripts/title:USER=$LOGNAME
Example of search for the string "ERROR" in all XML files in the current working directory tree:
$ find . -name "*.xml" -exec grep "ERROR" /dev/null '{}' \+
The double quotes (" ") surrounding the search string and single quotes (' ') surrounding the braces are optional in this example, but needed to allow spaces and some other special characters in the string. Note with more complex text (notably in most popular shells descended from `sh` and `csh`) single quotes are often the easier choice, since double quotes do not prevent all special interpretation. Quoting filenames which have English contractions demonstrates how this can get rather complicated, since a string with an apostrophe in it is easier to protect with double quotes:
$ find . -name "file-containing-can't" -exec grep "can't" '{}' \; -print
Search for all files owned by a user
$ find . -user <userid>
Search in case insensitive mode
Note that -iname is not in the standard and may not be supported by all implementations.
$ find . -iname 'MyFile*'
If the -iname switch is not supported on your system then workaround techniques may be possible such as:
$ find . -name '[mM][yY][fF][iI][lL][eE]*'
Search files by size
Searching files whose size is between 100 kilobytes and 500 kilobytes:
$ find . -size +100k -a -size -500k
Searching empty files:
$ find . -size 0k
Searching non-empty files:
$ find . ! -size 0k
Search files by name and size
$ find /usr/src ! \( -name '*,v' -o -name '.*,v' \) '{}' \; -print
This command will search the /usr/src directory tree. All files of the form '*,v' and '.*,v' are excluded.
for file in $(find /opt \( -name error_log -o -name 'access_log' -o -name 'ssl_engine_log' -o -name 'rewrite_log' -o -name 'catalina.out' \) -size +300000k -a -size -5000000k); do
cat /dev/null > $file
done
The units should be one of 'b', 'c', 'k' or 'w': 'b' means 512-byte blocks, 'c' means bytes, 'k' means kilobytes and 'w' means 2-byte words. The size does not count indirect blocks, but it does count blocks in sparse files that are not actually allocated.
Searching files by time
Date ranges can be used to, for example, list files changed since a backup.
-mtime: modification time
-ctime: inode change time
-atime: access time
Files modified a relative number of days ago:
+[number] = At least this many days ago.
-[number] = Less than so many days ago.
[number] = Exactly this many days ago.
Optionally add -daystart to measure time from the beginning of a day (0 o'clock) rather than the last 24 hours.
Example to find all text files in the Documents folder modified within the last week (7 days):
$ find ~/Documents/ -iname "*.txt" -mtime -7
Files modified before or after an absolute date and time:
-newermt YYYY-MM-DD: Last modified after date
-not -newermt YYYY-MM-DD: Last modified before date
Example to find all text files last edited in February 2017:
$ find ~/Documents/ -iname "*.txt" -newermt 2017-02-01 -not -newermt 2017-03-01
-newer [file]: More recently modified than specified file.
-cnewer: Same with inode change time.
-anewer: Same with access time.
Also prependable with -not for inverse results or range.
List all text files edited more recently than "document.txt":
$ find ~/Documents/ -iname "*.txt" -newer document.txt
Related utilities
locate is a Unix search tool that searches a prebuilt database of files instead of directory trees of a file system. This is faster than find but less accurate because the database may not be up-to-date.
grep is a command-line utility for searching plain-text data sets for lines matching a regular expression and by default reporting matching lines on standard output.
tree is a command-line utility that recursively lists files found in a directory tree, indenting the filenames according to their position in the file hierarchy.
GNU Find Utilities (also known as findutils) is a GNU package which contains implementations of the tools find and xargs.
BusyBox is a utility that provides several stripped-down Unix tools in a single executable file, intended for embedded operating systems with very limited resources. It also provides a version of find.
dir has the /s option that recursively searches for files or directories.
Plan 9 from Bell Labs uses two utilities to replace find: one that only walks the tree and prints the names, and one that only filters (like grep) by evaluating expressions in the form of a shell script. Arbitrary filters can be used via pipes. The commands are not part of Plan 9 from User Space, so Google's Benjamin Barenblat has ported them to POSIX systems; the port is available through GitHub.
fd is a simple alternative to find written in the Rust programming language.
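A minimal sketch of its usage (fd searches the current directory tree recursively by default; the pattern shown is only illustrative):
$ fd 'myfile'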
See also
mdfind, a similar utility that utilizes metadata for macOS and Darwin
List of Unix commands
List of DOS commands
Filter (higher-order function)
find (Windows), a DOS and Windows command that is very different from Unix find
forfiles, a Windows command that finds files by attribute, similar to Unix find
grep, a Unix command that finds text matching a pattern, similar to Windows find
References
External links
Official webpage for GNU find
Command find – 25 practical examples
Information retrieval systems
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
IBM i Qshell commands | Find (Unix) | [
"Technology"
] | 4,146 | [
"IBM i Qshell commands",
"Information retrieval systems",
"Standard Unix programs",
"Information technology",
"Computing commands",
"Plan 9 commands"
] |
1,486,300 | https://en.wikipedia.org/wiki/Frost%20heaving | Frost heaving (or a frost heave) is an upwards swelling of soil during freezing conditions caused by an increasing presence of ice as it grows towards the surface, upwards from the depth in the soil where freezing temperatures have penetrated into the soil (the freezing front or freezing boundary). Ice growth requires a water supply that delivers water to the freezing front via capillary action in certain soils. The weight of overlying soil restrains vertical growth of the ice and can promote the formation of lens-shaped areas of ice within the soil. Yet the force of one or more growing ice lenses is sufficient to lift a layer of soil, as much as or more. The soil through which water passes to feed the formation of ice lenses must be sufficiently porous to allow capillary action, yet not so porous as to break capillary continuity. Such soil is referred to as "frost susceptible". The growth of ice lenses continually consumes the rising water at the freezing front. Differential frost heaving can crack road surfaces—contributing to springtime pothole formation—and damage building foundations. Frost heaves may occur in mechanically refrigerated cold-storage buildings and ice rinks.
Needle ice is essentially frost heaving that occurs at the beginning of the freezing season, before the freezing front has penetrated very far into the soil and there is no soil overburden to lift as a frost heave.
Mechanisms
Historical understanding of frost heaving
Urban Hjärne described frost effects in soil in 1694.
By 1930, Stephen Taber, head of the Department of Geology at the University of South Carolina, had disproved the hypothesis that frost heaving results from molar volume expansion with freezing of water already present in the soil prior to the onset of subzero temperatures, i.e. with little contribution from the migration of water within the soil.
Since the molar volume of water expands by about 9% as it changes phase from water to ice at its bulk freezing point, 9% would be the maximum expansion possible owing to molar volume expansion, and even then only if the ice were rigidly constrained laterally in the soil so that the entire volume expansion had to occur vertically. Ice is unusual among compounds because it increases in molar volume from its liquid state, water. Most compounds decrease in volume when changing phase from liquid to solid. Taber showed that the vertical displacement of soil in frost heaving could be significantly greater than that due to molar volume expansion.
Taber demonstrated that liquid water migrates towards the freeze line within soil. He showed that other liquids, such as benzene, which contracts when it freezes, also produce frost heave. This excluded molar volume changes as the dominant mechanism for vertical displacement of freezing soil. His experiments further demonstrated the development of ice lenses inside columns of soil that were frozen by cooling the upper surface only, thereby establishing a temperature gradient.
Development of ice lenses
The dominant cause of soil displacement in frost heaving is the development of ice lenses. During frost heave, one or more soil-free ice lenses grow, and their growth displaces the soil above them. These lenses grow by the continual addition of water from a groundwater source that is lower in the soil and below the freezing line in the soil. The presence of frost-susceptible soil with a pore structure that allows capillary flow is essential to supplying water to the ice lenses as they form.
Owing to the Gibbs–Thomson effect of the confinement of liquids in pores, water in soil can remain liquid at a temperature that is below the bulk freezing point of water. Very fine pores have a very high curvature, and this results in the liquid phase being thermodynamically stable in such media at temperatures sometimes several tens of degrees below the bulk freezing point of the liquid. This effect allows water to percolate through the soil towards the ice lens, allowing the lens to grow.
Another water-transport effect is the preservation of a few molecular layers of liquid water on the surface of the ice lens, and between ice and soil particles. Faraday reported in 1860 on the unfrozen layer of premelted water.
Ice premelts against its own vapor, and in contact with silica.
Micro-scale processes
The same intermolecular forces that cause premelting at surfaces contribute to frost heaving at the particle scale on the bottom side of the forming ice lens. When ice surrounds a fine soil particle as it premelts, the soil particle will be displaced downward towards the warm direction within the thermal gradient due to melting and refreezing of the thin film of water that surrounds the particle. The thickness of such a film is temperature dependent and is thinner on the colder side of the particle.
Water has a lower thermodynamic free energy when in bulk ice than when in the supercooled liquid state. Therefore, there is a continuous replenishment of water flowing from the warm side to the cold side of the particle, and continuous melting to re-establish the thicker film on the warm side. The particle migrates downwards toward the warmer soil in a process that Faraday called "thermal regelation." This effect purifies the ice lenses as they form by repelling fine soil particles. Thus a 10-nanometer film of unfrozen water around each micrometer-sized soil particle can move it 10 micrometers/day in a thermal gradient of as low as 1 °C m⁻¹. As ice lenses grow, they lift the soil above, and segregate soil particles below, while drawing water to the freezing face of the ice lens via capillary action.
Frost-susceptible soils
Frost heaving requires a frost-susceptible soil, a continual supply of water below (a water table) and freezing temperatures, penetrating into the soil. Frost-susceptible soils are those with pore sizes between particles and particle surface area that promote capillary flow. Silty and loamy soil types, which contain fine particles, are examples of frost-susceptible soils. Many agencies classify materials as being frost susceptible if 10 percent or more constituent particles pass through a 0.075 mm (No. 200) sieve or 3 percent or more pass through a 0.02 mm (No. 635) sieve. Chamberlain reported other, more direct methods for measuring frost susceptibility. Based on such research, standard tests exist to determine the relative frost and thaw weakening susceptibility of soils used in pavement systems by comparing the heave rate and thawed bearing ratio with values in an established classification system for soils where frost-susceptibility is uncertain.
Non-frost-susceptible soils may be too dense to promote water flow (low hydraulic conductivity) or too open in porosity to promote capillary flow. Examples include dense clays with a small pore size and therefore a low hydraulic conductivity and clean sands and gravels, which contain small amounts of fine particles and whose pore sizes are too open to promote capillary flow.
Landforms created by frost heaving
Frost heaving creates raised-soil landforms in various geometries, including circles, polygons and stripes, which may be described as palsas in soils that are rich in organic matter, such as peat, or lithalsa in more mineral-rich soils. The stony lithalsa (heaved mounds) found on the archipelago of Svalbard are an example. Frost heaves occur in alpine regions, even near the equator, as illustrated by palsas on Mount Kenya.
In Arctic permafrost regions, a related type of ground heaving over hundreds of years can create structures, as high as 60 metres, known as pingos, which are fed by an upwelling of ground water, instead of the capillary action that feeds the growth of frost heaves. Cryogenic earth hummocks are a small formation resulting from granular convection that appear in seasonally frozen ground and have many different names; in North America they are earth hummocks; thúfur in Greenland and Iceland; and pounus in Fennoscandia.
Polygonal forms apparently caused by frost heave have been observed in near-polar regions of Mars by the Mars Orbiter Camera (MOC) aboard the Mars Global Surveyor and the HiRISE camera on the Mars Reconnaissance Orbiter. In May 2008 the Mars Phoenix lander touched down on such a polygonal frost-heave landscape and quickly discovered ice a few centimetres below the surface.
In refrigerated buildings
Cold-storage buildings and ice rinks that are maintained at sub-freezing temperatures may freeze the soil below their foundations to a depth of tens of meters. Seasonally frozen buildings, e.g. some ice rinks, may allow the soil to thaw and recover when the building interior is warmed. If a refrigerated building's foundation is placed on frost-susceptible soils with a water table within reach of the freezing front, then the floors of such structures may heave, due to the same mechanisms found in nature. Such structures may be designed to avoid such problems by employing several strategies, separately or in tandem. The strategies include placement of non-frost-susceptible soil beneath the foundation, adding insulation to diminish the penetration of the freezing front, and heating the soil beneath the building sufficiently to keep it from freezing. Seasonally operated ice rinks can mitigate the rate of subsurface freezing by raising the temperature of the ice.
See also
Cryoturbation
Frost law
Frost weathering
Ice jacking
Palsa
Footnotes
References
Further reading
Building defects
Geomorphology
Glaciology
Ground freezing
Patterned grounds
Soil mechanics
Frost and rime | Frost heaving | [
"Physics",
"Materials_science"
] | 1,959 | [
"Soil mechanics",
"Mechanical failure",
"Applied and interdisciplinary physics",
"Building defects"
] |
1,486,318 | https://en.wikipedia.org/wiki/SDSS%20J090745.0%2B024507 | SDSS J090744.99+024506.8 (SDSS 090745.0+024507) is a short-period variable star in the constellation Hydra. It has a Galactic rest-frame radial velocity of 709 km/s.
Its effective temperature is 10,500 K (corresponding to a spectral type of B9) and its age is estimated to be at most 350 million years. It has a heliocentric distance of 71 kpc. It was ejected from the centre of the galaxy less than 100 million years ago, which implies the existence of a population of young stars at the galactic centre less than 100 million years ago.
Christened by the astronomer Warren Brown as the "outcast star", it is the first discovered member of a class of objects named hypervelocity stars. It was discovered in 2005 at the MMT Observatory of the Center for Astrophysics Harvard & Smithsonian (CfA), by astronomers Warren Brown, Margaret J. Geller, Scott J. Kenyon and Michael J. Kurtz.
See also
List of star extremes
S5-HVS1 – another fast moving star
US 708 – another fast moving star
References
Further reading
External links
Press release
First Stellar Outcast Speeding at Over 1.5 Million Miles Per Hour (PhysOrg.com)
Hydra (constellation)
B-type main-sequence stars
Hypervelocity stars
SDSS objects | SDSS J090745.0+024507 | [
"Astronomy"
] | 292 | [
"Hydra (constellation)",
"Constellations"
] |
1,486,657 | https://en.wikipedia.org/wiki/235%20%28number%29 | 235 (two hundred [and] thirty-five) is the integer following 234 and preceding 236.
Additionally, 235 is:
a semiprime
a heptagonal number
a centered triangular number
therefore a figurate number in two ways
palindromic in bases 4 (3223₄), 7 (454₇), 8 (353₈), 13 (151₁₃), and 46 (55₄₆)
a Harshad number in bases 6, 47, 48, 95, 116, 189 and 231
a Smarandache–Wellin number
Also:
There are 235 different trees with 11 unlabeled nodes.
If an equilateral triangle is subdivided into smaller equilateral triangles whose side length is 1/9 as small, the resulting "matchstick arrangement" will have exactly 235 different equilateral triangles of varying sizes in it.
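A short brute-force check of three of the properties listed above (semiprime, heptagonal number, centered triangular number) is sketched below in C; it is only an illustrative verification and not part of the article's sources.

#include <stdio.h>
#include <stdbool.h>

static bool is_semiprime(int n)           /* product of exactly two primes */
{
    int factors = 0;
    for (int p = 2; p <= n; p++)
        while (n % p == 0) { n /= p; factors++; }
    return factors == 2;
}

static bool is_heptagonal(int n)          /* n = k(5k - 3)/2 for some k */
{
    for (int k = 1; k * (5 * k - 3) / 2 <= n; k++)
        if (k * (5 * k - 3) / 2 == n) return true;
    return false;
}

static bool is_centered_triangular(int n) /* n = (3k^2 + 3k + 2)/2 for some k */
{
    for (int k = 0; (3 * k * k + 3 * k + 2) / 2 <= n; k++)
        if ((3 * k * k + 3 * k + 2) / 2 == n) return true;
    return false;
}

int main(void)
{
    printf("semiprime:           %d\n", is_semiprime(235));           /* 235 = 5 * 47 */
    printf("heptagonal:          %d\n", is_heptagonal(235));          /* k = 10 */
    printf("centered triangular: %d\n", is_centered_triangular(235)); /* k = 12 */
    return 0;
}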
References
Integers | 235 (number) | [
"Mathematics"
] | 179 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
1,486,886 | https://en.wikipedia.org/wiki/TI%20InterActive%21 | TI InterActive! was a Texas Instruments computer program which combined the functionality of all of the TI graphing calculators with extra features into a text editor which allowed users to save equations, graphs, tables, spreadsheets, and text onto a document. TI InterActive! also included a web browser, but it was just an embedded version of Internet Explorer. It also worked with TI Connect to share data with the TI Graphing Calculators.
References
Computer algebra systems | TI InterActive! | [
"Mathematics"
] | 95 | [
"Computer algebra systems",
"Mathematical software"
] |
1,487,123 | https://en.wikipedia.org/wiki/Investigational%20New%20Drug | The United States Food and Drug Administration's Investigational New Drug (IND) program is the means by which a pharmaceutical company obtains permission to start human clinical trials and to ship an experimental drug across state lines (usually to clinical investigators) before a marketing application for the drug has been approved. Regulations are primarily at . Similar procedures are followed in the European Union, Japan, and Canada due to regulatory harmonization efforts by the International Council for Harmonisation.
Types
Research or investigator INDs are non-commercial INDs filed by researchers to study an unapproved drug or to study an approved drug for a new indication or in a new patient population.
Emergency Use INDs, also called compassionate use or single-patient INDs, are filed for emergency use of an unapproved drug when the clinical situation does not allow sufficient time to submit an IND in accordance with 21 CFR §§ 312.23, 312.24. These are most commonly used for life-threatening conditions for which there is no standard treatment.
Treatment INDs are filed to make a drug available for the treatment of serious or immediately life-threatening conditions prior to FDA approval. Examples of such serious diseases or conditions include stroke, schizophrenia, rheumatoid arthritis, osteoarthritis, chronic depression, seizures, Alzheimer's dementia, amyotrophic lateral sclerosis (ALS), and narcolepsy.
Screening INDs are filed for multiple, closely related compounds in order to screen for the preferred compounds or formulations. The preferred compound can then be developed under a separate IND. Used for screening different salts, esters and other drug derivatives that are chemically different, but pharmacodynamically similar.
Application
The IND application may be divided into the following categories:
Preclinical testing consists of animal pharmacology and toxicology studies to assess whether the drug is safe for testing in humans. Also included are any previous experience with the drug in humans (often foreign use).
Manufacturing Information includes composition, manufacturer, and stability of, and the controls used for, manufacturing the drug. Used to ensure that the company can adequately produce and supply consistent batches of the drug.
Investigator information on the qualifications of clinical investigators, that is, the professionals (generally physicians) who oversee the administration of the experimental drug to the study subjects. Used to assess whether the investigators are qualified to fulfill their clinical trial duties.
Clinical trial protocols are the centerpiece of the IND. Detailed protocols for proposed clinical studies to assess whether the initial-phase trials will expose the subjects to unnecessary risks.
Other commitments are commitments to obtain informed consent from the research subjects, to obtain a review of the study by an institutional review board (IRB), and to adhere to the investigational new drug regulations.
An IND application must also include an Investigator's Brochure intended to educate the trial investigators of the significant facts about the trial drug they need to know to conduct their clinical trial with the least hazard to the subjects or patients.
Once an IND application is submitted, the FDA has 30 days to object to the IND or it automatically becomes effective and clinical trials may begin. If the FDA detects a problem, it may place a clinical hold on the IND, prohibiting the start of the clinical studies until the problem is resolved, as outlined in the IND regulations.
An IND must be labeled "Caution: New Drug – Limited by Federal (or United States) law to investigational use," per the labeling requirements of the IND regulations.
Prevalence
Approximately two-thirds of both INDs and new drug applications (NDAs) are small-molecule drugs; the rest are biopharmaceuticals. About half of all INDs fail in the preclinical and clinical phases of drug development.
Examples
The FDA runs a medical marijuana IND program (the Compassionate Investigational New Drug program). It stopped accepting new patients in 1992 after public health authorities concluded there was no scientific value to it, and due to President George H. W. Bush administration's desire to "get tough on crime and drugs." As of 2011, four patients continue to receive cannabis from the government under the program.
Sanctioned by Executive Order 13139, the US Department of Defense employed an anthrax vaccine classified as an investigational new drug (IND) in its Anthrax Vaccine Immunization Program (AVIP).
See also
Abigail Alliance for Better Access to Developmental Drugs
Animal drug
Biologics license application
Drug discovery
FDA Fast Track Development Program
Good Manufacturing Practice
Inverse benefit law
Orphan drug
TOL101
References
External links
Investigational New Drug (IND) Application Process Center for Drug Evaluation and Research, Food and Drug Administration.
ICH Guidance for Industry, E6 Good Clinical Practice: Consolidated Guidance.
Troetel, W.M.: Achieving a Successful US IND Filing (1) The Regulatory Affairs Journal. 6: 22–28, January 1995.
Troetel, W.M.: Achieving a Successful US IND Filing (2) The Regulatory Affairs Journal. 6: 104–108, February 1995.
IND Forms and Instructions from the US Food and Drug Administration
Clinical research
Drug safety
Food and Drug Administration | Investigational New Drug | [
"Chemistry"
] | 1,033 | [
"Drug safety"
] |
1,487,249 | https://en.wikipedia.org/wiki/Human-centered%20computing | Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It is emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use while human-computer interaction is more focused on ergonomics and the usability of computing artifacts and information science is focused on practices surrounding the collection, manipulation, and use of information.
Human-centered computing researchers and practitioners usually come from one or more disciplines such as computer science, human factors, sociology, psychology, cognitive science, anthropology, communication studies, graphic design, and industrial design. Some researchers focus on understanding humans, both as individuals and in social groups, by focusing on the ways that human beings adopt and organize their lives around computational technologies. Others focus on designing and developing new computational artifacts.
Overview
Scope
HCC aims at bridging the existing gaps between the various disciplines involved with the design and implementation of computing systems that support human's activities. Meanwhile, it is a set of methodologies that apply to any field that uses computers in applications in which people directly interact with devices or systems that use computer technologies.
HCC facilitates the design of effective computer systems that take into account personal, social, and cultural aspects and addresses issues such as information design, human information interaction, human-computer interaction, human-human interaction, and the relationships between computing technology and art, social, and cultural issues.
HCC topics
The National Science Foundation (NSF) defines three-dimensional research as "a three dimensional space comprising human, computer, and environment." According to the NSF, the human dimension ranges from research that supports individual needs, through teams as goal-oriented groups, to society as an unstructured collection of connected people. The computer dimension ranges from fixed computing devices, through mobile devices, to computational systems of visual/audio devices that are embedded in the surrounding physical environment. The environment dimension ranges from discrete physical computational devices, through mixed reality systems, to immersive virtual environments. Some examples of topics in the field are listed below.
List of topics in the HCC field
Problem-solving in distributed environments, ranging across Internet-based information systems, grids, sensor-based information networks, and mobile and wearable information appliances.
Multimedia and multi-modal interfaces in which combinations of speech, text, graphics, gesture, movement, touch, sound, etc. are used by people and machines to communicate with one another.
Intelligent interfaces and user modeling, information visualization, and adaptation of content to accommodate different display capabilities, modalities, bandwidth, and latency.
Multi-agent systems that control and coordinate actions and solve complex problems in distributed environments in a wide variety of domains, such as disaster response teams, e-commerce, education, and successful aging.
Models for effective computer-mediated human-human interaction under a variety of constraints, (e.g., video conferencing, collaboration across high vs. low bandwidth networks, etc.).
Definition of semantic structures for multimedia information to support cross-modal input and output.
Specific solutions to address the special needs of particular communities.
Collaborative systems that enable knowledge-intensive and dynamic interactions for innovation and knowledge generation across organizational boundaries, national borders, and professional fields.
Novel methods to support and enhance social interaction, including innovative ideas like social orthotics, affective computing, and experience capture.
Studies of how social organizations, such as government agencies or corporations, respond to and shape the introduction of new information technologies, especially with the goal of improving scientific understanding and technical design.
Knowledge-driven human-computer interaction that uses ontologies to address the semantic ambiguities between human and computer's understandings towards mutual behaviors
Human-centered semantic relatedness measure that employs human power to measure the semantic relatedness between two concepts
Human-centered systems
Human-centered systems (HCS) are systems designed for human-centered computing. This approach was developed by Mike Cooley in his book Architect or Bee? drawing on his experience working with the Lucas Plan. HCS focuses on the design of interactive systems as they relate to human activities. According to Kling et al., the Committee on Computing, Information, and Communication of the National Science and Technology Council, identified human-centered systems, or HCS, as one of five components for a High Performance Computing Program. Human-centered systems can be referred to in terms of human-centered automation. According to Kling et al., HCS refers to "systems that are:
based on the analysis of the human tasks the system is aiding
monitored for performance in terms of human benefits
built to take account of human skills and
adaptable easily to changing human needs."
In addition, Kling et al. defines four dimensions of human-centeredness that should be taken into account when classifying a system: systems that are human centered must analyze the complexity of the targeted social organization, and the varied social units that structure work and information; human centeredness is not an attribute of systems, but a process in which the stakeholder group of a particular system assists in evaluating the benefit of the system; the basic architecture of the system should reflect a realistic relationship between humans and machines; the purpose and audience the system is designed for should be an explicit part of the design, evaluation, and use of the system.
Human-computer interaction
Within the field of human-computer interaction (HCI), the term "user-centered" is commonly used. The main focus of this approach is to thoroughly understand and address user needs to drive the design process. However, human-centered computing (HCC) goes beyond conventional areas like usability engineering, human-computer interaction, and human factors which primarily deal with user interfaces and interactions. Experts define HCC as a discipline that integrates disciplines such as learning sciences, social sciences, cognitive sciences, and intelligent systems more extensively compared to traditional HCI practices.
The concept of human-centered computing (HCC) is regarded as an essential aspect within the realm of computer-related research, extending beyond being just a subset discipline of computer science. The HCC perspective acknowledges that "computing" encompasses tangible technologies that enable diverse tasks while also serving as a significant social and economic influence.
In addition, Dertouzos elaborates on how HCC goes beyond the notion of interfaces that are easy for users to navigate by strategically incorporating five technologies: natural interaction, automation, personalized information retrieval, collaborative capabilities, and customization.
While the scope of HCC is extensive, three fundamental factors are proposed to constitute the core of HCC system and algorithm design processes:
Social and culturally aware considerations.
Direct augmentation and/or consideration of human abilities.
Adaptability is a key feature.
Adherence to these factors in system and algorithm design for HCC applications is anticipated to yield qualities such as:
Responsive actions aligned with the social and cultural context of deployment.
Integration of input from various sensors, with communication through diverse media as output.
Accessibility for a diverse range of individuals.
Human-centered activities in multimedia
The human-centered activities in multimedia, or HCM, can be considered as follows: media production, annotation, organization, archival, retrieval, sharing, analysis, and communication, which can be clustered into three areas: production, analysis, and interaction.
Multimedia production
Multimedia production is the human task of creating media. For instance, photographing, recording audio, remixing, etc. All aspects of media production concerned must directly involve humans in HCM. There are two main characteristics of multimedia production. The first is culture and social factors. HCM production systems should consider cultural differences and be designed according to the culture in which they will be deployed. The second is to consider human abilities. Participants involved in HCM production should be able to complete the activities during the production process. The field of Multimedia in Human-Centered Multimedia (HCM) is dedicated to the creation and development of various forms of media, including photography, audio recording, and remixing. What sets HCM apart is its emphasis on active human involvement throughout the production process. This means that cultural differences must be taken into account to tailor HCM systems according to specific cultural contexts. Furthermore, a key factor for achieving success in HCM production lies in recognizing and utilizing human capabilities effectively; this enables active participation and ensures efficient completion of all production activities.
Multimedia analysis
Multimedia analysis can be considered as a type of HCM applications which is the automatic analysis of human activities and social behavior in general. There is a broad area of potential relevant uses from facilitating and enhancing human communications, to allowing for improved information access and retrieval in the professional, entertainment, and personal domains. The field of Multimedia Analysis in Human-Centered Multimedia (HCM), involves automatically analyzing human activities and social behavior. This application area covers a wide range of domains, including improving communication between individuals and enhancing information access in professional, entertainment, and personal contexts. The possibilities for utilizing multimedia analysis are extensive, as it goes beyond simple categorization to achieve a nuanced understanding of human behavior. By doing so, system functionalities can be enhanced while providing users with improved experiences.
Multimedia interaction
Multimedia interaction can be considered as the interaction activity area of HCM. It is paramount to understand both how humans interact with each other and why, so that we can build systems to facilitate such communication and so that people can interact with computers in natural ways. To achieve natural interaction, cultural differences and social context are primary factors to consider, due to the potential different cultural backgrounds. For instance, a couple of examples include: face-to-face communications where the interaction is physically located and real-time; live-computer mediated communications where the interaction is physically remote but remains real-time; and non-real time computer-mediated communications such as instant SMS, email, etc.
Human-Centered Design Process
The Human-Centered Design Process is a method of problem-solving used in design. The process involves, first, empathizing with the user to learn about the target audience of the product and understand their needs. Empathizing then leads to research, and to asking the target audience specific questions to further understand their goals for the product at hand. This research stage may also involve competitor analysis to find more design opportunities in the product's market. Once the designer has compiled data on the user and the market for their product design, they move on to the ideation stage, in which they brainstorm design solutions through sketches and wireframes. A wireframe is a digital or physical illustration of a user interface, focusing on information architecture, space allocation, and content functionality. Consequently, a wireframe typically does not have any colors or graphics and only focuses on the intended functionalities of the interface.
To conclude the Human-Centered Design Process, there are two final steps. After wireframing or sketching, the designer will usually turn their paper sketches or low-fidelity wireframes into high-fidelity prototypes. Prototyping allows the designer to explore their design ideas further and focus on the overall design concept. High-fidelity means that the prototype is interactive or "clickable" and simulates a real application. After creating this high-fidelity prototype of their design, the designer can then conduct usability testing. This involves recruiting participants who represent the target audience of the product and having them walk through the prototype as if they were using the real product. The goal of usability testing is to identify any issues with the design that need to be improved and to analyze how real users will interact with the product. To run an effective usability test, it is imperative to take notes on the user's behavior and decisions and also to have the user think out loud while they use the prototype.
Career
Academic programs
As human-centered computing has become increasingly popular, many universities have created special programs for HCC research and study for both graduate and undergraduate students.
User interface designer
A user interface designer is an individual who usually has a relevant degree or a high level of knowledge, not only of technology, cognitive science, human–computer interaction, and the learning sciences, but also of psychology and sociology. A user interface designer develops and applies user-centered design methodologies and agile development processes that include consideration for the overall usability of interactive software applications, emphasizing interaction design and front-end development.
Information architect (IA)
Information architects mainly work to understand user and business needs in order to organize information to best satisfy these needs. Specifically, information architects often act as a key bridge between technical and creative development in a project team. Areas of interest in IA include search schemas, metadata, and taxonomy.
Projects
NASA/Ames Computational Sciences Division
The Human-Centered Computing (HCC) group at NASA/Ames Computational Sciences Division is conducting research at Haughton as members of the Haughton-Mars Project (HMP) to determine, via an analog study, how we will live and work on Mars.
HMP/Carnegie Mellon University (CMU) Field Robotics Experiments—HCC is collaborating with researchers on the HMP/CMU field robotics research program at Haughton to specify opportunities for robots assisting scientists. Researchers in this project have carried out a parallel investigation that documents work during traverses. A simulation module has been built, using a tool that represents people, their tools, and their work environment, that will serve as a partial controller for a robot that assists scientists in field work on Mars. When human, computer, and environment must all be taken into consideration, theory and techniques from the HCC field serve as the guideline.
Ethnography of Human Exploration of Space—The HCC lab is carrying out an ethnographic study of scientific field work, covering all aspects of a scientist's life in the field. This study involves observing as participants at Haughton and writing about the HCC lab's experiences. The HCC lab then looks for patterns in how people organize their time, space, and objects and how they relate to each other to accomplish their goals. In this study, the HCC lab is focusing on learning and conceptual change.
Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University
Based on the principles of human-centered computing, the Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University develops assistive, rehabilitative and healthcare applications. Founded by Sethuraman Panchanathan in 2001, CUbiC research spans three main areas of multimedia computing: sensing and processing, recognition and learning, and interaction and delivery. CUbiC places an emphasis on transdisciplinary research and positions individuals at the center of technology design and development. Examples of such technologies include the Note-Taker, a device designed to aid students with low vision to follow classroom instruction and take notes, and VibroGlove, which conveys facial expressions via haptic feedback to people with visual impairments.
In 2016, researchers at CUbiC introduced "Person-Centered Multimedia Computing", a new paradigm adjacent to HCC, which aims to understand a user's needs, preferences, and mannerisms including cognitive abilities and skills to design ego-centric technologies. Person-centered multimedia computing stresses the multimedia analysis and interaction facets of HCC to create technologies that can adapt to new users despite being designed for an individual.
See also
Cognitive science
Computer-mediated communication
Context awareness
Crowdsourcing
Health information technology
Human-based computation
Human-computer interaction
Information science
Social computing
Socially relevant computing
Ubiquitous computing
User-centered design
References
Further reading
"HMP-99 Science Field Report" NASA Ames Research Center
Human–computer interaction
Information science
Applied psychology | Human-centered computing | [
"Engineering"
] | 3,182 | [
"Human–computer interaction",
"Human–machine interaction"
] |
1,487,830 | https://en.wikipedia.org/wiki/Gastrin-releasing%20peptide | Gastrin-releasing peptide GRP, is a neuropeptide, a regulatory molecule encoded in the human by the GRP gene. GRP has been implicated in a number of physiological and pathophysiological processes. Most notably, GRP stimulates the release of gastrin from the G cells of the stomach.
GRP encodes a number of bombesin-like peptides. Its 148-amino acid preproprotein, following cleavage of a signal peptide, is further processed to produce either the 27-amino acid gastrin-releasing peptide or the 10-amino acid neuromedin C. These smaller peptides regulate numerous functions of the gastrointestinal and central nervous systems, including release of gastrointestinal hormones, smooth muscle cell contraction, and epithelial cell proliferation.
Function
Gastrin-releasing peptide is a regulatory human peptide that elicits gastrin release and regulates gastric acid secretion and enteric motor function. The post-ganglionic fibers of the vagus nerve that innervate bombesin/GRP neurons of the stomach release GRP, which stimulates the G cells to release gastrin.
GRP is also involved in the biology of the circadian system, playing a role in the signaling of light to the master circadian oscillator in the suprachiasmatic nuclei of the hypothalamus.
Furthermore, GRP seems to mediate certain aspects of stress. Because the vagal stimulation of gastrin release is mediated by GRP rather than by acetylcholine, atropine does not block this vagal effect.
Gene
GRP is located on chromosome 18q21. PreproGRP (the unprocessed form of GRP) is encoded in three exons separated by two introns. Alternative splicing results in multiple transcript variants encoding different isoforms.
Synthesis
PreproGRP begins with signal peptidase cleavage to generate the pro-gastrin-releasing-peptide (proGRP), which is then processed by proteolytic cleavages, to form smaller GRP peptides.
These smaller peptides are released by the post-ganglionic fibers of the vagus nerve, which innervate the G cells of the stomach and stimulate them to release gastrin. GRP regulates numerous functions of the gastrointestinal and central nervous systems, including release of gastrointestinal hormones, smooth muscle cell contraction, and epithelial cell proliferation.
Clinical significance
Gastrin-releasing peptide and neuromedin C, it is postulated, play a role in human cancers of the lung, colon, stomach, pancreas, breast, and prostate.
References
Further reading
External links
Neurotransmitters | Gastrin-releasing peptide | [
"Chemistry"
] | 562 | [
"Neurochemistry",
"Neurotransmitters"
] |
1,487,880 | https://en.wikipedia.org/wiki/Isolecithal | Isolecithal (Greek iso = equal, lekithos = yolk) refers to the even distribution of yolk in the cytoplasm of ova of mammals and other vertebrates, notably fishes of the families Petromyzontidae, Amiidae, and Lepisosteidae. Isolecithal cells have two equal hemispheres of yolk. However, during cellular development, normally under the influence of gravity, some of the yolk settles to the bottom of the egg, producing an uneven distribution of yolky hemispheres. Such uneven cells are known as telolecithal and are common where there is sufficient yolk mass.
In the absence of a large concentration of yolk, four major cleavage types can be observed in isolecithal cells: radial holoblastic, spiral holoblastic, bilateral holoblastic, and rotational holoblastic cleavage. These holoblastic cleavage planes pass all the way through isolecithal zygotes during the process of cytokinesis. Coeloblastula is the next stage of development for eggs that undergo this radial cleavage. In mammals, because the isolecithal cells have only a small amount of yolk, they require immediate implantation onto the uterine wall to receive nutrients.
See also
Cell cycle
Centrolecithal
Telolecithal
References
Cell biology | Isolecithal | [
"Biology"
] | 284 | [
"Cell biology"
] |
1,487,910 | https://en.wikipedia.org/wiki/Schwarz%E2%80%93Ahlfors%E2%80%93Pick%20theorem | In mathematics, the Schwarz–Ahlfors–Pick theorem is an extension of the Schwarz lemma for hyperbolic geometry, such as the Poincaré half-plane model.
The Schwarz–Pick lemma states that every holomorphic function from the unit disk U to itself, or from the upper half-plane H to itself, will not increase the Poincaré distance between points. The unit disk U with the Poincaré metric has constant Gaussian curvature −1. In 1938, Lars Ahlfors generalised the lemma to maps from the unit disk to other negatively curved surfaces:
Theorem (Schwarz–Ahlfors–Pick). Let U be the unit disk with Poincaré metric $\rho$; let S be a Riemann surface endowed with a Hermitian metric $\sigma$ whose Gaussian curvature is ≤ −1; let $f\colon U \to S$ be a holomorphic function. Then

$\sigma(f(z_1), f(z_2)) \leq \rho(z_1, z_2)$

for all $z_1, z_2 \in U$.
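As an illustrative remark (not part of the original statement), the curvature normalization above refers to the Poincaré metric on the unit disk, one standard form of which is

$ds^2 = \frac{4\,|dz|^2}{(1-|z|^2)^2},$

a metric of constant Gaussian curvature −1.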
A generalization of this theorem was proved by Shing-Tung Yau in 1973.
References
Hyperbolic geometry
Riemann surfaces
Theorems in complex analysis
Theorems in differential geometry | Schwarz–Ahlfors–Pick theorem | [
"Mathematics"
] | 218 | [
"Theorems in differential geometry",
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"Theorems in geometry"
] |
1,488,075 | https://en.wikipedia.org/wiki/Logic%20error | In computer programming, a logic error is a bug in a program that causes it to operate incorrectly, but not to terminate abnormally (or crash). A logic error produces unintended or undesired output or other behaviour, although it may not immediately be recognized as such.
Logic errors occur in both compiled and interpreted languages. Unlike a program with a syntax error, a program with a logic error is a valid program in the language, though it does not behave as intended. Often the only clue to the existence of logic errors is the production of wrong solutions, though static analysis may sometimes spot them.
Debugging logic errors
One of the ways to find this type of error is to write the program's variables out to a file or to the screen in order to determine the error's location in the code. Although this will not work in all cases, for example when calling the wrong subroutine, it is the easiest way to find the problem if the program uses the incorrect results of a bad mathematical calculation.
Examples
This example function in C to calculate the average of two numbers contains a logic error. It is missing parentheses in the calculation, so it compiles and runs but does not give the expected answer due to operator precedence (division is evaluated before addition).
float average(float a, float b)
{
return a + b / 2; // should be (a + b) / 2
}
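Along the lines of the debugging approach described above, a simple way to expose such an error is to print results for known inputs and compare them with the expected values; the corrected function below is a sketch of the intended behaviour, not part of the original example.

#include <stdio.h>

float average_buggy(float a, float b) { return a + b / 2; }   /* logic error: division binds first */
float average_fixed(float a, float b) { return (a + b) / 2; } /* intended arithmetic mean */

int main(void)
{
    /* Printing both results for a known input makes the faulty logic visible:
       the buggy version returns 5.5 instead of the expected 4.0. */
    printf("buggy: %f, fixed: %f\n", average_buggy(3.0f, 5.0f), average_fixed(3.0f, 5.0f));
    return 0;
}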
See also
Syntax error
Off-by-one error
Computer errors
Programming language theory
"Technology"
] | 327 | [
"Computer errors"
] |
1,488,243 | https://en.wikipedia.org/wiki/Sodium%20acetate | Sodium acetate, CH3COONa, also abbreviated NaOAc, is the sodium salt of acetic acid. This salt is colorless deliquescent, and hygroscopic.
Applications
Biotechnological
Sodium acetate is used as the carbon source for culturing bacteria. Sodium acetate can also be useful for increasing yields of DNA isolation by ethanol precipitation.
Industrial
Sodium acetate is used in the textile industry to neutralize sulfuric acid waste streams and also as a photoresist while using aniline dyes. It is also a pickling agent in chrome tanning and helps to impede vulcanization of chloroprene in synthetic rubber production. It is also used to reduce static electricity during production of disposable cotton pads.
Concrete longevity
Sodium acetate is used to mitigate water damage to concrete by acting as a concrete sealant, while also being environmentally benign and cheaper than the commonly used epoxy alternative for sealing concrete against water permeation.
Food
Sodium acetate (anhydrous) is widely used as a shelf-life extending agent and pH-control agent. It is safe to eat at low concentration.
Buffer solution
A solution of sodium acetate (a basic salt of acetic acid) and acetic acid can act as a buffer to keep a relatively constant pH level. This is useful especially in biochemical applications where reactions are pH-dependent in a mildly acidic range (pH 4–6).
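As an illustration (not drawn from the article's sources), the pH of such an acetate buffer can be estimated with the Henderson–Hasselbalch equation, using a pKa of about 4.76 for acetic acid; for an equimolar mixture the logarithmic term vanishes:

pH = pKa + log10([CH3COO−]/[CH3COOH]) ≈ 4.76 + log10(1) = 4.76

Adjusting the acetate-to-acid ratio shifts the pH up or down from this value within the mildly acidic range mentioned above.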
Heating pad
Sodium acetate is also used in heating pads, hand warmers, and hot ice. A supersaturated solution of sodium acetate in water is supplied with a device to initiate crystallization, a process that releases substantial heat.
Sodium acetate trihydrate crystals melt at about 58 °C (136 °F), and the liquid sodium acetate dissolves in the released water of crystallization. When heated past the melting point and subsequently allowed to cool, the aqueous solution becomes supersaturated. This solution is capable of cooling to room temperature without forming crystals. By pressing on a metal disc within the heating pad, a nucleation center is formed, causing the solution to crystallize back into solid sodium acetate trihydrate. The process of crystallization is exothermic. The latent heat of fusion is about 264–289 kJ/kg. Unlike some types of heat packs, such as those dependent upon irreversible chemical reactions, a sodium acetate heat pack can be easily reused by immersing the pack in boiling water for a few minutes, until the crystals are completely dissolved, and allowing the pack to slowly cool to room temperature.
Preparation
For laboratory use, sodium acetate is inexpensive and usually purchased instead of being synthesized. It is sometimes produced in a laboratory experiment by the reaction of acetic acid, commonly in the 5–18% solution known as vinegar, with sodium carbonate ("washing soda"), sodium bicarbonate ("baking soda"), or sodium hydroxide ("lye", or "caustic soda"). Any of these reactions produce sodium acetate and water. When a sodium and carbonate ion-containing compound is used as the reactant, the carbonate anion from sodium bicarbonate or carbonate, reacts with the hydrogen from the carboxyl group (-COOH) in acetic acid, forming carbonic acid. Carbonic acid readily decomposes under normal conditions into gaseous carbon dioxide and water. This is the reaction taking place in the well-known "volcano" that occurs when the household products, baking soda and vinegar, are combined.
CH3COOH + NaHCO3 → CH3COONa + H2CO3
H2CO3 → H2O + CO2
Industrially, sodium acetate trihydrate is prepared by reacting acetic acid with sodium hydroxide using water as the solvent.
CH3COOH + NaOH → CH3COONa + H2O.
To manufacture anhydrous sodium acetate industrially, the Niacet Process is used. Sodium metal ingots are extruded through a die to form a ribbon of sodium metal, usually under an inert gas atmosphere such as N2, and then immersed in anhydrous acetic acid.
2 CH3COOH + 2 Na → 2 CH3COONa + H2.
The hydrogen gas is normally a valuable byproduct.
Structure
The crystal structure of anhydrous sodium acetate has been described as alternating sodium-carboxylate and methyl group layers. Sodium acetate trihydrate's structure consists of distorted octahedral coordination at sodium. Adjacent octahedra share edges to form one-dimensional chains. Hydrogen bonding in two dimensions between acetate ions and water of hydration links the chains into a three-dimensional network.
Reactions
Sodium acetate can be used to form an ester with an alkyl halide such as bromoethane:
CH3COONa + BrCH2CH3 → CH3COOCH2CH3 + NaBr
Sodium acetate undergoes decarboxylation to form methane (CH4) under forcing conditions (pyrolysis in the presence of sodium hydroxide):
CH3COONa + NaOH → CH4 + Na2CO3
Calcium oxide is the typical catalyst used for this reaction.
Cesium salts also catalyze this reaction.
References
External links
Hot Ice – Instructions, Pictures, and Videos
How Sodium Acetate heating pads work
Acetates
E-number additives
Food additives
Organic sodium salts
Photographic chemicals | Sodium acetate | [
"Chemistry"
] | 1,123 | [
"Organic sodium salts",
"Salts"
] |
1,488,266 | https://en.wikipedia.org/wiki/St.%20Clair%20Tunnel | The St. Clair Tunnel is the name for two separate rail tunnels which were built under the St. Clair River between Sarnia, Ontario and Port Huron, Michigan. The original, opened in 1891 and used until it was replaced by a new larger tunnel in 1994, was the first full-size subaqueous tunnel built in North America. (By full-size it is meant that it allowed a railroad to run through it.) It is a National Historic Landmark of the United States, and has been designated a civil engineering landmark by both US and Canadian engineering bodies.
First tunnel (1891–1995)
The first underwater rail tunnel in North America was opened by the St. Clair Tunnel Company in 1891. The company was a subsidiary of the Grand Trunk Railway (GTR), which used the new route to connect with its subsidiary Chicago and Grand Trunk Railway, predecessor to the Grand Trunk Western Railroad (GTW). Before the tunnel's construction, Grand Trunk was forced to use time-consuming rail ferries to transfer cargo.
The tunnel was an engineering marvel in its day, designed by Joseph Hobson. Original techniques were developed for excavating in a compressed-air environment. The Beach tunnelling shield, designed by Alfred Ely Beach, was used to assist workmen in removing material from the route of the tunnel and left a continuous iron tube nearly long. Freight trains used the tunnel initially, with the first passenger trains using it in 1892.
The tunnel measured from portal to portal. The actual width of the St. Clair River at this crossing is only . The tube had a diameter of and hosted a single standard gauge track. It was built at a cost of $2.7 million (equivalent to $ in ).
Locomotives
Steam locomotives were used in the early years to pull trains through the tunnel, however concerns about the potential dangers of suffocation should a train stall in the tunnel led to the installation of catenary wires for electric-powered locomotives by 1907. The first use of electric locomotives through the tunnel in regular service occurred on May 17, 1908. The locomotives were built by Baldwin-Westinghouse.
A total of six electric locomotives were supplied by 1909. Each was equipped with three 240-horsepower single-phase motors and weighed 65 tons. They had a rigid wheelbase and operated on a 3,300-volt, 25-cycle, single-phase current. They had a maximum drawbar pull of 40,000 pounds and a running drawbar pull of at . According to a 1909 publication, it was standard practice to use two units together to pull a 1,000-ton train up the 2% grade. The entire length of the electric line was and the trains were able to have a running speed of to . The Grand Trunk Railway used the locomotives to transfer both passenger and freight trains through the tunnel.
In 1923, the GTR was nationalized by Canada's federal government, which then merged the bankrupt railway into the recently formed Canadian National Railway. CN also assumed control of Grand Trunk Western as a subsidiary and the tunnel company and continued operations much as before.
The electric-powered locomotives were retired in 1958 and scrapped in 1959 after CN withdrew its last steam locomotives on trains passing through the tunnel. New diesel locomotives did not cause the same problems with air quality in this relatively short tunnel.
Freight cars
After World War II, railways in North America started to see the dimensions of freight cars increase. Canadian National (identified as CN after 1960) was forced to rely upon rail ferries to carry freight cars, such as hicube boxcars, automobile carriers, certain intermodal cars and chemical tankers, which exceeded the limits of the tunnel's dimensions.
Recognition
The tunnel was designated a Civil Engineering Landmark by both the Canadian and the American Societies of Civil Engineers in 1991.
The tunnel was declared a U.S. National Historic Landmark in 1993.
The construction of the tunnel has also been recognized as National Historic Event by Parks Canada since 1992, with a plaque at the site.
Second tunnel (1995–present)
The second tunnel was built to handle intermodal rail cars with double-stacked shipping containers, which could not fit through the original tunnel or the Michigan Central Railway Tunnel in Detroit. By the early 1990s, CN had commissioned engineering studies for a replacement tunnel to be built adjacent to the existing St. Clair River tunnel. In 1992, new CN president Paul Tellier foresaw that CN would increase its traffic in the Toronto–Chicago corridor. The Canada-U.S. Free Trade Agreement was implemented in 1989 and discussions for a North American Free Trade Agreement between Canada, the United States and Mexico discussions were underway at that time (NAFTA was implemented in 1994). It was anticipated that import/export traffic on CN's corridor would increase dramatically as a result.
In 1993, CN began construction of the newer and larger tunnel. Tellier declared at the ceremonies:
[The] tunnel will give CN the efficiencies it needs to become a strong competitive force in North American transportation
Unlike the first tunnel, which was hand dug from both ends, the new tunnel was constructed using a tunnel boring machine named Excalibore. It started on the Canadian side and dug its way to the U.S.
The tunnel opened in late 1994 whereupon trains stopped using the adjacent original tunnel, whose bore was sealed. The new tunnel was dedicated on May 5, 1995. It measures from portal to portal with a bore diameter of . It has a single standard gauge track that can accommodate all freight cars currently in service in North America; for this reason, the rail ferries were also retired in 1994 when the new tunnel opened.
On November 30, 2004, CN announced that the new St. Clair River tunnel would be named the Paul M. Tellier Tunnel in honour of the company's retired president, Paul Tellier, who foresaw the impact the tunnel would have on CN's eastern freight corridor. Signs bearing his name were installed over each tunnel portal.
Incident
On June 28, 2019, train CN M38331 28, hauling 100+ cars, had 40 cars derail in the tunnel, spilling of sulfuric acid and closing the tunnel for several days afterwards. The tunnel re-opened on July 10, 2019. The Transportation Safety Board of Canada found that the partial failure of a modified gondola car caused the car's trucks to become askew, leading to the derailment.
Proposed projects
Doubling the tunnel in order to complete double-tracking from South Bend via Port Huron and Sarnia to London. The new tunnel would be either to the north or to the south of the current tunnel; the latter option would require the old tunnel to be filled with concrete.
Electrification with 25 kV AC catenary for the CN Flint Line (South Bend–St. Clair Tunnel–London), the NS Chicago Line and the BNSF Northern Transcon.
See also
List of National Historic Landmarks in Michigan
National Register of Historic Places listings in St. Clair County, Michigan
Port Huron station
Blue Water Bridge, a nearby international highway bridge
References
Sources
Further reading
External links
Historic American Engineering Record (HAER) documentation:
Pictures of both tunnels
at MichMarkers.com
Railway tunnels in Ontario
Railroad tunnels in Michigan
St. Clair River
Canada–United States border crossings
Buildings and structures in Sarnia
Buildings and structures in St. Clair County, Michigan
Port Huron, Michigan
Rail infrastructure in Sarnia
Transportation in St. Clair County, Michigan
Canadian National Railway tunnels
Grand Trunk Railway
Tunnels completed in 1891
Historic Civil Engineering Landmarks
National Historic Landmarks in Michigan
National Register of Historic Places in St. Clair County, Michigan
Railroad-related National Historic Landmarks
Railway buildings and structures on the National Register of Historic Places in Michigan
Railway tunnels on the National Register of Historic Places
1891 establishments in Michigan
1891 establishments in Ontario
Historic American Engineering Record in Michigan
Tunnels completed in 1994
1994 establishments in Ontario
1994 establishments in Michigan
Michigan State Historic Sites in St. Clair County | St. Clair Tunnel | [
"Engineering"
] | 1,614 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
1,488,293 | https://en.wikipedia.org/wiki/Heating%20pad | A heating pad is a pad used for warming of parts of the body in order to manage pain. Localized application of heat causes the blood vessels in that area to dilate, enhancing perfusion to the targeted tissue. Types of heating pads include electrical, chemical and hot water bottles.
Specialized heating pads (mats) are also used in other settings. Heat mats in plant propagation stimulate seed germination and root development; they operate at cooler temperatures. Heat mats also are available in the pet trade, especially as warming spots for reptiles such as lizards and snakes.
Types
Electrical
Electric pads usually operate from household current and must have protection against overheating.
A moist heating pad is used damp on the user's skin. These pads register temperatures from and are intended for deep tissue treatment and can be dangerous if left on unattended. Moist heating pads are used mainly by physical therapists but can be found for home use. A moist cloth can be added with a stupe cover to add more moisture to the treatment.
An electric heating pouch is similar in form to an electric heating pad but is curved to wrap around a joint.
Chemical
Disposable chemical pads employ a one-time exothermic chemical reaction. One type, frequently used for hand warmers, is triggered by unwrapping an air-tight packet containing slightly moist iron powder and salt or catalysts which rusts over a period of hours after being exposed to oxygen in the air. Another type contains separate compartments within the pad; when the user squeezes the pad, a barrier ruptures and the compartments mix, producing heat such as the enthalpy change of solution of calcium chloride dissolving.
The most common reusable heat pads contain a supersaturated solution of sodium acetate in water. Crystallization is triggered by flexing a small flat disc of notched ferrous metal embedded in the liquid. Pressing the disc releases very tiny adhered crystals of sodium acetate into the solution which then act as nucleation sites for the crystallization of the sodium acetate into the hydrated salt (sodium acetate trihydrate, CH3COONa · 3 H2O). Because the liquid is supersaturated, this causes the solution to begin to crystallize over a few seconds, typically by propagating from the initial nucleation site and eventually causing the entire contained liquid to solidify, thereby releasing the thermal energy of the crystal lattice. The use of the metal disc was invented in 1978.
The pad can be reused by placing it in boiling water for 10–15 minutes, which redissolves the sodium acetate trihydrate in the contained water and reconstitutes a supersaturated solution. Once the pad has returned to room temperature it can be triggered again. Triggering the pad before it has reached room temperature results in the pad reaching a lower peak temperature, compared to waiting until it has completely cooled.
This process can be repeated indefinitely.
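As a rough worked example (not from the article's sources), a pad containing 100 g of sodium acetate trihydrate, with a latent heat of crystallization commonly cited as roughly 264–289 kJ/kg, releases on the order of

Q ≈ 0.1 kg × (264–289 kJ/kg) ≈ 26–29 kJ

of heat as it solidifies; the assumed 100 g fill is purely illustrative.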
High specific-heat capacity materials
Heating packs can also be made by filling a container with a material that has a high specific heat capacity, which then gradually releases the heat over time. A hot water bottle is the most familiar example of this type of heating pad.
A microwavable heating pad is a heating pad that is warmed by placing it in a microwave oven before use. Microwavable heating pads are typically made out of a thick insulative fabric such as flannel and filled with grains such as wheat, buckwheat or flax seed. Due to their relative simplicity to make, they are frequently sewn by hand, often with a custom shape to fit the intended area of use.
Often, aromatic compounds will also be added to the filler mixture to create a pleasant or soothing smell when heated. The source of these can vary significantly, ranging from adding essential oils to ground-up spices such as cloves and nutmeg, or even dried rose petals.
Phase-change materials
Phase change materials can be used for heating pads intended to operate at a fixed temperature. The heat of fusion is used to release thermal energy. This results in the pad heating up.
Function
Many episodes of pain come from muscle exertion or strain, which creates tension in the muscles and soft tissues. This tension can constrict circulation, sending pain signals to the brain. Heat application eases pain by:
dilating the blood vessels surrounding the painful area. Increased blood flow provides additional oxygen and nutrients to help heal the damaged muscle tissue.
stimulating sensation in the skin and therefore decreasing the pain signals being transmitted to the brain
increasing the flexibility (and decreasing painful stiffness) of soft tissues surrounding the injured area, including muscles and connective tissue.
As many heating pads are portable, heat may be applied as needed at home, at work, or while travelling. Some physicians recommend alternating heat and ice for pain relief. As with any pain treatment, a physician should be consulted prior to beginning treatment.
See also
Hand warmer
References
Medical treatments
Medical equipment
Heating
"Biology"
] | 1,025 | [
"Medical equipment",
"Medical technology"
] |
1,488,320 | https://en.wikipedia.org/wiki/No-communication%20theorem | In physics, the no-communication theorem (also referred to as the no-signaling principle) is a no-go theorem in quantum information theory. It asserts that during the measurement of an entangled quantum state, it is impossible for one observer to transmit information to another observer, regardless of their spatial separation. This conclusion preserves the principle of causality in quantum mechanics and ensures that information transfer does not violate special relativity by exceeding the speed of light.
The theorem is significant because quantum entanglement creates correlations between distant events that might initially appear to enable faster-than-light communication. The no-communication theorem establishes conditions under which such transmission is impossible, thus resolving paradoxes like the Einstein-Podolsky-Rosen (EPR) paradox and addressing the violations of local realism observed in Bell's theorem. Specifically, it demonstrates that the failure of local realism does not imply the existence of "spooky action at a distance," a phrase originally coined by Einstein.
Informal overview
The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem gives only a sufficient condition: it states that if the Kraus matrices commute, then there can be no communication through the quantum entangled states, and this applies to all communication. From the perspective of relativity and quantum field theory, faster-than-light or "instantaneous" communication is also disallowed. Since the theorem provides only a sufficient condition, there can be other reasons why communication is not allowed.
The basic premise entering into the theorem is that a quantum-mechanical system is prepared in an initial state with some entangled states, and that this initial state is describable as a mixed or pure state in a Hilbert space H. After a certain amount of time, the system is divided in two parts each of which contains some non-entangled states and half of the quantum entangled states, and the two parts become spatially distinct, A and B, sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz, A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'.
An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob.
The proof proceeds by defining how the total Hilbert space H can be split into two parts, HA and HB, describing the subspaces accessible to Alice and Bob. The total state of the system is described by a density matrix σ. The goal of the theorem is to prove that Bob cannot in any way distinguish the pre-measurement state σ from the post-measurement state P(σ). This is accomplished mathematically by comparing the trace of σ and the trace of P(σ), with the trace being taken over the subspace HA. Since the trace is only over a subspace, it is technically called a partial trace. Key to this step is that the (partial) trace adequately summarizes the system from Bob's point of view. That is, everything that Bob has access to, or could ever have access to, measure, or detect, is completely described by a partial trace over HA of the system σ. The fact that this trace never changes as Alice performs her measurements is the conclusion of the proof of the no-communication theorem.
Formulation
The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations.
Alice and Bob perform measurements on system S whose underlying Hilbert space is the tensor product

H = HA ⊗ HB.
It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on H. Any density operator σ on H is a sum of the form:

σ = Σi Ti ⊗ Si,
where Ti and Si are operators on HA and HB respectively. For the following, it is not required to assume that Ti and Si are state projection operators: i.e. they need not necessarily be non-negative, nor have a trace of one. That is, σ can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state σ is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is no communication can be achieved via a shared entangled state.
Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation on the system state of the following kind:

P(σ) = Σk (Vk ⊗ IB) σ (Vk* ⊗ IB),

where the Vk are called Kraus matrices; they act on HA and satisfy

Σk Vk* Vk = IA,

with IA and IB denoting the identity operators on HA and HB respectively. The factor IB in the expression Vk ⊗ IB means that Alice's measurement apparatus does not interact with Bob's subsystem.
Supposing the combined system is prepared in state σ and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is

trHA(P(σ)),

where trHA is the partial trace mapping with respect to Alice's system.
One can directly calculate this state:

trHA(P(σ)) = trHA( Σk (Vk ⊗ IB) σ (Vk* ⊗ IB) )
= trHA( Σk Σi (Vk Ti Vk*) ⊗ Si )
= Σi Σk tr(Vk Ti Vk*) Si
= Σi tr( Ti Σk Vk* Vk ) Si
= Σi tr(Ti) Si
= trHA(σ),

using the cyclic property of the trace and the completeness relation Σk Vk* Vk = IA.
From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all).
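The invariance of Bob's partial trace can also be checked numerically in a simple case. The following sketch is illustrative only: it assumes a single shared Bell pair and a few arbitrary choices of Alice's Kraus operators (do nothing, measure in the z basis, measure in the x basis), and verifies that Bob's reduced density matrix is the maximally mixed state in every case.

```python
import numpy as np

# Minimal numerical sketch of the no-communication theorem for one shared Bell pair.
# Alice holds qubit A, Bob holds qubit B. We check that Bob's reduced density
# matrix tr_A(P(sigma)) is unchanged by any of Alice's local operations V_k (x) I.

I2 = np.eye(2)

def kron(*ops):
    """Tensor product of a sequence of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def partial_trace_A(rho):
    """Trace out Alice's qubit from a 4x4 two-qubit density matrix (A (x) B ordering)."""
    r = rho.reshape(2, 2, 2, 2)        # indices (a, b, a', b')
    return np.einsum('abad->bd', r)    # sum over a = a'

# Shared state: the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
sigma = np.outer(phi, phi.conj())

# Illustrative local operations for Alice, written as Kraus operators on A:
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])        # z-basis measurement
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
Px_plus = np.outer(plus, plus.conj())                     # x-basis measurement
Px_minus = I2 - Px_plus

alice_operations = {
    "do nothing":   [I2],
    "measure in z": [P0, P1],
    "measure in x": [Px_plus, Px_minus],
}

for name, kraus in alice_operations.items():
    # Non-selective post-measurement state P(sigma) = sum_k (V_k (x) I) sigma (V_k (x) I)^dagger
    post = sum(kron(V, I2) @ sigma @ kron(V, I2).conj().T for V in kraus)
    rho_B = partial_trace_A(post)
    # Bob's reduced state is the maximally mixed state I/2 in every case.
    assert np.allclose(rho_B, I2 / 2), name
    print(f"Alice: {name:12s} -> Bob's reduced state:\n{np.round(rho_B.real, 3)}\n")
```

Repeating the check with any other set of Kraus operators of the form Vk ⊗ IB gives the same reduced state for Bob, in line with the calculation above.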
Some comments
The no-communication theorem implies the no-cloning theorem, which states that quantum states cannot be (perfectly) copied. That is, the ability to clone would be sufficient to communicate classical information. To see this, suppose that quantum states could be cloned, and assume the two parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: if Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either the spin-up or the spin-down eigenstate along z. To transmit "1", Alice does nothing to her qubit. Bob creates many copies of his electron's state and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will yield spin-up and spin-down outcomes with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality).
The version of the no-communication theorem discussed in this article assumes that the quantum system shared by Alice and Bob is a composite system, i.e. that its underlying Hilbert space is a tensor product whose first factor describes the part of the system that Alice can interact with and whose second factor describes the part of the system that Bob can interact with. In quantum field theory, this assumption can be replaced by the assumption that Alice and Bob are spacelike separated. This alternate version of the no-communication theorem shows that faster-than-light communication cannot be achieved using processes which obey the rules of quantum field theory.
History
In 1978, Philippe H. Eberhard's paper, Bell's Theorem and the Different Concepts of Locality, rigorously demonstrated the impossibility of faster-than-light communication through quantum systems. Eberhard introduced several mathematical concepts of locality and showed how quantum mechanics contradicts most of them while preserving causality.
Further, in 1988, the paper Quantum Field Theory Cannot Provide Faster-Than-Light Communication by Eberhard and Ronald R. Ross analyzed how relativistic quantum field theory inherently forbids faster-than-light communication. This work elaborates on how misinterpretations of quantum field properties had led to claims of superluminal communication and pinpoints the mathematical principles that prevent it.
With regard to communication, a quantum channel can always be used to transfer classical information by means of shared quantum states.
In 2008, Matthew Hastings gave a counterexample showing that the minimum output entropy is not additive for all quantum channels. Therefore, by an equivalence result due to Peter Shor, the Holevo capacity is not additive in general but can be super-additive, and consequently for some quantum channels entangled inputs across channel uses can transmit more classical information than the single-use Holevo quantity would suggest. Typically, overall communication happens at the same time via quantum and non-quantum channels, and in general time ordering and causality cannot be violated.
On 24 August 2015, a team led by physicist Ronald Hanson of Delft University of Technology in the Netherlands uploaded a paper to the preprint server arXiv reporting the first Bell experiment that simultaneously closed both the detection loophole and the communication loophole. The team used a technique known as "entanglement swapping", which combines the advantages of photons and matter particles. The measurements showed correlations between the two electrons that exceeded the Bell limit, once again supporting the standard view of quantum mechanics and ruling out local hidden-variable theories. Furthermore, since electrons are easily detected, the detection loophole was not an issue, and the large separation between the two electrons also closed the communication loophole.
See also
No-broadcast theorem
No-cloning theorem
No-deleting theorem
No-hiding theorem
No-teleportation theorem
References
Quantum measurement
Quantum information science
Theorems in quantum mechanics
Statistical mechanics theorems
No-go theorems | No-communication theorem | [
"Physics",
"Mathematics"
] | 2,097 | [
"Theorems in dynamical systems",
"Theorems in quantum mechanics",
"No-go theorems",
"Equations of physics",
"Quantum mechanics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Quantum measurement",
"Statistical mechanics",
"Physics theorems"
] |
1,488,382 | https://en.wikipedia.org/wiki/Sump%20pump | A sump pump is a pump used to remove water that has accumulated in a water-collecting sump basin, commonly found in the basements of homes and other buildings, and in other locations where water must be removed, such as construction sites. The water may enter via the perimeter drains of a basement waterproofing system funneling into the basin, or because of rain or natural ground water seepage if the basement is below the water table level.
More generally, a "sump" is any local depression where water may accumulate. For example, many industrial cooling towers have a built-in sump where a pool of water is used to supply water spray nozzles higher in the tower. Sump pumps are used in industrial plants, construction sites, mines, power plants, military installations, transportation facilities, or anywhere that water can accumulate.
Description
Sump pumps are used where basement flooding may otherwise happen, and to solve dampness where the water table is near or above the foundation of a structure. Sump pumps send water away from a location to any place where it is no longer problematic, such as a municipal storm drain, a dry well, or simply an open-air site downhill from the building (sometimes called "pumping to daylight").
Pumps may discharge to the sanitary sewer in older installations. Once considered acceptable, this practice may now violate the plumbing code or municipal bylaws, because it can overwhelm the municipal sewage treatment system. Municipalities urge building owners to disconnect and reroute sump pump discharge away from sanitary sewers. Fines may be imposed for noncompliance. Many homeowners have inherited their sump pump configurations and do not realize that their pump discharges into the sanitary sewer.
Sump pump systems are also utilized in industrial and commercial applications to control water table-related problems in surface soil. An artesian aquifer or periodic high water table situation can cause the ground to become unstable due to water saturation. As long as the pump functions, the surface soil will remain stable. These sumps are typically ten feet in depth or more; lined with corrugated metal pipe that contains perforations or drain holes throughout. They may include electronic control systems with visual and audible alarms and are usually covered to prevent debris and animals from falling in.
Power
Sump pumps may be plugged into an electrical power receptacle. In this case, it is safer to use a dedicated circuit, which is less likely to lose power from a blown fuse or tripped circuit breaker. In addition, the dedicated circuit may not require GFCI protection, which reduces the risk of nuisance tripping due to electrical noise, especially during thunderstorms. The dedicated circuit receptacle may be specially labeled to warn against unplugging the pump, or the plug may be attached using a special retaining bracket to discourage unplugging. Alternatively, the pump may be hardwired to electrical power, so that it cannot be unplugged.
Since a sump basin may overflow if not pumped, a backup system is important for cases when the main power is out for prolonged periods of time, as during a severe storm.
Some sump pumps can be automatically powered from a battery backup system, or a separate battery-powered system may be installed, typically with its float switch set slightly higher than the float switch of the primary pump.
Using a separate generator is another option, although generators often require manual setup and starting.
Alternatively, the municipal pressurized water supply powers some pumps, which can operate using a water turbine, or by using the Venturi effect. This design eliminates the need for electricity but consumes potable water, potentially making it more expensive to operate than electrical pumps and creating an additional water disposal problem. This design is used more for backup pumps rather than primary pumps.
The main consideration with any alternative backup power source is whether it can supply enough power for the pump.
Voltage requirements vary: larger pumps may require 230 volts, while typical residential models in the United States run on 120 volts. Watt and amp requirements also vary; consumer models range from about 700 running watts to 2,300 watts or more. Gallons per watt-hour is a measure of efficiency in sump pumps.
Additionally, sump pumps typically require an extra burst of power, known as starting watts, to get running. This surge can be 1.5 times the running wattage or more, with a corresponding rise in amperage.
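As a rough illustration of the arithmetic involved when sizing a backup power source, the sketch below computes starting watts, current draw, and the gallons-per-watt-hour efficiency figure mentioned above. The example values (1,000 running watts, a 1.5× starting surge, a 120 V supply) are assumptions for the sake of illustration, not specifications of any particular pump.

```python
# Rough backup-power sizing arithmetic. The figures used here are illustrative
# assumptions only, not specifications of any particular sump pump.

def pump_power_requirements(running_watts, surge_factor=1.5, voltage=120):
    """Estimate starting watts and current draw from nameplate running watts."""
    starting_watts = running_watts * surge_factor
    running_amps = running_watts / voltage
    starting_amps = starting_watts / voltage
    return starting_watts, running_amps, starting_amps

def gallons_per_watt_hour(gallons_pumped, watts, hours):
    """Efficiency metric: volume of water moved per unit of electrical energy."""
    return gallons_pumped / (watts * hours)

start_w, run_a, start_a = pump_power_requirements(1000)
print(f"Starting watts: {start_w:.0f} W, running current: {run_a:.1f} A, "
      f"starting current: {start_a:.1f} A")
print(f"Efficiency: {gallons_per_watt_hour(2000, 1000, 1):.2f} gal/Wh")
```

A backup source sized only for the running wattage may stall on startup, which is why the surge figure matters when choosing a generator or battery inverter.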
Industrial sump pumps may be powered by other means, such as steam or compressed air, especially for backup pumps or in locations where access for maintenance is difficult.
Physical configuration
There are generally two types of residential sump pumps: pedestal and submersible. In the case of the pedestal pump, the motor is not sealed and sits on a column to protect it from moisture. This type of installation is more conspicuous since the motor is usually positioned above the top of the basin. Within the column, the motor shaft is connected to the impeller, which rotates inside a scroll housing at the bottom of the basin. A switch mounted in the motor turns the motor on as the water level in the basin rises, and shuts it off when the basin is empty; this automatic level control permits operation of the pump without constant monitoring. In a submersible pump, the motor is sealed in a special housing, with the impeller mounted directly to the shaft. Automatic level control can be integrated within the pump, or as an auxiliary switch with little or no physical connection to the pump.
Which type of sump pump to use is a matter of personal choice. Pedestal sump pumps were the only option through most of the 20th Century, with cast iron and bronze construction similar to large commercial pumps. But technological and design improvements made submersible pumps much more reliable, and the compact design allows the basin to be covered, eliminating a trip and fall hazard, and reducing the potential of radon intrusion in the home. Since the level control switch is the primary point of failure with any sump pump, submersible pumps with auxiliary, or "piggyback", designs allow for replacement of the switch with minimal effort. Today, submersible pumps are the most popular style, while residential pedestal pumps are low volume, economical designs with limited service life.
Components
In the United States, modern sump pump components are standardized. They consist of:
A basin, made of composite materials such as ABS plastic or fiberglass-reinforced resin, with a minimum inside diameter of 18 inches, a minimum depth of 30 inches, an inlet connection to accommodate a 4-inch drainpipe, and a capacity of approximately 15 to 25 US gallons (60 to 100 litres);
A sump pump, typically 1/3 or 1/2 horsepower (250 or 370 W), operating on 115 volts and plugging into a standard receptacle, with 1.25-inch or 1.5-inch National Pipe Thread (NPT) pipe connections and a level control to allow for automatic operation;
A section of pipe to carry water away from the home; manufactured from steel, iron, or plastic, National Sanitation Foundation (or NSF) rated for pressure;
A one-way valve (check valve) that allows water to flow away from the basin, but prevents water in the pipe from returning to the basin.
A basin cover that provides passage for the power cord, discharge pipe, and possibly a vent pipe.
Pump selection
Selection of a sump pump may consider:
Automatic vs. manual operation – pump may be controlled automatically by a level switch.
Power – Sump pump motive power will vary from 1/4 horsepower to multiple horsepower.
Head pressure – The hydraulic head pressure of a sump pump describes the maximum height to which the pump will move water. For instance, a sump pump with a 15-foot (4.6 m) maximum head (also called a shutoff head) will raise water up 15 feet (4.6 m) before it completely loses flow. A worked example of this calculation is sketched after this list.
Power cord length – Running a more powerful electrical motor a long distance from the main service panel will require heavier gauge wiring to assure sufficient voltage at the motor for proper pump performance.
Phase and voltage – Sump pumps powered from the AC mains are available with single-phase or three-phase induction motors, rated for 110–120, 220–240, or 460 volts. Three-phase power is typically not available in residential locations, but is common in industrial locations.
Water level sensing switch type – Pressure switches are fully enclosed, usually inside the pump body, making them immune to obstructions or floating debris in the sump basin. Float switches, particularly the types attached to the end of a short length of flexible electrical cable, can get tangled or obstructed, especially if the pump is prone to movement in the basin due to torque effects when starting and stopping. Pressure switches are typically factory set and not adjustable, while float switches can be adjusted in place to set the high and low water levels in the sump basin. Another option is a solid state switch utilizing field-effect technology, which can turn on and off the pump through use of an internal switch and a piggyback plug.
Backup system and alarm for critical applications.
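The head calculation referenced above can be sketched as follows. Total head is the vertical lift plus an allowance for pipe friction, and the pump's shutoff head should exceed it with some margin. The 10% friction allowance and the example distances below are assumptions for illustration only, not engineering guidance.

```python
# Illustrative head-pressure check for pump selection. The friction allowance
# and example figures are assumptions, not recommendations for a real installation.

FT_OF_WATER_PER_PSI = 2.31   # approximate conversion for water

def total_dynamic_head(vertical_lift_ft, horizontal_run_ft, friction_allowance=0.10):
    """Vertical lift plus a simple percentage allowance for pipe friction losses."""
    return vertical_lift_ft + friction_allowance * (vertical_lift_ft + horizontal_run_ft)

def head_to_psi(head_ft):
    """Convert feet of water column to pounds per square inch."""
    return head_ft / FT_OF_WATER_PER_PSI

required = total_dynamic_head(vertical_lift_ft=9.0, horizontal_run_ft=20.0)
print(f"Required head: {required:.1f} ft ({head_to_psi(required):.1f} psi)")
print("Pump with a 15 ft shutoff head is adequate:", required < 15.0)
```

Because flow drops off as the operating point approaches the shutoff head, a pump is normally chosen so that the required head falls well below its maximum.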
Backup components
A secondary, typically battery-powered sump pump can operate if the first pump fails. A battery-powered secondary pump will have a separate battery and charger system to provide power if normal supply is interrupted.
Alternative sump pump systems can be driven by municipal water pressure. Water-powered ejector pumps have a separate pump, float and check valve. The float controlling a backup pump is mounted in the sump pit above the normal high water mark. Under normal conditions, the main electric powered sump pump will handle all the pumping duties. When water rises higher than normal for any reason, the backup float in the sump is lifted and activates the backup sump pump. An ejector pump can also be connected to a garden hose to supply high-pressure water, with another hose to carry the water away. Although such ejector pumps waste water and are relatively inefficient, they have the advantage of having no moving parts and offer the utmost in reliability.
If the backup sump system is rarely used, a component failure may not be noticed, and the system may fail when needed. Some battery control units test the system periodically and alert on failed electrical components.
A simple, battery-powered water alarm can be hung a short distance below the top of the sump to sound an alarm should the water level rise too high. The alarm may sound locally only, or optionally may trigger remote notification via a telephone or cellphone data link.
Maintenance
Sump basins and sump pumps must be maintained. Typical recommendations suggest examining and testing equipment every year. Pumps running frequently due to higher water table, water drainage, or weather conditions should be examined more frequently. Sump pumps, being mechanical devices, will fail eventually, which could lead to a flooded basement and costly repairs. Redundancy in the system (multiple/secondary pumps) can help to avoid problems when maintenance and repairs are needed on the primary system.
When examining a sump pump and cleaning it, dirt, gravel, sand, and other loose debris should be removed to increase efficiency and extend the life of the pump. These obstructions can decrease the pump's ability to drain the sump, and can allow the sump to overflow. The check valve can also jam from the debris. Periodic examination of the discharge line opening, when applicable, ensures there are no obstructions or restrictions in the line. A partially obstructed discharge line can force a sump pump to work harder and increase its chance of overheating and failure.
Float switches are used to automatically turn the sump pump on when water rises to a preset level. Float switches must be kept clear of any obstructions within the sump. A float guard can be used to prevent the float switch from accidentally resting on the pump housing, and remaining on. If a sump pump remains operating for a long time (especially in the absence of cooling water for submersibles) it may overheat or burn out. Because mechanical float switches can wear out, they should be periodically tested by actuating them manually to assure that they continue to move freely and that the switch contacts are opening and closing properly.
If left in standing water, pedestal pumps should be manually run from time to time, even if the water in the sump is not high enough to trip the float switch. This is because these pumps are incapable of removing all the water in a sump, and the lower bearing or bushing for the pump impeller shaft tends to remain submerged, making it prone to corrosion and eventually freezing the drive shaft in the bearing. In the alternative, a pedestal pump that is expected to remain idle for an extended time should be removed from the sump and stored out of water, or the sump should be mopped out to bring the level of the remaining water well below the lower shaft bearing.
Lastly, if an independent water detector and alarm system is installed, it should be tested regularly.
See also
Bilge pump
Chopper pump
Sewage pump
References
Further reading
Ann Cameron Siegal, "The Sump Pump's Fault, or Yours?", Washington Post, August 9, 2008
"Sump Pump Helps Keep Water Out", North Dakota State University Extension Service, June 14, 2005
Thomas Scherer, "Sump Pump Questions", North Dakota State University Extension Service
"Sizing Up a Sump Pump" (pdf), University of Illinois Extension
Home appliances
Plumbing
Pumps
Stormwater management | Sump pump | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 2,775 | [
"Pumps",
"Machines",
"Turbomachinery",
"Water treatment",
"Stormwater management",
"Plumbing",
"Water pollution",
"Physical systems",
"Construction",
"Hydraulics",
"Home appliances"
] |
1,488,422 | https://en.wikipedia.org/wiki/Arab%20tone%20system | The modern Arab tone system, or system of musical tuning, is based upon the theoretical division of the octave into twenty-four equal divisions or 24-tone equal temperament, the distance between each successive note being a quarter tone (50 cents). Each tone has its own name not repeated in different octaves, unlike systems featuring octave equivalency. The lowest tone is named yakah and is determined by the lowest pitch in the range of the singer. The next higher octave is nawa and the second tuti. However, from these twenty-four tones, seven are selected to produce a scale and thus the interval of a quarter tone is never used and the three-quarter tone or neutral second should be considered the characteristic interval.
By contrast, in the European equally tempered scale the octave is divided into twelve equal divisions, or exactly half as many as the Arab system. Thus, when Arabic music is written in European musical notation, a slashed or reversed flat sign is used to indicate a quarter-tone flat, a standard flat symbol for a half-tone flat, and a flat sign combined with a slashed or reversed flat sign for a three-quarter-tone flat, sharp with one vertical line for quarter sharps, standard sharp symbol (♯) for a half-step sharp, and a sharp with three vertical lines for a three-quarter-tone sharp. A two octave range starting with yakah arbitrarily on the G below middle C is used.
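The arithmetic of the 24-tone equal division is straightforward: each quarter-tone step corresponds to a frequency ratio of 2^(1/24), i.e. 50 cents. The sketch below computes the quarter-tone pitches over two octaves, assuming for illustration only that yakah is anchored on the G below middle C (about 196 Hz), following the notational convention just described; the Arabic tone names themselves are not computed.

```python
# Quarter-tone frequencies for two octaves of 24-tone equal temperament.
# Assumption for illustration: yakah is placed on the G below middle C (G3, ~196 Hz),
# matching the notational convention described above.

YAKAH_HZ = 196.0          # G3
STEP = 2 ** (1 / 24)      # one quarter tone = 50 cents

def quarter_tone_frequency(steps_above_yakah):
    """Frequency of the pitch a given number of quarter-tone steps above yakah."""
    return YAKAH_HZ * STEP ** steps_above_yakah

def cents_above_yakah(steps_above_yakah):
    """Each quarter-tone step spans 50 cents."""
    return 50 * steps_above_yakah

for n in range(25):       # first octave: 24 quarter-tone steps plus the octave itself
    print(f"step {n:2d}: {quarter_tone_frequency(n):7.2f} Hz "
          f"({cents_above_yakah(n):4d} cents)")

# Sanity checks: 24 steps give exactly one octave, 48 give two.
assert abs(quarter_tone_frequency(24) - 2 * YAKAH_HZ) < 1e-9
assert abs(quarter_tone_frequency(48) - 4 * YAKAH_HZ) < 1e-9
```

In a maqam row only seven of these pitches are selected, so the bare quarter-tone interval itself never appears melodically; the three-quarter tone (150 cents) is the characteristic step.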
In practice much fewer than twenty-four tones are used in a single performance. All twenty-four tones are individual pitches differentiated into a hierarchy of important pitches—pillars—which occur more frequently in the tone rows of traditional music and most often begin tone rows, and scattered less important or seldom occurring pitches (see tonality).
The specific notes used in a piece will be part of one of more than seventy modes or maqam rows named after characteristic tones that are rarely the first tone (unlike in European-influenced music theory where the tonic is listed first). The rows are heptatonic and constructed from augmented, major, neutral, and minor seconds. Many different but similar ratios are proposed for the frequency ratios of the tones of each row and performance practice, as of 1996, has not been investigated using electronic measurements.
The current tone system is derived from the work of Farabi (d. 950 CE) (heptatonic scales constructed from seconds), who used a 25-tone unequal scale (see tetrachord), and Mikha'il Mishaqah (1800–1888) who first presented the 24-tone equal-tempered division. Some strict traditionalists and musicians also use a 17-tone set, rejecting the 24-tone division as commercial.
See also
Jins
Arabic maqam
References
T
Equal temperaments
Microtonality | Arab tone system | [
"Physics"
] | 576 | [
"Physical quantities",
"Musical symmetry",
"Logarithmic scales of measurement",
"Equal temperaments",
"Symmetry"
] |
1,488,463 | https://en.wikipedia.org/wiki/Agroforestry | Agroforestry (also known as agro-sylviculture or forest farming) is a land use management system that integrates trees with crops or pasture. It combines agricultural and forestry technologies. As a polyculture system, an agroforestry system can produce timber and wood products, fruits, nuts, other edible plant products, edible mushrooms, medicinal plants, ornamental plants, animals and animal products, and other products from both domesticated and wild species.
Agroforestry can be practiced for economic, environmental, and social benefits, and can be part of sustainable agriculture. Apart from production, benefits from agroforestry include improved farm productivity, healthier environments, reduction of risk for farmers, beauty and aesthetics, increased farm profits, reduced soil erosion, creating wildlife habitat, less pollution, managing animal waste, increased biodiversity, improved soil structure, and carbon sequestration.
Agroforestry practices are especially prevalent in the tropics, especially in subsistence smallholdings areas, with particular importance in sub-Saharan Africa. Due to its multiple benefits, for instance in nutrient cycle benefits and potential for mitigating droughts, it has been adopted in the USA and Europe.
Definition
At its most basic, agroforestry is any of various polyculture systems that intentionally integrate trees with crops or pasture on the same land. An agroforestry system is intensively managed to optimize helpful interactions between the plants and animals included, and “uses the forest as a model for design."
Agroforestry shares principles with polyculture practices such as intercropping, but can also involve much more complex multi-strata agroforests containing hundreds of species. Agroforestry can also utilise nitrogen-fixing plants such as legumes to restore soil nitrogen fertility. The nitrogen-fixing plants can be planted either sequentially or simultaneously.
History and scientific study
The term “agroforestry” was coined in 1973 by Canadian forester John Bene, but the concept includes agricultural practices that have existed for millennia.
Scientific agroforestry began in the 20th century with ethnobotanical studies carried out by anthropologists. However, indigenous communities that have lived in close relationships with forest ecosystems have practiced agroforestry informally for centuries. For example, Indigenous peoples of California periodically burned oak and other habitats to maintain a ‘pyrodiversity collecting model,’ which allowed for improved tree health and habitat conditions. Likewise Native Americans in the eastern United States extensively altered their environment and managed land as a “mosaic” of woodland areas, orchards, and forest gardens.
Agroforestry in the tropics is ancient and widespread throughout various tropical areas of the world, notably in the form of "tropical home gardens." Some “tropical home garden” plots have been continuously cultivated for centuries. A “home garden” in Central America could contain 25 different species of trees and food crops on just one-tenth of an acre. "Tropical home gardens" are traditional systems developed over time by growers without formalized research or institutional support, and are characterized by a high complexity and diversity of useful plants, with a canopy of tree and palm species that produce food, fuel, and shade, a mid-story of shrubs for fruit or spices, and an understory of root vegetables, medicinal herbs, beans, ornamental plants, and other non-woody crops.
In 1929, J. Russel Smith published Tree Crops: A Permanent Agriculture, in which he argued that American agriculture should be changed two ways: by using non-arable land for tree agriculture, and by using tree-produced crops to replace the grain inputs in the diets of livestock. Smith wrote that the honey locust tree, a legume that produced pods that could be used as nutritious livestock feed, had great potential as a crop. The book's subtitle later led to the coining of the term permaculture.
The most studied agroforestry practices involve a simple interaction between two components, such as simple configurations of hedges or trees integrated with a single crop. There is significant variation in agroforestry systems and the benefits they have. Agroforestry as understood by modern science is derived from traditional indigenous and local practices, developed by living in close association with ecosystems for many generations.
Benefits
Benefits include increasing farm productivity and profitability, reduced soil erosion, creating wildlife habitat, managing animal waste, increased biodiversity, improved soil structure, and carbon sequestration.
Agroforestry systems can provide advantages over conventional agricultural and forest production methods. They can offer increased productivity; social, economic and environmental benefits, as well as greater diversity in the ecological goods and services provided. These benefits are conditional on good farm management. This includes choosing the right trees, as well as pruning them regularly etc.
Biodiversity
Biodiversity in agroforestry systems is typically higher than in conventional agricultural systems. Two or more interacting plant species in a given area create a more complex habitat supporting a wider variety of fauna.
Agroforestry is important for biodiversity for different reasons. It provides a more diverse habitat than a conventional agricultural system in which the tree component creates ecological niches for a wide range of organisms both above and below ground. The life cycles and food chains associated with this diversification initiate an agroecological succession that creates functional agroecosystems that confer sustainability. Tropical bat and bird diversity, for instance, can be comparable to the diversity in natural forests. Although agroforestry systems do not provide as many floristic species as forests and do not show the same canopy height, they do provide food and nesting possibilities. A further contribution to biodiversity is that the germplasm of sensitive species can be preserved. As agroforests have no natural clear areas, habitats are more uniform. Furthermore, agroforests can serve as corridors between habitats. Agroforestry can help conserve biodiversity, positively influencing other ecosystem services.
Soil and plant growth
Depleted soil can be protected from soil erosion by groundcover plants such as naturally growing grasses in agroforestry systems. These help to stabilise the soil as they increase cover compared to short-cycle cropping systems. Soil cover is a crucial factor in preventing erosion. Cleaner water through reduced nutrient and soil surface runoff can be a further advantage of agroforestry. Trees can help reduce water runoff by decreasing water flow and evaporation and thereby allowing for increased soil infiltration. Compared to row-cropped fields nutrient uptake can be higher and reduce nutrient loss into streams.
Further advantages concerning plant growth:
Bioremediation
Drought tolerance
Increased crop stability
Sustainability
Agroforestry systems can provide ecosystem services which can contribute to sustainable agriculture in the following ways:
Diversification of agricultural products, such as fuelwood, medicinal plants, and multiple crops, increases income security
Increased food security and nutrition by restored soil fertility, crop diversity and resilience to weather shocks for food crops
Land restoration through reducing soil erosion and regulating water availability
Multifunctional site use, e.g., crop production and animal grazing
Reduced deforestation and pressure on woodlands by providing farm-grown fuelwood
Possibility of reduced chemical inputs, e.g. due to improved use of fertilizer, increased resilience against pests, and increased ground cover, which reduces weeds
Growing space for medicinal plants e.g., in situations where people have limited access to mainstream medicines
According to the United Nations Food and Agriculture Organization (FAO)'s The State of the World’s Forests 2020, adopting agroforestry and sustainable production practices, restoring the productivity of degraded agricultural lands, embracing healthier diets and reducing food loss and waste are all actions that urgently need to be scaled up. Agribusinesses must meet their commitments to deforestation-free commodity chains and companies that have not made zero-deforestation commitments should do so.
Other environmental goals
Carbon sequestration is an important ecosystem service. Agroforestry practices can increase carbon stocks in soil and woody biomass. Trees in agroforestry systems, like in new forests, can recapture some of the carbon that was lost by cutting existing forests. They also provide additional food and products. The rotation age and the use of the resulting products are important factors controlling the amount of carbon sequestered. Agroforests can reduce pressure on primary forests by providing forest products.
Adaptation to climate change
Agroforestry can significantly contribute to climate change mitigation along with adaptation benefits. A case study in Kenya found that the adoption of agroforestry drove carbon storage and increased livelihoods simultaneously among small-scale farmers. In this case, maintaining the diversity of tree species, as well as land use and farm size, were important factors.
Poor smallholder farmers have turned to agroforestry as a means to adapt to climate change. A study from the CGIAR research program on Climate Change, Agriculture and Food Security found from a survey of over 700 households in East Africa that at least 50% of those households had begun planting trees in a change from earlier practices. The trees were planted with fruit, tea, coffee, oil, fodder and medicinal products in addition to their usual harvest. Agroforestry was one of the most widespread adaptation strategies, along with the use of improved crop varieties and intercropping.
Tropical
Trees in agroforestry systems can produce wood, fruits, nuts, and other useful products. Agroforestry practices are most prevalent in the tropics, especially in subsistence smallholdings areas such as sub-Saharan Africa.
Research with the leguminous tree Faidherbia albida in Zambia showed maximum maize yields of 4.0 tonnes per hectare using fertilizer and inter-cropped with the trees at densities of 25 to 100 trees per hectare, compared to average maize yields in Zimbabwe of 1.1 tonnes per hectare.
Hillside systems
A well-studied example of an agroforestry hillside system is the Quesungual Slash and Mulch Agroforestry System in Lempira Department, Honduras. This region was historically used for slash-and-burn subsistence agriculture. Due to heavy seasonal floods, the exposed soil was washed away, leaving infertile barren soil exposed to the dry season. Farmed hillside sites had to be abandoned after a few years and new forest was burned. The UN's FAO helped introduce a system incorporating local knowledge consisting of the following steps:
Thin and prune hillside secondary forest, leaving individual beneficial trees, especially nitrogen-fixing trees. They help reduce soil erosion, maintain soil moisture, provide shade and provide an input of nitrogen-rich organic matter in the form of litter.
Plant maize in rows. This is a traditional local crop.
Harvest from the dried plant and plant beans. The maize stalks provide an ideal structure for the climbing bean plants. Bean is a nitrogen-fixing plant and therefore helps introduce more nitrogen.
Pumpkins can be planted during this time. The plant's large leaves and horizontal growth provide additional shade and moisture retention. It does not compete with the beans for sunlight since the latter grow vertically on the stalks.
Every few seasons, rotate the crop by grazing cattle, allowing grass to grow and adding soil organic matter and nutrients (manure). The cattle prevent total reforestation by grazing around the trees.
Repeat.
Kuojtakiloyan
Kuojtakiloyan is a Masehual term that means 'useful forest' or 'forest that produces', and it is an agroforestry system developed and maintained by indigenous peoples of the Sierra Norte of the State of Puebla, Mexico. It has become a vital source of resources (food, medicinal herbs, fuel, ornamental flowers, etc.) for the local population, while also transforming the environment in a way that respects its biodiversity and conserves nature. The kuojtakiloyan comes directly from ancestral Nahua and Totonaku knowledge of the natural environment. Although little known among the mainstream Mexican population, it is cited by agronomic experts around the world as a successful case of sustainable agroforestry practiced communally.
The kuojtakiloyan is a jungle-landscaped polyculture in which avocados, sweet potatoes, cinnamon, black cherries, chalahuits, citrus fruits, gourds, macadamia, mangoes, bananas and sapotes are grown. In addition, a wide variety of wild edible mushrooms and herbs (quelites) are harvested. Jonote is planted because its fiber is useful in basketry, as is fast-growing bamboo, which is used to build cabins and other structures. Alongside the kuojtakiloyan, shade coffee is grown (café bajo sombra in Spanish; kafentaj in Masehual). Shade is essential to obtain high-quality coffee. The local population has favored the proliferation of the stingless bee (pisilnekemej) by including the plants that it pollinates. From the bees they obtain honey, pollen, wax and propolis.
Shade crops
With shade applications, crops are purposely raised under tree canopies within the shady environment. The understory crops are shade tolerant or the overstory trees have fairly open canopies. A conspicuous example is shade-grown coffee. This practice reduces weeding costs and improves coffee quality and taste.
Crop-over-tree systems
Crop-over-tree systems employ woody perennials in the role of a cover crop. For this, small shrubs or trees pruned to near ground level are utilized. The purpose is to increase in-soil nutrients and/or to reduce soil erosion.
Intercropping and alley cropping
With alley cropping, crop strips alternate with rows of closely spaced tree or hedge species. Normally, the trees are pruned before planting the crop. The cut leafy material - for example, from Alchornea cordifolia and Acioa barteri - is spread over the crop area to provide nutrients. In addition to nutrients, the hedges serve as windbreaks and reduce erosion.
In tropical areas of North and South America, various species of Inga such as I. edulis and I. oerstediana have been used for alley cropping.
Intercropping is advantageous in Africa, particularly in relation to improving maize yields in the sub-Saharan region. Use relies upon the nitrogen-fixing tree species Sesbania sesban, Tephrosia vogelii, Gliricidia sepium and Faidherbia albida. In one example, a ten-year experiment in Malawi showed that, by using the fertilizer tree Gliricidia (G. sepium) on land on which no mineral fertilizer was applied, maize/corn yields averaged higher than in plots without fertilizer trees or mineral fertilizers.
Weed control is inherent to alley cropping, by providing mulch and shade.
Syntropic systems
Syntropic farming, syntropic agriculture or syntropic agroforestry is an organic, permaculture agroforestry system developed by Ernst Götsch in Brazil. Sometimes this system is referred to as a successional agroforestry system (SAFS), a term that can also refer to a broader concept originating in Latin America. The system focuses on replicating the natural accumulation of nutrients in ecosystems and the process of secondary succession, in order to create productive forest ecosystems that yield food, ecosystem services and other forest products.
The system relies heavily on several processes:
Dense planting mixing perennial and annual crops
Rapid cutting and composting of fast growing pioneer species, to accumulate nutrients and biomass
Creating greater water retention on the land through improving penetration of water into soil and plant water cycling
The systems were first developed in tropical Brazil, but many similar systems have been tested in temperate environments as soil and ecosystem restoration tactics.
The framework for syntropic agroforestry is promoted by Agenda Gotsch, an organization established to disseminate the system.
Syntropic systems have a number of documented benefits, including increased soil water penetration, increased productivity on marginal land, and soil temperature moderation.
In Burma
Taungya is a system from Burma. In the initial stages of an orchard or tree plantation, trees are small and widely spaced. The free space between the newly planted trees accommodates a seasonal crop. Instead of costly weeding, the underutilized area provides an additional output and income. More complex taungyas use between-tree space for multiple crops. The crops become more shade tolerant as the tree canopies grow and the amount of sunlight reaching the ground declines. Thinning can maintain sunlight levels.
In India
Itteri agroforestry systems have been used in Tamil Nadu since time immemorial. They involve the deliberate management of multipurpose trees and shrubs grown in intimate association with herbaceous species. They are often found along village and farm roads, small gullies, and field boundaries.
Bamboo-based agroforestry systems (Dendrocalamus strictus + sesame–chickpea) have been studied for enhancing productivity in semi-arid tropics of central India.
In Africa
A project to mitigate climate change with agriculture was launched in 2019 by the "Global EverGreening Alliance". The target is to sequester carbon from the atmosphere. By 2050 the restored land should sequester 20 billion tons of carbon annually.
Shamba (Swahili for 'plantation') is an agroforestry system practiced in East Africa, particularly in Kenya. Under this system, various crops are combined: bananas, beans, yams and corn, to which are added timber resources, beekeeping, medicinal herbs, mushrooms, forest fruits, fodder for livestock, etc.
In Hawai'i
Native Hawaiians formerly practiced agroforestry adapted to the islands' tropical landscape. Their ability to do this influenced the region's carrying capacity, social conflict, cooperation, and political complexity. More recently, after scientific study of loʻi systems, attempts have been made to reintroduce dryland agroforestry on Hawaiʻi Island and Maui, fostering interdisciplinary collaboration between political leaders, landowners, and scientists.
Temperate
Although originally a concept in tropical agronomy, agroforestry's multiple benefits, for instance in nutrient cycles and potential for mitigating droughts, have led to its adoption in the USA and Europe.
The United States Department of Agriculture distinguishes five applications of agroforestry for temperate climates, namely alley cropping, forest farming, riparian forest buffers, silvopasture, and windbreaks.
Alley cropping
Alley cropping can also be used in temperate climates. Strip cropping is similar to alley cropping in that trees alternate with crops. The difference is that, with alley cropping, the trees are in single rows. With strip cropping, the trees or shrubs are planted in wide strips. The purpose can be, as with alley cropping, to provide nutrients, in leaf form, to the crop. With strip cropping, the trees can have a purely productive role, providing fruits, nuts, etc. while, at the same time, protecting nearby crops from soil erosion and harmful winds.
Inga alley cropping
Inga alley cropping is the planting of agricultural crops between rows of Inga trees. It has been promoted by Mike Hands.
Using the Inga tree for alley cropping has been proposed as an alternative to the much more ecologically destructive slash-and-burn cultivation. The technique has been found to increase yields. It is a form of sustainable agriculture, as it allows the same plot to be cultivated repeatedly, eliminating the need to burn rainforest to obtain fertile plots.
Inga tree
Inga trees are native to many parts of Central and South America. Inga grows well on the acid soils of the tropical rainforest and former rainforest. They are leguminous and fix nitrogen into a form usable by plants. Mycorrhiza growing within the roots (arbuscular mycorrhiza) was found to take up spare phosphorus, allowing it to be recycled into the soil.
Other benefits of Inga include the fact that it is fast growing with thick leaves which, when left on the ground after pruning, form a thick cover that protects both soil and roots from the sun and heavy rain. It branches out to form a thick canopy so as to cut off light from the weeds below and withstands careful pruning year after year.
History
The technique was first developed and trialled by tropical ecologist Mike Hands in Costa Rica in the late 1980s and early '90s. Research funding from the EEC allowed him to experiment with species of Inga. Although alley cropping had been widely researched, it was thought that the tough pinnate leaves of the Inga tree would not decompose quickly enough.
The Inga is used as hedges and pruned when large enough to provide a mulch in which bean and corn seeds are planted. This results in both improving crop yields and the retention of soil fertility on the plot that is being farmed. Hands had seen the devastating consequences that are caused by slash and burn agriculture while working in Honduras; this new technique seemed to offer the solution to the environmental and economic problems faced by so many slash and burn farmers.
Although this technique has the potential to save rainforest and lift many out of poverty, Inga alley cropping has not yet reached its full potential, although the charity Inga Foundation, headed by Mike Hands, has been consulted about potential projects in Haiti (which is almost completely deforested) and the Congo. Discussions have also been mooted about projects in Peru and Madagascar. Another charity, Rainforest Saver, formed to promote Inga alley cropping, started a project in 2016 in Ecuador, in the area of the Amazon where Inga edulis originates from, and by the end of 2018 more than 60 farms in the area had Inga plots. Rainforest Saver also started a project in Cameroon in 2009, where in late 2018 there were around 100 farms with Inga plots, mainly in Western Cameroon.
Method
For Inga alley cropping the trees are planted in rows (hedges) close together, with a gap, the alley, of about 4m between the rows. An initial application of rock phosphate has kept the system going for many years.
When the trees have grown, usually in about two years, the canopies close over the alley and cut off the light and so smother the weeds.
The trees are then carefully pruned. The larger branches are used for firewood. The smaller branches and leaves are left on the ground in the alleys. These rot down into a good mulch (compost). If any weeds haven't been killed off by lack of light the mulch smothers them.
The farmer then pokes holes into the mulch and plants their crops into the holes.
The crops grow, fed by the mulch. The crops feed on the lower layers while the latest prunings form a protective layer over the soil and roots, shielding them from both the hot sun and heavy rain. This makes it possible for the roots of both the crops and the trees to stay to a considerable extent in the top layer of soil and the mulch, thus benefiting from the food in the mulch, and escaping soil pests and toxic minerals lower down. Pruning the Inga also makes its roots die back, thus reducing competition with the crops.
Forest farming
In forest farming, high-value crops are grown under a suitably-managed tree canopy. This is sometimes called multi-story cropping, or in tropical villages as home gardening. It can be practised at varying levels of intensity but always involves some degree of management; this distinguishes it from simple harvesting of wild plants from the forest.
Riparian forest buffers
Riparian buffers are strips of permanent vegetation located along or near active watercourses or in ditches where water runoff concentrates. The purpose is to keep nutrients and soil from contaminating the water.
Silvopasture
Trees can benefit fauna in a silvopasture system, where cattle, goats, or sheep browse on grasses grown under trees.
In hot climates, the animals are less stressed and put on weight faster when grazing in a cooler, shaded environment. The leaves of trees or shrubs can also serve as fodder. Similar systems support other fauna. Deer and pigs gain when living and feeding in a forest ecosystem, especially when the tree forage nourishes them. In aquaforestry, trees shade fish ponds. In many cases, the fish eat the leaves or fruit from the trees.
The dehesa or montado system of silviculture are an example of pigs and bulls being held extensively in Spain and Portugal.
Windbreaks
Windbreaks reduce wind velocity over and around crops. This increases yields through reduced drying of the crop and/or by preventing the crop from toppling in strong wind gusts.
In Switzerland
Since the 1950s, four-fifths of Swiss Hochstammobstgärten (traditional orchards with tall trees) have disappeared. An agroforestry scheme was tested here with trees together with annual crops. Trees tested were walnut (Juglans regia) and cherry (Prunus avium). Forty to seventy trees per hectare were recommended; yields decreased somewhat with increasing tree height and foliage. However, the total yield per area was shown to be up to 30 percent higher than for monocultural systems.
Another set of tests involves growing Populus tremula for biofuel at 52 trees per hectare together with grazing pasture; alternating, every two to three years, maize or sorghum, wheat, strawberries and fallow between rows of modern short-pruned and grafted apple cultivars ('Boskoop' and 'Spartan'); and growing modern sour cherry cultivars ('Morina', 'Coraline' and 'Achat') and apples with bushes (dogrose, Cornus mas, Hippophae rhamnoides) in the tree rows, intercropped with various vegetables.
Forest gardening
Forest gardening is a low-maintenance, sustainable, plant-based food production and agroforestry system based on woodland ecosystems, incorporating fruit and nut trees, shrubs, herbs, vines and perennial vegetables which have yields directly useful to humans. Making use of companion planting, these can be intermixed to grow in a succession of layers to build a woodland habitat.
Forest gardening is a prehistoric method of securing food in tropical areas. In the 1980s, Robert Hart coined the term "forest gardening" after adapting the principles and applying them to temperate climates.
History
Since prehistoric times, hunter-gatherers might have influenced forests, for instance in Europe by Mesolithic people bringing favored plants like hazel with them. Forest gardens are probably the world's oldest form of land use and most resilient agroecosystem. First Nation villages in Alaska with forest gardens filled with nuts, stone fruit, berries, and herbs, were noted by an archeologist from the Smithsonian in the 1930s.
Forest gardens are still common in the tropics and known as Kandyan forest gardens in Sri Lanka, family orchards in Mexico, agroforests, or shrub gardens. They have been shown to be a significant source of income and food security for local populations.
Robert Hart adapted forest gardening for the United Kingdom's temperate climate during the 1980s.
In temperate climates
Hart began farming at Wenlock Edge in Shropshire to provide a healthy and therapeutic environment for himself and his brother Lacon. Starting as relatively conventional smallholders, Hart soon discovered that maintaining large annual vegetable beds, rearing livestock and taking care of an orchard were tasks beyond their strength. However, a small bed of perennial vegetables and herbs he planted was looking after itself with little intervention.
Following Hart's adoption of a raw vegan diet for health and personal reasons, he replaced his farm animals with plants. The three main products from a forest garden are fruit, nuts and green leafy vegetables. He created a model forest garden from a 0.12 acre (500 m2) orchard on his farm and intended naming his gardening method ecological horticulture or ecocultivation. Hart later dropped these terms once he became aware that agroforestry and forest gardens were already being used to describe similar systems in other parts of the world. He was inspired by the forest farming methods of Toyohiko Kagawa and James Sholto Douglas, and the productivity of the Keralan home gardens; as Hart explained, "From the agroforestry point of view, perhaps the world's most advanced country is the Indian state of Kerala, which boasts no fewer than three and a half million forest gardens ... As an example of the extraordinary intensity of cultivation of some forest gardens, one plot of only was found by a study group to have twenty-three young coconut palms, twelve cloves, fifty-six bananas, and forty-nine pineapples, with thirty pepper vines trained up its trees. In addition, the smallholder grew fodder for his house-cow."
Seven-layer system
Hart's model forest garden is arranged in seven layers: a canopy of tall fruit and nut trees; a low-tree layer of smaller or dwarf fruit trees; a shrub layer of fruit bushes; an herbaceous layer of perennial vegetables and herbs; a ground-cover layer of plants that spread horizontally; a rhizosphere of root crops; and a vertical layer of climbers and vines.
Further development
The Agroforestry Research Trust, managed by Martin Crawford, runs experimental forest gardening projects on a number of plots in Devon, United Kingdom. Crawford describes a forest garden as a low-maintenance way of sustainably producing food and other household products.
Ken Fern had the idea that for a successful temperate forest garden a wider range of edible shade tolerant plants would need to be used. To this end, Fern created the organisation Plants for a Future which compiled a plant database suitable for such a system. Fern used the term woodland gardening, rather than forest gardening, in his book Plants for a Future.
Kathleen Jannaway, the cofounder of Movement for Compassionate Living (MCL) with her husband Jack, wrote a book outlining a sustainable vegan future called Abundant Living in the Coming Age of the Tree in 1991. The MCL promotes forest gardening and other types of vegan organic gardening. In 2009 it provided a grant of £1,000 to the Bangor Forest Garden project in Gwynedd, North West Wales.
Permaculture
Bill Mollison, who coined the term permaculture, visited Hart at his forest garden in October 1990. Hart's seven-layer system has since been adopted as a common permaculture design element.
Numerous permaculturalists are proponents of forest gardens, or food forests, such as Graham Bell, Patrick Whitefield, Dave Jacke, Eric Toensmeier and Geoff Lawton. Bell started building his forest garden in 1991 and wrote the book The Permaculture Garden in 1995, Whitefield wrote the book How to Make a Forest Garden in 2002, Jacke and Toensmeier co-authored the two volume book set Edible Forest Gardens in 2005, and Lawton presented the film Establishing a Food Forest in 2008.
Geographical distribution
Forest gardens, or home gardens, are common in the tropics, using intercropping to cultivate trees, crops, and livestock on the same land. In Kerala in south India as well as in northeastern India, the home garden is the most common form of land use and is also found in Indonesia. One example combines coconut, black pepper, cocoa and pineapple. These gardens exemplify polyculture, and conserve much crop genetic diversity and heirloom plants that are not found in monocultures. Forest gardens have been loosely compared to the religious concept of the Garden of Eden.
Americas
The Amazon rainforest, rather than being a pristine wilderness, has been shaped by humans for at least 11,000 years through practices such as forest gardening and terra preta. Since the 1970s, numerous geoglyphs have been discovered on deforested land in the Amazon rainforest, furthering the evidence of pre-Columbian civilizations.
On the Yucatán Peninsula, much of the Maya food supply was grown in "orchard gardens", known as pet kot. The system takes its name from the low wall of stones (pet meaning 'circular' and kot, 'wall of loose stones') that characteristically surrounds the gardens.
The environmental historian William Cronon argued in his 1983 book Changes in the Land that indigenous North Americans used controlled burning to form ideal habitat for wild game. The natural environment of New England was sculpted into a mosaic of habitats. When indigenous Americans hunted, they were "harvesting a foodstuff which they had consciously been instrumental in creating". Most English settlers, however, assumed that the wealth of food provided by the forest was a result of natural forces, and that indigenous people lived off "the unplanted bounties of nature." Animal populations declined after settlement, while fields of strawberries and raspberries found by the earliest settlers became overgrown and disappeared for want of maintenance.
Plants
Some plants, such as wild yam, work as both a root plant and as a vine. Ground covers are low-growing edible forest garden plants that help keep weeds in control and provide a way to utilize areas that would otherwise be unused.
Cardamom
Ginger
Chervil
Bergamot
Sweet woodruff
Sweet cicely
Projects
El Pilar on the Belize–Guatemala border features a forest garden to demonstrate traditional Maya agricultural practices. A further one acre model forest garden, called Känan K'aax (meaning 'well-tended garden' in Mayan), is funded by the National Geographic Society and developed at Santa Familia Primary School in Cayo.
In the United States, the largest known food forest on public land is believed to be the seven acre Beacon Food Forest in Seattle, Washington. Other forest garden projects include those at the central Rocky Mountain Permaculture Institute in Basalt, Colorado, and Montview Neighborhood farm in Northampton, Massachusetts. The Boston Food Forest Coalition promotes local forest gardens.
In Canada Richard Walker has been developing and maintaining food forests in British Columbia for over 30 years. He developed a three-acre food forest that at maturity provided raw materials for a plant nursery and herbal business as well as food for his family. The Living Centre has developed various forest garden projects in Ontario.
In the United Kingdom, other than those run by the Agroforestry Research Trust (ART), projects include the Bangor Forest Garden in Gwynedd, northwest Wales. Martin Crawford from ART administers the Forest Garden Network, an informal network of people and organisations who are cultivating forest gardens.
Since 2014, Gisela Mir and Mark Biffen have been developing a small-scale edible forest garden in Cardedeu near Barcelona, Spain, for experimentation and demonstration.
Forest farming
Forest farming is the cultivation of high-value specialty crops under a forest canopy that is intentionally modified or maintained to provide shade levels and habitat that favor growth and enhance production levels. Forest farming encompasses a range of cultivated systems from introducing plants into the understory of a timber stand to modifying forest stands to enhance the marketability and sustainable production of existing plants.
Forest farming is a type of agroforestry practice characterized by the "four I's": intentional, integrated, intensive and interactive. Agroforestry is a land management system that combines trees with crops or livestock, or both, on the same piece of land. It focuses on increasing benefits to the landowner as well as maintaining forest integrity and environmental health. The practice involves cultivating non-timber forest products or niche crops, some of which, such as ginseng or shiitake mushrooms, can have high market value.
Non-timber forest products (NTFPs) are plants, parts of plants, fungi, and other biological materials harvested from within and on the edges of natural, manipulated, or disturbed forests. Examples of crops are ginseng, shiitake mushrooms, decorative ferns, and pine straw. Products typically fit into the following categories: edible, medicinal and dietary supplements, floral or decorative, or specialty wood-based products.
History
Forest farming, though not always by that name, is practiced around the world. For centuries, humans have relied on fruits, nuts, seeds, parts of foliage and pods from trees and shrubs in the forests to feed themselves and their livestock. Over time, certain species have been selected for cultivation near homes or livestock to provide food or medicine. For example, in the southern United States, mulberry trees are used as a feedstock for pigs and often cultivated near pig quarters.
In 1929, J. Russell Smith, Emeritus Professor of Economic Geography at Columbia University, published "Tree Crops – A Permanent Agriculture", which stated that crop-yielding trees could provide useful substitutes for cereals in animal feeding programs, as well as conserve environmental health. Toyohiko Kagawa read and was heavily influenced by Smith’s publication and began experimental cultivation under trees in Japan during the 1930s. Through forest farming, or three-dimensional forestry, Kagawa addressed problems of soil erosion by persuading many of Japan's upland farmers to plant fodder trees to conserve soil, supply food and feed animals. He combined extensive plantings of walnut trees, harvested the nuts and fed them to the pigs, then sold the pigs as a source of income. When the walnut trees matured, they were sold for timber and more trees were planted so that there was a continuous cycle of economic cropping that provided both short-term and long-term income to the small landowner. The success of these trials prompted similar research in other countries. World War II disrupted communication and slowed advances in forest farming. In the mid-1950s research resumed in places such as southern Africa. Kagawa was also an inspiration to Robert Hart, who pioneered forest gardening in temperate climates in Shropshire, England, in the 1960s.
In earlier years, livestock were often considered part of the forest farming system. Now they are typically excluded and agroforestry systems that integrate trees, forages and livestock are referred to as silvopastures. Because forest farming combines ecological stability of natural forests and productive agriculture systems, it is considered to have great potential for regenerating soils, restoring ground water supplies, controlling floods and droughts and cultivating marginal lands.
Principles
Forest farming principles constitute an ecological approach to forest management. Forest resources are judiciously used while biodiversity and wildlife habitat are conserved. Forest farms have the potential to restore ecological balance to fragmented second growth forests through intentional manipulation to create the desired forest ecosystem.
In some instances, the intentional introduction of species for botanicals, medicinals, food or decorative products is accomplished using existing forests. The tree cover, soil type, water supply, land form and other site characteristics determine what species will thrive. Developing an understanding of species/site relationships as well as understanding the site limitations is necessary to utilize these resources for production needs, while conserving adequate resources for the long-term health of the forest.
Apart from the environmental benefits, forest farming can increase the economic value of forest property and provide short- and long-term benefits to the landowner. Forest farming provides economic return from intact forest ecosystems, but timber sales can remain part of the long-term management strategy.
Methods
Forest farming methods may include: Intensive, yet careful thinning of overstocked, suppressed tree stands; multiple integrated entries to accomplish thinning so that systemic shock is minimized; and interactive management to maintain a cross-section of healthy trees and shrubs of all ages and species. Physical disturbance to the surrounding area should be minimized. The following are forest farming techniques described in the Training Manual produced by the Center for Agroforestry at the University of Missouri.
Level of management that is required
(from most intense to least intense)
1. Forest gardening is the most intensive of forest farming methods. In addition to thinning the overstory, this method involves clearing the understory of undesirable vegetation and other practices that are closely related to agronomy (tillage, fertilization, weeding, and control of disease and insects and wildlife management). Due to input levels, this method often produces lower valued products compared to other methods. Forest gardens take advantage of the vertical levels of light availability and space under the forest canopy so that more than one crop can be grown at once if desired.
2. Wild-simulated seeks to maintain a natural growing environment, yet enriches local NTFP populations to create an abundant renewable supply of the products. Minimal disturbance and natural growing conditions ensure products will be similar in appearance and quality to those harvested from the wild. Rather than till, practitioners often rake leaves to expose soil, sow seed directly onto the ground, and then cover with leaves again. Since this method produces NTFPs that closely resemble wild plants, they often command a higher price than NTFPs produced using the forest gardening method.
3. Forest tending involves adjusting tree crown density to manipulate light levels that favor natural reproduction of desirable NTFPs. This low intensity management approach does not involve supplemental planting to increase populations of desired NTFPs.
4. Wildcrafting is the harvesting of naturally growing NTFPs. It is not considered a forest farming practice since there is no human involvement in the plant’s establishment and maintenance. However, wildcrafters often take steps to protect NTFPs with future harvests in mind. It becomes agroforestry once forest thinnings, or other inputs, are applied to sustain or maintain plant populations that might otherwise succumb to successional changes in the forest. The most important difference between forest farming and wildcrafting is that forest farming intentionally produces NTFPs, whereas wildcrafting seeks out and gathers naturally growing NTFPs.
Production considerations
Forest farming can be a small business opportunity for landowners and requires careful planning, including a business and marketing plan. Learning how to market the NTFPs on the Internet is an option, but may entail higher shipping costs. Landowners should consider all options for selling their products, including farmers’ markets or restaurants that focus on locally grown ingredients. The development phase should include a forest management plan that states the landowner’s objectives and a resource inventory. Start-up costs should be analyzed, as specific equipment may be necessary to harvest or process some products, whereas other crops require minimal initial investment. Local incentives for sustainable forest management, as well as regulations and policies, should be explored. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) regulates international trade of certain plant (such as American ginseng and goldenseal) and animal species. To be legally exported, regulated plants must be harvested and records kept according to CITES rules and restrictions. Many states also have harvesting regulations for certain native plants that are searchable online. Another good starting source of information is the Medicinal Plants at Risk 2008 report by the Center for Biological Diversity in the U.S.
Examples of crops
(from the National Agroforestry Center)
Medicinal herbs:
Ginseng (Panax quinquefolius)
Black Cohosh (Actaea racemosa)
Goldenseal (Hydrastis canadensis)
Bloodroot (Sanguinaria canadensis)
Pacific yew (Taxus brevifolia)
Mayapple (Podophyllum peltatum)
Saw palmetto (Serenoa repens)
American Pokeweed (Phytolacca americana)
Nuts:
Black walnut (Juglans nigra)
Hazelnut (Corylus avellana)
Shagbark hickory (Carya ovata)
Beechnut (Fagus sylvatica)
Fruit:
Pawpaw (Asimina triloba)
Currants (Ribes spp)
Elderberry (Sambucus spp)
Serviceberry (Amelanchier spp)
Blackberry (Rubus spp)
Huckleberry (Gaylussacia brachycera)
Other food crops:
Ramps (wild leeks) (Allium tricoccum)
Syrups (maple)
Honey
Mushrooms
Other edible roots
Other products: (mulch, decoratives, crafts, dyes)
Pine straw
Willow twigs
Vines
Beargrass (Xerophyllum tenax)
Ferns
Pine cones
Moss
Native ornamentals:
Rhododendron (Rhododendron catawbiense)
Highbush cranberry (Viburnum trilobum)
Flowering dogwood (Cornus florida)
Farmer-managed natural regeneration
Farmer-managed natural regeneration (FMNR) is a low-cost, sustainable land restoration technique used to combat poverty and hunger amongst poor subsistence farmers in developing countries by increasing food and timber production, and resilience to climate extremes. It involves the systematic regeneration and management of trees and shrubs from tree stumps, roots and seeds. FMNR was developed by the Australian agricultural economist Tony Rinaudo in the 1980s in West Africa. The background and development are described in Rinaudo's book The Forest Underground.
FMNR is especially applicable, but not restricted to, the dryland tropics. As well as returning degraded croplands and grazing lands to productivity, it can be used to restore degraded forests, thereby reversing biodiversity loss and reducing vulnerability to climate change. FMNR can also play an important role in maintaining not-yet-degraded landscapes in a productive state, especially when combined with other sustainable land management practices such as conservation agriculture on cropland and holistic management on range lands.
FMNR adapts centuries-old methods of woodland management, called coppicing and pollarding, to produce continuous tree-growth for fuel, building materials, food and fodder without the need for frequent and costly replanting. On farmland, selected trees are trimmed and pruned to maximise growth while promoting optimal growing conditions for annual crops (such as access to water and sunlight). When FMNR trees are integrated into crops and grazing pastures there is an increase in crop yields, soil fertility and organic matter, soil moisture and leaf fodder. There is also a decrease in wind and heat damage, and soil erosion.
FMNR complements the evergreen agriculture, conservation agriculture and agroforestry movements. It is considered a good entry point for resource-poor and risk-averse farmers to adopt a low-cost and low-risk technique. This in turn has acted as a stepping stone to greater agricultural intensification as farmers become more receptive to new ideas.
Background
Throughout the developing world, immense tracts of farmland, grazing lands and forests have become degraded to the point they are no longer productive. Deforestation continues at a rapid pace. In Africa's drier regions, 74 percent of rangelands and 61 percent of rain-fed croplands are damaged by moderate to very severe desertification. In some African countries deforestation rates exceed planting rates by 300:1.
Degraded land has an extremely detrimental effect on the lives of subsistence farmers who depend on it for their food and livelihoods. Subsistence farmers often make up to 70–80 percent of the population in these regions and they regularly suffer from hunger, malnutrition and even famine as a consequence.
In the Sahel region of Africa, a band of savanna which runs across the continent immediately south of the Sahara Desert, large tracts of once-productive farmland are turning to desert. In tropical regions across the world, where rich soils and good rainfall would normally assure bountiful harvests and fat livestock, some environments have become so degraded they are no longer productive.
Severe famines across the African Sahel in the 1970s and 1980s led to a global response, and stopping desertification became a top priority. Conventional methods of raising exotic and indigenous tree species in nurseries were used. Despite investing millions of dollars and thousands of hours of labour, there was little overall impact. Conventional approaches to reforestation in such harsh environments faced insurmountable problems and were costly and labour-intensive. Once planted out, drought, sand storms, pests, competition from weeds and destruction by people and animals negated efforts. Low levels of community ownership were another inhibiting factor.
Existing indigenous vegetation was generally dismissed as 'useless bush', and it was often cleared to make way for exotic species. Exotics were planted in fields containing living and sprouting stumps of indigenous vegetation, the presence of which was barely acknowledged, let alone seen as important.
This was an enormous oversight. In fact, these living tree stumps are so numerous they constitute a vast 'underground forest' just waiting for some care to grow and provide multiple benefits at little or no cost. Each stump can produce between 10 and 30 stems. During the process of traditional land preparation, farmers saw the stems as weeds and slashed and burnt them before sowing their food crops. The net result was a barren landscape for much of the year with few mature trees remaining. To the casual observer, the land was turning to desert. Most concluded that there were no trees present and that the only way to reverse the problem was through tree planting.
Meanwhile, established indigenous trees continued to disappear at an alarming rate. In Niger, from the 1930s until 1993, forestry laws took tree ownership and responsibility for the care of trees out of the hands of the people. Reforestation through conventional tree planting seemed to be the only way to address desertification at the time.
History
In the early-1980s, in the Maradi region of the Republic of Niger, the missionary organisation, Serving in Mission (SIM), was unsuccessfully attempting to reforest the surrounding districts using conventional means. In 1983, SIM began experimenting and promoting FMNR amongst about 10 farmers. During the famine of 1984, a food-for-work program was introduced that saw some 70,000 people exposed to FMNR and its practice on around 12,500 hectares of farmland. From 1985 to 1999, FMNR continued to be promoted locally and nationally as exchange visits and training days were organised for various NGOs, government foresters, Peace Corps volunteers, and farmer and civil society groups. Additionally, SIM project staff and farmers visited numerous locations across Niger to provide training.
By 2004 it was ascertained that FMNR was being practised on over five million hectares or 50 percent of Niger's farmland – an average reforestation rate of 250,000 hectares per year over a 20-year period. This transformation prompted a Senior Fellow of the World Resources Institute, Chris Reij, to comment that "this is probably the largest positive environmental transformation in the Sahel and perhaps all of Africa".
In 2004, World Vision Australia and World Vision Ethiopia initiated a forestry-based carbon sequestration project as a potential means to stimulate community development while engaging in environmental restoration. A partnership with the World Bank, the Humbo Community-based Natural Regeneration Project involved the regeneration of 2,728 hectares of degraded native forests. This brought social, economic and ecological benefits to the participating communities. Within two years, communities were collecting wild fruits, firewood, and fodder, and reported that wildlife had begun to return and erosion and flooding had been reduced. In addition, the communities are now receiving payments for the sale of carbon credits through the Clean Development Mechanism (CDM) of the Kyoto Protocol.
Following the success of the Humbo project, FMNR spread to the Tigray region of northern Ethiopia where 20,000 hectares have been set aside for regeneration, including 10 hectare FMNR model sites for research and demonstration in each of 34 sub-districts. The Government of Ethiopia has committed to reforest 15 million hectares of degraded land using FMNR as part of a climate change and renewable energy plan to become carbon neutral by 2025.
In Talensi, northern Ghana, FMNR is being practiced on 2,000–3,000 hectares and new projects are introducing FMNR into three new districts. In the Kaffrine and Diourbel regions of Senegal, FMNR has spread across 50,000 hectares in four years. World Vision is also promoting FMNR in Indonesia, Myanmar and East Timor. There are also examples of both independently promoted and spontaneous FMNR movements occurring. In Burkina Faso, for example, an increasing part of the country is being transformed into agro-forestry parkland. And in Mali, an ageing agro-forestry parkland of about six million hectares is showing signs of regeneration.
Key principles
FMNR depends on the existence of living tree stumps or roots in crop fields, grazing pastures, woodlands or forests. Each season bushy growth will sprout from the stumps/roots often appearing like small shrubs. Continuous grazing by livestock, regular burning and/or regular harvesting for fuel wood results in these 'shrubs' never attaining tree stature. On farmland, standard practice has been for farmers to slash this regrowth in preparation for planting crops, but with a little attention this growth can be turned into a valuable resource without jeopardising crop yields.
For each stump, a decision is made as to how many stems will be chosen to grow. The tallest and straightest stems are selected and the remaining stems culled. Best results are obtained when the farmer returns regularly to prune any unwanted new stems and side branches as they appear. Farmers can then grow other crops between and around the trees. When farmers want wood they can cut the stem(s) they want and leave the rest to continue growing. The remaining stems will increase in size and value each year, and will continue to protect the environment. Each time a stem is harvested, a younger stem is selected to replace it.
Various naturally occurring tree species can be used which may also provide berries, fruits and nuts or have medicinal qualities. In Niger, commonly used species include: Strychnos spinosa, Balanites aegyptiaca, Boscia senegalensis, Ziziphus spp., Annona senegalensis, Poupartia birrea and Faidherbia albida. However, the most important determinants are whatever species are locally available, their ability to re-sprout after cutting, and the value local people place on those species.
Faidherbia albida, also known as the 'fertiliser tree', is popular for intercropping across the Sahel as it fixes nitrogen into the soil, provides fodder for livestock, and shade for crops and livestock. By shedding its leaves in the wet season, Faidherbia provides beneficial light shade to crops when high temperatures would otherwise damage crops or retard growth. Leaf fall contributes useful nutrients and organic matter to the soil.
The practice of FMNR is not confined to croplands. It is being practised on grazing land and in degraded communal forests as well. When there are no living stumps, seeds of naturally occurring species are used. In reality, there is no fixed way of practising FMNR and farmers are free to choose which species they will leave, the density of trees they prefer, and the timing and method of pruning.
In practice
Benefits
FMNR can restore degraded farmlands, pastures and forests by increasing the quantity and value of woody vegetation, by increasing biodiversity and by improving soil structure and fertility through leaf litter and nutrient cycling. The reforestation also retards wind and water erosion; it creates windbreaks which decrease soil moisture evaporation, and protects crops and livestock against searing winds and temperatures. Often, dried up springs reappear and the water table rises towards historic levels; insect eating predators including insects, spiders and birds return, helping to keep crop pests in check; the trees can be a source of edible berries and nuts; and over time the biodiversity of plant and animal life is increased. FMNR can be used to combat deforestation and desertification and can also be an important tool in maintaining the integrity and productivity of land that is not yet degraded.
Trials, long-running programs and anecdotal data indicate that FMNR can at least double crop yields on low fertility soils. In the Sahel, high numbers of livestock and an eight month dry season can mean that pastures are completely depleted before the rains commence. However, with the presence of trees, grazing animals can make it through the dry season by feeding on tree leaves and seed pods of some species, at a time when no other fodder is available. In northeast Ghana, more grass became available with the introduction of FMNR because communities worked together to prevent bush fires from destroying their trees.
Well designed and executed FMNR projects can act as catalysts to empower communities as they negotiate land ownership or user rights for the trees in their care. This assists with self-organisation, and with the development of new agriculture-based micro-enterprises (e.g., selling firewood, timber and handcrafts made from timber or woven grasses).
Conventional approaches to reversing desertification, such as funding tree planting, rarely spread beyond the project boundary once external funding is withdrawn. By comparison, FMNR is cheap, rapid, locally led and implemented. It uses local skills and resources – the poorest farmers can learn by observation and teach their neighbours. Given an enabling environment, or at least the absence of a 'disabling' environment, FMNR can be done at scale and spread well beyond the original target area without ongoing government or NGO intervention.
World Vision evaluations of FMNR conducted in Senegal and Ghana in 2011 and 2012 found that households practising FMNR were less vulnerable to extreme weather shocks such as drought and damaging rain and wind storms.
The following table summarises FMNR's benefits which fit the sustainable development model of economic, social and environmental benefits:
Sources:
Key success factors and constraints
While there are numerous accounts of the uptake and spread of FMNR independent of aid and development agencies, the following factors have been found to be beneficial for its introduction and spread:
Awareness creation of FMNR's potential.
Capacity building through workshops and exchange visits.
Awareness of the devastating effects of deforestation. The adoption of FMNR is more likely when communities acknowledge their situation and the need to take action. This perception of need can be supported by education.
An FMNR champion/facilitator from within the community who encourages, challenges and trains peers. This is critical during the first three to five years, and continues to be important for up to 10 years. Regular site visits also ensure early detection and remedial action on resistance and threats to FMNR through deliberate damage to trees and theft.
The buy-in of all stakeholders including their agreement on any by-laws created for FMNR and the consequences for infringements. Stakeholders include FMNR practitioners, local, regional and national government departments of agriculture and forestry, men, women, youth, marginalised groups (including nomadic herders), cultivators and commercial interests.
Stakeholder buy-in is also important to create a critical mass of FMNR adopters in order to change social attitudes from a position of apathy or active participation in deforestation to one of proactive sustainable tree management through FMNR.
Government support through the creation of favourable policies, positive reinforcement of actions facilitating the spread of FMNR, and disincentives for actions working against the spread of FMNR. FMNR practitioners need to be confident that they will benefit from their labours (either private or community ownership of trees, or legally binding user rights).
Reinforcement of existing organisational structures (farmers clubs, development groups, traditional leadership structures) or establishment of new structures which will provide a framework for communities to practise FMNR on a local, district or region-wide basis.
A communications strategy which includes education in schools, radio programs and engagement with religious and traditional leaders to become advocates.
Establishment of a legal, transparent and accessible market for FMNR wood and non-timber forest products, enabling practitioners to benefit financially from their activities.
Brown et al. suggest that the two main reasons why FMNR has spread so widely in Niger are attitudinal change by the community of what constitutes good land management practices, and farmers' ownership of trees. Farmers need the assurance that they will benefit from their labour. Giving farmers either outright ownership of the trees they protect, or tree-user rights, has made it possible for large-scale farmer-led reforestation to take place.
Current and future directions
Over nearly 30 years, FMNR has changed the farming landscape in some of the poorest countries in the world, including parts of Niger, Burkina Faso, Mali, and Senegal, providing subsistence farmers with the methods necessary to become more food secure and resilient against severe weather events.
The 2011–2012 food crisis in East Africa gave a stark reminder of the importance of addressing root causes of hunger. In the 2011 State of the World Report, Bunch concludes that four major factors – lack of sustainable fertile land, loss of traditional fallowing, cost of fertiliser and climate change – are coming together all at once in a sort of "perfect storm" that will almost surely result in an African famine of unprecedented proportions, probably within the next four to five years. It will most heavily affect the lowland, semi-arid to sub-humid areas of Africa (including the Sahel, parts of eastern Africa, plus a band from Malawi across to Angola and Namibia); and unless the world does something dramatic, 10 to 30 million people could die from famine between 2015 and 2020. Restoration of degraded land through FMNR is one way of addressing these major contributors to hunger.
In recent years FMNR has come to the attention of global development agencies and grassroots movements alike. The World Bank, World Resources Institute, World Agroforestry Center, USAID and the Permaculture movement are amongst those either actively promoting or advocating for the uptake of FMNR and FMNR has received recognition from a number of quarters including:
In 2010, FMNR won the Interaction 4 Best Practice and Innovation Initiative award in recognition of high technical standards and effectiveness in addressing the food security and livelihood needs of small producers in the areas of natural resource management and agro forestry.
In 2011, FMNR won the World Vision International Global Resilience Award for the most innovative initiative in the area of resilient development practice and natural environment and climate issues.
In 2012 WVA was awarded the Arbor Day Award for Education Innovation.
In April 2012, World Vision Australia – in partnership with the World Agroforestry Center and World Vision East Africa – held an international conference in Nairobi called "Beating Famine" to analyse and plan how to improve food security for the world's poor through the use of FMNR and Evergreen Agriculture. The conference was attended by more than 200 participants, including world leaders in sustainable agriculture, five East African ministers of agriculture and the environment, ambassadors, and other government representatives from Africa, Europe, and Australia, and leaders from non-government and international organisations.
Two major outcomes of the conference were:
The establishment of a global FMNR network of key stakeholders to promote, encourage and initiate the scale-up of FMNR globally.
Country, regional and global level plans as a basis for inter-organisation collaboration for FMNR scale-up.
The conference acted as a catalyst for media coverage of FMNR in some of the world's leading outlets and a noticeable increase in momentum for an FMNR global movement. This heightened awareness of FMNR has created an opportunity for it to spread exponentially worldwide.
Further reading
See also
References
Sources
d'Arms, Deborha 2011. Jardin d'Or (Garden of Gold): A Treatise on Forest Gardening, Recreating Sustainable Gardens of Eden. Los Gatos, CA: Robertson Publishing.
Douglas, J. Sholto and Hart, Robert A. de J. 1985. Forest Farming. Intermediate Technology.
Fern, Ken 1997. Plants for a Future: Edible and Useful Plants for a Healthier World. Hampshire: Permanent Publications.
Jacke, Dave, and Toensmeier, Eric 2005. Edible Forest Gardens. Two volume set. Volume One: Ecological Vision and Theory for Temperate Climate Permaculture. Volume Two: Ecological Design and Practice for Temperate Climate Permaculture. White River Junction, VT: Chelsea Green.
Jannaway, Kathleen 1991. Abundant Living in the Coming Age of the Tree. Movement for Compassionate Living.
Smith, Joseph Russell 1988 (first published in 1929). Tree Crops: A Permanent Agriculture. Island Press.
Mir, Gisela and Biffen, Mark 2021. Bosques y jardines de alimentos. La Fertilidad de la Tierra Ediciones. (in Spanish) ISBN 978-84-121830-1-6
Pennington, T. D. and Fernandes, E. C. M. (editors). The Genus Inga: Utilization ("Inga species and alley-cropping" by Mike Hands). Kew Publications.
External links
Why Food Forests?, Permaculture Research Institute
Plant an Edible Forest Garden, Mother Earth News
The garden of the future?, The Guardian
Forest gardens, Permaculture Association
El Pilar Forest Garden Network, information on traditional Maya forest gardening
National Agroforestry Center (USDA)
Agroforestry Practices by The Center for Agroforestry, University of Missouri.
Hwwff.cce.cornell.edu
Ces.ncsu.edu
Trees with Edible Leaves The Perennial Agriculture Institute.
Ntfpinfo.us
Dcnr.state.pa.us
Inga Foundation
Rainforest Saver Foundation (Inga alley cropping projects in Honduras and Cameroon)
Inga alley cropping as an agrometeorogical service to slash and burn cultivation
What is inga alley cropping?
Farmer Managed Natural Regeneration Website
Re-Greening the Sahel at IFPRI
The Development of Farmer Managed Natural Regeneration
Farmer Managed Natural Regeneration – Video
National Agroforesty Center (USDA)
World Agroforestry Centre
The CGIAR Research Program on Forests, Trees and Agroforestry (FTA)
Australian agroforestry
Green Belt Movement
Plants For A Future
Agroforestry in France and Europe
Media
.
.
.
.
.
Agroforestry, stakes and perspectives. Agroof Production, Liagre F. and Girardin N.
Environmental issues with forests
Tropical agriculture
Forest management
Sustainable agriculture
Non-timber forest products
Organic farming
Agriculture in Brazil
Artificial ecosystems
Agriculture in Mesoamerica
Agroforestry systems
Agroforestry
Climate change and agriculture
Agriculture and the environment
Polyculture
Desert greening
Reforestation
Forestry in Africa
Sustainable forest management
Forestry in Ethiopia
Permaculture concepts | Agroforestry | [
"Biology"
] | 13,648 | [
"Artificial ecosystems",
"Ecosystems"
] |
1,488,483 | https://en.wikipedia.org/wiki/Panbiogeography | Panbiogeography, originally proposed by the French-Italian scholar Léon Croizat (1894–1982) in 1958, is a cartographical approach to biogeography that plots distributions of a particular taxon or group of taxa on maps, and connects the disjunct distribution areas or collection localities together with lines called tracks, regarding vicariance as the primary mechanism for the distribution of organisms rather than dispersal. While panbiogeography influenced the development of modern biogeography, the ideas in their original form are not considered mainstream biogeographical theory, and the theory was described in 2007 as "almost moribund".
Tracks
A track is a representation of the spatial form of a species distribution and can give insights into the spatial processes that generated that distribution. Crossing of an ocean or sea basin or any other major tectonic structure (e.g. a fault zone) by an individual track constitutes a baseline.
Individual tracks are superimposed, and if they coincide according to a specified criterion (e.g. shared baselines or compatible track geometries), the resulting summary lines are considered generalized (or standard) tracks. Generalized tracks suggest the pre-existence of ancestral biotas, which subsequently become fragmented by tectonic and/or climate change. The area where two or more generalized tracks intersect is called node. It means that different ancestral biotic and geological fragments interrelate in space/time, as a consequence of terrain collision, docking, or suturing, thus constituting a composite area. A concentration of numerical, genetical or morphological diversity within a taxon in a given area constitutes a main massing.
Panbiogeography was first conceived by Croizat and further applied by researchers in New Zealand and Latin America. Panbiogeography provides a method for analyzing the geographic (spatial) structure of distributions in order to generate predictions about the evolution of species and other taxa in space and time.
Panbiogeographic key concepts of track, node, baseline, and main massing have shown to be powerful analytical tools, especially following the mathematical formalization of these concepts with the development of quantitative panbiogeography. Such developments were based on the application of concepts and methods from graph theory, for example minimum spanning trees to depict individual tracks in a more rigorous way, clique analysis to identify standard tracks, and nodal analysis to determine the precise location of panbiogeographic nodes.
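To make the track-construction step concrete, the following is a minimal sketch, in Python, of how an individual track could be drawn as a minimum spanning tree over georeferenced collection localities, as described above. The haversine distance, the use of Prim's algorithm, and the example coordinates are illustrative assumptions, not part of any published panbiogeographic software.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def individual_track(localities):
    """Return a minimum spanning tree (Prim's algorithm) over the localities
    as a list of (i, j) index pairs -- one simple way to depict a 'track'."""
    n = len(localities)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    d = haversine_km(localities[i], localities[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Hypothetical disjunct localities (latitude, longitude) for a single taxon.
localities = [(-41.3, 174.8), (-33.9, 151.2), (-33.0, -71.6), (-34.6, -58.4)]
print(individual_track(localities))  # edges of the track, e.g. [(0, 1), (0, 2), (2, 3)]
```

Superimposing several such trees and looking for shared edges or shared baselines would then approximate the generalized-track step described above.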
Panbiogeography emphasizes the analysis of raw locality and broader distribution data for taxa, and may thus benefit from modern technological advances for the collection, storage, and analysis of such data, as are online biodiversity databases of georeferenced records, the Global Positioning System (GPS) and Geographic Information Systems (GIS) technology. Furthermore, panbiogeographers have suggested their paradigm may also be useful to address the critical issue of global biodiversity conservation in a potentially fast and cost-effective way.
Reception
Panbiogeography has generally been dismissed by mainstream biologists, and it has been described as "almost moribund" and as having "fallen by the wayside" in biogeography following widespread criticism. Robert H. Cowie, writing in a book review in Heredity, stated "Panbiogeography seems to me at best to offer little new insight, at worst to be fundamentally flawed", criticising panbiogeographers for not placing enough emphasis on phylogenetics, which Cowie states is "the underpinning of any biogeographical analysis". Subsequent researchers have also criticised panbiogeography and argued that the approach is detrimental to biogeography as a scientific discipline.
Notes
References
Further reading
Nelson, G. (1973). Comments on Leon Croizat's Biogeography. Systematic Zoology. Vol. 22, No. 3. pp. 312–320.
External links
The Panbiogeography Gate
Axiomatic Panbiogeography
Panbiogeography
Biogeography | Panbiogeography | [
"Biology"
] | 795 | [
"Biogeography"
] |
1,488,888 | https://en.wikipedia.org/wiki/Cabin%20pressurization | Cabin pressurization is a process in which conditioned air is pumped into the cabin of an aircraft or spacecraft in order to create a safe and comfortable environment for humans flying at high altitudes. For aircraft, this air is usually bled off from the gas turbine engines at the compressor stage, and for spacecraft, it is carried in high-pressure, often cryogenic, tanks. The air is cooled, humidified, and mixed with recirculated air by one or more environmental control systems before it is distributed to the cabin.
The first experimental pressurization systems saw use during the 1920s and 1930s. In the 1940s, the first commercial aircraft with a pressurized cabin entered service. The practice would become widespread a decade later, particularly with the introduction of the British de Havilland Comet jetliner in 1949. However, two catastrophic failures in 1954 temporarily grounded the Comet worldwide. These failures were investigated and found to be caused by a combination of progressive metal fatigue and aircraft skin stresses induced by pressurization. Improved testing involved multiple full-scale pressurization cycle tests of the entire fuselage in a water tank, and the key engineering principles learned were applied to the design of subsequent jet airliners.
Certain aircraft have unusual pressurization needs. For example, the supersonic airliner Concorde had a particularly high pressure differential due to flying at unusually high altitude: up to while maintaining a cabin altitude of . This increased airframe weight and saw the use of smaller cabin windows intended to slow the decompression rate if a depressurization event occurred.
The Aloha Airlines Flight 243 incident in 1988, involving a Boeing 737-200 that suffered catastrophic cabin failure mid-flight, was primarily caused by the aircraft's continued operation despite having accumulated more than twice the number of flight cycles that the airframe was designed to endure.
For increased passenger comfort, several modern airliners, such as the Boeing 787 Dreamliner and the Airbus A350 XWB, feature reduced operating cabin altitudes as well as greater humidity levels; the use of composite airframes has aided the adoption of such comfort-maximizing practices.
Need for cabin pressurization
Pressurization becomes increasingly necessary at altitudes above above sea level to protect crew and passengers from the risk of a number of physiological problems caused by the low outside air pressure above that altitude. For private aircraft operating in the US, crew members are required to use oxygen masks if the cabin altitude (a representation of the air pressure, see below) stays above for more than 30 minutes, or if the cabin altitude reaches at any time. At altitudes above , passengers are required to be provided oxygen masks as well. On commercial aircraft, the cabin altitude must be maintained at or less. Pressurization of the cargo hold is also required to prevent damage to pressure-sensitive goods that might leak, expand, burst or be crushed on re-pressurization. The principal physiological problems are listed below.
Hypoxia
The lower partial pressure of oxygen at high altitude reduces the alveolar oxygen tension in the lungs and subsequently in the brain, leading to sluggish thinking, dimmed vision, loss of consciousness, and ultimately death. In some individuals, particularly those with heart or lung disease, symptoms may begin as low as , although most passengers can tolerate altitudes of without ill effect. At this altitude, there is about 25% less oxygen than there is at sea level.
Hypoxia may be addressed by the administration of supplemental oxygen, either through an oxygen mask or through a nasal cannula. Without pressurization, sufficient oxygen can be delivered up to an altitude of about . This is because a person who is used to living at sea level needs about partial oxygen pressure to function normally and that pressure can be maintained up to about by increasing the mole fraction of oxygen in the air that is being breathed. At , the ambient air pressure falls to about 0.2 bar, at which maintaining a minimum partial pressure of oxygen of 0.2 bar requires breathing 100% oxygen using an oxygen mask.
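A short worked sketch, in Python, of the arithmetic behind these figures: by Dalton's law, the oxygen partial pressure is the oxygen mole fraction multiplied by the ambient pressure, with ambient pressure taken here from the standard International Standard Atmosphere relations. The constants are the usual ISA values; the printed altitudes are illustrative and not quoted from this article.

```python
import math

def isa_pressure_pa(alt_m):
    """Approximate ISA static pressure (Pa), valid to roughly 20 km."""
    if alt_m <= 11000.0:  # troposphere, 6.5 K/km lapse rate
        return 101325.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.25588
    p11 = 101325.0 * (1.0 - 2.25577e-5 * 11000.0) ** 5.25588
    return p11 * math.exp(-(alt_m - 11000.0) / 6341.62)  # isothermal layer above 11 km

def po2_bar(alt_m, o2_fraction=0.2095):
    """Dalton's law: O2 partial pressure = O2 mole fraction x ambient pressure."""
    return o2_fraction * isa_pressure_pa(alt_m) / 1e5

for alt in (0, 3000, 8000, 12000):  # metres; illustrative altitudes
    print(f"{alt:>6} m: air {po2_bar(alt):.3f} bar O2, pure O2 {po2_bar(alt, 1.0):.3f} bar O2")
```

On these figures, breathing ordinary air at high cabin altitudes falls well short of the roughly 0.2 bar cited above, while breathing pure oxygen maintains that partial pressure up to about the altitude at which the ambient pressure itself drops to 0.2 bar.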
Emergency oxygen supply masks in the passenger compartment of airliners do not need to be pressure-demand masks because most flights stay below . Above that altitude the partial pressure of oxygen will fall below 0.2 bar even at 100% oxygen and some degree of cabin pressurization or rapid descent will be essential to avoid the risk of hypoxia.
Altitude sickness
Hyperventilation, the body's most common response to hypoxia, does help to partially restore the partial pressure of oxygen in the blood, but it also causes carbon dioxide (CO2) to out-gas, raising the blood pH and inducing alkalosis. Passengers may experience fatigue, nausea, headaches, sleeplessness, and (on extended flights) even pulmonary edema. These are the same symptoms that mountain climbers experience, but the limited duration of powered flight makes the development of pulmonary edema unlikely. Altitude sickness may be controlled by a full pressure suit with helmet and faceplate, which completely envelops the body in a pressurized environment; however, this is impractical for commercial passengers.
Decompression sickness
The low partial pressure of gases, principally nitrogen (N2) but including all other gases, may cause dissolved gases in the bloodstream to precipitate out, resulting in gas embolism, or bubbles in the bloodstream. The mechanism is the same as that of compressed-air divers on ascent from depth. Symptoms may include the early symptoms of "the bends"—tiredness, forgetfulness, headache, stroke, thrombosis, and subcutaneous itching—but rarely the full symptoms thereof. Decompression sickness may also be controlled by a full-pressure suit as for altitude sickness.
Barotrauma
As the aircraft climbs or descends, passengers may experience discomfort or acute pain as gases trapped within their bodies expand or contract. The most common problems occur with air trapped in the middle ear (aerotitis) or paranasal sinuses by a blocked Eustachian tube or sinuses. Pain may also be experienced in the gastrointestinal tract or even the teeth (barodontalgia). Usually these are not severe enough to cause actual trauma but can result in soreness in the ear that persists after the flight and can exacerbate or precipitate pre-existing medical conditions, such as pneumothorax.
Cabin altitude
The pressure inside the cabin is technically referred to as the equivalent effective cabin altitude or more commonly as the cabin altitude. This is defined as the equivalent altitude above mean sea level having the same atmospheric pressure according to a standard atmospheric model such as the International Standard Atmosphere. Thus a cabin altitude of zero would have the pressure found at mean sea level, which is taken to be .
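As a concrete illustration of this definition, the sketch below inverts the ISA troposphere pressure–altitude relation to turn a measured cabin pressure into an equivalent cabin altitude. The formula is the standard ISA relation; the sample pressures are assumed values chosen only for illustration.

```python
def cabin_altitude_m(cabin_pressure_pa):
    """Equivalent cabin altitude (m): invert the ISA troposphere relation
    P = P0 * (1 - 2.25577e-5 * h)^5.25588, with P0 = 101,325 Pa."""
    p0 = 101325.0
    return (1.0 - (cabin_pressure_pa / p0) ** (1.0 / 5.25588)) / 2.25577e-5

# Assumed cabin pressures for illustration (Pa).
for p in (101325.0, 84300.0, 75260.0):
    print(f"{p:8.0f} Pa -> cabin altitude ~{cabin_altitude_m(p):5.0f} m")
```

A cabin pressure equal to sea-level pressure gives a cabin altitude of zero, and progressively lower cabin pressures map to progressively higher equivalent altitudes.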
Aircraft
In airliners, cabin altitude during flight is kept above sea level in order to reduce stress on the pressurized part of the fuselage; this stress is proportional to the difference in pressure inside and outside the cabin. In a typical commercial passenger flight, the cabin altitude is programmed to rise gradually from the altitude of the airport of origin to a regulatory maximum of . This cabin altitude is maintained while the aircraft is cruising at its maximum altitude and then reduced gradually during descent until the cabin pressure matches the ambient air pressure at the destination.
Keeping the cabin altitude below generally prevents significant hypoxia, altitude sickness, decompression sickness, and barotrauma. Federal Aviation Administration (FAA) regulations in the U.S. mandate that under normal operating conditions, the cabin altitude may not exceed this limit at the maximum operating altitude of the aircraft. This mandatory maximum cabin altitude does not eliminate all physiological problems; passengers with conditions such as pneumothorax are advised not to fly until fully healed, and people suffering from a cold or other infection may still experience pain in the ears and sinuses. The rate of change of cabin altitude strongly affects comfort as humans are sensitive to pressure changes in the inner ear and sinuses and this has to be managed carefully. Scuba divers flying within the "no fly" period after a dive are at risk of decompression sickness because the accumulated nitrogen in their bodies can form bubbles when exposed to reduced cabin pressure.
The cabin altitude of the Boeing 767 is typically about when cruising at . This is typical for older jet airliners. A design goal for many, but not all, newer aircraft is to provide a lower cabin altitude than older designs. This can be beneficial for passenger comfort. For example, the Bombardier Global Express business jet can provide a cabin altitude of when cruising at . The Emivest SJ30 business jet can provide a sea-level cabin altitude when cruising at . One study of eight flights in Airbus A380 aircraft found a median cabin pressure altitude of , and 65 flights in Boeing 747-400 aircraft found a median cabin pressure altitude of .
Before 1996, approximately 6,000 large commercial transport airplanes were assigned a type certificate to fly up to without having to meet high-altitude special conditions. In 1996, the FAA adopted Amendment 25-87, which imposed additional high-altitude cabin pressure specifications for new-type aircraft designs. Aircraft certified to operate above "must be designed so that occupants will not be exposed to cabin pressure altitudes in excess of after any probable failure condition in the pressurization system". In the event of a decompression that results from "any failure condition not shown to be extremely improbable", the plane must be designed such that occupants will not be exposed to a cabin altitude exceeding for more than 2 minutes, nor to an altitude exceeding at any time. In practice, that new Federal Aviation Regulations amendment imposes an operational ceiling of on the majority of newly designed commercial aircraft. Aircraft manufacturers can apply for a relaxation of this rule if the circumstances warrant it. In 2004, Airbus acquired an FAA exemption to allow the cabin altitude of the A380 to reach in the event of a decompression incident and to exceed for one minute. This allows the A380 to operate at a higher altitude than other newly designed civilian aircraft.
Spacecraft
Russian engineers used an air-like nitrogen/oxygen mixture, kept at a cabin altitude near zero at all times, in their 1961 Vostok, 1964 Voskhod, and 1967 to present Soyuz spacecraft. This requires a heavier space vehicle design, because the spacecraft cabin structure must withstand the stress of 14.7 pounds per square inch (1 atm, 1.01 bar) against the vacuum of space, and also because an inert nitrogen mass must be carried. Care must also be taken to avoid decompression sickness when cosmonauts perform extravehicular activity, as current soft space suits are pressurized with pure oxygen at relatively low pressure in order to provide reasonable flexibility.
By contrast, the United States used a pure oxygen atmosphere for its 1961 Mercury, 1965 Gemini, and 1967 Apollo spacecraft, mainly in order to avoid decompression sickness. Mercury used a cabin altitude of ; Gemini used an altitude of ; and Apollo used in space. This allowed for a lighter space vehicle design. This is possible because at 100% oxygen, enough oxygen gets to the bloodstream to allow astronauts to operate normally. Before launch, the pressure was kept at slightly higher than sea level at a constant above ambient for Gemini, and above sea level at launch for Apollo, and transitioned to the space cabin altitude during ascent. However, the high pressure pure oxygen atmosphere before launch proved to be a factor in a fatal fire hazard in Apollo, contributing to the deaths of the entire crew of Apollo 1 during a 1967 ground test. After this, NASA revised its procedure to use a nitrogen/oxygen mix at zero cabin altitude at launch, but kept the low-pressure pure oxygen atmosphere at in space.
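The trade-off described above can be seen with a line or two of arithmetic. The figures below are assumed round numbers for illustration only (the exact values are not given in the text): ordinary air at sea level versus a pure-oxygen cabin at roughly a third of sea-level pressure.

```python
SEA_LEVEL_PRESSURE_BAR = 1.013
O2_FRACTION_AIR = 0.2095
PURE_O2_CABIN_BAR = 0.34  # assumed cabin pressure, roughly 5 psi, for illustration only

po2_air_sea_level = O2_FRACTION_AIR * SEA_LEVEL_PRESSURE_BAR  # ~0.21 bar of oxygen
po2_pure_o2_cabin = 1.0 * PURE_O2_CABIN_BAR                   # ~0.34 bar of oxygen

print(f"sea-level air : {po2_air_sea_level:.2f} bar of oxygen")
print(f"pure-O2 cabin : {po2_pure_o2_cabin:.2f} bar of oxygen")
```

Under these assumptions, a low-pressure pure-oxygen cabin delivers at least as much oxygen to the lungs as sea-level air while the structure has to hold only a fraction of the pressure differential.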
After the Apollo program, the United States used "a 74-percent oxygen and 26-percent nitrogen breathing mixture" at for Skylab, and a cabin atmosphere of for the Space Shuttle orbiter and the International Space Station.
Mechanics
An airtight fuselage is pressurized using a source of compressed air and controlled by an environmental control system (ECS). The most common source of compressed air for pressurization is bleed air from the compressor stage of a gas turbine engine, from a low or intermediate stage or an additional high stage, the exact stage depending on engine type. By the time the cold outside air has reached the bleed air valves, it has been heated to around . The control and selection of high or low bleed sources is fully automatic and is governed by the needs of various pneumatic systems at various stages of flight. Piston-engine aircraft require an additional compressor.
The part of the bleed air that is directed to the ECS is then expanded to bring it to cabin pressure, which cools it. A final, suitable temperature is then achieved by adding back heat from the hot compressed air via a heat exchanger and air cycle machine known as a PAC (Pressurization and Air Conditioning) system. In some larger airliners, hot trim air can be added downstream of air-conditioned air coming from the packs if it is needed to warm a section of the cabin that is colder than others.
At least two engines provide compressed bleed air for all the plane's pneumatic systems, to provide full redundancy. Compressed air is also obtained from the auxiliary power unit (APU), if fitted, in the event of an emergency and for cabin air supply on the ground before the main engines are started. Most modern commercial aircraft today have fully redundant, duplicated electronic controllers for maintaining pressurization along with a manual back-up control system.
All exhaust air is dumped to atmosphere via an outflow valve, usually at the rear of the fuselage. This valve controls the cabin pressure and also acts as a safety relief valve, in addition to other safety relief valves. If the automatic pressure controllers fail, the pilot can manually control the cabin pressure valve, according to the backup emergency procedure checklist. The automatic controller normally maintains the proper cabin pressure altitude by constantly adjusting the outflow valve position so that the cabin altitude is as low as practical without exceeding the maximum pressure differential limit on the fuselage. The pressure differential varies between aircraft types; typical values are between and . At , the cabin pressure would be automatically maintained at about ( lower than Mexico City), which is about of atmospheric pressure.
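A minimal sketch of the control logic just described: the controller holds the cabin at the desired pressure unless doing so would exceed the structural differential limit, in which case the cabin pressure is allowed to fall (the cabin altitude rises) just enough to stay within the limit. The numerical values are assumed, illustrative figures rather than limits for any particular aircraft.

```python
def target_cabin_pressure_pa(ambient_pressure_pa,
                             desired_cabin_pressure_pa=75260.0,  # assumed, roughly an 8,000 ft cabin
                             max_differential_pa=59000.0):       # assumed structural limit (~8.6 psi)
    """Highest cabin pressure the controller may hold: the desired value,
    capped so that (cabin - ambient) never exceeds the maximum differential."""
    return min(desired_cabin_pressure_pa,
               ambient_pressure_pa + max_differential_pa)

# Illustrative ambient pressures at two cruise levels (Pa).
for ambient in (23800.0, 14500.0):
    cabin = target_cabin_pressure_pa(ambient)
    print(f"ambient {ambient:7.0f} Pa -> cabin {cabin:7.0f} Pa, differential {cabin - ambient:7.0f} Pa")
```

In the first case the desired cabin pressure can be held outright; in the second the differential limit binds, so the cabin altitude drifts upward instead.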
Some aircraft, such as the Boeing 787 Dreamliner, have re-introduced electric compressors previously used on piston-engined airliners to provide pressurization. The use of electric compressors increases the electrical generation load on the engines and introduces a number of stages of energy transfer; therefore, it is unclear whether this increases the overall efficiency of the aircraft air handling system. They do, however, remove the danger of chemical contamination of the cabin, simplify engine design, avert the need to run high pressure pipework around the aircraft, and provide greater design flexibility.
Unplanned decompression
Unplanned loss of cabin pressure at altitude/in space is rare but has resulted in a number of fatal accidents. Failures range from sudden, catastrophic loss of airframe integrity (explosive decompression) to slow leaks or equipment malfunctions that allow cabin pressure to drop.
Any failure of cabin pressurization above requires an emergency descent to or the closest to that while maintaining the minimum sector altitude (MSA), and the deployment of an oxygen mask for each seat. The oxygen systems have sufficient oxygen for all on board and give the pilots adequate time to descend to below . Without emergency oxygen, hypoxia may lead to loss of consciousness and a subsequent loss of control of the aircraft. Modern airliners include a pressurized pure oxygen tank in the cockpit, giving the pilots more time to bring the aircraft to a safe altitude. The time of useful consciousness varies according to altitude. As the pressure falls the cabin air temperature may also plummet to the ambient outside temperature with a danger of hypothermia or frostbite.
For airliners that need to fly over terrain that does not allow reaching the safe altitude within a maximum of 30 minutes, pressurized oxygen bottles are mandatory since the chemical oxygen generators fitted to most planes cannot supply sufficient oxygen.
In jet fighter aircraft, the small size of the cockpit means that any decompression will be very rapid and would not allow the pilot time to put on an oxygen mask. Therefore, fighter jet pilots and aircrew are required to wear oxygen masks at all times.
On June 30, 1971, the crew of Soyuz 11, Soviet cosmonauts Georgy Dobrovolsky, Vladislav Volkov, and Viktor Patsayev were killed after the cabin vent valve accidentally opened before atmospheric re-entry.
History
The aircraft that pioneered pressurized cabin systems include:
Packard-Le Père LUSAC-11 (1920 – a modified French design, not actually pressurized but with an enclosed, oxygen-enriched cockpit)
Engineering Division USD-9A, a modified Airco DH.9A (1921 – the first aircraft to fly with the addition of a pressurized cockpit module)
Junkers Ju 49 (1931 – a German experimental aircraft purpose-built to test the concept of cabin pressurization)
Farman F.1000 (1932 – a French record breaking pressurized cockpit, experimental aircraft)
Chizhevski BOK-1 (1936 – a Russian experimental aircraft)
Lockheed XC-35 (1937 – an American pressurized aircraft. Rather than a pressure capsule enclosing the cockpit, the monocoque fuselage skin was the pressure vessel.)
Renard R.35 (1938 – the first pressurized piston airliner)
Boeing 307 Stratoliner (1938 – the first pressurized airliner to enter commercial service)
Lockheed Constellation (1943 – the first pressurized airliner in wide service)
Avro Tudor (1946 – first British pressurized airliner)
de Havilland Comet (British, Comet 1 1949 – the first jetliner, Comet 4 1958 – resolving the Comet 1 problems)
Tupolev Tu-144 and Concorde (1968 USSR and 1969 Anglo-French respectively – first to operate at very high altitude)
Cessna P210 (1978) First commercially successful pressurized single-engine aircraft
SyberJet SJ30 (2005) First civilian business jet to certify 12.0 psi pressurization system allowing for a sea level cabin at .
The first airliner to enter commercial service with a pressurized cabin was the Boeing 307 Stratoliner, built in 1938, prior to World War II, though only ten were produced before the war interrupted production. The 307's "pressure compartment was from the nose of the aircraft to a pressure bulkhead in the aft just forward of the horizontal stabilizer."
World War II was a catalyst for aircraft development. Initially, the piston aircraft of World War II, though they often flew at very high altitudes, were not pressurized and relied on oxygen masks. This became impractical with the development of larger bombers where crew were required to move about the cabin. The first bomber built with a pressurized cabin for high-altitude use was the Vickers Wellington Mark VI in 1941, but the RAF changed policy and, instead of acting as Pathfinders, the aircraft were used for other purposes. The American Boeing B-29 Superfortress long-range strategic bomber was the first pressurized bomber to enter large-scale service. The control system for it was designed by the Garrett AiResearch Manufacturing Company, drawing in part on licensing of patents held by Boeing for the Stratoliner.
Post-war piston airliners such as the Lockheed Constellation (1943) made the technology more common in civilian service. The piston-engined airliners generally relied on electrical compressors to provide pressurized cabin air. Engine supercharging and cabin pressurization enabled aircraft like the Douglas DC-6, the Douglas DC-7, and the Constellation to have certified service ceilings from . Designing a pressurized fuselage to cope with that altitude range was within the engineering and metallurgical knowledge of that time. The introduction of jet airliners required a significant increase in cruise altitudes to the range, where jet engines are more fuel efficient. That increase in cruise altitudes required far more rigorous engineering of the fuselage, and in the beginning not all the engineering problems were fully understood.
The world's first commercial jet airliner was the British de Havilland Comet (1949), designed with a service ceiling of . It was the first time that a large-diameter, pressurized fuselage with windows had been built and flown at this altitude. Initially the design was very successful, but two catastrophic airframe failures in 1954, each resulting in the total loss of the aircraft, passengers and crew, grounded what was then the entire world jet airliner fleet. Extensive investigation and groundbreaking engineering analysis of the wreckage led to a number of very significant engineering advances that solved the basic problems of pressurized fuselage design at altitude. The critical problem proved to be a combination of an inadequate understanding of the effect of progressive metal fatigue as the fuselage undergoes repeated stress cycles and a misunderstanding of how aircraft skin stresses are redistributed around openings in the fuselage such as windows and rivet holes.
The critical engineering principles concerning metal fatigue learned from the Comet 1 program were applied directly to the design of the Boeing 707 (1957) and all subsequent jet airliners. For example, detailed routine inspection processes were introduced: in addition to thorough visual inspections of the outer skin, operators routinely conducted mandatory structural sampling, and the need to inspect areas not easily viewable by the naked eye led to the introduction of widespread radiography examination in aviation, which also had the advantage of detecting cracks and flaws too small to be seen otherwise. Another visibly noticeable legacy of the Comet disasters is the oval windows on every jet airliner; the metal fatigue cracks that destroyed the Comets were initiated by the small-radius corners on the Comet 1's almost square windows. The Comet fuselage was redesigned and the Comet 4 (1958) went on to become a successful airliner, pioneering the first transatlantic jet service, but the program never really recovered from these disasters and was overtaken by the Boeing 707.
Even following the Comet disasters, there were several subsequent catastrophic fatigue failures attributed to cabin pressurization. Perhaps the most prominent example was Aloha Airlines Flight 243, involving a Boeing 737-200. In this case, the principal cause was the continued operation of the specific aircraft despite its having accumulated 35,496 flight hours prior to the accident; those hours included over 89,680 flight cycles (takeoffs and landings) owing to its use on short flights, more than twice the number of flight cycles that the airframe was designed to endure. Aloha 243 was able to land despite the substantial damage inflicted by the decompression, which had resulted in the loss of one member of the cabin crew; the incident had far-reaching effects on aviation safety policies and led to changes in operating procedures.
The supersonic airliner Concorde had to deal with particularly high pressure differentials because it flew at unusually high altitude (up to ) while its cabin altitude was intentionally maintained at . This combination, while providing for increased comfort, necessitated making Concorde a significantly heavier aircraft, which in turn contributed to the relatively high cost of a flight. Unusually, Concorde was provisioned with smaller cabin windows than most other commercial passenger aircraft in order to slow the rate of decompression in the event of a window seal failing. The high cruising altitude also required the use of high-pressure oxygen and demand valves at the emergency masks, unlike the continuous-flow masks used in conventional airliners. The FAA, which enforces minimum emergency descent rates for aircraft, determined that, in relation to Concorde's higher operating altitude, the best response to a pressure loss incident would be to perform a rapid descent.
The designed operating cabin altitude for new aircraft is falling, and this is expected to reduce any remaining physiological problems. Both the Boeing 787 Dreamliner and the Airbus A350 XWB airliners have made such modifications for increased passenger comfort. The 787's internal cabin pressure is the equivalent of altitude, resulting in a higher pressure than for the altitude of older conventional aircraft; according to a joint study performed by Boeing and Oklahoma State University, such a level significantly improves comfort levels. Airbus has stated that the A350 XWB provides for a typical cabin altitude at or below , along with a cabin atmosphere of 20% humidity and an airflow management system that adapts cabin airflow to passenger load with draught-free air circulation. The adoption of composite fuselages eliminates the threat posed by metal fatigue that would have been exacerbated by the higher cabin pressures being adopted by modern airliners; it also eliminates the risk of corrosion from the use of greater humidity levels.
See also
Aerotoxic syndrome
Air cycle machine
Atmosphere (unit)
Compressed air
Fume event
Rarefaction
Space suit
Time of useful consciousness
Footnotes
General references
Cornelisse, Diana G. Splendid Vision, Unswerving Purpose; Developing Air Power for the United States Air Force During the First Century of Powered Flight. Wright-Patterson Air Force Base, Ohio: U.S. Air Force Publications, 2002. pp. 128–29.
Portions from the United States Naval Flight Surgeon's Manual
"121 Dead in Greek Air Crash", CNN
External links
Aerospace engineering
Pressure vessels
Aviation safety
Atmospheric pressure | Cabin pressurization | [
"Physics",
"Chemistry",
"Engineering"
] | 5,171 | [
"Structural engineering",
"Physical quantities",
"Chemical equipment",
"Meteorological quantities",
"Atmospheric pressure",
"Physical systems",
"Hydraulics",
"Aerospace engineering",
"Pressure vessels"
] |
1,488,989 | https://en.wikipedia.org/wiki/Hyponastic%20response | In plant biology, the hyponastic response is a nastic movement characterized by an upward bending of leaves or other plant parts, resulting from accelerated growth of the lower side of the petiole in comparison to its upper part. This can be observed in many terrestrial plants and is linked to the plant hormone ethylene.
The plant’s root senses the excess water (as during flooding or submergence) and produces 1-aminocyclopropane-1-carboxylic acid, which is then converted into ethylene, regulating this process.
Submerged plants often show a hyponastic response, where the upward bending of the leaves and the elongation of the petioles might help the plant to restore normal gas exchange with the atmosphere.
Plants that are exposed to elevated ethylene levels in experimental set-ups also show a hyponastic response.
References
Plant physiology
Botany | Hyponastic response | [
"Biology"
] | 177 | [
"Plant physiology",
"Plants",
"Botany"
] |
1,489,035 | https://en.wikipedia.org/wiki/MPEG%20Industry%20Forum | The MPEG Industry Forum (MPEGIF) is a non-profit consortium dedicated to "further the adoption of MPEG Standards, by establishing them as well accepted and widely used standards among creators of content, developers, manufacturers, providers of services, and end users".
The group is involved in many tasks, which include promotion of MPEG standards (particularly MPEG-4, MPEG-4 AVC / H.264, MPEG-7 and MPEG-21); developing MPEG certification for products; organizing educational events; and collaborating on development of new de facto MPEG standards.
MPEGIF, founded in 2000, has played a significant role in facilitating the widespread adoption and deployment of MPEG-4 AVC/H.264 as the industry's standard video compression technology, powering next generation television, most mainstream content delivery and consumption applications including packaged media. MPEGIF serves as a single point of information on technology, products and services for these standards, offers interoperability testing, a conformance program, marketing activities and is supporting over 50 international trade shows and conferences per year.
The key activities of the forum are structured via three main Committees:
Technology & Engineering
Interoperability & Compliance
Marketing & Communication
2009–2010 focus areas
3DTV
Addressable advertising: extension and adoption of CableLabs SCTE-104 for all multimedia
MPEG-4/Scalable Video Coding (SVC)
Simplifying competitive licensing
Quality of Experience / Quality of Service metrics
Royalty free DRM initiatives
Online Video / Internet Streaming
IPTV ecosystem
Ultra HD (7680x4320)
MPEG/High-Performance Video Coding (HVC, H.265)
MPEG-7 / MPEG-21
MPEGIF is also running the MPEGIF Logo Qualification Program, which is designed to help guide interoperability among products and technology. The program, based on a self-certification process, is free of charge and open to all companies using MPEG technology, not just members of MPEGIF, although membership is encouraged. Qualified products have the right to display the MP4 Qualification Mark and also list their status in their documentation, literature, and advertising. They will also have their product listed in the MPEGIF Product Directory.
In June 2012 the MPEG Industry Forum officially "declared victory" and voted to close its operation and merge its remaining assets with that of the Open IPTV Forum.
External links
site at archive.org
MPEG
Digital television
History of television
Film and video technology
Standards organizations
High-definition television
Video compression
Videotelephony | MPEG Industry Forum | [
"Technology"
] | 521 | [
"Multimedia",
"MPEG"
] |
1,489,315 | https://en.wikipedia.org/wiki/Homodyne%20detection | In electrical engineering, homodyne detection is a method of extracting information encoded as modulation of the phase and/or frequency of an oscillating signal, by comparing that signal with a standard oscillation that would be identical to the signal if it carried null information. "Homodyne" signifies a single frequency, in contrast to the dual frequencies employed in heterodyne detection.
When applied to processing of the reflected signal in remote sensing for topography, homodyne detection lacks the ability of heterodyne detection to determine the size of a static discontinuity in elevation between two locations. (If there is a path between the two locations with smoothly changing elevation, then homodyne detection may in principle be able to track the signal phase along the path if sampling is dense enough). Homodyne detection is more readily applicable to velocity sensing.
In optics
In optical interferometry, homodyne signifies that the reference radiation (i.e. the local oscillator) is derived from the same source as the signal before the modulating process. For example, in a laser scattering measurement, the laser beam is split into two parts. One is the local oscillator and the other is sent to the system to be probed. The scattered light is then mixed with the local oscillator on the detector. This arrangement has the advantage of being insensitive to fluctuations in the frequency of the laser. Usually the scattered beam will be weak, in which case the (nearly) steady component of the detector output is a good measure of the instantaneous local oscillator intensity and therefore can be used to compensate for any fluctuations in the intensity of the laser.
The generated current signal from the photodetector is often too weak to measure. It is therefore converted into a voltage using a transimpedance amplifier.
Radio technology
In radio technology, the distinction is not the source of the local oscillator, but the frequency used. In heterodyne detection, the local oscillator is frequency-shifted, while in homodyne detection it has the same frequency as the radiation to be detected. See direct conversion receiver.
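Numerically, homodyne detection amounts to multiplying the incoming signal by in-phase and quadrature copies of a local oscillator at the same frequency and then low-pass filtering (here, simply averaging) the products; the two averages recover the signal's amplitude and phase. The C sketch below is only an illustration with an arbitrary synthetic signal and sample rate, not a model of any particular instrument.

#include <math.h>
#include <stdio.h>

#define N_SAMPLES  100000
#define SAMPLE_HZ  1.0e6   /* sample rate (illustrative)            */
#define CARRIER_HZ 1.0e4   /* signal and local-oscillator frequency */

int main(void)
{
    const double pi = acos(-1.0);
    double amp = 0.8, phase = 0.6;       /* "unknowns" to be recovered */
    double i_sum = 0.0, q_sum = 0.0;

    for (long n = 0; n < N_SAMPLES; n++) {
        double t = n / SAMPLE_HZ;
        double sig = amp * cos(2.0 * pi * CARRIER_HZ * t + phase);

        /* Mix with the local oscillator; the averaging acts as a low-pass
           filter that rejects the 2f term left over from the multiplication. */
        i_sum += sig * cos(2.0 * pi * CARRIER_HZ * t);
        q_sum += sig * sin(2.0 * pi * CARRIER_HZ * t);
    }

    double i_avg = 2.0 * i_sum / N_SAMPLES;   /* =  amp * cos(phase) */
    double q_avg = 2.0 * q_sum / N_SAMPLES;   /* = -amp * sin(phase) */

    printf("recovered amplitude %.3f, phase %.3f rad\n",
           hypot(i_avg, q_avg), atan2(-q_avg, i_avg));
    return 0;
}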
Applications
Lock-in amplifiers are homodyne detectors integrated into measurement equipment or packaged as stand-alone laboratory equipment for sensitive detection and highly selective filtering of weak or noisy signals. Homodyne/lock-in detection has been one of the most commonly used signal processing methods across a wide range of experimental disciplines for decades.
Homodyne and heterodyne techniques are commonly used in thermoreflectance techniques.
In the processing of signals in some applications of magnetic resonance imaging, homodyne detection can offer advantages over magnitude detection. The homodyne technique can suppress excessive noise and undesired quadrature components (90° out-of-phase), and provide stable access to information that may be encoded into the phase or polarity of images.
Homodyne detection was one of the key techniques in demonstrating quantum entanglement. This has led to the possibility of providing a room temperature quantum sensor with continuous-variable quantum information. However, challenges include reducing noise, increasing bandwidth and improving the integration of electronic and photonic components. Recently, these challenges have been overcome to demonstrate a free-space-coupled room temperature quantum sensor with large-scale integrated photonics and electronics.
An encrypted secure communication system can be based on quantum key distribution (QKD). An efficient receiver scheme for implementing QKD is balanced homodyne detection (BHD) using a positive–intrinsic–negative (PIN) diode.
See also
Heterodyne
Optical heterodyne detection
References
External links
Waves
Nonlinear optics
Electronic test equipment | Homodyne detection | [
"Physics",
"Technology",
"Engineering"
] | 763 | [
"Physical phenomena",
"Electronic test equipment",
"Measuring instruments",
"Waves",
"Motion (physics)"
] |
1,489,332 | https://en.wikipedia.org/wiki/Ball%20bearing%20motor | A ball bearing motor or ball-race motor consists simply of a small ball- bearing assembly with provision for passing current radially between inner and outer tracks to produce circular motion.
Explanation
A ball bearing motor is an unusual electric motor that consists of two ball-bearing-type bearings, with the inner races mounted on a common conductive shaft, and the outer races connected to a high current, low voltage power supply. An alternative construction fits the outer races inside a metal tube, while the inner races are mounted on a shaft with a non-conductive section (e.g. two sleeves on an insulating rod). This method has the advantage that the tube will act as a flywheel. The motor rarely starts without assistance, having effectively zero static torque, but once rotation begins the motor will accelerate until it reaches a steady speed; the direction of rotation is determined by the initial spin. Although ball bearing motors can reach reasonably high speeds, they are very inefficient. Producing significant torque typically requires so much power that the bearings are heated to several hundred degrees.
Theory
There are multiple explanations of the effect, see the large bibliography in McDonald's work.
In 1965 Electronics and Power magazine published a letter by RH Barker asking for an explanation of how this type of motor worked. At the time various explanations had been offered. S. Marinov suggests that the device produces motion from electricity without magnetism being involved, operating purely by the resistance heating causing an asymmetric thermal expansion of the balls in the bearings as they rotate. The same explanation is given by Watson, Patel and Sedcole for rotating cylinders (instead of balls). However, H. Gruenberg has given a thorough theoretical explanation based on pure electromagnetism (and neglecting the thermal effects completely).
Also, P. Hatzikonstantinou and P. G. Moyssides claim to have found an excellent agreement between the results from the electromagnetic theory and the experiments measuring the total power and efficiency of the motor.
See also
Homopolar generator
Homopolar motor
Faraday paradox
References
External links
The Ball-Bearing electric motor
motor torque calculation
Electric motors
Rolling-element bearings | Ball bearing motor | [
"Technology",
"Engineering"
] | 440 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,489,559 | https://en.wikipedia.org/wiki/Net%20energy%20gain | Net Energy Gain (NEG) is a concept used in energy economics that refers to the difference between the energy expended to harvest an energy source and the amount of energy gained from that harvest. When the NEG of a resource is greater than zero, extraction yields excess energy. If the NEG is below zero, it requires more energy to extract the resource than can be extracted from it. The net energy gain, which can be expressed in joules, differs from the net financial gain that may result from the energy harvesting process, in that various sources of energy (e.g. natural gas, coal, etc.) may be priced differently for the same amount of energy.
Calculating NEG
A net energy gain is achieved by expending less energy acquiring a source of energy than is contained in the source to be consumed. That is, NEG = (usable energy delivered by the source) − (energy expended to find, extract, refine, and deliver it).
Factors to consider when calculating NEG are the type of energy, the way energy is used and acquired, and the methods used to store or transport the energy. It is also possible to overcomplicate the calculation with an almost unlimited number of externalities and inefficiencies that may be present during the energy harvesting process.
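As a purely numerical illustration of the definition above (the figures are invented and do not describe any real energy source):

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures for one project, in gigajoules. */
    double energy_delivered = 6000.0;  /* usable energy obtained from the source      */
    double energy_expended  = 1200.0;  /* spent finding, extracting, refining,
                                          and delivering it                            */

    double neg   = energy_delivered - energy_expended;  /* net energy gain             */
    double eroei = energy_delivered / energy_expended;  /* related ratio: energy
                                                           returned on energy invested */

    printf("NEG   = %.0f GJ\n", neg);   /* > 0, so extraction yields excess energy     */
    printf("EROEI = %.1f\n", eroei);
    return 0;
}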
Sources of energy
The definition of an energy source is not rigorous. Anything that can provide energy to anything else can qualify. Wood in a stove is full of potential thermal energy; in a car, mechanical energy is acquired from the combustion of gasoline; and the energy released by the combustion of coal can be converted from thermal to mechanical, and then to electrical energy.
Examples of energy sources include:
Fossil fuels
Nuclear fuels (e.g., uranium and plutonium)
Radiation from the sun
Mechanical energy from wind, rivers, tides, etc.
Bio-fuels derived from biomass, in turn having consumed soil nutrients during growth.
Heat from within the earth (geothermal energy)
The term net energy gain can be used in slightly different ways:
Non-sustainables
The usual definition of net energy gain compares the energy required to extract energy (that is, to find it, remove it from the ground, refine it, and ship it to the energy user) with the amount of energy produced and transmitted to a user from some (typically underground) energy resource. To better understand this, assume an economy has a certain amount of finite oil reserves that are still underground, unextracted. To get to that energy, some of the extracted oil needs to be consumed in the extraction process to run the engines driving the pumps, therefore after extraction the net energy produced will be less than the amount of energy in the ground before extraction, because some had to be used up.
The extraction energy can be viewed in one of two ways: profitably extractable (NEG > 0) or unprofitably extractable (NEG < 0). For instance, in the Athabasca Oil Sands, the highly diffuse nature of the tar sands and the low price of crude oil rendered them uneconomical to mine until the late 1950s (NEG < 0). Since then, the price of oil has risen and a new steam extraction technique has been developed, allowing the sands to become the largest oil provider in Alberta (NEG > 0).
Sustainables
The situation is different with sustainable energy sources, such as hydroelectric, wind, solar, and geothermal energy sources, because there is no bulk reserve to account for (other than the Sun's lifetime), but the energy continuously trickles, so only the energy required for extraction is considered.
In all energy extraction cases, the life cycle of the energy-extraction device is crucial for the NEG ratio. If an extraction device is defunct after 10 years, its NEG will be significantly lower than if it operates for 30 years. Therefore, the energy payback time (sometimes referred to as energy amortization) can be used instead, which is the time, usually given in years, a plant must operate until the running NEG becomes positive (i.e. until the amount of energy needed for the plant infrastructure has been harvested from the plant).
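Under the same kind of simplifying assumptions (invented figures, steady annual output), the energy payback time is just the energy embodied in the plant divided by its annual net energy output, as in this sketch:

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures for a renewable plant, in gigajoules. */
    double embodied_energy = 5000.0;  /* energy to build, install, and maintain */
    double annual_output   = 2500.0;  /* energy delivered per year              */
    double annual_own_use  =  100.0;  /* energy the plant consumes per year     */

    double payback_years = embodied_energy / (annual_output - annual_own_use);

    printf("energy payback time: %.1f years\n", payback_years);  /* about 2.1 */
    return 0;
}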
Biofuels
Net energy gain of biofuels has been a particular source of controversy for ethanol derived from corn (bioethanol). The actual net energy of biofuel production is highly dependent on both the bio source that is converted into energy, how it is grown and harvested (and in particular the use of petroleum-derived fertilizer), and how efficient the process of conversion to usable energy is. Details on this can be found in the Ethanol fuel energy balance article. Similar considerations also apply to biodiesel and other fuels.
ISO 13602
ISO 13602-1 provides methods to analyse, characterize and compare technical energy systems (TES) with all their inputs, outputs and risk factors. It contains rules and guidelines for the methodology for such analyses.
ISO 13602-1 describes a means to establish relations between inputs and outputs (net energy) and thus to facilitate certification, marking, and labelling, comparable characterizations, coefficient of performance, energy resource planning, environmental impact assessments, meaningful energy statistics and forecasting of the direct natural energy resource or energyware inputs, technical energy system investments and the performed and expected future energy service outputs.
In ISO 13602-1:2002, renewable resource is defined as "natural resource for which the ratio of the creation of the natural resource to the output of that resource from nature to the technosphere is equal to or greater than one".
Examples
During the 1920s, of crude oil were extracted for every barrel of crude used in the extraction and refining process. Today only are harvested for every barrel used. When the net energy gain of an energy source reaches zero, then the source is no longer contributing energy to an economy.
See also
ISO 13600
Energy balance
Energy returned on energy invested
Energyware and energy carrier
Solar cells and energy payback
Energy cannibalism
References
External links
ISO 13602-1:2002 Methods for analysis of technical energy systems.
The Importance of ISO and IEC International Energy Standards.
Technical energy systems
Thinking clearly about biofuels: ending the irrelevant net energy debate and developing better performance metrics for alternative fuels.
Energy economics | Net energy gain | [
"Environmental_science"
] | 1,249 | [
"Energy economics",
"Environmental social science"
] |
1,489,680 | https://en.wikipedia.org/wiki/R%20Coronae%20Borealis | R Coronae Borealis is a low-mass yellow supergiant star in the constellation of Corona Borealis. It is the prototype of the R Coronae Borealis variable of variable stars, which fade by several magnitudes at irregular intervals. R Coronae Borealis itself normally shines at approximately magnitude 6, just about visible to the naked eye, but at intervals of several months to many years fades to as faint as 15th magnitude. Over successive months it then gradually returns to its normal brightness, giving it the nickname "reverse nova", after the more common type of star which rapidly increases in brightness before fading.
Nomenclature
R Coronae Borealis is a faint naked eye star, but does not have any traditional names. Johann Bayer did not give it a Greek letter designation although it is marked on his map. John Flamsteed numbered all the Bayer stars but did not add any additional designations for fainter stars, so R Coronae Borealis does not appear in either of these two catalogues.
At its discovery it was described simply as "the variable in the Northern crown". It was later referred to as Variabilis Coronae, "Variable (star) of Corona (Borealis)". It has also been called a "reverse nova" because of its habit of fading from sight. The variable star designation R Coronae Borealis was introduced, as "Coronae R" by Friedrich Wilhelm Argelander in 1850.
Variability
The variability of R Coronae Borealis was discovered by English astronomer Edward Pigott in 1795. In 1935 it was the first star shown to have a different chemical composition to the Sun via spectral analysis.
R Coronae Borealis is the prototype of the R Coronae Borealis class of variable stars. It is one of only two R Coronae Borealis variables bright enough to be seen with the naked eye, along with RY Sagittarii. Much of the time it shows variations of around a tenth of a magnitude with poorly defined periods that have been reported as 40 and 51 days. These correspond to the first overtone and fundamental radial pulsation modes for an extreme helium star slightly under .
At irregular intervals a few years or decades apart R Coronae Borealis fades from its normal brightness near 6th magnitude for a period of months or sometimes years. There is no fixed minimum, but the star can become fainter than 15th magnitude in the visual range. The fading is less pronounced at longer wavelengths. Typically the star starts to return to maximum brightness almost immediately from its minimum, although occasionally this is interrupted by another fade. The cause of this behaviour is believed to be a regular build-up of carbon dust in the star's atmosphere. The sudden drop in brightness may be caused by a rapid condensation of carbon-rich dust similar to soot, resulting in much of the star's light being blocked. The gradual restoration to normal brightness results from the dust being dispersed by radiation pressure.
In August 2007, R Coronae Borealis began a fade to an unprecedented minimum. It fell to 14th magnitude in 33 days, then continued to fade slowly, dropping below 15th magnitude in June 2009. It then began an equally slow rise, not reaching 12th magnitude until late 2011. This was an unusually deep and exceptionally long minimum, longer even than the deep five-year minimum which had occurred in 1962–7. It then faded again to near 15th magnitude, and by August 2014 it had been below 10th magnitude for 7 years. In late 2014, it brightened quickly to 7th magnitude but then began to fade again. By mid-2017, it had been below its "normal" brightness for ten years. It also reached a new record faint magnitude of 15.2.
Spectrum
R Coronae Borealis at maximum light shows the spectrum of a late F or early G yellow supergiant, but with marked peculiarities. Hydrogen lines are weak or absent, while carbon lines and molecular bands of cyanogen (CN) and C2 are exceptionally strong. Helium lines and metals such as calcium are also present. The spectrum is variable, most obviously during the brightness fades. The normal absorption spectrum is replaced by emission lines, especially HeI, CaII, NaI, and other metals. The lines are typically very narrow at this stage. Helium emission lines sometimes show P Cygni profiles. In deep minima, many of the metal lines disappear although the Ca doublet remains strong. Forbidden "nebular" lines of [OI], [OII], and [NII] can be detected at times.
The spectrum at maximum indicates that hydrogen in R Coronae Borealis is strongly depleted, helium is the dominant element, and carbon is strongly enhanced. At minimum, the spectrum shows the development of carbon clouds that obscure the photosphere, leaving chromospheric lines visible at times.
Properties
R Coronae Borealis is about 90% helium and less than 1% hydrogen. The majority of the remainder is carbon. This classifies it as a carbon-enhanced extreme helium star. Modelling the pulsations suggests that the star's mass is . The temperature at maximum is reasonably well known at 6,900K and appears to decrease during the fades as the photosphere is obscured by condensing dust.
The distance of R Coronae Borealis is not known exactly, but is estimated at 1.4 kiloparsecs from assumptions about its intrinsic brightness. The absolute magnitude of −5 is calculated by comparison with R CrB variables in the Large Magellanic Cloud whose distances are known quite accurately. The luminosity is estimated from helium star models to be and the star has a radius around . The Gaia data release 1 parallax also gives a distance of 1.4 kpc although with a considerable margin of error.
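Those figures can be cross-checked with the standard distance-modulus relation m − M = 5 log10(d / 10 pc); ignoring interstellar extinction, an absolute magnitude of −5 at roughly 1.4 kpc predicts an apparent magnitude near 6, consistent with the star's normal brightness. A short C sketch of the arithmetic:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double abs_mag = -5.0;      /* estimated absolute magnitude      */
    double dist_pc = 1400.0;    /* estimated distance in parsecs     */

    /* Distance modulus: m - M = 5 * log10(d / 10 pc), extinction ignored. */
    double app_mag = abs_mag + 5.0 * log10(dist_pc / 10.0);

    printf("predicted apparent magnitude: %.1f\n", app_mag);   /* about 5.7 */
    return 0;
}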
There is a fainter star 3" away from R Coronae Borealis, but it is believed to be a distant class K dwarf. Its colour and apparent magnitude are not consistent with being at the same distance as R Coronae Borealis.
Formation
There are two main models for the formation of R CrB stars: the merger of two white dwarfs; or a very late helium flash in a post-AGB star. Models of post-AGB stars calculate that a star with the appearance of R CrB would have a mass around so it is thought to have formed by the merger of a carbon-oxygen white dwarf and a helium white dwarf. The detection of significant lithium in the atmosphere is not easily explained by the merger model, but is a natural consequence of a late helium flash. Evolutionary models of post-AGB stars give a mass of for R CrB, but with a considerable margin of error.
Circumstellar material
Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 astronomical units from R Coronae Borealis, corresponding to a stream of fine dust (composed of grains about 5 nm in diameter) associated with the star's stellar wind, and coarser dust (composed of grains with a diameter of around 0.14 μm) ejected periodically. The obscuration appears to happen closer to the star as clouds of carbon condense at shock regions in an expanding front. "Puffs" of dust emitted from the star condense at about from the surface, and are visible as cometary knots when they lie to side of the star. There is also a shell about 4 pc wide containing dust at 25 K, which may be a fossil planetary nebula.
References
External links
AAVSO Variable Star of the Season, January 2000
141527
Corona Borealis
R Coronae Borealis variables
Coronae Borealis, R
5880
G-type supergiants
BD+28 2477
077442
IRAS catalogue objects
J15483440+2809242 | R Coronae Borealis | [
"Astronomy"
] | 1,581 | [
"Corona Borealis",
"Constellations"
] |
2,134,805 | https://en.wikipedia.org/wiki/Rolling%20block | A rolling-block action is a single-shot firearm action where the sealing of the breech is done with a specially shaped breechblock able to rotate on a pin. The breechblock is shaped like a section of a circle.
The breechblock is locked into place by the hammer, therefore preventing the cartridge from moving backward at the moment of firing. By cocking the hammer, the breechblock can be rotated freely to reload the breech of the weapon.
History
The Remington Rolling Block rifle is one of the most successful single-shot weapons ever developed. It is a strong, simple, and very reliable action that is not prone to jamming from debris or rough handling. It was invented by Leonard Geiger during the United States Civil War and patented in 1863; Geiger (along with his partner, Charles Alger) negotiated a royalty deal with Remington when the company put it into production as the so-called "split breech" action late in the war. That design was re-engineered by Joseph Rider in 1865 and called the "Remington System". The first firearm based on it, the Model 1865 Remington Pistol, was offered for sale to the United States Army and Navy in 1866. While the Army turned the design down, the Navy committed to purchase 5,000 pistols.
The first rifle based on this design was introduced at the Paris Exposition in 1867 and the United States Navy placed an order for 12,000 rifles. Within a year it had become the standard military rifle of several nations, including Sweden, Norway, and Denmark.
Many earlier percussion rifles and muskets were converted to rolling-block designs in the interim before the development of more modern bolt-action designs.
The Remington M1867, Springfield Model 1870, and Springfield Model 1871 rifles also used the rolling-block action.
Remington built estimated 1.5 million firearms with rolling-block action, encompassing rifles, carbines, shotguns and pistols.
Barton Jenks rolling block action
A single-shot action developed by Barton Jenks from Bridesburg, Philadelphia right after the Civil War was locked not by the hammer itself, but by a separate hinging piece on the breechblock; it was tested by the US military in 1866 but not adopted.
See also
Bolt action
Lever action
Pump action
Break action
Falling-block action
Semi-automatic rifle
References
Firearm actions
Firearm components | Rolling block | [
"Technology"
] | 475 | [
"Firearm components",
"Components"
] |
2,134,979 | https://en.wikipedia.org/wiki/Elbaite | Elbaite, a sodium, lithium, aluminium boro-silicate, with the chemical composition Na(Li1.5Al1.5)Al6Si6O18(BO3)3(OH)4, is a mineral species belonging to the six-member ring cyclosilicate tourmaline group.
Elbaite forms three series, with dravite, with fluor-liddicoatite, and with schorl. Due to these series, specimens with the ideal endmember formula are not found occurring naturally.
As a gemstone, elbaite is a desirable member of the tourmaline group because of the variety and depth of its colours and quality of the crystals. Originally discovered on the island of Elba, Italy in 1913, it has since been found in many parts of the world. In 1994, a major locality was discovered in Canada, at O'Grady Lakes in the Yukon.
Elbaite forms in igneous and metamorphic rocks and veins in association with lepidolite, microcline, and spodumene in granite pegmatites; with andalusite and biotite in schist; and with molybdenite and cassiterite in massive hydrothermal replacement deposits.
Elbaite is allochromatic, meaning trace amounts of impurities can tint crystals, and it can be strongly pleochroic. Every color of the rainbow may be represented by elbaite, some exhibiting multicolor zonation. Microscopic acicular inclusions in some elbaite crystals show the cat's eye effect in polished cabochons.
Elbaite varieties
Colorless: achroite variety
Red or pinkish-red: rubellite variety (from ruby)
Light blue to bluish green: Brazilian indicolite variety (from indigo)
Green: Brazilian verdelite variety (from emerald)
Watermelon tourmaline is a zoned variety with a reddish center surrounded by a green outer zone resembling watermelon rind, evident in cross-sectional slices of prisms, often displaying curved sides.
See also
List of minerals
References
Sodium minerals
Lithium minerals
Cyclosilicates
Trigonal minerals
Minerals in space group 160
Gemstones
Tourmalines | Elbaite | [
"Physics"
] | 463 | [
"Materials",
"Gemstones",
"Matter"
] |
2,135,319 | https://en.wikipedia.org/wiki/ITRON%20project | The ITRON project was the first sub-project of the TRON project. It has formulated and defined Industrial TRON (ITRON) specification for an embedded real-time OS (RTOS) kernel.
Originally undertaken in 1984, ITRON is a Japanese open standard for a real-time operating system initiated under the guidance of Ken Sakamura. This project aims to standardize the RTOS and related specifications for embedded systems, particularly small-scale embedded systems. The ITRON RTOS specification is targeted for consumer electronic devices, such as mobile phones and fax machines. Various vendors sell their own implementations of the RTOS.
Details
ITRON, and μITRON (sometimes also spelled uITRON or microITRON) are the names of RTOS specifications derived from ITRON projects. The 'μ' character indicates that the particular specification is meant for the smaller 8-bit or 16-bit CPU targets. Specifications are available for free. Commercial implementations are available, and offered under many different licenses.
A few sample source implementations of ITRON specification exist, as do many commercial source offerings.
Examples of open source RTOSes incorporating an API based on μITRON specification are eCos and RTEMS.
ITRON specification is meant for hard real-time embedded RTOS.
It is very popular in the embedded market, as there are many applications for it, i.e., devices with the OS embedded inside.
For example, there is an ACM Queue interview with Jim Ready, founder of MontaVista (a real-time Linux company), "Interview with Jim Ready", April 2003, ACM Queue. He says in the interview, "The single, most successful RTOS in Japan historically is μITRON. This is an indigenous open specification led by Dr. Ken Sakamura of the University of Tokyo. It is an industry standard there."
Many Japanese digital cameras, for example, have used an ITRON specification OS. Toyota automobiles have used an ITRON specification OS for engine control.
According to the "Survey Report on Embedded Real-time OS Usage Trends" conducted every year by TRON Forum at the Embedded Technology (ET, organized by Japan Embedded Systems Technology Association: JASA), ITRON specification OS has long held the top share in the embedded OS market in Japan and is adopted as the industry standard OS. For example, in the FY 2016 survey, TRON OSs (including ITRON specification OS and T-Kernel) accounted for around 60% of the embedded systems market. ITRON specification OSs (including μITRON) alone accounted for 43% of the market, and had a 20% lead over UNIX-based OSs (including POSIX), which are in second place behind TRON OSs.
Although ITRON specification may not be very well known overseas, OSs that conform to it have been installed in Japanese-made home appliances and exported around the world, so ITRON specification OS has a high market share. As of 2003, it was ranked number one in the world in terms of OS market share. Because its license could be easily obtained and it was free, it was used quite a bit in Asia.
μITRON (read as micro ITRON, not "mu" ITRON) specification started out as a subset of the original ITRON specification. However, after the version 3 of the μITRON specification appeared, since it covers both the low-end CPU market as well as large-scale systems, the term ITRON often refers to μITRON.
Supported CPUs are numerous: ARM, MIPS, x86, SH, FR-V, and many others, including CPUs supported by the open source RTOSes eCos and RTEMS, both of which include support for μITRON-compatible APIs.
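To give a flavour of the service-call style the specification defines, the fragment below sketches creating and starting a task with μITRON4.0-style names (the T_CTSK creation packet, cre_tsk, act_tsk, dly_tsk). Header names, task IDs, packet field order and the tick length vary between implementations, error checking is omitted, and many systems generate this setup from static configuration files instead, so treat it as a schematic outline rather than code for any particular kernel.

#include <kernel.h>          /* kernel interface header; exact name varies       */

#define BLINK_TSK 1          /* task ID chosen by the application (assumed free) */

/* Task body: a μITRON task receives an "extended information" argument. */
void blink_task(VP_INT exinf)
{
    for (;;) {
        /* ... toggle an LED, service a peripheral, etc. ... */
        dly_tsk(500);        /* delay; the time unit depends on the kernel tick  */
    }
}

void app_init(void)
{
    T_CTSK ctsk = {
        TA_HLNG,             /* task written in a high-level language            */
        0,                   /* exinf value passed to the task                   */
        (FP)blink_task,      /* task start address                               */
        5,                   /* initial priority                                 */
        1024,                /* stack size in bytes                              */
        NULL                 /* NULL: let the kernel allocate the stack          */
    };

    cre_tsk(BLINK_TSK, &ctsk);   /* create the task from its creation packet     */
    act_tsk(BLINK_TSK);          /* make it ready to run                         */
}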
History
TRON Project began designing the computer architecture as an infrastructure for future computer applications, and presented an overview of the basic design at the 29th National Convention of the Information Processing Society of Japan in 1984.
Around 1984: Development of μITRON began as a sub-project of TRON Project.
1987: A first comprehensive introduction of TRON Project in English was published in IEEE Micro ( Volume: 7, Issue: 2, April 1987).
ITRON1 specification was released.
1989: The specifications of ITRON2, the version 2 of ITRON, and of μITRON2, a subset of ITRON2, were released.
The version 2 specification was released in two forms: ITRON2, which was designed for large-scale 32-bit systems, and μITRON2, a subset specification for small-scale 8-16-bit single-chip microprocessors. (There is no version 1 of μITRON; the first version is μITRON2.) Of these, μITRON2 was used in a wide range of applications, including almost all major MCUs for embedded systems, because it could be used with MCUs with very low performance, while ITRON2 was adopted for large-scale 32-bit systems; the μITRON3 specification was later developed so that a single specification could handle 8-32 bit systems.
1993: μITRON3.0 specification was released.
μITRON3 covers small to large systems with a single specification by dividing system calls into different levels. It defines functions that are almost equivalent to the full set of ITRON2.
Around 1996: The second phase of the ITRON subproject began.
As embedded systems rapidly became larger and more complex around this time, there was a growing demand for greater application portability. As the performance of embedded systems improved, the functions that were not included in the ITRON specifications at the time because of their high overhead were added. According to the TRON Association survey described in the μITRON4 specification, at the time, the main concerns were not so much that "the OS used too many resources," but rather that "engineers could not use it properly" and "the differences in the specifications were too great, making it a burden to switch".
1999: μITRON4 specification was released.
The original ITRON specification OS was based on the idea of "weak standardization" so that it could be used with CPUs with low performance. However, as the use of middleware on ITRON increased, there was a demand for "strong standardization" to improve software portability, so the compatibility and strictness of the specifications were improved.
2000: T-Engine project started.
T-Kernel project started to promote ITRON standardization and create a next-generation RTOS, T-Kernel, with "stronger standardization" by creating a single source implementation and publishing it.
2010: TRON Association, which promoted TRON Project by publishing specification documents, holding technical seminars, etc., became part of T-Engine Forum.
2015: T-Engine Forum changed its name to TRON Forum.
2017: On 10 November 2017, the Institute of Electrical and Electronics Engineers acquired co-ownership of the copyright of the specification of μT-Kernel (read as micro T-Kernel) from TRON Forum. uT-Kernel is a logical successor of ITRON specification OS. The copyright of the μT-Kernel specification is now co-owned by the two parties. This was to facilitate the creation of IEEE Standard 2050-2018, IEEE Standard for a Real-Time Operating System (RTOS) for Small-Scale Embedded Systems based on μT-Kernel specification.
2023: IEEE recognized the RTOSs proposed, created, and released by TRON Project as an IEEE Milestone by referring to them as "TRON Real-time Operating System Family, 1984", and a certified plaque was installed on the campus of the University of Tokyo, where TRON Project leader Ken Sakamura worked as a research assistant in 1984.
Current Status
The latest version of the μITRON specification, as of 2016, is μITRON4, released in 1999, and the latest version of μITRON4 is 4.03.03, released in December 2006. The specification states that the plan is to design specifications that will allow for a smooth transition from μITRON to T-Kernel in the future. (The English specification is available: μITRON 4.0 Specification Ver. 4.03.00 )
Sakamura says that μITRON was already a "mature technology" in 2000. From the standpoint that more effort should be focused on the T-Kernel project than the ITRON project in the age of ubiquitous computing, μT-Kernel has been provided for small-scale systems, for which μITRON was traditionally used, and μT-Kernel 2.0 has also been provided for the IoT era.
T-Kernel is mainly used in embedded systems that require advanced information processing, but μITRON is still used in systems that do not require such advanced processing.
Main Adoption Examples
Design wins such as the Toyota PRADO (2005), which uses μITRON for its engine control system, are listed on the 30th anniversary of the TRON Project page. Other design wins that came after that date include the Nintendo Switch (2017), a game console which uses FreeBSD as the main OS of the main unit and μITRON4.0 for wireless communication control of the controller (Joy-Con).
Note, however, that during the time the ITRON specification OS was distributed, the TRON Project did not ask users to mention its use in the manual or the product itself, so no exact tally of the design wins exists.
μITRON is used as an OS in the invisible realm of devices such as business equipment, home appliances, and game console remote controls.
It is also used in advanced devices such as TV recording servers and automobiles, and under the advanced OS that controls the entire system, multiple MCUs and multiple OSs are installed to control them. Even if the main OS uses embedded Linux or embedded Windows, μITRON is running in the invisible area, such as the MCU for writing media on recording servers or the MCU for controlling the engine of automobiles. Nintendo Switch, a game console released by Nintendo in 2017, uses a FreeBSD-compliant OS as its main OS, but it uses an RTOS from eSOL that complies with the μITRON4.0 specification for controlling the near-field communication (NFC) of the controller (Joy-Con). Nintendo Switch uses a variety of platforms, including TRON OSs, such as the "PrFILE2 exFAT" for the file system of its main unit, which is part of "eCROS" platform based on T-Kernel by eSOL Co., Ltd., and the "Libnfc-nci" as the communication stack for handling NFC, which is part of the Android platform. In addition to Nintendo game consoles, advanced devices such as cars and smartphones are equipped with multiple OSs, including RTOSs, in addition to the main OS.
As an OS with a GUI that is closest to the average consumer, μITRON was widely used as the OS for the high-function mobile phones that became popular in Japan in the early to late 2000s. Microprocessor manufacturers that provide processors to mobile phone manufacturers, such as the SH-Mobile3, which was released by Renesas in 2004 and was used as the main CPU in many of the high-function mobile phones released in Japan in the mid-2000s, provided ITRON specification OSs as part of their platforms. ITRON specification OS was not standardized well, and each company customized the software for each mobile phone, causing the software to expand, and the OS customization became a problem for third-generation mobile communication system (3G) mobile phones. In 2003, NTT DoCoMo announced that it would be recommending Symbian OS and Linux as the OSs for its 3G FOMA service. Thus, from around 2005, "Galapagos" mobile phones also began to use general-purpose OSs like Linux rather than RTOSs like ITRON.
Even after μITRON is no longer used as the main OS for mobile phones, it may still be running in microprocessors for camera control, etc. For example, the "Milbeaut Mobile" image processing LSI, which was released by Fujitsu in 2003 and used in many of the high-function mobile phones with cameras that became popular in Japan in the early 2000s, used μITRON as its OS. The Milbeaut series is still being sold in the 2010s as an image processing LSI for dashboard cameras, drones, surveillance cameras, etc.
In multimedia devices from the 1990s to the early 2000s, in order to achieve advanced functions such as maximizing the performance of low-performance processors and controlling video processing and network communication in real time in parallel, it was necessary to use an RTOS such as ITRON. However, on the other hand, the burden on engineers was very great, and since the 2010s, when the performance of microprocessors has greatly improved, it is not recommended to use an RTOS to control such high-function devices. Basically, embedded Linux such as Android is used, and only the parts that require real-time performance use an RTOS. Because ITRON specification OS is not well standardized, TRON Forum recommends T-Kernel as an RTOS for high-function embedded systems. In embedded devices for general consumers in the early 2000s, the series of Colorio, Seiko Epson's printer, adopted "eCROS," a software platform based on T-Kernel from eSOL in 2008.
Loose vs Strong Standardization
ITRON's popularity comes from many factors, but one factor is the notion of "loose standardization": the API specification is at the source level, and does not specify binary API compatibility. This makes it possible for implementers to make use of features of the particular CPU model to which the implementation is targeted. The developer even has the freedom of choosing to pass the parameters using a consolidated packet, or separate parameters to API (system call, library call, etc.). Such freedom is important to make the best use of not so powerful 8-bit or 16-bit CPUs. This makes keeping the binary compatibility among different implementations impossible. This led to the development of T-Kernel in the 2000s in order to promote binary compatibility for middleware distribution. T-Kernel refers to both the specification and the single implementation based on the authorized source code available from TRON Forum (formerly T-Engine Forum) for free under T-License. So T-Kernel doesn't suffer from the binary API incompatibility.
ITRON specification was promoted by the various companies which sell the commercial implementations. There was also an NPO, the TRON Association, that promoted the specification by publishing it as well as other TRON specification OSes. But in the first quarter of 2010, TRON Association became part of T-Engine Forum, another non-profit organization that promotes other operating systems such as the next-generation RTOS, T-Kernel. T-Engine Forum, in turn, changed its name to TRON Forum in 2015.
JTRON (Java TRON) is a sub-project of ITRON to allow it to use the Java platform.
See also
Expeed – Nikon
Bionz – Sony
CxProcess – Konica Minolta
Softune – Fujitsu
References
External links
, TRON
Dr. Ken Sakamura Lab
ITRON project archive
Fair on TRON, technology showcase, occurs yearly, in English
The Most Popular Operating System in the World
TRON project
de:ITRON | ITRON project | [
"Technology"
] | 3,175 | [
"Computing platforms",
"TRON project"
] |
2,135,548 | https://en.wikipedia.org/wiki/Generosity | Generosity (also called largesse) is the virtue of being liberal in giving, often as gifts. Generosity is regarded as a virtue by various world religions and philosophies and is often celebrated in cultural and religious ceremonies.
Scientific investigation into generosity has examined the effect of a number of scenarios and games on individuals' generosity, potential links with neurochemicals such as oxytocin, and generosity's relationship with similar feelings such as empathy.
Other uses
Generosity often encompasses acts of charity, in which people give without expecting anything in return. This can involve offering time, assets, or talents to assist those in need, such as during natural disasters, where people voluntarily contribute resources, goods, and money. The impact of generosity is most profound when it arises spontaneously rather than being directed by an organization. People can experience joy and satisfaction when they positively affect someone's life through acts of generosity.
Generosity is a guiding principle for many registered charities, foundations, non-profit organizations, etc.
Etymology
The modern English word generosity derives from the Latin word , which means "of noble birth", which itself was passed down to English through the Old French word . The Latin stem is the declensional stem of , meaning "kin", "clan", "race", or "stock", with the root Indo-European meaning of being "to beget". The same root gives the words genesis, gentry, gender, genital, gentile, genealogy, and genius, among others.
Over the last five centuries in the English-speaking world, generosity has developed from being primarily the description of an ascribed status pertaining to the elite nobility to being an achieved mark of admirable personal quality and action capable of being exercised in theory by any person who had learned virtue and noble character.
Most recorded English uses of the word generous up to and during the sixteenth century reflect an aristocratic sense of being of noble lineage or high birth. To be generous was literally a way of saying that one belonged to the nobility.
During the 17th century, the meaning and use of the word began to change. Generosity came increasingly to identify not literal family heritage but a nobility of spirit thought to be associated with high birth—that is, with various admirable qualities that could now vary from person to person, depending not on family history but on personal character. Generosity came to signify gallantry, courage, strength, richness, gentleness, and fairness. In addition, generous became used to describe fertile land, the strength of animal breeds, abundant provisions of food, the vibrancy of colors, the strength of liquor, and the potency of medicine.
During the 18th century, the meaning of generosity continued to evolve to denote the more specific, contemporary meaning of munificence, open-handedness, and liberality in the giving of money and possessions to others. This more specific meaning came to dominate English usage by the 19th century.
In religion
In Buddhism, generosity is one of the Ten Perfections and is the antidote to the self-chosen poison called greed. Generosity is known as in the Eastern religious scriptures.
In Islam, the Quran states that whatever one gives away generously, with the intention of pleasing God, He will replace. God knows what is in the hearts of men. Say: “Truly, my Lord enlarges the provision for whom He wills of His slaves, and also restricts it for him, and whatever you spend of anything (in God’s Cause), He will replace it. And He is the Best of providers.”
In Christianity, in the Acts of the Apostles, Paul reports that Jesus said that giving is better than receiving, although the gospels do not record this as a saying of Jesus. In his first letter to Timothy, Paul tells rich Christians that they must be "generous and willing to share", and in his second letter to the Corinthians he states that "God loves a cheerful giver". Later Christian tradition came to treat generosity as part of the virtue of charity.
In philosophy
Immanuel Kant also contemplates generosity in a universal and disinterested form in his categorical imperative.
Research and scholarship
Research associates generosity with empathy. Paul J. Zak and colleagues administered the peptide oxytocin or a placebo to about 100 men, who then made several decisions regarding money. One scenario, the Dictator Game, was used to measure altruism by asking people to make a unilateral transfer of $10 they were given by the experimenters to a stranger in the lab; oxytocin had no effect on this measure of altruism. Another task, the Ultimatum Game, was used to measure generosity. In this game, one person was endowed with $10 and was asked to offer some split of it to another person in the lab, via computer. If the second person did not like the split, he could reject it (for example, if it was stingy) and both people would get zero. In a clever twist, the researchers told participants they would be randomly chosen to be either the person making the offer or the person responding to it. This required the person making the offer to take the other's perspective explicitly. Generosity was defined as an offer greater than the minimum amount needed for acceptance. Oxytocin increased generosity 80% compared to those on placebo. In addition, oxytocin was quantitatively twice as important in predicting generosity as was .
Research indicates that higher-income individuals are less generous than poorer individuals, and that a perceived higher economic inequality leads higher-income individuals to be less generous.
The science of generosity initiative at the University of Notre Dame investigates the sources, origins, and causes of generosity; manifestations and expressions of generosity; and consequences of generosity for givers and receivers. Generosity for the purposes of this project is defined as the virtue of giving good things to others empathically and abundantly.
The impact of external circumstances on generosity was explored by Milena Tsvetkova and Michael W. Macy. Generosity exhibited a form of social contagion, influencing people's willingness to be generous. The study examined two methods of spreading generous behaviour: generalized reciprocity and the influence of observing others' generous actions. The findings indicate that these methods increase the frequency of generous behaviors. However, a bystander effect can also arise, leading to a decrease in the frequency of such behaviors.
Peer punishment influences cooperation in human groups. In one set of laboratory experiments, participant roles included punishers, non-punishers, and generous and selfish people. Generous people were considered more trustworthy by participants than selfish people, and punishers were considered less trustworthy than non-punishers.
See also
References
External links
Center for Neuroeconomics Studies
Shareable: News on Sharing
On Generosity (G.W. Leibniz)
Philanthropy
Social concepts
Virtue
Fruit of the Holy Spirit | Generosity | [
"Biology"
] | 1,385 | [
"Philanthropy",
"Behavior",
"Altruism"
] |
2,135,845 | https://en.wikipedia.org/wiki/Meron%20%28physics%29 | A meron or half-instanton is a Euclidean space-time solution of the Yang–Mills field equations. It is a singular non-self-dual solution of topological charge 1/2. The instanton is believed to be composed of two merons.
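For orientation, the meron configuration is commonly written (in one frequently used convention, with the gauge coupling set to one and η denoting the 't Hooft symbol; normalizations vary between authors) as
A_\mu^a(x) = \eta^a_{\mu\nu} \, x^\nu / x^2
which is half of the corresponding pure-gauge (unit-charge) configuration, consistent with the topological charge of 1/2 mentioned above.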
A meron can be viewed as a tunneling event between two Gribov vacua. In that picture, the meron is an event which starts from vacuum, then a Wu–Yang monopole emerges, which then disappears again to leave the vacuum in another Gribov copy.
See also
BPST instanton
Dyon
Instanton
Monopole
Sphaleron
References
Gauge Fields, Classification and Equations of Motion, Moshe Carmeli, Kh. Huleilil and Elhanan Leibowitz, World Scientific Publishing
Gauge theories
Quantum chromodynamics | Meron (physics) | [
"Physics"
] | 169 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
2,135,870 | https://en.wikipedia.org/wiki/Wu%E2%80%93Yang%20monopole | The Wu–Yang monopole was the first solution (found in 1968 by Tai Tsun Wu and Chen Ning Yang) to the Yang–Mills field equations. It describes a magnetic monopole which is pointlike and has a potential which behaves like 1/r everywhere.
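In one common convention (SU(2) gauge group, gauge coupling g, temporal component A_0 = 0; signs and normalizations vary between authors), the Wu–Yang monopole gauge field is written as
A_i^a(x) = \varepsilon_{aij} \, x^j / (g r^2)
so that the magnitude of the potential falls off like 1/r, as noted above.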
See also
Meron
Dyon
Instanton
Wu–Yang dictionary
Notes
References
Gauge Fields, Classification and Equations of Motion, M.Carmeli, Kh. Huleilil and E. Leibowitz, World Scientific Publishing
Gauge theories
Magnetic monopoles | Wu–Yang monopole | [
"Physics",
"Astronomy"
] | 109 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Quantum mechanics",
"Magnetic monopoles",
"Quantum physics stubs"
] |
2,135,890 | https://en.wikipedia.org/wiki/Nitrosomonas | Nitrosomonas is a genus of Gram-negative bacteria, belonging to the Betaproteobacteria. It is one of the five genera of ammonia-oxidizing bacteria and, as an obligate chemolithoautotroph, uses ammonia (NH3) as an energy source and carbon dioxide (CO2) as a carbon source in the presence of oxygen. Nitrosomonas are important in the global biogeochemical nitrogen cycle, since they increase the bioavailability of nitrogen to plants, and in denitrification, which is important for the release of nitrous oxide, a powerful greenhouse gas. This microbe is photophobic and usually generates a biofilm matrix, or forms clumps with other microbes, to avoid light. Nitrosomonas can be divided into six lineages: the first includes the species Nitrosomonas europaea, Nitrosomonas eutropha, Nitrosomonas halophila, and Nitrosomonas mobilis. The second lineage comprises the species Nitrosomonas communis, N. sp. I and N. sp. II. The third lineage includes only Nitrosomonas nitrosa. The fourth lineage includes the species Nitrosomonas ureae and Nitrosomonas oligotropha. The fifth and sixth lineages include the species Nitrosomonas marina, N. sp. III, Nitrosomonas aestuarii, and Nitrosomonas cryotolerans.
Morphology
All species included in this genus have ellipsoidal or rod-shaped cells with extensive intracytoplasmic membranes that appear as flattened vesicles.
Most species are motile with a flagellum located in the polar region of the cell. Three basic morphological types of Nitrosomonas have been described: short rods, rods, and rods with pointed ends. Cells of the different Nitrosomonas species vary in size and shape:
N. europaea cells appear as short rods with pointed ends, with a size of 0.8–1.1 x 1.0–1.7 μm; motility has not been observed.
N. eutropha cells present as rod to pear shaped cells with one or both ends pointed, with a size of 1.0–1.3 x 1.6–2.3 μm. They show motility.
N. halophila cells have a coccoid shape and a size of 1.1–1.5 x 1.5–2.2 μm. Motility is possible because of a tuft of flagella.
N. communis has large rods with rounded end cells with a size of 1.0–1.4 x 1.7–2.2 μm. Motility has not been observed in this species.
N. nitrosa, N. oligotropha, and N. ureae cells are spheres or rods with rounded ends. Motility has not been observed in these species either.
N. marina presents slender rod cells with rounded ends with a size of 0.7–0.9 x 1.7–2.2 μm.
N. aestuarii and N. cryotolerans present as rod shaped cells.
Genome
Genome sequencing of Nitrosomonas species has been important to understand the ecological role of these bacteria.
Among the various species of Nitrosomonas that are known today, the complete genomes of N. ureae strain Nm10, N. europaea, and N. sp. Is79 have been sequenced.
Ammonia-oxidation genes
The presence of the genes for ammonia oxidation characterizes all these species. The first enzyme involved in the ammonia oxidation is ammonia monooxygenase (AMO), which is encoded by the amoCAB operon. The AMO enzyme catalyzes the oxidation from NH3 (ammonia) to NH2OH (hydroxylamine). The amoCAB operon contains three different genes: amoA, amoB and amoC. While N. europaea presents two copies of the genes, N. sp. Is79 and N. ureae strain Nm10 have three copies of these genes.
The second enzyme involved in the ammonia oxidation is hydroxylamine oxidoreductase (HAO), encoded by the hao operon. This enzyme catalyzes the oxidation from NH2OH to NO, a highly reactive radical intermediate that can be partitioned into both of the main AOB products: N2O, a potent greenhouse gas, and NO2-, a form of nitrogen more bioavailable for crops, but that conversely washes away from fields faster. The hao operon contains different genes such as the haoA, which encodes for the functional cytochrome c subunit, the cycA which encodes for cytochrome c554, and cycB that encodes for quinone reductase. These genes are present in different copies in various species; for instance, in Nitrosomonas sp. Is79 there are only three copies, while in N. ureae there are four.
Denitrification genes
Genes encoding enzymes involved in the denitrification process have also been identified. The first is nirK, which encodes a copper-containing nitrite reductase. This enzyme catalyzes the reduction from NO2– (nitrite) to NO (nitric oxide). While in N. europaea, N. eutropha, and N. cryotolerans nirK is included in a multigenetic cluster, in Nitrosomonas sp. Is79 and N. sp. AL212 it is present as a single gene. High expression of the nirK gene was found in N. ureae, and this has been explained by the hypothesis that the NirK enzyme is also involved in the oxidation of NH2OH in this species. The second gene involved in denitrification is norCBQD, which encodes a nitric-oxide reductase that catalyzes the reduction from NO (nitric oxide) to N2O (nitrous oxide). These genes are present in N. sp. AL212, N. cryotolerans, and N. communis strain Nm2. In Nitrosomonas europaea, these genes are included in a cluster. They are absent in N. sp. Is79 and N. ureae. Recently, it was found that the norSY gene encodes a copper-containing nitric-oxide reductase in N. communis strain Nm2 and Nitrosomonas AL212.
Carbon fixation genes
Nitrosomonas uses the Calvin-Benson cycle as a pathway for carbon fixation. For this reason, all of the species have an operon that encodes the RuBisCO enzyme. A peculiarity is found in N. sp. Is79, in which the two copies of the operon encode two different forms of the RuBisCO enzyme: the IA form and the IC form, the first of which has a higher affinity for carbon dioxide. Other species carry different numbers of copies of this operon, which encodes only the IA form. In N. europaea, the operon comprises five genes (cbbL, cbbS, cbbQ, cbbO, and cbbN) that encode the RuBisCO enzyme. cbbL encodes the large subunit while cbbS encodes the small subunit; these genes are also the most highly expressed within the operon. The cbbQ and cbbO genes encode a number of proteins involved in the processing, folding, assembly, activation, and regulation of the RuBisCO enzyme. cbbN encodes a protein of 101 amino acids whose function is not yet known. A putative regulatory gene, cbbR, was found 194 bases upstream of the start codon of cbbL and is transcribed in the opposite direction to the other genes.
Transporter genes
Since Nitrosomonas are part of the ammonia-oxidizing bacteria (AOB), ammonia carriers are important to them. Bacteria adapted to high concentrations of ammonia can absorb it passively by simple diffusion. Indeed, N. eutropha, which is adapted to high levels of ammonia, does not have genes that encode an ammonia transporter. Bacteria adapted to low concentrations of ammonia have a transporter (a transmembrane protein) for this substrate. In Nitrosomonas, two different carriers for ammonia have been identified, differing in structure and function. The first transporter is the Amt protein (amtB type), encoded by amt genes, and was found in Nitrosomonas sp. Is79. The activity of this ammonia carrier depends on the membrane potential. The second was found in N. europaea, wherein the rh1 gene encodes an Rh-type ammonia carrier. Its activity is independent of the membrane potential. Recent research has also linked Rh transmembrane proteins with CO2 transport, but this is not clear yet.
Metabolism
Nitrosomonas is one of the genera included in AOB and use ammonia as an energy source and carbon dioxide as the main source of carbon. The oxidation of ammonia is a rate-limiting step in nitrification and plays a fundamental role in the nitrogen cycle, because it transforms ammonia, which is usually extremely volatile, into less volatile forms of nitrogen.
Ammonia-oxidation
Nitrosomonas oxidizes ammonia into nitrite in a metabolic process, known as nitritation (a step of nitrification). This process occurs with the accompanying reduction of an oxygen molecule to water (which requires four electrons), and the release of energy. The oxidation of ammonia to hydroxylamine is catalyzed by ammonia monooxygenase (AMO), which is a membrane-bound, multisubstrate enzyme. In this reaction, two electrons are required to reduce an oxygen atom to water:
NH3 + O2 + 2 H+ + 2 e– → NH2OH + H2O
Since an ammonia molecule only releases two electrons when oxidized, it has been assumed that the other two necessary electrons come from the oxidation of hydroxylamine to nitrite, which occurs in the periplasm and is catalyzed by hydroxylamine oxidoreductase (HAO), a periplasm-associated enzyme.
NH2OH + H2O → NO2– + 5 H+ + 4 e–
Two of the four electrons released by the reaction return to AMO to convert ammonia into hydroxylamine. Of the two remaining electrons, 1.65 are available for the assimilation of nutrients and the generation of the proton gradient; they pass through cytochrome c552 to cytochrome caa3 and then to O2, the terminal acceptor, which is reduced to form water. The remaining 0.35 electrons are used to reduce NAD+ to NADH.
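For reference, summing the two half-reactions above with the terminal reduction of oxygen by the electrons passed down the chain (0.5 O2 + 2 H+ + 2 e– → H2O) gives the overall nitritation reaction; this is a simplification that neglects the small fraction of electrons diverted to NAD+ and to biosynthesis:
NH3 + 1.5 O2 → NO2– + H2O + H+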
Nitrite is the major nitrogen oxide produced in the process, but it has been observed that, when oxygen concentrations are low, nitrous oxide and nitric oxide can also form, as by-products from the oxidation of hydroxylamine to nitrite.
The species N. europaea has been identified as being able to degrade a variety of halogenated compounds including trichloroethylene, benzene, and vinyl chloride.
Ecology
Habitat
Nitrosomonas is generally found in highest numbers in habitats in which there is an abundance of ammonia (environments with plentiful protein decomposition, or wastewater treatment plants). It thrives in a pH range of 6.0–9.0 and within a limited temperature range. Some species can live and proliferate on the surfaces of monuments and the walls of stone buildings, contributing to the erosion of those surfaces.
It is found in many types of water, being globally distributed in both eutrophic and oligotrophic freshwater and saltwater, occurring especially in shallow coastal sediments and in upwelling zones such as the Peruvian coast and the Arabian Sea, but it can also be found in fertilized soils.
Some Nitrosomonas species, such as N. europaea, possess the enzyme urease (which catalyzes the conversion of urea into ammonia and carbon dioxide) and have been shown to assimilate the carbon dioxide released by the reaction to make biomass via the Calvin cycle, and to harvest energy by oxidizing ammonia (the other product of urease) to nitrite. This feature may explain enhanced growth of AOB in the presence of urea in acidic environments.
Leaching of soil
In agriculture, nitrification by Nitrosomonas represents a problem because the nitrite formed by the oxidation of ammonia can be leached from the soil, making nitrogen less available to plants.
Nitrification can be slowed by inhibitors that retard the oxidation of ammonia to nitrite by inhibiting the activity of Nitrosomonas and other ammonia-oxidizing bacteria, thereby minimizing or preventing the loss of nitrate. (Read more about inhibitors in the section 'Inhibitors of nitrification' of the article Nitrification.)
Application
Nitrosomonas is used in activated sludge in aerobic wastewater treatment; nitrification reduces the nitrogen compounds in the water in order to avoid environmental issues such as ammonia toxicity and groundwater contamination. Nitrogen, if present in high quantities, can cause algal growth, leading to eutrophication and the degradation of oceans and lakes.
In wastewater treatment, biological removal of nitrogen is achieved at lower cost and with less environmental damage than physical-chemical treatments.
Nitrosomonas also has a role in biofilter systems, typically in association and collaboration with other microbes, to consume compounds such as NH4+ or CO2 and recycle nutrients. These systems are used for various purposes, but mainly for the elimination of odors from waste treatment.
Other uses
Potential cosmetic benefits
N. europaea is a non-pathogenic bacterium studied in connection with probiotic therapies. In this context, it may offer aesthetic benefits in terms of reducing the appearance of wrinkles. The effectiveness of probiotic products has been studied to explore why N. eutropha, a highly mobile bacterium, has disappeared from the normal flora of human skin, and whether benefits can be obtained through the repopulation and reintroduction of N. eutropha to the normal skin flora.
See also
Nitrate
Nitrite
Nitrobacter
Nitrobacteraceae
Nitrogen cycle
Nitrospira
Nitrospirota
References
George M. Garrity: Bergey's manual of systematic bacteriology. 2. Auflage. Springer, New York, 2005, Vol. 2: The Proteobacteria Part C: The Alpha-, Beta-, Delta-, and Epsilonproteobacteria
Soil biology
Nitrosomonadaceae
Bacteria genera | Nitrosomonas | [
"Biology"
] | 3,189 | [
"Soil biology"
] |
2,135,947 | https://en.wikipedia.org/wiki/Feeding%20frenzy | In ecology, a feeding frenzy is a type of animal group activity that occurs when predators are overwhelmed by the amount of prey available. The term is also used as an idiom in the English language.
Examples in nature
For example, a large school of fish can cause nearby sharks, such as the lemon shark, to enter into a feeding frenzy. This can cause the sharks to go wild, biting anything that moves, including each other or anything else within biting range. Another functional explanation for feeding frenzy is competition amongst predators. This term is most often used when referring to sharks or piranhas.
English language uses
It has also been used as a term within journalism.
The term is occasionally used to describe a plethora of something. For instance, a 2016 Bloomberg News article is entitled: "March Madness Is a Fantasy Sports Feeding Frenzy."
In economics, the term has been used to describe the music industry, as large music companies acquired smaller music companies.
See also
Bait ball
Adage
Comprehension of idioms
Idiom in English language
Media feeding frenzy
Phrasal verb
Metaphor
References
Eating behaviors
Idioms
Adages | Feeding frenzy | [
"Biology"
] | 242 | [
"Biological interactions",
"Eating behaviors",
"Behavior"
] |
2,135,962 | https://en.wikipedia.org/wiki/Snapshot%20%28computer%20storage%29 | In computer systems, a snapshot is the state of a system at a particular point in time. The term was coined as an analogy to that in photography.
Rationale
A full backup of a large data set may take a long time to complete. On multi-tasking or multi-user systems, there may be writes to that data while it is being backed up. This prevents the backup from being atomic and introduces a version skew that may result in data corruption. For example, if a user moves a file into a directory that has already been backed up, then that file would be completely missing on the backup media, since the backup operation had already taken place before the addition of the file. Version skew may also cause corruption with files which change their size or contents underfoot while being read.
One approach to safely backing up live data is to temporarily disable write access to data during the backup, either by stopping the accessing applications or by using the locking API provided by the operating system to enforce exclusive read access. This is tolerable for low-availability systems (on desktop computers and small workgroup servers, on which regular downtime is acceptable). High-availability 24/7 systems, however, cannot bear service stoppages.
To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. Most snapshot implementations are efficient and can create snapshots in O(1). In other words, the time and I/O needed to create the snapshot does not increase with the size of the data set; by contrast, the time and I/O required for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity than if the data set was repeatedly cloned.
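The pointer-based, copy-on-write idea can be illustrated with a toy sketch (an illustrative model only, not the implementation used by any particular product):

```python
# Toy model of pointer-based, copy-on-write snapshots of a block device.
# Taking a snapshot is O(1): it only records an (initially empty) block map.
# Old data is copied lazily, just before a block is overwritten.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))   # block number -> data
        self.snapshots = []                     # each snapshot: {block: old data}

    def snapshot(self):
        """Create a snapshot in O(1); no data is copied yet."""
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        """Copy-on-write: preserve the old contents for every snapshot
        that has not yet saved its own copy of this block."""
        for snap in self.snapshots:
            if block_no not in snap:
                snap[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        """A snapshot returns its saved copy if the block has changed,
        otherwise the live block (which still holds the original data)."""
        return snap.get(block_no, self.blocks[block_no])

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()
vol.write(1, "B")
print(vol.read_snapshot(snap, 1))   # 'b'  (the pre-snapshot contents)
print(vol.blocks[1])                # 'B'  (the live volume sees the new data)
```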
Implementations
Volume managers
Some Unix systems have snapshot-capable logical volume managers. These implement copy-on-write on entire block devices by copying changed blocks (just before they are to be overwritten within "parent" volumes) to other storage, thus preserving a self-consistent past image of the block device. Filesystems on such snapshot images can later be mounted as if they were on read-only media.
Some volume managers also allow creation of writable snapshots, extending the copy-on-write approach by disassociating any blocks modified within the snapshot from their "parent" blocks in the original volume. Such a scheme could be also described as performing additional copy-on-write operations triggered by the writes to snapshots.
On Linux, Logical Volume Manager (LVM) allows creation of both read-only and read-write snapshots. Writable snapshots were introduced with the LVM version 2 (LVM2).
File systems
Some file systems, such as WAFL, fossil for Plan 9 from Bell Labs, and ODS-5, internally track old versions of files and make snapshots available through a special namespace. Others, like UFS2, provide an operating system API for accessing file histories. In NTFS, access to snapshots is provided by the Volume Shadow Copy Service (VSS) in Windows XP and Windows Server 2003, and by Shadow Copy in Windows Vista. Melio FS provides snapshots via the same VSS interface for shared storage. Snapshots have also been available in the NSS (Novell Storage Services) file system on NetWare since version 4.11, and more recently on Linux platforms in the Open Enterprise Server product.
EMC's Isilon OneFS clustered storage platform implements a single scalable file system that supports read-only snapshots at the file or directory level. Any file or directory within the file system can be snapshotted and the system will implement a copy-on-write or point-in-time snapshot dynamically based on which method is determined to be optimal for the system.
On Linux, the Btrfs and OCFS2 file systems support creating snapshots (cloning) of individual files. Additionally, Btrfs also supports the creation of snapshots of subvolumes. On AIX, JFS2 also supports snapshots.
See also
Application checkpointing
Persistence (computer science)
Sandbox (computer security)
Storage Hypervisor
System image
Virtual machine
Notes
References
External links
Backup
Fault-tolerant computer systems
Persistence | Snapshot (computer storage) | [
"Technology",
"Engineering"
] | 962 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems",
"Backup"
] |
2,136,049 | https://en.wikipedia.org/wiki/Explosimeter | An explosimeter is a gas detector which is used to measure the amount of combustible gases present in a sample. When a percentage of the lower explosive limit (LEL) of an atmosphere is exceeded, an alarm signal on the instrument is activated.
The device, also called a combustible gas detector, operates on the principle of resistance proportional to heat—a wire is heated, and a sample of the gas is introduced to the hot wire. Combustible gases burn in the presence of the hot wire, thus increasing the resistance and disturbing a Wheatstone bridge, which gives the reading.
A flashback arrestor is installed in the device to avoid the explosimeter igniting the sample external to the device.
Note that the detection readings of an explosimeter are only accurate if the gas being sampled has the same characteristics and response as the calibration gas. Most explosimeters are calibrated to methane or hydrogen.
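As a purely illustrative sketch of the reading chain described above (the calibration constant and response factors below are hypothetical values, not taken from any real instrument):

```python
# Illustrative sketch of a catalytic ("hot wire") explosimeter reading:
# combustion on the heated wire raises its resistance, the change unbalances
# a Wheatstone bridge, and the bridge output is scaled to a %LEL reading.
# All numbers below are hypothetical.

BRIDGE_MV_PER_PERCENT_LEL = 0.5    # hypothetical calibration with methane

# Hypothetical correction factors relative to the methane calibration gas.
RESPONSE_FACTORS = {"methane": 1.0, "propane": 1.6, "hydrogen": 0.9}

def percent_lel(bridge_mv, gas="methane"):
    """Convert a bridge imbalance (in millivolts) to %LEL, applying a
    response factor when the sampled gas differs from the calibration gas."""
    raw = bridge_mv / BRIDGE_MV_PER_PERCENT_LEL
    return raw * RESPONSE_FACTORS.get(gas, 1.0)

print(percent_lel(10.0))              # 20.0 %LEL for the calibration gas
print(percent_lel(10.0, "propane"))   # the same signal reads higher for propane
```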
References
External links
https://web.archive.org/web/20050910075254/http://www.marineengineering.org.uk/testequipment/explosimeter.htm (select explosimeter from the left frame)
Explosimetry
Explosion protection
Gas technologies
Measuring instruments | Explosimeter | [
"Chemistry",
"Technology",
"Engineering"
] | 268 | [
"Explosion protection",
"Combustion engineering",
"Explosions",
"Measuring instruments"
] |
2,136,757 | https://en.wikipedia.org/wiki/Alarm%20signal | In animal communication, an alarm signal is an antipredator adaptation in the form of signals emitted by social animals in response to danger. Many primates and birds have elaborate alarm calls for warning conspecifics of approaching predators. For example, the alarm call of the blackbird is a familiar sound in many gardens. Other animals, like fish and insects, may use non-auditory signals, such as chemical messages. Visual signs such as the white tail flashes of many deer have been suggested as alarm signals; they are less likely to be received by conspecifics, so have tended to be treated as a signal to the predator instead.
Different calls may be used for predators on the ground or from the air. Often, the animals can tell which member of the group is making the call, so that they can disregard those of little reliability.
Evidently, alarm signals promote survival by allowing the receivers of the alarm to escape from the source of peril; this can evolve by kin selection, assuming the receivers are related to the signaller. However, alarm calls can increase individual fitness, for example by informing the predator it has been detected.
Alarm calls are often high-frequency sounds because these sounds are harder to localize.
Selective advantage
This cost/benefit tradeoff of alarm-calling behaviour has sparked much debate among evolutionary biologists seeking to explain the occurrence of such apparently "self-sacrificing" behaviour. The central question is this: "If the ultimate purpose of any animal behaviour is to maximize the chances that an organism's own genes are passed on, with maximum fruitfulness, to future generations, why would an individual deliberately risk destroying itself (their entire genome) for the sake of saving others (other genomes)?".
Altruism
Some scientists have used the evidence of alarm-calling behaviour to challenge the theory that "evolution works only/primarily at the level of the gene and of the gene's 'interest' in passing itself along to future generations." If alarm-calling is truly an example of altruism, then human understanding of natural selection becomes more complicated than simply "survival of the fittest gene".
Other researchers, generally those who support the selfish gene theory, question the authenticity of this "altruistic" behaviour. For instance, it has been observed that vervets sometimes emit calls in the presence of a predator, and sometimes do not. Studies show that these vervets may call more often when they are surrounded by their own offspring and by other relatives who share many of their genes. Other researchers have shown that some forms of alarm calling, for example, "aerial predator whistles" produced by Belding's ground squirrels, do not increase the chances that a caller will get eaten by a predator; the alarm call is advantageous to both caller and recipient by frightening and warding off the predator.
Predator-directed signaling
Another theory suggests that alarm signals function to attract further predators, which fight over the prey organism, giving it a better chance of escape. Others still suggest they are a deterrent to predators, communicating the prey's alertness to the predator. One such case is the western swamphen (Porphyrio porphyrio), which gives conspicuous visual tail flicks (see also aposematism, handicap principle and stotting).
Further research
Considerable research effort continues to be directed toward the purpose and ramifications of alarm-calling behaviour, because, to the extent that this research has the ability to comment on the occurrence or non-occurrence of altruistic behaviour, these findings can be applied to the understanding of altruism in human behaviour.
Monkeys with alarm calls
Vervet monkeys
Vervet monkeys (Chlorocebus pygerythrus) are among the most studied nonhuman primates when it comes to vocalization and alarm calls. They are best known for making alarm calls in the presence of their most common predators (leopards, eagles, and snakes). Alarm calls of the vervet monkey are considered arbitrary in relation to the predator that they signify, in the sense that while the calls may be specific to the threat that the monkeys are perceiving, the calls do not mimic the actual sounds of the predator: it is like yelling "Danger!" when seeing an angry dog rather than making barking sounds. This type of alarm call is seen as the earliest example of symbolic communication (the relationship between signifier and signified being arbitrary and purely conventional) in nonhuman primates.
However, there is much debate about whether the vervet monkeys' alarm calls are actual "words", in the sense of purposely manipulating sounds to communicate a specific meaning, or are unintentional sounds made when interacting with an outside stimulus. Small children who cannot yet communicate with words make random noises when being played with or when stimulated by something in their immediate environment. As children grow and begin learning how to communicate, the noises they make are still very broad in relation to their environment. They begin to recognize the things in their environment, but there are more things than known words or noises, so a certain sound may refer to multiple things. As children get older, they become more specific about the noises and words made in relation to the things in their environment. It is thought that, as vervet monkeys get older, they similarly learn to break broad categories into more specific subcategories tied to a specific context.
In an experiment conducted by Dr. Tabitha Price and colleagues, custom software was used to gather the acoustic sounds of male and female vervet monkeys from East Africa and of male vervet monkeys from South Africa. The point of the experiment was to record the calls these monkeys made when stimulated by the presence of snakes (mainly pythons), raptors, terrestrial animals (mostly leopards), and aggression, and then to determine whether the calls could be distinguished given a known context.
The experiment determined that while the vervet monkeys were able to categorize different predators and members of different social groups, their ability to communicate specific threats was not proven. The chirps and barks that vervet monkeys make as an eagle swoops in are the same chirps and barks that are made in moments of high arousal. Similarly, the barks made for leopards are the same as those made during aggressive interactions. The environment they exist in is too complex for them to communicate specifically about everything in it.
In an experiment conducted by Dr. Julia Fischer, a drone was flown over vervet monkeys and the sounds they produced were recorded. The vervet monkeys made alarm calls that were almost identical to the eagle calls of East African vervets. When a sound recording of the drone was played back a few days later to a monkey that was alone and away from the main group, it looked up and scanned the sky. Dr. Fischer concluded that vervet monkeys can be exposed to a new threat once and understand what it means.
It is still debated whether or not vervet monkeys are actually aware of what the alarm calls mean. One side of the argument is that the monkeys give alarm calls because they are simply excited. The other side of the argument is that the alarm calls create mental representations of predators in the listeners' minds. The common middle-ground argument is that they give alarm calls because they want to elicit a certain response from others, not necessarily because they want the group to think that there is a specific threat near.
Ultimately there is not enough evidence to support whether or not the calls are simply identifying a threat or calling for specific action due to the threat.
Campbell's mona monkeys
Campbell's mona monkeys also produce alarm calls, but in a different way than vervet monkeys. Instead of having discrete calls for each predator, Campbell's monkeys have two basic call types that are modified by affixes along an acoustic continuum, changing their meaning. It has been suggested that this is a homology to human morphology. Similarly, the cotton-top tamarin is able to use a limited vocal range of alarm calls to distinguish between aerial and land predators. Both the Campbell's monkey and the cotton-top tamarin have demonstrated abilities similar to the vervet monkeys' ability to distinguish the likely direction of predation and appropriate responses.
That these three species use vocalizations to warn others of danger has been taken by some as evidence of proto-language in primates. However, there is some evidence that this behavior refers not to the predators themselves but to the threat, which would distinguish such calls from words.
Barbary macaque
Another species that exhibits alarm calls is the Barbary macaque. Barbary macaque mothers are able to recognize their own offspring's calls and behave accordingly.
Diana monkeys
Diana monkeys also produce alarm signals. Adult males respond to each other's calls, showing that calling can be contagious. Their calls differ based on signaller sex, threat type, habitat, and caller ontogenetic or lifetime predator experience.
Diana monkeys emit different alarm calls as a result of their sex. Male alarm calls are primarily used for resource defence, male–male competition, and communication between groups of conspecifics. Female alarm calls are mainly used for communication within groups of conspecifics to avoid predation.
Alarm calls are also predator-specific. In Taï National Park, Côte d'Ivoire, Diana monkeys are preyed on by leopards, eagles, and chimpanzees, but only emit alarm calls for leopards and eagles. When threatened by chimpanzees, they use silent, cryptic behaviour and when threatened by leopards or eagles, they emit predator-specific alarm signals. When researchers play recordings of alarm calls produced by chimpanzees in response to predation by leopards, about fifty per cent of nearby Diana monkeys switch from a chimpanzee antipredator response to a leopard antipredator response. The tendency to switch responses is especially prominent among Diana monkey populations that live within the main range of the chimpanzee community. This shift in antipredator response suggests that the monkeys interpret chimpanzee-produced, leopard-induced alarm calls as evidence for the presence of a leopard. When the same monkeys are then played recordings of leopard growls, their reactions confirm that they had anticipated the presence of a leopard. There are three possible cognitive mechanisms explaining how Diana monkeys recognize chimpanzee-produced, leopard-induced alarm calls as evidence for a nearby leopard: associative learning, causal reasoning, or a specialized learning programme driven by adaptive antipredator behaviour necessary for survival.
In Taï National Park and Tiwai Island, Sierra Leone, specific acoustic markers in the alarm calls of Diana monkeys convey both threat type and caller familiarity information to a receiver. In Taï National Park, males respond to eagle alarm signals based on predator type and caller familiarity. When the caller is unfamiliar to the receiver, the response call is a 'standard' eagle alarm call, characterized by a lack of frequency transition at the onset of the call. When the caller is familiar, the response call is an atypical eagle alarm call, characterized by a frequency transition at onset, and the response is faster than to that of an unfamiliar caller. On Tiwai Island, males respond in the opposite way to eagle alarm signals. When the caller is familiar, the response call is a 'standard' eagle alarm call, without a frequency transition at onset. When the caller is unfamiliar, the response call is an atypical eagle alarm call, with a frequency transition at onset.
The differences in alarm call responses are due to differences in habitat. In Taï National Park, there is a low predation risk from eagles, high primate abundance, strong intergroup competition, and a tendency for group encounters to result in high levels of aggression. Therefore, even familiar males are a threat to whom males respond with aggression and an atypical eagle alarm call. Only unfamiliar males, who are likely to be solitary and non-threatening, do not receive an aggressive response and receive only a typical alarm call. On Tiwai Island, there is a high predation risk from eagles, low primate abundance, a tendency for group encounters to result in peaceful retreats, low resource competition, and frequent sharing of foraging areas. Therefore, there is a lack of aggression towards familiar conspecifics to whom receivers respond with a 'standard' eagle call. There is only aggression towards unfamiliar conspecifics, to whom receivers respond with an atypical call. Simply put, a response with a typical eagle alarm call prioritizes the risk of predation, while a response with an atypical alarm call prioritizes social aggression.
Diana monkeys also display a predisposition for flexibility in acoustic variation of alarm call assembly related to caller ontogenetic or lifetime predator experience. In Taï National Park and on Tiwai Island, monkeys have a predisposition to threat-specific alarm signals. In Taï National Park, males produce three threat-specific calls in response to three threats: eagles, leopards, and general disturbances. On Tiwai Island, males produce two threat-specific calls in response to two groups of threats: eagles, and leopards or general disturbances. The latter are likely grouped together because leopards have not been present on the island for at least 30 years. Other primates, such as Guereza monkeys and putty-nosed monkeys, also have two main predator-specific assemblies of alarm calls. Predator-specific alarm signals differ based on call sequence assembly. General disturbances in Taï National Park and both general disturbances and leopards on Tiwai Island result in alarm calls assembled into long sequences. Conversely, leopards in Taï National Park result in alarm calls that typically begin with voiced inhalations followed by a small number of calls. These differences in alarm call arrangement between habitats are due to ontogenetic experience; specifically, a lack of experience with leopards on Tiwai Island causes them to be classified in the same predator category as general disturbances, and accordingly, leopards receive the same type of alarm call arrangement.
Sexual selection for predator-specific alarm signals
In guenons, sexual selection is thought to be responsible for the evolution of loud calls from predator-specific alarm calls. Loud calls travel long distances, greater than that of the home range, and can be used as beneficial alarm calls to warn conspecifics or to showcase the caller's awareness of a predator and deter it. A spectrogram of a subadult male call shows that the call is a composition of elements from a female alarm call and a male loud call, suggesting a transition from the former to the latter during puberty and suggesting that alarm calls gave rise to loud calls through sexual selection. Evidence of sexual selection in loud calls includes structural adaptations for long-range communication, the co-incidence of loud calls and sexual maturity, and sexual dimorphism in loud calls.
Controversy over the semantic properties of alarm calls
Not all scholars of animal communication accept the interpretation of alarm signals in monkeys as having semantic properties or transmitting "information". Prominent spokespersons for this opposing view are Michael Owren and Drew Rendall, whose work on this topic has been widely cited and debated. The alternative to the semantic interpretation of monkey alarm signals as suggested in the cited works is that animal communication is primarily a matter of influence rather than information, and that vocal alarm signals are essentially emotional expressions influencing the animals that hear them. In this view monkeys do not designate predators by naming them, but may react with different degrees of vocal alarm depending on the nature of the predator and its nearness on detection, as well as by producing different types of vocalization under the influence of the monkey's state and movement during the different types of escape required by different predators. Other monkeys may learn to use these emotional cues along with the escape behaviour of the alarm signaller to help make a good decision about the best escape route for themselves, without there having been any naming of predators.
Chimpanzees with alarm calls
Chimpanzees emit alarm calls in response to predators, such as leopards and snakes. They produce three types of alarm calls: acoustically-variable 'hoos', 'barks', and 'SOS screams'. Alarm signalling is impacted by receiver knowledge and caller age, can be coupled with receiver monitoring, and is important to the understanding of the evolution of hominoid communication.
Receiver knowledge
Alarm signalling varies depending on the receiver's knowledge of a certain threat. Chimpanzees are significantly more likely to produce an alarm call when conspecifics are unaware of a potential threat or were not nearby when a previous alarm call was emitted. When judging if conspecifics are unaware of potential dangers, chimpanzees do not solely look for behavioural cues, but also assess receiver mental states and use this information to target signalling and monitoring. In a recent experiment, caller chimpanzees were shown a fake snake as a predator and were played pre-recorded calls from receivers. Some receivers emitted calls that were snake-related, and therefore represented receivers with knowledge of the predator, while other receivers emitted calls that were not snake-related, and therefore represented receivers without knowledge of the predator. In response to the non-snake-related calls from receivers, the signallers increased their vocal and nonvocal signalling and coupled it with increased receiver monitoring.
Caller age
Chimpanzee age impacts the frequency of alarm signalling. Chimpanzees over 80 months of age are more likely to produce an alarm call than those less than 80 months of age. There are several hypotheses for this lack of alarm calling in infants zero to four years of age. The first hypothesis is a lack of motivation to produce alarm calls because of mothers in close proximity that minimize the infant's perception of a threat or that respond to a threat before the infant can. Infants may also be more likely to use distress calls to catch their mother's attention in order for her to produce an alarm call. Infants might also lack the physical ability to produce alarm calls or lack the necessary experience to classify unfamiliar objects as dangerous and worthy of an alarm signal. Therefore, alarm calling may require advanced levels of development, perception, categorization, and social cognition.
Other factors
Other factors, such as signaller arousal, receiver identity, or increased risk of predation from calling, do not have a significant effect on the frequency of alarm call production.
Receiver monitoring
However, while alarm signals can be coupled with receiver monitoring, there is a lack of consensus on the definition, starting age, and purpose of monitoring. It is either defined as the use of three subsequent gaze alternations, from a threat to a nearby conspecific and back to the threat, or as the use of two gaze alternations. Moreover, while some studies only report gaze alternation as starting in late juveniles, other studies report gaze alternation in infants as early as five months of age. In infants and juveniles, it is potentially a means of social referencing or social learning through which younger chimpanzees check the reactions of more experienced conspecifics in order to learn about new situations, such as potential threats. It has also been proposed to be a communicative behaviour or simply the result of shifts in attention between different environmental elements.
Evolution of hominoid communication
The evolution of hominoid communication is evident through chimpanzee 'hoo' vocalizations and alarm calls. Researchers propose that communication evolved as natural selection diversified 'hoo' vocalizations into context-dependent 'hoos' for travel, rest, and threats. Context-dependent communication is beneficial and likely maintained by selection as it facilitates cooperative activities and social cohesion between signallers and receivers that can increase the likelihood of survival. Alarm calls in chimpanzees also point to the evolution of hominoid language. Callers assess conspecifics' knowledge of threats, fill their need for information, and, in doing so, use social cues and intentionality to inform communication. Filling a gap in information and incorporating social cues and intentionality into communication are all components of human language. These shared elements between chimpanzee and human communication suggest an evolutionary basis, most likely that the last common ancestor of humans and chimpanzees also possessed these linguistic abilities.
False alarm calls
Deceptive alarm calls are used by male swallows (Hirundo rustica). Males give these false alarm calls when females leave the nest area during the mating season, and are thus able to disrupt extra-pair copulations. As this is likely to be costly to females, it can be seen as an example of sexual conflict.
Counterfeit alarm calls are also used by thrushes to avoid intraspecific competition. By sounding a bogus alarm call normally used to warn of aerial predators, they can frighten other birds away, allowing them to eat undisturbed.
Vervets seem to be able to understand the referent of alarm calls instead of merely the acoustic properties, and if another species' specific alarm call (terrestrial or aerial predator, for instance) is used incorrectly with too high of a regularity, the vervet will learn to ignore the analogous vervet call as well.
Alarm pheromones
Alarm signals need not be communicated only by auditory means. For example, many animals may use chemosensory alarm signals, communicated by chemicals known as pheromones. Minnows and catfish release alarm pheromones (Schreckstoff) when injured, which cause nearby fish to hide in dense schools near the bottom. At least two species of freshwater fish produce chemicals known as disturbance cues, which initiates a coordinated antipredator defence by increasing group cohesion in response to fish predators. Chemical communication about threats is also known among plants, though it is debated to what extent this function has been reinforced by actual selection. Lima beans release volatile chemical signals that are received by nearby plants of the same species when infested with spider mites. This 'message' allows the recipients to prepare themselves by activating defense genes, making them less vulnerable to attack, and also attracting another mite species that is a predator of spider mites (indirect defence). Although it is conceivable that other plants are only intercepting a message primarily functioning to attract "bodyguards", some plants spread this signal on to others themselves, suggesting an indirect benefit from increased inclusive fitness.
Deceptive chemical alarm signals are also employed. For example, the wild potato, Solanum berthaultii, emits the aphid alarm-pheromone, (E)-β-farnesene, from its leaves, which functions as a repellent against the green peach aphid, Myzus persicae.
See also
Group selection
Kin selection
Mobbing call
References
External links
Chickadees' alarm-call carry information about size, threat of predator
The Trek of the Pika "A story complete with sounds of pika and marmot calls" 2002-10-30
Characteristics of arctic ground squirrel alarm calls Oecologia Volume 7, Number 2 / June, 1971
Why do Yellow-bellied Marmots Call? Daniel T. Blumstein & Kenneth B. Armitage
Department of Systematics and Ecology, University of Kansas
Alarm calls of Belding's ground squirrels to aerial predators: nepotism or self-preservation?
Signalling theory
Animal communication
Antipredator adaptations
Emergency communication
Survival skills
Articles containing video clips
Chemical ecology | Alarm signal | [
"Chemistry",
"Biology"
] | 4,763 | [
"Antipredator adaptations",
"Biochemistry",
"Chemical ecology",
"Biological defense mechanisms"
] |
2,136,842 | https://en.wikipedia.org/wiki/Sardine%20run | The KwaZulu-Natal sardine run of southern Africa occurs from May through July when billions of sardines – or more specifically the Southern African pilchard Sardinops sagax – spawn in the cool waters of the Agulhas Bank and move northward along the east coast of South Africa. Their sheer numbers create a feeding frenzy along the coastline.
The run, containing millions of individual sardines, occurs when a current of cold water heads north from the Agulhas Bank up to Mozambique, where it then leaves the coastline and goes further east into the Indian Ocean.
Fishermen are sometimes observed singing songs while hauling in the fishing nets, in typical South African style. It is estimated that the sardine run is the biggest migration in terms of numbers of individuals.
In terms of biomass, researchers estimate the sardine run could rival East Africa's great wildebeest migration. However, little is known of the phenomenon. It is believed that the water temperature has to drop below 21 °C in order for the migration to take place. In 2003, the sardines failed to 'run' for the third time in 23 years. While 2005 saw a good run, 2006 marked another non-run.
The shoals are often more than 7 km long, 1.5 km wide and 30 metres deep and are clearly visible from spotter planes or from the surface.
Sardines group together when they are threatened. This instinctual behaviour is a defence mechanism, as lone individuals are more likely to be eaten than when in large groups.
Causes
The sardine run is still poorly understood from an ecological point of view.
There have been various hypotheses, sometimes contradictory, that try to explain why and how the run occurs.
A recent interpretation of the causes is that the sardine run is most likely a seasonal reproductive migration of a genetically distinct subpopulation of sardine that moves along the coast from the eastern Agulhas Bank to the coast of KwaZulu-Natal in most years if not in every year.
Genomic and transcriptomic data indicate that the sardines participating in the run originate from South Africa's cool-temperate Atlantic coast. These are attracted to temporary cold-water upwelling off the south-east coast, and eventually find themselves trapped in subtropical habitat that is too warm for them.
The migration is restricted to the inshore waters by the preference of sardine for cooler water and the strong and warm offshore Agulhas Current, which flows in the opposite direction to the migration, and is strongest just off the continental shelf.
A band of cooler coastal water and the occurrence of Natal Pulses and break-away eddies make it possible for sardine shoals to overcome their habitat constraints. The importance of these enabling factors is greatest where the continental shelf is narrowest.
The presence of eggs off the KwaZulu-Natal coast suggests that sardine stay there for several months and their return migration during late winter to spring is nearly always unnoticeable because it probably occurs at depths where the water is cooler than at the surface.
In some years there does not appear to be a sardine run. This may be because it is not detected by coastal observers either because it actually does not occur due to high water temperatures and/or other hydrographic barriers, or the migration may occur farther offshore and possibly deeper due to unusual conditions.
Oceanographic influences
Sardine prefer water temperatures between 14 and 20 °C. Each southern winter the nearshore sea temperature along the South African south east coast drops to within this range. Along the KwaZulu-Natal coast, sardine may be found in water warmer than 20 °C.
It has been hypothesized that factors besides temperature may influence the movement of sardine along the KwaZulu-Natal coastline. One of these factors may be predation pressure.
Oceanographic regions of the KwaZulu-Natal coast
The KwaZulu-Natal coast includes varied oceanographic regions, each influenced by distinct environmental forces.
The continental shelf waters of the KwaZulu-Natal Mid to Lower South coasts are dominated by the warm Agulhas Current which flows toward the south west. This water has a mean winter temperature of 23 °C and the current speed is often more than 1 m/s within 5 km of the coast.
The Agulhas Current follows a very constant path. The main stream is just offshore of the continental shelf break most of the time, which suggests that conditions are normally unsuitable for sardines along that part of the coast.
Local winds do not appear to have much effect on the currents.
Sardine move closer to shore as they travel northwards along the coast, but it is not known whether this is due to environmental conditions or biological conditions.
There is a persistent cyclonic gyre known as the Durban Eddy, where warm Agulhas Current water flows onto the shelf and the resulting inshore current direction is from south to north. This section of coast may be considered a transition from the wind-dominated section of the continental shelf to the north, to the Agulhas Current dominated section of shelf to the south.
The North Coast section of continental shelf is considerably wider (>40 km) than that of the south coast (roughly 15 km). This causes the Agulhas Current to flow farther offshore, and current conditions over the shelf are more variable. Wind appears to be a dominant influence in the region. Longshore north-easterly or south-westerly winds precede currents of similar direction by roughly 18 hours. Sea temperature is often lower and nutrients higher than along the South Coast.
The North Coast would seem to be more suitable habitat for sardine, but it is not known to what extent they use it.
These distinct regions may affect sardine distribution and movement.
Oceanographic variables and sardine presence
Some oceanographic variables have been found useful for describing conditions influencing sardine presence.
Water temperature has an inverse and highly significant influence. This is consistent with the preferred temperature range of sardine.
Sea currents have a significant effect, with calm current conditions most favourable for sardine presence and moderate current speeds from north to south most detrimental. As sardine movement during the run is northwards, this counter-current effect is expected.
Other conditions associated with sardine presence are:
Increasing atmospheric pressure: sardine presence appears to be higher during periods between the cold fronts along the KwaZulu-Natal coast. These periods have calm atmospheric conditions and slow nearshore currents.
Large swells and low water clarity associated with cold fronts have a negative effect on sardine presence.
Wind direction, wind speed, current direction, air temperature and rainfall all significantly affect sea surface temperature and consequently sardine presence. Current and wind direction effects dominated, with north-easterly wind and currents from north to south resulting in cooler sea surface temperatures.
North-easterly winds cause the surface water layer to move away from shore (Ekman veering), allowing the cool water to reach the surface, and south-westerly winds push warm Agulhas Current surface water towards the shore causing inshore temperatures to increase, which would negatively impact upon sardine presence.
Increasing maximum air temperature, south-easterly (onshore) winds, wind speeds in excess of 6 m/s, and rainfall, all result in warmer sea surface temperatures.
Strong south-easterly winds and rainfall are associated with the passage of frontal systems, which would push warm surface waters shoreward resulting in warmer sea surface temperatures.
Frequent light north-westerly land breezes: When north-westerly land breezes are the strongest winds of the day they have a cooling effect on sea surface temperature. This cooling should be greatest in the vicinity of the surf zone where mixing is most effective. Sardine are often sighted close inshore during early mornings, suggesting that they could be attracted by cooler conditions found there.
Summary: Oceanographic predictors of sardine presence
Favourable:
Decreasing sea surface temperature
Calm current conditions
Light north-westerly land breezes
Stable atmospheric conditions.
Unfavourable:
Increasing sea surface temperature
Moderate north to south currents
Large swells
Turbid water
North-easterly and north-westerly winds and north to south currents have a cooling effect upon nearshore sea surface temperatures, but south-easterly winds and increasing air temperatures cause nearshore sea surface temperature warming.
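As a purely illustrative sketch (not a model used by researchers or tour operators), the favourable and unfavourable predictors summarized above could be combined into a simple heuristic score; every threshold and weight below is hypothetical:

```python
# Illustrative heuristic combining the oceanographic predictors summarized
# above. Thresholds and weights are hypothetical, not fitted values.

def sardine_presence_score(sst_c, current_ms, current_to_south, swell_m, turbid):
    """Return a rough 0-5 score for the likelihood of sardine presence."""
    score = 0
    score += 2 if sst_c < 21 else 0                                # cool water favourable
    score += 1 if current_ms < 0.2 else 0                          # calm currents favourable
    score -= 1 if (current_to_south and current_ms >= 0.2) else 0  # counter-current unfavourable
    score += 1 if swell_m < 2 else 0                               # large swells unfavourable
    score += 1 if not turbid else 0                                # turbid water unfavourable
    return max(score, 0)

# Hypothetical conditions: 19.5 degC water, calm sea, small swell, clear water.
print(sardine_presence_score(19.5, 0.1, False, 1.0, False))  # 5
```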
Predators
Dolphins (estimated as being up to 18,000 in number, mostly the common dolphin (Delphinus capensis)) are largely responsible for rounding up the sardines into bait balls. These bait balls can be 10–20 metres in diameter and extend to a depth of 10 metres. The bait balls are short lived and seldom last longer than 10 minutes. Once the sardines are rounded up, sharks (primarily the bronze whaler), birds (like the Cape gannet), and Bryde's whales take advantage of the opportunity. Other whale species, whether or not they join the run, may appear in the vicinity, such as humpback, southern right, and minke whales.
Predators as predictors of sardine presence
The Cape gannet is the predator species most closely associated with sardine presence along the Eastern Cape and KwaZulu-Natal coastline and is the most useful indicator of sardine run activity.
Sharks and large gamefish presence is also strongly associated with sardine presence during the run, but as they are not as easily observed from the surface they are not as useful a predictor of sardine presence.
The presence of common dolphins inshore along the east coast during winter is significantly associated with sardine presence, and the common dolphin can be considered the third most useful species for predicting sardine presence.
The resident population of bottlenose dolphin does not appear to associate with the sardine run, whereas the migrant stock does. This may explain why the bottlenose dolphin is less likely to predict sardine presence.
Record of predators
2005 records:
In June and July 2005 the avian and mammal predators included Bryde’s whale (Balaenoptera edeni), African penguin (Spheniscus demersus), Cape cormorant (Phalacrocorax capensis), which were predominantly found in the cooler southern part of the region.
Peak sardine run activity occurred within 4 km of shore at the northward limit of a strip of cool water (<21 °C) stretching along the East Coast. The principal predators at this stage were common dolphins (Delphinus capensis) and Cape gannets (Morus capensis).
Economic importance
Tourism
The recent interest in the sardine run has had significant impact on the local economy. International and domestic divers join local tour operators on sardine run diving expeditions. Such expeditions run from Eastern Cape towns, including East London, Port Saint Johns, and Port Elizabeth. The run has become important to tourism and is considered to be one of the main attractions in KwaZulu-Natal during the winter holiday period. Both local and international tourists are attracted to the spectacle and are provided with opportunities to participate in activities such as dive charters and boat based predator viewing tours.
The KwaZulu-Natal Sharks Board and East Coast Radio, facilitate a ‘Sardine Run Hotline’, which provides information on the position and movement of sardine shoals. Information is also provided on the internet.
The Sardine Run Association (www.thesardinerunassociation.org) has been formed to provide a link between tour operators, tourists, non-governmental organisations, scientists, and local and national governments.
Fishery
The sardine run also supports a small-scale, seasonal beach seine fishery.
History
The oldest known record of the run is a mention in the Natal Mercury newspaper of 4 August 1853.
More recently, the run has been the subject of natural history documentaries (e.g., the BBC’s Nature's Great Events) and printed popular media (e.g., National Geographic).
The 2011 run
Pilot shoals were netted at Hibberdene on 20 June 2011, while the main shoal was sighted near Port St. Johns. Small pockets of sardines were seen between Mfazazana and Margate. About 25 crates of sardines were hauled out from the first netting at Hibberdene. A further 33 crates of sardines were netted and were sold at R700 per crate or R30 per dozen sardines. The 58 crates were sold "within minutes". An attempt was also made to net sardines at Banana Beach. About 500 common dolphins and numerous sharks were noted near Margate. Shark nets had been removed between Umgababa and Port Edward.
Sardines were netted at Park Rynie on 21 June 2011. Some large nets of 200–300 baskets of sardines were taken. The baskets sold at R600 each. A large gathering of sardine predators was seen off Port Grosvenor on the Wild Coast. Thousands of Cape gannets and dolphins were seen in a continuous line of about 6 km between Brazen Head and just north of the Umtata River. It is suspected that this year's shoal is "massive", and will produce a "bumper run". Shark nets have been removed to the south of Durban. The first shoals were expected to reach Amanzimtoti on 23 June 2011. The main shoal was still near Port St Johns.
On 22 June 2011, a "few" baskets were netted at Umgababa beach, and a "handful" of baskets were netted at Warner Beach in the afternoon. Sardines were also netted at Isipingo, where 14 baskets were hauled out. The sardines therefore reached the Amanzimtoti area a day earlier than predicted.
Rough seas (with waves up to 4.7 m) caused by strong winds associated with a cold front kept the sardines from the shore on 23 June 2011. Pockets of sardines were seen far out to sea off the Bluff. The rough water and far distance of the sardines from shore made it impossible for the fish to be netted. No dolphin or bird activity was seen in the Durban area associated with the sardines. The main shoal was still suspected to be off the Eastern Cape coastline, with a report of some sardines still seen near Port St Johns on 22 and 23 June 2011.
Durban beaches were the scene of most netting activity on 27 June 2011. "Hundreds of baskets" of sardines were hauled onto the beaches in 13 nets. The price per basket was R350 in the morning, but later in the afternoon the price had dropped to R120 per basket. Each net contained in excess of 300 baskets of sardines, with one net containing around 500 baskets. Sardines were also netted at Umhlanga, Port Shepstone, Margate, Umgababa, and Port Edward. Cape gannets and other seabirds were seen "plunging from considerable heights" to catch the sardines, especially on the South Coast. Most of the sardines were netted along the Durban beaches as this was the area of calmest waters; swells along the KwaZulu-Natal coastline were around 2.5 m. Shark nets had been removed from Salt Rock to Port Edward, and bathers were requested to consult with lifeguards before entering the water. Meanwhile, a baby dolphin washed up on the beach at Scottburgh, with a gash behind its "flipper" (the photo showed a gash between the dorsal fin and the tail) that exposed the spine. The "weeks old" dolphin was taken to a nearby paddling pool, but authorities later euthanased it due to the severity of the injuries. Speculation was that the dolphin had been injured by a shark, or by a boat propeller; possibly related to the sardine run.
Swells dropped to 1–1.5 m on 28 June 2011, allowing more netting of sardines. Sardines were netted at Amanzimtoti; on the main beach and at Chain Rocks. A 22-year-old American marine biology student (research diver) named Paulo Edward Stanchi was attacked by a large dusky shark while diving at Aliwal Shoal Marine Protected Area. The group of divers had encountered a pocket of sardines when a 3 m long dusky shark bit Mr Stanchi on his left leg and hands. Mr Stanchi managed to free himself from the shark, and was treated on the diving boat before being transported to Rocky Bay, where medics stabilised him. He was then airlifted to Nkosi Albert Luthuli Hospital, where he underwent surgery. Dusky sharks generally live offshore, but come closer to the shore during the sardine run. The annual sardine run allowed more dusky sharks in the Aliwal Shoal MPA than usual, but there was no reason for them to show any more interest in divers than usual. Mr Stanchi had been wearing split fins with black and grey stripes, and this may have looked like a small shoal of fish to the shark. Meanwhile, a woman in her 40s broke her leg in the frenzy at Amanzimtoti when the sardines were netted. The woman is believed to have been trying to get some of the sardines when she "stepped wrong" and fractured her leg. Paramedics stabilized her before transporting her to hospital.
5 July 2011 was a "quiet day" for the sardine run. "Plenty of birds" were seen diving at Karridene close to the shore. 50 crates of sardines were taken at Umgababa in the early afternoon, while a net of sardines pulled in at Karridene contained some Garrick. More Garrick were caught by fishermen at Karridene, but in general there was little other game fish activity. There was reported to be a "massive shoal" of sardines off Coffee Bay in the Eastern Cape.
On 15 July 2011, 100 baskets were netted at Pennington. It was difficult to predict the sardines' movements as they were staying offshore.
On 20 July 2011, 300 baskets of sardines were netted at Pennington in the morning. There were many gannets off Ballito, and "quite a bit of fish" between Park Rynie and Mtwalume.
A strong cold front hit South Africa towards the end of July, causing land surface temperatures to drop below 10 °C over much of the country. Heavy snow falls were experienced in high lying areas, including Nottingham Road, Mooi River and Newcastle in the Midlands, while Van Reenen’s Pass was snowed in. The cold front caused swells of up to 4 meters on the KwaZulu-Natal coast and a 25 to 30 knot wind with rough sea conditions. A ship called the Phoenix ran aground at Salt Rock, Ballito on 26 July 2011 because of the rough conditions. This cold front may have put an end to the 2011 Sardine Run.
The 2023 run
The 2023 run has been estimated as being the biggest on observed records to date.
See also
Agulhas Current
Fish migration
Forage fish
Salmon run
Shoaling and schooling
The Blue Planet
Wild Ocean (film)
References
External links
Aquatic ecology
Marine biology
Fish migrations | Sardine run | [
"Biology"
] | 3,922 | [
"Aquatic ecology",
"Ecosystems",
"Marine biology"
] |
2,136,854 | https://en.wikipedia.org/wiki/1%3A5%3A200 | In the construction industry, the 1:5:200 rule (or 1:5:200 ratio) is a rule of thumb that states that: if the initial construction cost of a building is taken as 1, then its maintenance and operating costs over the years come to about 5, and the cost of the business operating within it (principally staff costs) comes to about 200.
Rule
The rule originated in a Royal Academy of Engineering paper by Evans et al.
Sometimes the ratios are given as 1:10:200. The figures are averages and broad generalizations, since construction costs will vary with land costs, building type, and location, and staffing costs will vary with business sector and local economy. The RAE paper started a number of arguments about the basis for the figures: whether they were credible; whether they should be discounted; and what is included in each category. These arguments overshadow the principal message of the paper, which is that concentrating on first capital cost does not optimise use value, namely support to the occupier and containment of operating cost. A study by the Constructing Excellence Be Valuable Task Group, chaired by Richard Saxon, came to the view that there is merit in knowing more about key cost ratios as benchmarks and that we can expect wide variation between building types and even individual examples of the same type.
Hughes et al, of the University of Reading School of Construction Management and Engineering, observed that the "Evans ratio" is merely a passing remark in the paper's introduction (talking of "commercial office buildings" and stating that "similar ratios might well apply in other types of building") forming part of a pitch that the proportion of a company's expenditure on a building that is spent directly on the building itself (rather than upon staffing it) is around 3%, and that no data are given to support the ratio and no defence of it is given in the remainder of the paper. In attempting to determine this ratio afresh, from published data on real buildings, they found it impossible to reproduce the 1:5:200 ratio, in part because the data and methodology employed by Evans et al. were not published and in part because the definitions employed in the original paper could not be applied. The ratios that they determined were different by an order of magnitude from the 1:5:200 ratio, being approximately 1:0.4:12. They observed that "everyone else who deals with real numbers" pitches the percentage somewhere between 10% and 30%, and that their data support 12%.
They note (as does Clements-Croome) that the three costs for every individual building are affected by a plethora of factors, yielding a wide variation in ratios. They suggest that "[p]erhaps the original 1:5:200 ratio was simply meant to be a statement to focus clients' attention" on the importance of considering the higher staffing costs of a building relative to its operating and construction costs, and to encourage people to not be too concerned with higher initial build costs to improve build quality and reduce later lifetime costs. They state that if this is so "then subsequent users of the ratio have misused it", and that the frequency of use of the ratio is not problematic, but that the authority and gravitas assigned to it are. They conclude that "perhaps the most worrying feature of this whole discussion is how this passing introductory remark in the paper by Evans et al has gained the status of a finding from research carried out by the Royal Academy of Engineering, which it most certainly is not!".
In a paper "Re-examining the costs and value ratios of owning and occupying buildings", Graham Ive notes how widely the 1:5:200 ratio has been cited among policy makers and practitioners, and goes on to use published data about whole-life economic costs and the total costs of occupancy to re-assess the ratios. The paper finds that 1:5:200 is both an exaggeration and an over simplification. It reports that the assumed costs in the original paper are unrepresentative, identifies flaws in the original definition of terms and method, offers 'economic cost' estimates for Central London offices, extends the approach to include measurement of value added, and finally discusses the problems of measurement that need to be overcome to produce realistic ratios. It notes a key error in the original ratio is that all costs are summed regardless of when they arise, whereas from an economic perspective future costs and values need to be discounted to their equivalent present cost or value. It shows the implications of introducing discounting. The paper concludes that the best available data suggests a 1:3 undiscounted ratio for Central London offices (for construction:maintenance) and a 1:1.5 ratio when cash flows are discounted at 7%. As for the 200 operation figure, the paper concludes that a representative ratio would be of the order of 1:30 or 1:15 when cash flows are discounted at 7%.
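The effect of discounting that Ive describes can be illustrated with a small calculation. The sketch below is only illustrative: the 60-year horizon, the level annual spending profile, the 7% discount rate, and the 1:3:30 undiscounted totals are assumptions chosen for the example, not data from the paper.

```python
def present_value_of_annuity(annual_cost, rate, years):
    """Sum of annual_cost / (1 + rate)**t for t = 1..years."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

construction = 1.0               # one-off cost at year 0 (normalised to 1)
annual_maintenance = 3.0 / 60    # undiscounted total of 3 spread evenly over 60 years
annual_operation = 30.0 / 60     # undiscounted total of 30 spread evenly over 60 years
years, rate = 60, 0.07

undiscounted = (construction,
                annual_maintenance * years,
                annual_operation * years)
discounted = (construction,
              present_value_of_annuity(annual_maintenance, rate, years),
              present_value_of_annuity(annual_operation, rate, years))

print("undiscounted ratio:", [round(x / undiscounted[0], 2) for x in undiscounted])
print("discounted ratio:  ", [round(x / discounted[0], 2) for x in discounted])
```

Because the result depends heavily on how the costs are assumed to fall over time, a sketch like this will not reproduce Ive's specific 1:1.5 and 1:15 figures; it only shows the direction of the effect, namely that discounting shrinks the weight of future maintenance and operating costs relative to the up-front construction cost.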
See also
Facility management
Activity relationship chart
Physical plant
Building information modeling
Computerized maintenance management system
Property maintenance
Property management
References
Construction
Rules of thumb | 1:5:200 | [
"Engineering"
] | 1,000 | [
"Construction"
] |
2,136,925 | https://en.wikipedia.org/wiki/Lip | The lips are a horizontal pair of soft appendages attached to the jaws and are the most visible part of the mouth of many animals, including humans. Vertebrate lips are soft, movable and serve to facilitate the ingestion of food (e.g. suckling and gulping) and the articulation of sound and speech. Human lips are also a somatosensory organ, and can be an erogenous zone when used in kissing and other acts of intimacy.
Structure
The upper and lower lips are referred to as the labium superius oris and labium inferius oris, respectively. The juncture where the lips meet the surrounding skin of the mouth area is the vermilion border, and the typically reddish area within the borders is called the vermilion zone. The vermilion border of the upper lip is known as the Cupid's bow. The fleshy protuberance located in the center of the upper lip is a tubercle known by various terms including the procheilon (also spelled prochilon), the "tuberculum labii superioris", and the "labial tubercle". The vertical groove extending from the procheilon to the nasal septum is called the philtrum.
The skin of the lip, with three to five cellular layers, is very thin compared to typical face skin, which has up to 16 layers. With light skin color, the lip skin contains fewer melanocytes (cells which produce melanin pigment, which give skin its color). Because of this, the blood vessels appear through the skin of the lips, which leads to their notable red coloring. With darker skin color this effect is less prominent, as in this case the skin of the lips contains more melanin and thus is visually darker. The skin of the lip forms the border between the exterior skin of the face, and the interior mucous membrane of the inside of the mouth.
The lip skin is not hairy and does not have sweat glands. Therefore, it does not have the usual protection layer of sweat and body oils which keep the skin smooth, inhibit pathogens, and regulate warmth. For these reasons, the lips dry out faster and become chapped more easily.
The lower lip is formed from the mandibular prominence, a branch of the first pharyngeal arch. The lower lip covers the anterior body of the mandible. It is lowered by the depressor labii inferioris muscle and the orbicularis oris borders it inferiorly.
The upper lip covers the anterior surface of the body of the maxilla. Its upper half is of usual skin color and has a depression at its center, directly under the nasal septum, called the philtrum, which is Latin for "lower nose", while its lower half is a markedly different, red-colored skin tone more similar to the color of the inside of the mouth, and the term vermillion refers to the colored portion of either the upper or lower lip.
It is raised by the levator labii superioris and is connected to the lower lip by the thin lining of the lip itself.
Thinning of the vermilion of the upper lip and flattening of the philtrum are two of the facial characteristics of fetal alcohol syndrome, a lifelong disability caused by the mother's consumption of alcohol during pregnancy.
Microanatomy
The skin of the lips is stratified squamous epithelium. The mucous membrane is represented by a large area in the sensory cortex, and is therefore highly sensitive. The frenulum labii inferioris is the frenulum of the lower lip. The frenulum labii superioris is the frenulum of the upper lip.
Nerve supply
Trigeminal nerve
The infraorbital nerve is a branch of the maxillary branch. It supplies not only the upper lip but also much of the skin of the face between the upper lip and the lower eyelid, except for the bridge of the nose.
The mental nerve is a branch of the mandibular branch (via the inferior alveolar nerve). It supplies the skin and mucous membrane of the lower lip and labial gingiva (gum) anteriorly.
Blood supply
The facial artery is one of the six non-terminal branches of the external carotid artery.
This artery supplies both lips by its superior and inferior labial branches. Each of the two branches bifurcates and anastomoses with its companion branch from the other terminal.
Muscles
The muscles acting on the lips are considered part of the muscles of facial expression. All muscles of facial expression are derived from the mesoderm of the second pharyngeal arch and are therefore supplied (motor supply) by the nerve of the second pharyngeal arch, the facial nerve (7th cranial nerve). The muscles of facial expression are all specialized members of the panniculus carnosus, which attach to the dermis and so wrinkle or dimple the overlying skin. Functionally, the muscles of facial expression are arranged in groups around the orbits, nose, and mouth.
The muscles acting on the lips:
Buccinator
Orbicularis oris (a complex of muscles, formerly thought to be a single sphincter or ring of muscle)
Anchor point for several muscles
Modiolus
Lip elevation
Levator labii superioris
Levator labii superioris alaeque nasi
Levator anguli oris
Zygomaticus minor
Zygomaticus major
Lip depression
Risorius
Depressor anguli oris
Depressor labii inferioris
Mentalis
Functions
Food intake
Because they have their own muscles and bordering muscles, the lips are easily movable. Lips are used in eating, for example to hold food or to bring it into the mouth. In addition, the lips serve to close the mouth airtight, to hold food and drink inside, and to keep out unwanted objects. Through making a narrow funnel with the lips, the suction of the mouth is increased. This suction is essential for babies to breast feed. Lips can also be used to suck in other contexts, such as sucking on a straw to drink liquids.
Articulation
The lips serve for creating different sounds—mainly labial, bilabial, and labiodental consonant sounds as well as vowel rounding—and thus are an important part of the speech apparatus. The lips enable whistling and the performing of wind instruments such as the trumpet, clarinet, flute, and saxophone. People who have hearing loss may unconsciously or consciously lip read to understand speech without needing to perceive the actual sounds, and visual cues from the lips affect the perception of what sounds have been heard, for example the McGurk effect.
Tactile organ
The lip has many nerve endings and reacts as part of the tactile (touch) senses. Lips are very sensitive to touch, warmth, and cold. It is therefore an important aid for exploring unknown objects for babies and toddlers.
Erogenous zone
Because of their high number of nerve endings, the lips are an erogenous zone. The lips therefore play a crucial role in kissing and other acts of intimacy.
A woman's lips are also a visible expression of her fertility. In studies performed on the science of human attraction, psychologists have concluded that a woman's facial and sexual attractiveness is closely linked to the makeup of her hormones during puberty and development. Contrary to the effects of testosterone on a man's facial structure, the effects of a woman's oestrogen levels serve to maintain a relatively "childlike" and youthful facial structure during puberty and during final maturation. It has been shown that the more oestrogen a woman has, the larger her eyes and the fuller her lips, characteristics which are perceived as more feminine. Surveys performed by sexual psychologists have also found that universally, men find a woman's full lips to be more sexually attractive than lips that are less so. A woman's lips are therefore sexually attractive to males because they serve as a biological indicator of a woman's health and fertility. A woman's lipstick (or collagen lip enhancement) attempts to take advantage of this fact by creating the illusion that a woman has more oestrogen than she actually has and thus that she is more fertile and attractive.
Lip size is linked to sexual attraction in both men and women. Women are attracted to men with masculine lips that are more middle size and not too big or too small; they are to be rugged and sensual. In general, the researchers found that a small nose, big eyes and voluptuous lips are sexually attractive both in men and women. The lips may temporarily swell during sexual arousal due to engorgement with blood.
Facial expression
The lips contribute substantially to facial expressions. The lips visibly express emotions such as a smile or frown, iconically by the curve of the lips forming an up-open or down-open arc, respectively. Lips can also be made pouty when whining or perky to be provocative.
Open questions
The function of the abrupt change in skin structure between the lips and surrounding face (in particular, the function of the less keratinized vermillion and the white roll) is not completely understood. Possible reasons for the difference may include advantages to somatosensory function, better communication of facial expressions, and/or emphasis of the lips' slight sexual dimorphism as a secondary sex characteristic.
Clinical significance
As an organ of the body, the lip can be a focus of disease or show symptoms of a disease:
One of the most frequent changes of the lips is a blue coloring due to cyanosis; the blood contains less oxygen and thus has a dark red to blue color, which shows through the thin skin. Cyanosis is the reason why corpses sometimes have blue lips. In cold weather cyanosis can appear, so especially in the winter, blue lips may not be an uncommon sight.
Inflammation of the lips is termed cheilitis. This can be in several forms such as chapped lips (dry, peeling lips), angular cheilitis (inflammation of the corners of the mouth), herpes labialis (cold sore, a form of herpes simplex) and actinic cheilitis (chronically sun damaged lips).
Cleft lip is a type of birth defect that can be successfully treated with surgery.
Carcinoma (a malignant cancer that arises from epithelial cells) at the lips is caused predominantly by using tobacco and overexposure of sunlight. Alcohol appears to increase the carcinoma risk associated with tobacco use. It is most often a diffuse and often hyperkeratinised lesion, occasionally has the form of nodules and grows infiltratively, and can also be a combination of the two types. It more often occurs at the lower lip, where it is also much more malign. Lower lip carcinoma is exclusively planocellular carcinoma, whereas at the upper lip, it can also be basocellular carcinoma.
Society and culture
Lips are often viewed as a symbol of sensuality and sexuality. This has many origins; above all, the lips are a very sensitive erogenous and tactile organ. Furthermore, in many cultures of the world, a woman's mouth and lips are veiled because of their representative association with the vulva, and because of their role as a woman's secondary sexual organ.
As part of the mouth, the lips are also associated with the symbolism associated with the mouth as orifice by which food is taken in. The lips are also linked symbolically to neonatal psychology (see for example oral stage of the psychology according to Sigmund Freud).
Lip piercing or lip augmentation is sometimes carried out for cosmetic reasons. Products designed for use on the lips include lipstick, lip gloss and lip balm.
Other animals
In most vertebrates, the lips are relatively unimportant folds of tissue lying just outside the jaws. However, in mammals, they become much more prominent, being separated from the jaws by a deep cleft (a notable exception being the naked mole-rat, whose lips close behind the front teeth). They are also more mobile in mammals than in other groups since it is only in this group that they have any attached muscles. In some teleost fish, the lips may be modified to carry sensitive barbels. In birds and turtles, the lips are hard and keratinous, forming a solid beak. Clevosaurids like Clevosaurus are notable for the presence of bone "lips"; in these species the tooth-like jaw projections common to all sphenodontians form a beak-like edge around the jaws, protecting the teeth within.
See also
Stiff upper lip
References
Further reading
External links
Anatomy at oralhealth.dent.umich.edu
Facial features
Mouth
Lips
Human head and neck
Speech organs
Human mouth anatomy
Digestive system | Lip | [
"Biology"
] | 2,667 | [
"Digestive system",
"Organ systems"
] |
2,136,997 | https://en.wikipedia.org/wiki/Claudio%20Ciborra | Claudio Ciborra (1951 – 13 February 2005) was an Italian organizational theorist, and Professor of Information Systems and PWC Chair in Risk Management in the London School of Economics. Prior to the LSE, he was professor at the Theseus International Management Institute.
Work
Ciborra was an original thinker in his field: the Social Study of Information Systems. His contribution ranks among that of the top names in this and related fields such as Shoshana Zuboff, Wanda Orlikowski, Steve Barley, M. Lynne Markus, Lucas Introna, Jannis Kallinikos, Geoff Walsham, Rob Kling, Daniel Robey, Chrisanthi Avgerou and Richard Boland. He collaborated widely, including with such scholars as Ole Hanseth (University of Oslo) and Giovan Francesco Lanzara (University of Bologna).
Ciborra contributed to the following areas.
The relationship between technology and organizations
Transaction cost theory and IS
Organizational learning, bricolage and improvisation
IS infrastructures.
Improvisation
Ciborra goes beyond the typical characterisation of improvisation as situated, pragmatic and contingent action by referring to the existential condition of the actor (his "moods, feelings, affectations and fundamental attunement with the situation"). By eschewing the notion of the actor as a "robot" adapting to changing circumstances, he reintroduces the personal human aspects that shape our encounters with the world and shows how our affectations define the situation at hand and so shape action.
Bricolage
As expounded by Ciborra, bricolage can be seen as the constant re-ordering of people and resources, the constant "trying out" and experimentation that is the true hallmark of organisational change. But bricolage is not a random trying out: Ciborra emphasises that it is a trying out based on leveraging the world "as defined by the situation".
Hospitality (xenia)
Hospitality is Ciborra's attempt to present an alternative conception of how IT/IS is implemented. He rejects the scientific explanations of IS implementation (planning, design, goals, targets, methods, procedures) and instead views technology as an alien embodying and exemplifying its alien culture and affordances. Successful implementation is achieved when the "host" organisation (i.e. that implementing the technology) is able to extend courtesy and to absorb and appropriate/assimilate the alien culture where it offers advantages such as new ways of working. Ciborra also warns that the host must beware that the guest can quickly become hostile.
Crisis
Ciborra claims that much of the IS and IT world (particularly its strategic management, marketing, academia and training organisations) is in crisis. He teaches that this is because IS and IT are treated as scientific disciplines when in fact they are social disciplines, and hence thinking about them is based on an inappropriate paradigm which we might call "Positivism" (although Ciborra does not use this term).
Formative context
Ciborra drew on the work of Roberto Unger and showed how IS can embody and so be enacted as Formative Context.
Drift
Caring
The Platform Organisation
Gestell
Ciborra analyses Information System infrastructure using Heidegger's concept of Gestell.
For further information see Labyrinths of Information, OUP, 2002.
References
External links
A review of Labyrinths of Information
Memorial to Claudio
List of selected publications
1951 births
2005 deaths
Polytechnic University of Milan alumni
Italian business theorists
Academics of the London School of Economics
Philosophers of technology
Organizational theorists
Information systems researchers | Claudio Ciborra | [
"Technology"
] | 722 | [
"Information systems",
"Information systems researchers"
] |
2,137,226 | https://en.wikipedia.org/wiki/E0%20%28cipher%29 | E0 is a stream cipher used in the Bluetooth protocol. It generates a sequence of pseudorandom numbers and combines it with the data using the XOR operator. The key length may vary, but is generally 128 bits.
Description
At each iteration, E0 generates a bit using four shift registers of differing lengths (25, 31, 33, 39 bits) and two internal states, each 2 bits long. At each clock tick, the registers are shifted and the two states are updated with the current state, the previous state and the values in the shift registers. Four bits are then extracted from the shift registers and added together. The algorithm XORs that sum with the value in the 2-bit register. The first bit of the result is output for the encoding.
E0 is divided in three parts:
Payload key generation
Keystream generation
Encoding
The setup of the initial state in Bluetooth uses the same structure as the random bit stream generator. We are thus dealing with two combined E0 algorithms. An initial 132-bit state is produced at the first stage using four inputs (the 128-bit key, the Bluetooth address on 48 bits and the 26-bit master counter). The output is then processed by a polynomial operation and the resulting key goes through the second stage, which generates the stream used for encoding. The key has a variable length, but is always a multiple of 2 (between 8 and 128 bits). 128 bit keys are generally used. These are stored into the second stage's shift registers. 200 pseudorandom bits are then produced by 200 clock ticks, and the last 128 bits are inserted into the shift registers. It is the stream generator's initial state.
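The general shape of such a design (several linear feedback shift registers whose output bits are summed and blended through a small internal state, with the resulting keystream XORed onto the data) can be illustrated with a toy model. The sketch below is not the real E0 specification: the tap positions, the seed values and the update rule of the small "blender" state are placeholder assumptions chosen only to show the overall structure.

```python
# Toy LFSR-combiner keystream generator, loosely patterned on the E0 layout.
# NOT the real E0: taps, seeds and the blender update below are illustrative only.

def lfsr_step(state, taps, length):
    """Advance one LFSR step; return (new_state, output_bit)."""
    out = state & 1
    fb = 0
    for t in taps:                   # XOR of the tapped bits gives the feedback bit
        fb ^= (state >> t) & 1
    state = (state >> 1) | (fb << (length - 1))
    return state, out

def keystream(seed_states, n_bits):
    lengths = [25, 31, 33, 39]                     # register lengths as in the E0 layout
    taps = [[0, 3], [0, 2], [0, 1, 4], [0, 5]]     # placeholder tap positions
    regs = list(seed_states)
    blend = 0                                      # stand-in for E0's 2-bit internal states
    out = []
    for _ in range(n_bits):
        total = 0
        for i in range(4):
            regs[i], bit = lfsr_step(regs[i], taps[i], lengths[i])
            total += bit                           # sum of the four register outputs (0..4)
        z = (total ^ blend) & 1                    # combine the sum with the small internal state
        blend = (blend + total) & 0b11             # placeholder non-linear state update
        out.append(z)
    return out

def xor_encrypt(data: bytes, ks_bits) -> bytes:
    """XOR the data with the keystream, one bit at a time (stream-cipher encryption)."""
    result = bytearray(data)
    for i, bit in enumerate(ks_bits[: len(data) * 8]):
        result[i // 8] ^= bit << (i % 8)
    return bytes(result)

if __name__ == "__main__":
    ks = keystream([0x1ABCDE, 0x2345671, 0x13579BDF, 0x123456789], 64)
    ct = xor_encrypt(b"hello", ks)
    print(ct.hex(), xor_encrypt(ct, ks))           # XORing again with the same keystream decrypts
```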
Cryptanalysis
Several attacks and attempts at cryptanalysis of E0 and the Bluetooth protocol have been made, and a number of vulnerabilities have been found.
In 1999, Miia Hermelin and Kaisa Nyberg showed that E0 could be broken in 2^64 operations (instead of 2^128), if 2^64 bits of output are known. This type of attack was subsequently improved by Kishan Chand Gupta and Palash Sarkar. Scott Fluhrer, a Cisco Systems employee, found a theoretical attack with a 2^80 operations precalculation and a key search complexity of about 2^65 operations. He deduced that the maximal security of E0 is equivalent to that provided by 65-bit keys, and that longer keys do not improve security. Fluhrer's attack is an improvement upon earlier work by Golic, Bagini and Morgari, who devised a 2^70 operations attack on E0.
In 2000, the Finn Juha Vainio showed problems related to misuse of E0 and more generally, possible vulnerabilities in Bluetooth.
In 2004, Yi Lu and Serge Vaudenay published a statistical attack requiring the first 24 bits of 2^35 Bluetooth frames (a frame is 2745 bits long). The final complexity to retrieve the key is about 2^40 operations. The attack was improved to 2^37 operations for precomputation and 2^39 for the actual key search.
In 2005, Lu, Meier and Vaudenay published a cryptanalysis of E0 based on a conditional correlation attack. Their best result required the first 24 bits of 2^23.8 frames and 2^38 computations to recover the key. The authors assert that "this is clearly the fastest and only practical known-plaintext attack on Bluetooth encryption compare with all existing attacks".
See also
A5/1
RC4
References
External links
Slides.
Broken stream ciphers
Bluetooth | E0 (cipher) | [
"Technology"
] | 720 | [
"Wireless networking",
"Bluetooth"
] |
2,137,246 | https://en.wikipedia.org/wiki/Dual%20format | Dual format is a technique used to allow software for two systems which would normally require different disk formats to be recorded on the same floppy disk.
In the late 1980s, the term was used to refer to disks that could be used to boot either an Amiga or Atari ST computer. The layout of the first track of the disk was specially laid out to contain an Amiga and an Atari ST boot sector at the same time by fooling the operating system to think that the track resolved into the format it expected. The technique was used for some commercially available games, and also for the disks covermounted on ST/Amiga Format magazine. Other games came on Amiga and PC dual-format disks, or even "tri-format" disks, which contained the Amiga, Atari ST and PC versions of the game.
Most dual and tri-format disks were implemented using technology developed by Rob Northen Computing.
Later, the term was used for disks containing both Windows and Macintosh versions.
Examples
Action Fighter (Amiga/PC dual-format disk)
Lethal Xcess - Wings of Death II (Amiga/Atari ST dual-format disks)
Monster Business (Amiga/Atari ST dual-format disk)
Populous: The Promised Lands (Amiga/Atari ST dual-format disk)
Rick Dangerous (Amiga/PC dual-format disk)
Rick Dangerous 2 (Amiga/PC dual-format disk)
Stone Age (Amiga/Atari ST dual-format disk)
Street Fighter (Amiga/PC dual-format disk)
StarGlider 2 (Amiga/Atari ST dual-format disk)
3D Pool (Amiga/Atari ST/PC tri-format disk)
Stunt Car Racer (Amiga/PC dual-format disk)
Bionic Commando (Amiga/PC dual-format disk)
Carrier Command (Amiga/PC dual-format disk)
Blasteroids (Amiga/PC dual-format disk)
E-Motion (Amiga/PC dual-format disk)
Indiana Jones and the Last Crusade Action (Amiga/PC dual-format disk)
Out Run (Amiga/PC dual-format disk)
World Class Leader Board (Amiga/PC dual-format disk)
International Soccer Challenge (Amiga/PC dual-format disk)
MicroProse Soccer (Amiga/PC dual-format disk)
References
Amiga
Atari ST
IBM PC compatibles
Macintosh platform
Rotating disc computer storage media
Software distribution
Video game distribution | Dual format | [
"Technology"
] | 473 | [
"Computing platforms",
"Macintosh platform"
] |
2,137,248 | https://en.wikipedia.org/wiki/Starsem | Starsem is a French-Russian company that was created in 1996 to commercialise the Soyuz launcher internationally. Starsem is headquartered in Évry, France (near Paris) and has the following shareholders:
ArianeGroup (35%)
Arianespace (15%)
Roscosmos (25%)
Progress Rocket Space Centre (25%)
References
External links
Starsem, the Soyuz company website
Commercial launch service providers
Space industry companies of Russia | Starsem | [
"Astronomy"
] | 91 | [
"Rocketry stubs",
"Astronomy stubs"
] |
2,137,251 | https://en.wikipedia.org/wiki/Jean%20van%20Heijenoort | Jean Louis Maxime van Heijenoort ( , , ; July 23, 1912 – March 29, 1986) was a historian of mathematical logic. He was also a personal secretary to Leon Trotsky from 1932 to 1939, and an American Trotskyist until 1947.
Life
Van Heijenoort was born in Creil, France. His parents had immigrated from the Netherlands before his birth. When van Heijenoort was only two years old, his father died, leaving his family in financial hardship. Despite these challenges, he pursued his education and became proficient in French. Throughout his life, he maintained strong connections with his extended family and friends in France, making biannual visits after he obtained American citizenship in 1958.
Political views
In 1932, van Heijenoort was recruited by Yvan Craipeau to join the Trotskyist movement. He joined the Communist League in the same year. After Trotsky was exiled, he hired van Heijenoort as a secretary and bodyguard, primarily because of his fluency in French, Russian, German, and English. Van Heijenoort spent seven years in Trotsky's household, during which he served as a translator, helped Trotsky write several books and carried on an extensive intellectual and political correspondence in several languages.
In 1939, van Heijenoort moved to New York City to be with his second wife, Beatrice "Bunny" Guyer. He was not involved in the circumstances leading to Trotsky's murder in 1940. In New York, he worked for the Socialist Workers Party (US) (SWP) and wrote a number of articles for the American Trotskyist press and other radical outlets. He was elected to the secretariat of the Fourth International in 1940 but resigned when Felix Morrow and Albert Goldman, with whom he had sided, were expelled from the SWP. (Goldman subsequently joined the US Workers Party while Morrow did not join any other party or grouping.) In 1947, van Heijenoort too was expelled from the SWP. In 1948, he published an article, entitled "A Century's Balance Sheet", in which he criticized that part of Marxism which saw the "proletariat" as the revolutionary class. He continued to hold other parts of Marxism as true.
Van Heijenoort was spared the ordeal of McCarthyism as everything he published in Trotskyist publications appeared under one of over a dozen pen names he used. According to Feferman (1993), Van Heijenoort the logician was quite reserved about his Trotskyist youth, and did not discuss politics. Nevertheless, he contributed to the Trotskyist movement until the last decade of his life, when he wrote his monograph With Trotsky in Exile (1978), and an edition of Trotsky's correspondence (1980). He advised and collaborated with the archivists at the Houghton Library in Harvard University, which holds many of Trotsky's papers from his years in exile.
Academic work
After completing a Ph.D. in mathematics at New York University in 1949 under the supervision of J. J. Stoker, Van Heijenoort began to teach mathematics at New York University, but moved to logic and philosophy of mathematics, largely under the influence of Georg Kreisel. He started teaching philosophy, first part-time at Columbia University, then full-time at Brandeis University from 1965 to 1977. He spent much of his last decade at Stanford University, writing and editing eight books, including parts of the Collected Works of Kurt Gödel.
From Frege to Gödel: A Source Book in Mathematical Logic (1967) is an anthology of translations on the history of logic and the foundations of mathematics. It begins with the first complete translation of Frege's 1879 Begriffsschrift, followed by 45 short pieces on mathematical logic and axiomatic set theory, originally published between 1889 and 1931. The anthology ends with Gödel's landmark paper on the incompleteness of Peano arithmetic.
Nearly all the content of From Frege to Gödel: A Source Book in Mathematical Logic had only been available in a few North American university libraries (e.g., even the Library of Congress did not acquire a copy of the Begriffsschrift until 1964), and all but four pieces had to be translated from one of six continental European languages. When possible, the authors of the original texts reviewed the translations, and suggested corrections and amendments. Each piece was supplied with editorial footnotes and an introduction (mostly by Van Heijenoort but some by Willard Quine and Burton Dreben); its references were combined into a comprehensive bibliography, and misprints, inconsistencies, and errors were corrected.
From Frege to Gödel: A Source Book in Mathematical Logic contributed to advancing the view that modern logic begins with, and builds on, the Begriffsschrift. Grattan-Guinness (2000) argues that this perspective on the history of logic is mistaken, because Frege employed an idiosyncratic notation and was significantly less read than Peano. Ironically, van Heijenoort (1967) is often cited by those who prefer the alternative model theoretic stance on logic and mathematics. Much of the history of that stance, whose leading lights include George Boole, Charles Sanders Peirce, Ernst Schröder, Leopold Löwenheim, Thoralf Skolem, Alfred Tarski, and Jaakko Hintikka, is covered in Brady (2000). From Frege to Gödel: A Source Book in Mathematical Logic underrated the algebraic logic of De Morgan, Boole, Peirce, and Schröder, but devoted more pages to Skolem than to anyone other than Frege, and included Löwenheim (1915), the founding paper on model theory.
Personal life
Van Heijenoort had children with two of his four wives. While van Heijenoort was living in Trotsky's household in Coyoacán, his first wife left him after an argument with Trotsky's spouse. In 1986, he visited his estranged fourth wife, Anne-Marie Zamora, in Mexico City, where she murdered him before taking her own life.
Van Heijenoort was also one of Frida Kahlo's lovers; in the film Frida, he is played by Felipe Fulop.
Selected works
Books which Van Heijenoort edited alone or with others:
References
Bibliography
External links
Perspectives on the History and Philosophy of Modern Logic: Van Heijenoort Centenary special issue of Logica Universalis for Jean Van Heijenoort Centenary with papers by Feferman, Feferman, Hintikka, Jan Wolenski etc.
The Lubitz TrotskyanaNet provides a biographical sketch and a selective bibliography [more complete than Feferman's] of Jean Van Heijenoort
A Guide to the Jean Van Heijenoort papers, 1946–1988
How the Fourth International Was Conceived by Jean Van Heijenoort, August 1944
Jean van Heijenoort Internet Archive
1912 births
1986 deaths
New York University alumni
American Trotskyists
American logicians
American historians of mathematics
French Trotskyists
People murdered in Mexico
American people murdered abroad
French people murdered abroad
Philosophers of mathematics
French people of Dutch descent
French logicians
French expatriates in the United States | Jean van Heijenoort | [
"Mathematics"
] | 1,516 | [
"Philosophers of mathematics"
] |
2,137,292 | https://en.wikipedia.org/wiki/Drag%20%28physics%29 | In fluid dynamics, drag, sometimes referred to as fluid resistance, is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers, two solid surfaces, or between a fluid and a solid surface. Drag forces tend to decrease fluid velocity relative to the solid object in the fluid's path.
Unlike other resistive forces, drag force depends on velocity. Drag force is proportional to the relative velocity for low-speed flow and is proportional to the velocity squared for high-speed flow. This distinction between low and high-speed flow is measured by the Reynolds number.
Examples
Examples of drag include:
Net aerodynamic or hydrodynamic force: Drag acting opposite to the direction of movement of a solid object such as cars, aircraft, and boat hulls.
Viscous drag of fluid in a pipe: Drag force on the immobile pipe decreases fluid velocity relative to the pipe.
In the physics of sports, drag force is necessary to explain the motion of balls, javelins, arrows, and frisbees and the performance of runners and swimmers. For a top sprinter, overcoming drag can require 5% of their energy output.
Types
Types of drag are generally divided into the following categories:
form drag due to the size and shape of a body
skin friction drag or viscous drag due to the friction between the fluid and a surface which may be the outside of an object, or inside such as the bore of a pipe
The effect of streamlining on the relative proportions of skin friction and form drag is shown for two different body sections: An airfoil, which is a streamlined body, and a cylinder, which is a bluff body. Also shown is a flat plate illustrating the effect that orientation has on the relative proportions of skin friction, and pressure difference between front and back.
A body is known as bluff or blunt when the source of drag is dominated by pressure forces, and streamlined if the drag is dominated by viscous forces. For example, road vehicles are bluff bodies. For aircraft, pressure and friction drag are included in the definition of parasitic drag. Parasite drag is often expressed in terms of a hypothetical "equivalent parasite area": the area of a flat plate perpendicular to the flow that would produce the same drag. It is used when comparing the drag of different aircraft. For example, the Douglas DC-3 has a larger equivalent parasite area than the McDonnell Douglas DC-9, which benefited from 30 years of advancement in aircraft design, even though the DC-9 carried five times as many passengers.
lift-induced drag appears with wings or a lifting body in aviation and with semi-planing or planing hulls for watercraft
wave drag (aerodynamics) is caused by the presence of shockwaves and first appears at subsonic aircraft speeds when local flow velocities become supersonic. The wave drag of the supersonic Concorde prototype aircraft was reduced at Mach 2 by 1.8% by applying the area rule which extended the rear fuselage on the production aircraft.
wave resistance (ship hydrodynamics) or wave drag occurs when a solid object is moving along a fluid boundary and making surface waves
boat-tail drag on an aircraft is caused by the angle with which the rear fuselage, or engine nacelle, narrows to the engine exhaust diameter.
Lift-induced drag and parasitic drag
Lift-induced drag
Lift-induced drag (also called induced drag) is drag which occurs as the result of the creation of lift on a three-dimensional lifting body, such as the wing or propeller of an airplane. Induced drag consists primarily of two components: drag due to the creation of trailing vortices (vortex drag); and the presence of additional viscous drag (lift-induced viscous drag) that is not present when lift is zero. The trailing vortices in the flow-field, present in the wake of a lifting body, derive from the turbulent mixing of air from above and below the body which flows in slightly different directions as a consequence of creation of lift.
With other parameters remaining the same, as the lift generated by a body increases, so does the lift-induced drag. This means that as the wing's angle of attack increases (up to a maximum called the stalling angle), the lift coefficient also increases, and so too does the lift-induced drag. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasite drag, increases due to the formation of turbulent unattached flow in the wake behind the body.
Parasitic drag
Parasitic drag, or profile drag, is drag caused by moving a solid object through a fluid. Parasitic drag is made up of multiple components including viscous pressure drag (form drag), and drag due to surface roughness (skin friction drag). Additionally, the presence of multiple bodies in relative proximity may incur so called interference drag, which is sometimes described as a component of parasitic drag.
In aviation, induced drag tends to be greater at lower speeds because a high angle of attack is required to maintain lift, creating more drag. However, as speed increases the angle of attack can be reduced and the induced drag decreases. Parasitic drag, however, increases because the fluid is flowing more quickly around protruding objects increasing friction or drag. At even higher speeds (transonic), wave drag enters the picture. Each of these forms of drag changes in proportion to the others based on speed. The combined overall drag curve therefore shows a minimum at some airspeed - an aircraft flying at this speed will be at or close to its optimal efficiency. Pilots will use this speed to maximize endurance (minimum fuel consumption), or maximize gliding range in the event of an engine failure.
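The trade-off described above can be sketched numerically. In the toy model below, parasitic drag is taken to grow with the square of speed and induced drag to fall with the inverse square; the coefficients k_par and k_ind are arbitrary illustrative constants, not data for any real aircraft.

```python
def total_drag(v, k_par=0.4, k_ind=4.0e6):
    """Toy drag curve: parasitic term k_par*v**2 plus induced term k_ind/v**2 (newtons, v in m/s)."""
    return k_par * v**2 + k_ind / v**2

# The analytic minimum of k_par*v^2 + k_ind/v^2 lies at v = (k_ind / k_par) ** 0.25.
v_min = (4.0e6 / 0.4) ** 0.25
print(f"minimum-drag speed ≈ {v_min:.1f} m/s, total drag there ≈ {total_drag(v_min):.0f} N")

# Below that speed the induced term dominates; above it the parasitic term dominates.
for v in (40, 56, 80, 120):
    print(v, "m/s ->", round(total_drag(v)), "N")
```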
The drag equation
Drag depends on the properties of the fluid and on the size, shape, and speed of the object. One way to express this is by means of the drag equation:
$$F_d = \tfrac{1}{2}\, \rho\, v^2\, C_d\, A$$
where
$F_d$ is the drag force,
$\rho$ is the density of the fluid,
$v$ is the speed of the object relative to the fluid,
$A$ is the cross sectional area, and
$C_d$ is the drag coefficient – a dimensionless number.
The drag coefficient depends on the shape of the object and on the Reynolds number
$$Re = \frac{v D}{\nu}$$
where
$D$ is some characteristic diameter or linear dimension. Actually, $D$ is the equivalent diameter $D_e$ of the object. For a sphere, $D_e$ is the $D$ of the sphere itself.
For a rectangular cross-section in the motion direction, the equivalent diameter is computed from the rectangle edges $a$ and $b$.
$\nu$ is the kinematic viscosity of the fluid (equal to the dynamic viscosity $\mu$ divided by the density $\rho$).
At low $Re$, $C_d$ is asymptotically proportional to $Re^{-1}$, which means that the drag is linearly proportional to the speed, i.e. the drag force on a small sphere moving through a viscous fluid is given by Stokes' law:
$$F_d = 3 \pi \mu D v$$
At high $Re$, $C_d$ is more or less constant, but drag will vary as the square of the speed. The graph to the right shows how $C_d$ varies with $Re$ for the case of a sphere. Since the power needed to overcome the drag force is the product of the force times speed, the power needed to overcome drag will vary as the square of the speed at low Reynolds numbers, and as the cube of the speed at high numbers.
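As a quick numerical illustration of the drag equation and the Reynolds number defined above, the sketch below plugs in assumed, plausible figures for a passenger car; the air density, drag coefficient, frontal area and characteristic length are illustrative assumptions, not values from this article.

```python
def drag_force(rho, v, cd, area):
    """Drag equation: F_d = 1/2 * rho * v**2 * C_d * A."""
    return 0.5 * rho * v**2 * cd * area

def reynolds_number(v, d, nu):
    """Re = v * D / nu for a characteristic length D."""
    return v * d / nu

# Assumed illustrative values:
rho_air = 1.225       # kg/m^3, air at sea level
nu_air = 1.5e-5       # m^2/s, kinematic viscosity of air
cd_car = 0.3          # typical order of magnitude for a modern car
frontal_area = 2.2    # m^2

for v_kmh in (50, 100, 150):
    v = v_kmh / 3.6                            # km/h -> m/s
    f = drag_force(rho_air, v, cd_car, frontal_area)
    re = reynolds_number(v, 4.0, nu_air)       # ~4 m body length as characteristic dimension
    print(f"{v_kmh:3d} km/h: drag ≈ {f:5.0f} N, Re ≈ {re:.1e}")
```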
It can be demonstrated that drag force can be expressed as a function of a dimensionless number, which is dimensionally identical to the Bejan number. Consequently, drag force and drag coefficient can be a function of Bejan number. In fact, from the expression of drag force it has been obtained:
$$F_d = \Delta p\, A_w = \frac{1}{2} C_d\, A_f \frac{\nu \mu}{L^2} Re_L^2$$
and consequently allows expressing the drag coefficient as a function of Bejan number and the ratio between wet area $A_w$ and front area $A_f$:
$$C_d = 2 \frac{A_w}{A_f} \frac{Be}{Re_L^2}$$
where $Re_L$ is the Reynolds number related to fluid path length L.
At high velocity
As mentioned, the drag equation with a constant drag coefficient gives the force experienced by an object moving through a fluid at relatively large velocity, i.e. high Reynolds number, Re > ~1000. This is also called quadratic drag.
The derivation of this equation is presented at Drag equation § Derivation.
The reference area A is often the orthographic projection of the object, or the frontal area, on a plane perpendicular to the direction of motion. For objects with a simple shape, such as a sphere, this is the cross sectional area. Sometimes a body is a composite of different parts, each with a different reference area (drag coefficient corresponding to each of those different areas must be determined).
In the case of a wing, the reference areas are the same, and the drag force is in the same ratio to the lift force as the ratio of the drag coefficient to the lift coefficient. Therefore, the reference for a wing is often the lifting area, sometimes referred to as "wing area" rather than the frontal area.
For an object with a smooth surface, and non-fixed separation points (like a sphere or circular cylinder), the drag coefficient may vary with Reynolds number Re, up to extremely high values (Re of the order 10^7).
For an object with well-defined fixed separation points, like a circular disk with its plane normal to the flow direction, the drag coefficient is constant for Re > 3,500.
Further, the drag coefficient Cd is, in general, a function of the orientation of the flow with respect to the object (apart from symmetrical objects like a sphere).
Power
Under the assumption that the fluid is not moving relative to the currently used reference system, the power required to overcome the aerodynamic drag is given by:
$$P_d = F_d \cdot v = \tfrac{1}{2}\, \rho\, v^3\, A\, C_d$$
The power needed to push an object through a fluid increases as the cube of the velocity. For example, a car cruising on a highway at a given speed may require only a modest power to overcome aerodynamic drag, but the same car at twice that speed requires roughly eight times as much. With a doubling of speed, the drag force quadruples per the formula. Exerting 4 times the force over a fixed distance produces 4 times as much work. At twice the speed, the work (resulting in displacement over a fixed distance) is done twice as fast. Since power is the rate of doing work, 4 times the work done in half the time requires 8 times the power.
When the fluid is moving relative to the reference system, for example, a car driving into headwind, the power required to overcome the aerodynamic drag is given by the following formula:
$$P_d = \tfrac{1}{2}\, \rho\, (v + v_w)^2\, A\, C_d \cdot v$$
where $v_w$ is the wind speed and $v$ is the object speed (both relative to ground).
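A small numerical sketch of these two power formulas follows; the air density and the car's drag coefficient and frontal area are assumptions chosen for illustration, not figures from the article.

```python
def drag_power_still_air(rho, v, cd, area):
    """P = 1/2 * rho * v**3 * C_d * A (fluid at rest relative to the ground)."""
    return 0.5 * rho * v**3 * cd * area

def drag_power_headwind(rho, v, v_wind, cd, area):
    """P = 1/2 * rho * (v + v_wind)**2 * C_d * A * v (drag from relative speed, power at ground speed)."""
    return 0.5 * rho * (v + v_wind)**2 * cd * area * v

rho, cd, area = 1.225, 0.3, 2.2          # assumed air density and car parameters
for v_kmh in (80, 160):
    v = v_kmh / 3.6
    p = drag_power_still_air(rho, v, cd, area)
    print(f"{v_kmh} km/h, still air: {p/1000:.1f} kW")

# Doubling the speed multiplies the power by 2**3 = 8:
print(drag_power_still_air(rho, 2 * 22.2, cd, area) / drag_power_still_air(rho, 22.2, cd, area))

# A 30 km/h headwind at 100 km/h cruise:
print(f"{drag_power_headwind(rho, 100/3.6, 30/3.6, cd, area)/1000:.1f} kW")
```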
Velocity of a falling object
Velocity as a function of time for an object falling through a non-dense medium, and released at zero relative-velocity v = 0 at time t = 0, is roughly given by a function involving a hyperbolic tangent (tanh):
$$v(t) = \sqrt{\frac{2 m g}{\rho A C_d}} \tanh\left(t \sqrt{\frac{g \rho C_d A}{2 m}}\right)$$
The hyperbolic tangent has a limit value of one, for large time t. In other words, velocity asymptotically approaches a maximum value called the terminal velocity vt:
$$v_t = \sqrt{\frac{2 m g}{\rho A C_d}}$$
For an object falling and released at relative-velocity v = vi at time t = 0, with vi < vt, the velocity is also defined in terms of the hyperbolic tangent function:
$$v(t) = v_t \tanh\left(t \frac{g}{v_t} + \operatorname{arctanh} \frac{v_i}{v_t}\right)$$
For vi > vt, the velocity function is defined in terms of the hyperbolic cotangent function:
$$v(t) = v_t \coth\left(t \frac{g}{v_t} + \operatorname{arccoth} \frac{v_i}{v_t}\right)$$
The hyperbolic cotangent also has a limit value of one, for large time t. Velocity asymptotically tends to the terminal velocity vt, strictly from above vt.
For vi = vt, the velocity is constant:
$$v(t) = v_t$$
These functions are defined by the solution of the following differential equation:
$$g - \frac{\rho A C_d}{2 m} v^2 = \frac{dv}{dt}$$
Or, more generically (where F(v) are the forces acting on the object beyond drag):
$$\frac{1}{m} \sum F(v) - \frac{\rho A C_d}{2 m} v^2 = \frac{dv}{dt}$$
For a potato-shaped object of average diameter d and of density $\rho_{obj}$, terminal velocity is about
$$v_t = \sqrt{g\, d\, \frac{\rho_{obj}}{\rho}}$$
For objects of water-like density (raindrops, hail, live objects—mammals, birds, insects, etc.) falling in air near Earth's surface at sea level, the terminal velocity is roughly equal to
$$v_t = 90 \sqrt{d}$$
with d in metre and vt in m/s.
For example, for a human body (d ≈ 0.6 m) vt ≈ 70 m/s, for a small animal like a cat (d ≈ 0.2 m) vt ≈ 40 m/s, for a small bird (d ≈ 0.05 m) vt ≈ 20 m/s, for an insect (d ≈ 0.01 m) vt ≈ 9 m/s, and so on. Terminal velocity for very small objects (pollen, etc.) at low Reynolds numbers is determined by Stokes law.
In short, terminal velocity is higher for larger creatures, and thus potentially more deadly. A creature such as a mouse falling at its terminal velocity is much more likely to survive impact with the ground than a human falling at its terminal velocity.
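A short numerical check of the formulas above; the skydiver-like mass, frontal area and drag coefficient are assumptions chosen for illustration.

```python
import math

def terminal_velocity(m, rho, area, cd, g=9.81):
    """v_t = sqrt(2 m g / (rho * A * C_d))."""
    return math.sqrt(2 * m * g / (rho * area * cd))

def velocity(t, m, rho, area, cd, g=9.81):
    """v(t) = v_t * tanh(g t / v_t) for a drop from rest."""
    vt = terminal_velocity(m, rho, area, cd, g)
    return vt * math.tanh(g * t / vt)

# Assumed figures: 80 kg, 0.7 m^2 frontal area, C_d ~ 1.0, air density 1.2 kg/m^3.
m, rho, area, cd = 80.0, 1.2, 0.7, 1.0
vt = terminal_velocity(m, rho, area, cd)
print(f"terminal velocity ≈ {vt:.0f} m/s")
for t in (1, 5, 10, 20):
    print(f"t = {t:2d} s: v ≈ {velocity(t, m, rho, area, cd):.1f} m/s")

# Rough rule quoted above for water-density objects in air: v_t ≈ 90 * sqrt(d)
print(90 * math.sqrt(0.6))   # ~70 m/s for a human-sized object (d ≈ 0.6 m)
```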
Low Reynolds numbers: Stokes' drag
The equation for viscous resistance or linear drag is appropriate for objects or particles moving through a fluid at relatively slow speeds (assuming there is no turbulence). Purely laminar flow only exists up to Re = 0.1 under this definition. In this case, the force of drag is approximately proportional to velocity. The equation for viscous resistance is:
$$F_d = -b\, v$$
where:
$b$ is a constant that depends on both the material properties of the object and fluid, as well as the geometry of the object; and
$v$ is the velocity of the object.
When an object falls from rest, its velocity will be
where:
is the density of the object,
is density of the fluid,
is the volume of the object,
is the acceleration due to gravity (i.e., 9.8 m/s²), and
is mass of the object.
The velocity asymptotically approaches the terminal velocity . For a given , denser objects fall more quickly.
For the special case of small spherical objects moving slowly through a viscous fluid (and thus at small Reynolds number), George Gabriel Stokes derived an expression for the drag constant:
where is the Stokes radius of the particle, and is the fluid viscosity.
The resulting expression for the drag is known as Stokes' drag:
For example, consider a small sphere with radius = 0.5 micrometre (diameter = 1.0 μm) moving through water at a velocity of 10 μm/s. Using 10⁻³ Pa·s as the dynamic viscosity of water in SI units,
we find a drag force of 0.09 pN. This is about the drag force that a bacterium experiences as it swims through water.
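The quoted figure follows directly from Stokes' formula; a minimal sketch:

```python
import math

# Reproducing the figure quoted above from Stokes' formula F = 6*pi*eta*r*v.
eta = 1.0e-3   # dynamic viscosity of water, Pa*s
r = 0.5e-6     # particle radius, m (0.5 micrometre)
v = 10e-6      # speed, m/s (10 micrometres per second)

force = 6 * math.pi * eta * r * v
print(f"drag force = {force:.2e} N  (= {force * 1e12:.2f} pN)")   # about 0.09 pN
```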
The drag coefficient of a sphere can be determined for the general case of a laminar flow with Reynolds numbers less than using the following formula:
For Reynolds numbers less than 1, Stokes' law applies and the drag coefficient approaches 24/Re.
Aerodynamics
In aerodynamics, aerodynamic drag, also known as air resistance, is the fluid drag force that acts on any moving solid body in the direction of the air's freestream flow.
From the body's perspective (near-field approach), the drag results from forces due to pressure distributions over the body surface, and from forces due to skin friction, which is a result of viscosity.
Alternatively, calculated from the flow field perspective (far-field approach), the drag force results from three natural phenomena: shock waves, vortex sheet, and viscosity.
Overview of aerodynamics
When the airplane produces lift, another drag component results. Induced drag, symbolized , is due to a modification of the pressure distribution due to the trailing vortex system that accompanies the lift production. An alternative perspective on lift and drag is gained from considering the change of momentum of the airflow. The wing intercepts the airflow and forces the flow to move downward. This results in an equal and opposite force acting upward on the wing which is the lift force. The change of momentum of the airflow downward results in a reduction of the rearward momentum of the flow which is the result of a force acting forward on the airflow and applied by the wing to the air flow; an equal but opposite force acts on the wing rearward which is the induced drag. Another drag component, namely wave drag, , results from shock waves in transonic and supersonic flight speeds. The shock waves induce changes in the boundary layer and pressure distribution over the body surface.
Therefore, there are three ways of categorizing drag.
Pressure drag and friction drag
Profile drag and induced drag
Vortex drag, wave drag and wake drag
The pressure distribution acting on a body's surface exerts normal forces on the body. Those forces can be added together and the component of that force that acts downstream represents the drag force, . The nature of these normal forces combines shock wave effects, vortex system generation effects, and wake viscous mechanisms.
Viscosity of the fluid has a major effect on drag. In the absence of viscosity, the pressure forces acting to hinder the vehicle are canceled by a pressure force further aft that acts to push the vehicle forward; this is called pressure recovery and the result is that the drag is zero. That is to say, the work the body does on the airflow is reversible and is recovered as there are no frictional effects to convert the flow energy into heat. Pressure recovery acts even in the case of viscous flow. Viscosity, however, results in pressure drag, and it is the dominant component of drag in the case of vehicles with regions of separated flow, in which the pressure recovery is ineffective.
The friction drag force, which is a tangential force on the aircraft surface, depends substantially on boundary layer configuration and viscosity. The net friction drag, , is calculated as the downstream projection of the viscous forces evaluated over the body's surface. The sum of friction drag and pressure (form) drag is called viscous drag. This drag component is due to viscosity.
History
The idea that a moving body passing through air or another fluid encounters resistance had been known since the time of Aristotle. According to Mervyn O'Gorman, this was named "drag" by Archibald Reith Low. Louis Charles Breguet's paper of 1922 began efforts to reduce drag by streamlining. Breguet went on to put his ideas into practice by designing several record-breaking aircraft in the 1920s and 1930s. Ludwig Prandtl's boundary layer theory in the 1920s provided the impetus to minimise skin friction. A further major call for streamlining was made by Sir Melvill Jones who provided the theoretical concepts to demonstrate emphatically the importance of streamlining in aircraft design.
In 1929 his paper 'The Streamline Airplane' presented to the Royal Aeronautical Society was seminal. He proposed an ideal aircraft that would have minimal drag which led to the concepts of a 'clean' monoplane and retractable undercarriage. The aspect of Jones's paper that most shocked the designers of the time was his plot of the horse power required versus velocity, for an actual and an ideal plane. By looking at a data point for a given aircraft and extrapolating it horizontally to the ideal curve, the velocity gain for the same power can be seen. When Jones finished his presentation, a member of the audience described the results as being of the same level of importance as the Carnot cycle in thermodynamics.
Power curve in aviation
The interaction of parasitic and induced drag vs. airspeed can be plotted as a characteristic curve, illustrated here. In aviation, this is often referred to as the power curve, and is important to pilots because it shows that, below a certain airspeed, maintaining airspeed counterintuitively requires more thrust as speed decreases, rather than less. The consequences of being "behind the curve" in flight are important and are taught as part of pilot training. At the subsonic airspeeds where the "U" shape of this curve is significant, wave drag has not yet become a factor, and so it is not shown in the curve.
Wave drag in transonic and supersonic flow
Wave drag, sometimes referred to as compressibility drag, is drag that is created when a body moves in a compressible fluid and at the speed that is close to the speed of sound in that fluid. In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight.
In transonic flight, wave drag is the result of the formation of shockwaves in the fluid, formed when local areas of supersonic (Mach number greater than 1.0) flow are created. In practice, supersonic flow occurs on bodies traveling well below the speed of sound, as the local speed of air increases as it accelerates over the body to speeds above Mach 1.0. However, full supersonic flow over the vehicle will not develop until well past Mach 1.0. Aircraft flying at transonic speed often incur wave drag through the normal course of operation. In transonic flight, wave drag is commonly referred to as transonic compressibility drag. Transonic compressibility drag increases significantly as the speed of flight increases towards Mach 1.0, dominating other forms of drag at those speeds.
In supersonic flight (Mach numbers greater than 1.0), wave drag is the result of shockwaves present in the fluid and attached to the body, typically oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows, or in bodies with turning angles sufficiently large, unattached shockwaves, or bow waves will instead form. Additionally, local areas of transonic flow behind the initial shockwave may occur at lower supersonic speeds, and can lead to the development of additional, smaller shockwaves present on the surfaces of other lifting bodies, similar to those found in transonic flows. In supersonic flow regimes, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.
The closed form solution for the minimum wave drag of a body of revolution with a fixed length was found by Sears and Haack, and is known as the Sears-Haack Distribution. Similarly, for a fixed volume, the shape for minimum wave drag is the Von Karman Ogive.
The Busemann biplane theoretical concept is not subject to wave drag when operated at its design speed, but is incapable of generating lift in this condition.
d'Alembert's paradox
In 1752 d'Alembert proved that potential flow, the 18th century state-of-the-art inviscid flow theory amenable to mathematical solutions, resulted in the prediction of zero drag. This was in contradiction with experimental evidence, and became known as d'Alembert's paradox. In the 19th century the Navier–Stokes equations for the description of viscous flow were developed by Saint-Venant, Navier and Stokes. Stokes derived the drag around a sphere at very low Reynolds numbers, the result of which is called Stokes' law.
In the limit of high Reynolds numbers, the Navier–Stokes equations approach the inviscid Euler equations, of which the potential-flow solutions considered by d'Alembert are solutions. However, all experiments at high Reynolds numbers showed there is drag. Attempts to construct inviscid steady flow solutions to the Euler equations, other than the potential flow solutions, did not result in realistic results.
The notion of boundary layers—introduced by Prandtl in 1904, founded on both theory and experiments—explained the causes of drag at high Reynolds numbers. The boundary layer is the thin layer of fluid close to the object's boundary, where viscous effects remain important even when the viscosity is very small (or equivalently the Reynolds number is very large).
See also
Added mass
Aerodynamic force
Angle of attack
Atmospheric density
Automobile drag coefficient
Boundary layer
Coandă effect
Drag crisis
Drag coefficient
Drag equation
Gravity drag
Keulegan–Carpenter number
Lift (force)
Morison equation
Nose cone design
Parasitic drag
Projectile motion#Trajectory of a projectile with air resistance
Ram pressure
Reynolds number
Satellite drag
Stall (fluid mechanics)
Stokes' law
Terminal velocity
Wave drag
Windage
References
'Improved Empirical Model for Base Drag Prediction on Missile Configurations, based on New Wind Tunnel Data', Frank G Moore et al. NASA Langley Center
'Computational Investigation of Base Drag Reduction for a Projectile at Different Flight Regimes', M A Suliman et al. Proceedings of 13th International Conference on Aerospace Sciences & Aviation Technology, ASAT- 13, May 26 – 28, 2009
'Base Drag and Thick Trailing Edges', Sighard F. Hoerner, Air Materiel Command, in: Journal of the Aeronautical Sciences, Oct 1950, pp 622–628
Bibliography
L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London.
Anderson, John D. Jr. (2000); Introduction to Flight, Fourth Edition, McGraw Hill Higher Education, Boston, Massachusetts, USA. 8th ed. 2015.
External links
Educational materials on air resistance
Aerodynamic Drag and its effect on the acceleration and top speed of a vehicle.
Vehicle Aerodynamic Drag calculator based on drag coefficient, frontal area and speed.
Smithsonian National Air and Space Museum's How Things Fly website
Effect of dimples on a golf ball and a car
Articles containing video clips
Force | Drag (physics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,055 | [
"Drag (physics)",
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
2,137,332 | https://en.wikipedia.org/wiki/Boomerang%20attack | In cryptography, the boomerang attack is a method for the cryptanalysis of block ciphers based on differential cryptanalysis. The attack was published in 1999 by David Wagner, who used it to break the COCONUT98 cipher.
The boomerang attack has allowed new avenues of attack for many ciphers previously deemed safe from differential cryptanalysis.
Refinements on the boomerang attack have been published: the amplified boomerang attack, and the rectangle attack.
Due to the similarity of a Merkle–Damgård construction with a block cipher, this attack may also be applicable to certain hash functions such as MD5.
The attack
The boomerang attack is based on differential cryptanalysis. In differential cryptanalysis, an attacker exploits how differences in the input to a cipher (the plaintext) can affect the resultant difference at the output (the ciphertext). A high probability "differential" (that is, an input difference that will produce a likely output difference) is needed that covers all, or nearly all, of the cipher. The boomerang attack allows differentials to be used which cover only part of the cipher.
The attack attempts to generate a so-called "quartet" structure at a point halfway through the cipher. For this purpose, say that the encryption action, E, of the cipher can be split into two consecutive stages, E0 and E1, so that E(M) = E1(E0(M)), where M is some plaintext message. Suppose we have two differentials for the two stages; say,
for E0, and
for E1−1 (the decryption action of E1).
The basic attack proceeds as follows:
Choose a random plaintext and calculate .
Request the encryptions of and to obtain and
Calculate and
Request the decryptions of and to obtain and
Compare and ; when the differentials hold, .
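The shape of the attack can be illustrated with a toy cipher. In the sketch below, E0 and E1 are hypothetical, XOR-linear stand-ins chosen so that both differentials hold with certainty; it demonstrates the quartet mechanics only and is not an attack on any real cipher:

```python
# Structural sketch of the boomerang quartet on a toy 8-bit "cipher".
# E0 and E1 are hypothetical, XOR-linear stand-ins chosen only so the script runs
# and the differentials hold with probability 1; real cipher halves are non-linear
# and the differentials hold only with some probability.

def rol(x, r):  # rotate an 8-bit value left by r bits
    return ((x << r) | (x >> (8 - r))) & 0xFF

def ror(x, r):  # rotate an 8-bit value right by r bits
    return ((x >> r) | (x << (8 - r))) & 0xFF

def E0(x):     return rol(x, 3) ^ 0x3C      # first half of the toy cipher
def E0_inv(y): return ror(y ^ 0x3C, 3)
def E1(x):     return rol(x, 5) ^ 0xA7      # second half of the toy cipher
def E1_inv(y): return ror(y ^ 0xA7, 5)

def E(p):     return E1(E0(p))              # full encryption E(p) = E1(E0(p))
def E_inv(c): return E0_inv(E1_inv(c))      # full decryption

DELTA, NABLA = 0x21, 0x8E   # input difference for E0, ciphertext-side difference for E1

P  = 0x4B                   # 1. choose a plaintext P and form P' = P xor DELTA
P2 = P ^ DELTA
C, C2 = E(P), E(P2)             # 2. request both encryptions
D, D2 = C ^ NABLA, C2 ^ NABLA   # 3. offset both ciphertexts by NABLA
Q, Q2 = E_inv(D), E_inv(D2)     # 4. request both decryptions
print("quartet closes with difference DELTA:", (Q ^ Q2) == DELTA)  # 5. compare
```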
Application to specific ciphers
One attack on KASUMI, a block cipher used in 3GPP, is a related-key rectangle attack which breaks the full eight rounds of the cipher faster than exhaustive search (Biham et al., 2005). The attack requires 2^54.6 chosen plaintexts, each of which has been encrypted under one of four related keys and has a time complexity equivalent to 2^76.1 KASUMI encryptions.
References
(Slides in PostScript)
External links
Boomerang attack — explained by John Savard
Cryptographic attacks | Boomerang attack | [
"Technology"
] | 505 | [
"Cryptographic attacks",
"Computer security exploits"
] |
2,137,376 | https://en.wikipedia.org/wiki/HMGN | HMGN (High Mobility Group Nucleosome-binding) proteins are members of the broader class of high mobility group (HMG) chromosomal proteins that are involved in regulation of transcription, replication, recombination, and DNA repair.
HMGN1 and HMGN2 (initially designated HMG-14 and HMG-17 respectively) were discovered by E.W. Johns research group in the early 1970s. HMGN3, HMGN4, and HMGN5 were discovered later and are less abundant. HMGNs are nucleosome binding proteins that help in transcription, replication, recombination, and DNA repair. They can also alter the chromatin epigenetic landscape, helping to stabilize cell identity. There is still relatively little known about their structure and function. HMGN proteins are found in all vertebrates, and play a role in chromatin structure and histone modification. HMGNs come in long chains of amino acids, containing around 100 for HMGN1-4, and roughly 200 in HMGN5. Recent research on the HMGN family is focused on their effect on cell identity, and how reduction of HMGNs relates to induced reprogramming of mouse embryonic fibroblasts (MEFs).
Function
Much of the research on HMGN proteins has been done in vitro, while relatively little is known about their in vivo function and roles.
Because these proteins are predominantly found in higher eukaryotes, the use of microorganisms and other lower eukaryotes has been deemed insufficient to determine the in vivo roles of HMGN proteins. A study was done with knockout mice to see what effect, if any, HMGN proteins have at the level of the whole organism. Mice with lower than normal levels of HMGN2 showed increased sensitivity to UV radiation, which would indicate that HMGN might facilitate repair of UV damage. The same increase in sensitivity was observed in mice exposed to gamma radiation; however, the cellular processes that repair DNA in the two cases are drastically different, so it remains inconclusive whether HMGN proteins facilitate DNA repair in vivo.
HMGN1 and HMGN2 do not co-localize within living cells. This is indication of possible different roles of each HMGN.
Family
HMGN proteins are part of a broader group of proteins referred to as high mobility group chromosomal (HMG) proteins. This larger group was so named for its high electrophoretic mobility in polyacrylamide gels and is differentiated into 3 distinct but related groups, one of them being the HMGN proteins. The HMGN family can be further divided into specific proteins, these being HMGN1, HMGN2, HMGN3, HMGN4, and HMGN5. The overall size varies among the specific proteins: HMGN1–4 average about 100 amino acids, whereas the larger HMGN5 proteins are more than 300 amino acids long in mice and roughly 200 in humans.
HMGN 1 and HMGN 2
HMGN1 and HMGN2 are among the most common of the HMGN proteins. Their main function is reducing the compaction of cellular chromatin by nucleosome binding. NMR evidence shows that this reduction in compaction occurs when the proteins target the main elements that are responsible for the compaction of the chromatin. Their expression rates correlate with the differentiation of the cells in which they are present. Areas that have undergone differentiation have reduced expression levels in comparison to undifferentiated areas, where HMGN1 and HMGN2 are highly expressed.
HMGN 3
HMGN3 has two variants, HMGN3a and HMGN3b. Unlike the HMGN1 and HMGN2 proteins, both forms of HMGN3 tend to be tissue and development specific. They are only expressed in certain tissues at specific developmental stages. There is no preference to a certain tissue given by the two variants of the HMGN3 proteins. There is equal likelihood that either be present in a certain highly expressed HMGN3 tissue. The brain and the eyes in particular are areas that HMGN3 is heavily expressed as well as in adult pancreatic islet cells. It has been shown that the loss of HMGN3 in mice has led to a mild onset of diabetes due to ineffective insulin secretion.
HMGN 4
The discovery of HMGN4 was done by GenBank during a database search and identified it as a "new HMGN2 like transcript", indicating that HMGN4 is closely related to HMGN2. There has been very little research done on HMGN4 proteins. The gene associated with the production of the HMGN4 is located in a region associated with schizophrenia on chromosome 6. Until this point every kind of HMGN has been identified in the vertebrates, but HMGN4 has only been seen and identified in primates. Within humans, HMGN4 has shown high levels of expression in the thyroid, thymus and the lymph nodes.
HMGN 5
The most recent addition to the HMGN protein family is of HMGN5. It is larger than the previous HMGNs, containing 300+ amino acids, due to a long C-terminal domain that varies with species, explaining why mice and humans have a different size of HMGN5. Its biological function is unknown but has shown expression in placental development. There have also been cases where HMGN5 was present in human tumors including, prostate cancer, breast cancer, lung cancer, etc. For this reason, it is thought that HMGN5 might have some link to cancer and might be a potential target for cancer therapy in the future.
Binding of HMGN proteins to chromatin
The location of HMGN during mitosis is the subject of several studies. Their intra-nuclear organization during the various stages of the cell cycle has been difficult to determine. There is a superfamily of abundant and ubiquitous nuclear proteins that bind to chromatin without any known DNA sequence specificity, which is composed of the HMGA, HMGB, and HMGN families. HMGA is associated with chromatin throughout the cell cycle, located in the scaffold of the metaphase chromosome. Both HMGB and HMGN are associated with the mitotic chromosome. The interactions of all HMGs with chromatin are highly dynamic, and the proteins move constantly throughout the nucleus.
HMGNs sample nucleosomes for potential binding sites in a "stop and go" manner, with the "stop" step being longer than the "go" step. This was determined through the use of immunofluorescence studies, live cell imaging, gel mobility shift assays, and bimolecular fluorescence complementation, and by comparing the chromatin binding properties of wild-type and mutant HMGN proteins. In conclusion, HMGNs can associate with mitotic chromatin. However, the binding of HMGN to mitotic chromatin is not dependent on a functional HMGN nucleosomal binding domain, and is weaker than the binding to interphase nucleosomes, in which HMGNs form specific complexes with nucleosomes.
H1 competition and chromatin remodeling
Nucleosomes serve as the protein core (made from 8 histones) for DNA to wrap around, functioning as a foundation for the larger and more condensed chromatin structures of chromosomes. HMGN proteins compete with histone H1 (a linker histone not part of the core nucleosome) for nucleosome binding sites. Once occupied, one protein cannot displace the other. However, both proteins are not permanently associated with the nucleosomes and can be removed via post-translational modifications. In the case of HMGN proteins, protein kinase C (PKC) can phosphorylate the serine amino acids in the nucleosome binding domain present in all HMGN variants. This gives HMGNs a mobile character as they are continuously able to bind and unbind to nucleosomes depending on the intracellular environment and signaling.
Active competition between HMGNs and H1 serve an active role in chromatin remodeling and as result play a role in the cell cycle and cellular differentiation where chromatin compaction and de-compaction determine if certain genes are expressed or not. Histone acetylation is usually associated with open chromatin, and histone methylation is usually associated with closed chromatin.
With use of ChIP-sequencing it is possible to study DNA paired with proteins to determine what kind of histone modifications are present when the nucleosomes are bound to either H1 or HMGNs. Using this method it was found that H1 presence corresponded to high levels of H3K27me3 and H3K4me3, which means that the H3 histone is heavily methylated suggesting that the chromatin structure is closed. It was also found that HMGN presence corresponded to high levels of H3K27ac and H3K4me1, conversely meaning that the H3 histone methylation is greatly reduced suggesting the chromatin structure is open.
Transcriptional activity and cellular differentiation
Functional compensation
While the role of HMGNs is still being researched, it is clear that the absence of HMGNs in knock out (KO) and knock down (KD) studies results in a significant difference in a cell's total transcriptional activity. Several transcriptome studies have been conducted which show various other genes are either up-regulated or down-regulated due to HMGN absence.
Interestingly, in the case of HMGN1&2, knocking out only HMGN1 or only HMGN2 results in changes for just a few genes. But when both HMGN1&2 are knocked out there is a far more pronounced effect with regard to changes in gene activity. For example, in the mouse brain, when only HMGN1 was knocked out only 1 gene was up-regulated; when only HMGN2 was knocked out 19 genes were up-regulated and 29 down-regulated. But when both HMGN1&2 are knocked out 50 genes were up-regulated and 41 down-regulated. Simply tallying the totals for the HMGN1 and HMGN2 knock outs would not give the same results as an HMGN1&2 DKO (double knock out).
This is described as functional compensation since HMGN1 and HMGN2 are only slightly different in terms of protein structure and essentially do the same thing. They have largely the same affinity for nucleosomal binding sites. That means that if HMGN1 is absent, HMGN2 can often fill in, and vice versa. Using ChIP-seq it was found that in mouse chromosomes there were 16.5K sites where both HMGN1&2 could bind, 14.6K sites that had HMGN1 preference and only 6.4K sites that had HMGN2 preference. Differences in HMGN1 and HMGN2 activity are pronounced in the brain, thymus, liver, and spleen, suggesting HMGN variants also have specialized roles in addition to their overlapping functionality.
Eye development
This overlapping functionality may seem redundant or even deleterious; however, these proteins are integral to various cellular processes, especially differentiation and embryogenesis, as they provide a means for dynamic chromatin modeling. For example, HMGN1, 2 and 3 are all involved in ocular development in the mouse embryo. HMGN1 expression is elevated during the initial stages of eye development in progenitor cells, but is decreased in newly formed and fated cells, such as lens fiber cells. HMGN2, in contrast, stays elevated in both embryonic and adult eye cells. HMGN3 was found to be especially elevated at 2 weeks (for an adult mouse) in the inner nuclear and ganglion cells. This shows there is an uneven distribution of HMGNs in pre-fated and adult cells.
Brain / CNS development
In human brain development, HMGNs have been shown to be a critical component of neural differentiation and are elevated in neural stem cells (neural progenitor cells). For example, in a knock down study, loss of HMGN1, 2 & 3 resulted in a lower population of astrocytes and a higher population of neural progenitor cells.
In oligodendrocyte differentiation HMGNs are critical: when HMGN1&2 are both knocked out, the population of oligodendrocytes in spinal tissue is reduced by 65%. However, due to functional compensation, this effect is not observed when only HMGN1 or HMGN2 is knocked out. This observation is not just correlation. With ChIP-seq analysis it is shown that chromatin modeling at the OLIG1&2 genes (transcription factors involved in oligodendrocyte differentiation) is in an open conformation and has HMGNs bound to the nucleosomes.
It can be inferred that this redundancy is actually beneficial as the presence of at least one HMGN variant vastly improves tissue differentiation and development.
See also
High mobility group
References
External links
Transcription factors | HMGN | [
"Chemistry",
"Biology"
] | 2,697 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
2,137,509 | https://en.wikipedia.org/wiki/Perfect%20fluid | In physics, a perfect fluid or ideal fluid is a fluid that can be completely characterized by its rest frame mass density and isotropic pressure . Usually, "perfect fluid" is reserved for relativistic models and "ideal fluid" for classical inviscid flow. Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are ignored. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction.
A quark–gluon plasma and graphene are examples of nearly perfect fluids that can be studied in a laboratory.
D'Alembert paradox
In classical mechanics, ideal fluids are described by Euler equations. Ideal fluids produce no drag according to d'Alembert's paradox.
Relativistic formulation
In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form
where U is the 4-velocity vector field of the fluid and where is the metric tensor of Minkowski spacetime.
In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form
where is the 4-velocity of the fluid and where is the metric tensor of Minkowski spacetime.
This takes on a particularly simple form in the rest frame
where is the energy density and is the pressure of the fluid.
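The formulas referred to above are not reproduced in this text. For reference, the standard form (a reconstruction from the usual conventions rather than a quotation of this article's own equations), in natural units with a space-positive signature, is:

```latex
% Standard perfect-fluid stress-energy tensor in natural units (c = 1),
% space-positive signature (-,+,+,+); e is the energy density, p the pressure,
% and U the 4-velocity normalized so that U_mu U^mu = -1:
T^{\mu\nu} = (e + p)\, U^{\mu} U^{\nu} + p\, \eta^{\mu\nu}
% In the fluid's rest frame this reduces to
T^{\mu\nu} = \mathrm{diag}(e,\; p,\; p,\; p)
```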
Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular, quantization, to be applied to fluids.
Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe.
In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as
where is the 4-velocity vector field of the fluid and where is the inverse metric, written with a space-positive signature.
See also
Equation of state
Ideal gas
Fluid solutions in general relativity
Potential flow
References
Further reading
, (pbk.)
Topical review.
Fluid mechanics
Superfluidity
Physics | Perfect fluid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 462 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Superfluidity",
"Civil engineering",
"Condensed matter physics",
"Exotic matter",
"Fluid mechanics",
"Matter",
"Fluid dynamics"
] |
2,137,523 | https://en.wikipedia.org/wiki/Prewellordering | In set theory, a prewellordering on a set is a preorder on (a transitive and reflexive relation on ) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation defined by is a well-founded relation.
Prewellordering on a set
A prewellordering on a set is a homogeneous binary relation on that satisfies the following conditions:
Reflexivity: for all
Transitivity: if and then for all
Total/Strongly connected: or for all
for every non-empty subset there exists some such that for all
This condition is equivalent to the induced strict preorder defined by and being a well-founded relation.
A homogeneous binary relation on is a prewellordering if and only if there exists a surjection into a well-ordered set such that for all if and only if
Examples
Given a set the binary relation on the set of all finite subsets of defined by if and only if (where denotes the set's cardinality) is a prewellordering.
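This example can be made concrete in code. The sketch below (an illustration, not part of the article) checks that the relation A ≼ B iff |A| ≤ |B| is reflexive, transitive, and total on the finite subsets of a small set; the associated norm is the map A ↦ |A|:

```python
from itertools import combinations

X = {0, 1, 2}                                         # an ambient set
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]       # all finite subsets of X

def leq(a, b):
    """The relation A <= B iff |A| <= |B|, induced by the norm A -> |A|."""
    return len(a) <= len(b)

# Reflexive, transitive and total (strongly connected):
assert all(leq(a, a) for a in subsets)
assert all(leq(a, c) for a in subsets for b in subsets for c in subsets
           if leq(a, b) and leq(b, c))
assert all(leq(a, b) or leq(b, a) for a in subsets for b in subsets)

# Well-founded: the norm takes values in the natural numbers, so every
# non-empty collection of subsets has an element of minimal cardinality.
# Not antisymmetric, hence not a partial order: distinct singletons are equivalent.
print(leq(frozenset({0}), frozenset({1})) and leq(frozenset({1}), frozenset({0})))  # True
```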
Properties
If is a prewellordering on then the relation defined by
is an equivalence relation on and induces a wellordering on the quotient. The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering.
A norm on a set is a map from into the ordinals. Every norm induces a prewellordering; if is a norm, the associated prewellordering is given by
Conversely, every prewellordering is induced by a unique regular norm (a norm is regular if, for any and any there is such that ).
Prewellordering property
If is a pointclass of subsets of some collection of Polish spaces, closed under Cartesian product, and if is a prewellordering of some subset of some element of then is said to be a -prewellordering of if the relations and are elements of where for
is said to have the prewellordering property if every set in admits a -prewellordering.
The prewellordering property is related to the stronger scale property; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions.
Examples
and both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals, for every and
have the prewellordering property.
Consequences
Reduction
If is an adequate pointclass with the prewellordering property, then it also has the reduction property: For any space and any sets and both in the union may be partitioned into sets both in such that and
Separation
If is an adequate pointclass whose dual pointclass has the prewellordering property, then has the separation property: For any space and any sets and disjoint sets both in there is a set such that both and its complement are in with and
For example, has the prewellordering property, so has the separation property. This means that if and are disjoint analytic subsets of some Polish space then there is a Borel subset of such that includes and is disjoint from
See also
Graded poset – a graded poset is analogous to a prewellordering with a norm, replacing a map to the ordinals with a map to the natural numbers
References
Descriptive set theory
Order theory
Wellfoundedness | Prewellordering | [
"Mathematics"
] | 704 | [
"Mathematical induction",
"Order theory",
"Wellfoundedness"
] |
2,137,672 | https://en.wikipedia.org/wiki/European%20Association%20for%20Theoretical%20Computer%20Science | The European Association for Theoretical Computer Science (EATCS) is an international organization with a European focus, founded in 1972. Its aim is to facilitate the exchange of ideas and results among theoretical computer scientists as well as to stimulate cooperation between the theoretical and the practical community in computer science.
The major activities of the EATCS are:
Organization of ICALP, the International Colloquium on Automata, Languages and Programming;
Publication of the Bulletin of the EATCS;
Publication of a series of monographs and texts on theoretical computer science;
Publication of the journal Theoretical Computer Science;
Publication of the journal Fundamenta Informaticae.
EATCS Award
Each year, the EATCS Award is awarded in recognition of a distinguished career in theoretical computer science. The first award was assigned to Richard Karp in 2000; the complete list of the winners is given below:
Presburger Award
Starting in 2010, the European Association for Theoretical Computer Science (EATCS) confers each year at the conference ICALP the Presburger Award to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929. The complete list of the winners is given below:
EATCS Fellows
The EATCS Fellows Program has been established by the Association to recognize outstanding EATCS Members for their scientific achievements in the field of Theoretical Computer Science. The Fellow status is conferred by the EATCS Fellows-Selection Committee upon a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community.
Texts in Theoretical Computer Science
EATCS Bulletin
The EATCS Bulletin is a newsletter of the EATCS, published online three times annually in February, June, and October respectively. The Bulletin is a medium for rapid publication and wide distribution of material such as:
EATCS matters;
information about the current ICALP;
technical contributions;
columns;
surveys and tutorials;
reports on conferences;
calendar of events;
reports on computer science departments and institutes;
listings of technical reports and publications;
book reviews;
open problems and solutions;
abstracts of PhD Theses;
information on visitors at various institutions; and
entertaining contributions and pictures related to computer science.
Since 2021 its editor-in-chief has been Stefan Schmid (TU Berlin).
EATCS Young Researchers Schools
Beginning in 2014, the European Association for Theoretical Computer Science (EATCS) established a series of Young Researcher Schools on TCS topics. A brief history of the schools follows below.
See also
List of computer science awards
References
External links
European Association for Theoretical Computer Science website
Computer science societies
Information technology organizations based in Europe
International organisations based in Italy
Organizations established in 1972
Theoretical computer science | European Association for Theoretical Computer Science | [
"Mathematics"
] | 612 | [
"Theoretical computer science",
"Applied mathematics"
] |
2,137,681 | https://en.wikipedia.org/wiki/Pauson%E2%80%93Khand%20reaction | The Pauson–Khand (PK) reaction is a chemical reaction, described as a [2+2+1] cycloaddition. In it, an alkyne, an alkene, and carbon monoxide combine into a α,β-cyclopentenone in the presence of a metal-carbonyl catalyst
Ihsan Ullah Khand (1935–1980) discovered the reaction around 1970, while working as a postdoctoral associate with Peter Ludwig Pauson (1925–2013) at the University of Strathclyde in Glasgow. Pauson and Khand's initial findings were intermolecular in nature, but the reaction has poor selectivity. Some modern applications instead apply the reaction for intramolecular ends.
The traditional reaction requires stoichiometric amounts of dicobalt octacarbonyl, stabilized by a carbon monoxide atmosphere. Catalytic metal quantities, enhanced reactivity and yield, or stereoinduction are all possible with the right chiral auxiliaries, choice of transition metal (Ti, Mo, W, Fe, Co, Ni, Ru, Rh, Ir and Pd), and additives.
Mechanism
While the mechanism has not yet been fully elucidated, Magnus' 1985 explanation is widely accepted for both mono- and dinuclear catalysts, and was corroborated by computational studies published by Nakamura and Yamanaka in 2001. The reaction starts with a dicobalt hexacarbonyl acetylene complex. Binding of an alkene gives a metallacyclopentene complex. CO then migratorily inserts into an M-C bond. Reductive elimination delivers the cyclopentenone. Typically, the dissociation of carbon monoxide from the organometallic complex is rate limiting.
Selectivity
The reaction works with both terminal and internal alkynes, although internal alkynes tend to give lower yields. The order of reactivity for the alkene is (strained cyclic) > (terminal) > (disubstituted) > (trisubstituted). Tetrasubstituted alkenes and alkenes with strongly electron-withdrawing groups are unsuitable.
With unsymmetrical alkenes or alkynes, the reaction is rarely regioselective, although some patterns can be observed.
For mono-substituted alkenes, alkyne substituents typically direct: larger groups prefer the C2 position, and electron-withdrawing groups prefer the C3 position.
But the alkene itself struggles to discriminate between the C4 and C5 position, unless the C2 position is sterically congested or the alkene has a chelating heteroatom.
The reaction's poor selectivity is ameliorated in intramolecular reactions. For this reason, the intramolecular Pauson-Khand is common in total synthesis, particularly the formation of 5,5- and 6,5-membered fused bicycles.
Generally, the reaction is highly syn-selective about the bridgehead hydrogen and substituents on the cyclopentane.
Appropriate chiral ligands or auxiliaries can make the reaction enantioselective. BINAP is commonly employed.
Additives
Typical Pauson-Khand conditions are elevated temperatures and pressures in aromatic hydrocarbon (benzene, toluene) or ethereal (tetrahydrofuran, 1,2-dichloroethane) solvents. These harsh conditions may be attenuated with the addition of various additives.
Absorbent surfaces
Adsorbing the metallic complex onto silica or alumina can enhance the rate of decarbonylative ligand exchange. This is because the donor positions itself on a solid surface (i.e. silica). Additionally, using a solid support restricts conformational movement (rotamer effect).
Lewis bases
Traditional catalytic aids such as phosphine ligands make the cobalt complex too stable, but bulky phosphite ligands are operable.
Lewis basic additives, such as n-BuSMe, are also believed to accelerate the decarbonylative ligand exchange process. However, an alternative view holds that the additives make olefin insertion irreversible instead. Sulfur compounds are typically hard to handle and smelly, but n-dodecyl methyl sulfide and tetramethylthiourea do not suffer from those problems and can improve reaction performance.
Amine N-oxides
The two most common amine N-oxides are N-methylmorpholine N-oxide (NMO) and trimethylamine N-oxide (TMANO). It is believed that these additives remove carbon monoxide ligands via nucleophilic attack of the N-oxide onto the CO carbonyl, oxidizing the CO into CO2, and generating an unsaturated organometallic complex. This renders the first step of the mechanism irreversible, and allows for more mild conditions. Hydrates of the aforementioned amine N-oxides have similar effect.
N-oxide additives can also improve enantio- and diastereoselectivity, although the mechanism thereby is not clear.
Alternative catalysts
Co4(CO)12 and Co3(CO)9(μ3-CH) also catalyze the PK reaction, although Takayama et al. detail a reaction catalyzed by dicobalt octacarbonyl.
One stabilization method is to generate the catalyst in situ. Chung reports that Co(acac)2 can serve as a precatalyst, activated by sodium borohydride.
Other metals
catalyst requires a silver triflate co-catalyst to effect the Pauson–Khand reaction:
Molybdenum hexacarbonyl is a carbon monoxide donor in PK-type reactions between allenes and alkynes with dimethyl sulfoxide in toluene. Titanium, nickel, and zirconium complexes admit the reaction. Other metals can also be employed in these transformations.
Substrate tolerance
In general allenes, support the Pauson–Khand reaction; regioselectivity is determined by the choice of metal catalyst. Density functional investigations show the variation arises from different transition state metal geometries.
Heteroatoms are also acceptable: Mukai et al's total synthesis of physostigmine applied the Pauson–Khand reaction to a carbodiimide.
Cyclobutadiene also lends itself to a [2+2+1] cycloaddition, although this reactant is too active to store in bulk. Instead, cyclobutadiene is generated in situ by decomplexation of the stable cyclobutadiene iron tricarbonyl with ceric ammonium nitrate (CAN).
An example of a newer version is the use of the chlorodicarbonylrhodium(I) dimer, [(CO)2RhCl]2, in the synthesis of (+)-phorbol by Phil Baran. In addition to using a rhodium catalyst, this synthesis features an intramolecular cyclization that results in the normal 5-membered α,β-cyclopentenone as well as 7-membered ring.
Carbon monoxide generation in situ
The cyclopentenone motif can be prepared from aldehydes, carboxylic acids, and formates. These examples typically employ rhodium as the catalyst, as it is commonly used in decarbonylation reactions. The decarbonylation and PK reaction occur in the same reaction vessel.
See also
Nicholas reaction
Further reading
For Khand and Pauson's perspective on the reaction:
For a modern perspective:
References
Cycloadditions
Multiple component reactions
Name reactions | Pauson–Khand reaction | [
"Chemistry"
] | 1,649 | [
"Name reactions"
] |
2,137,708 | https://en.wikipedia.org/wiki/Wheel%20speed%20sensor | A wheel speed sensor (WSS) or vehicle speed sensor (VSS) is a type of tachometer. It is a sender device used for reading the speed of a vehicle's wheel rotation. It usually consists of a toothed ring and pickup.
Automotive wheel speed sensor
Purpose
The wheel speed sensor was initially used to replace the mechanical linkage from the wheels to the speedometer, eliminating cable breakage and simplifying the gauge construction by eliminating moving parts. These sensors also produce data that allows automated driving aids like ABS to function.
Construction
The most common wheel speed sensor system consists of a ferromagnetic toothed reluctor ring (tone wheel) and a sensor (which can be passive or active).
The tone wheel is typically made of steel and may be an open-air design, or sealed (as in the case of unitized bearing assemblies). The number of teeth is chosen as a trade-off between low-speed sensing/accuracy and high-speed sensing/cost. Greater numbers of teeth will require more machining operations and (in the case of passive sensors) produce a higher frequency output signal which may not be as easily interpreted at the receiving end, but give a better resolution and higher signal update rate.
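For illustration, the relationship between tooth count, pulse frequency, and road speed can be sketched as follows; the tooth count and tyre diameter are assumed example values, not figures from the article:

```python
import math

# Converting a wheel speed sensor's pulse frequency into road speed.
# The tooth count and tyre diameter below are assumed example values.
TEETH = 48              # teeth on the tone wheel (assumed)
TYRE_DIAMETER = 0.65    # tyre diameter in metres (assumed)

def road_speed_kmh(pulse_hz):
    """Road speed in km/h from the sensor's pulse frequency in Hz."""
    revs_per_second = pulse_hz / TEETH
    metres_per_second = revs_per_second * math.pi * TYRE_DIAMETER
    return metres_per_second * 3.6

print(f"{road_speed_kmh(653):.0f} km/h")   # about 100 km/h for these example values
```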
In more advanced systems, the teeth can be asymmetrically shaped to allow the sensor to distinguish between forward and reverse rotation of the wheel.
A passive sensor typically consists of a ferromagnetic rod which is oriented to project radially from the tone wheel with a permanent magnet at the opposite end. The rod is wound with fine wire which experiences an induced alternating voltage as the tone wheel rotates, as the teeth interfere with the magnetic field. Passive sensors output a sinusoidal signal which grows in magnitude and frequency with wheel speed.
A variation of the passive sensor does not have a magnet backing it, but rather a tone wheel which consists of alternating magnetic poles that produce the alternating voltage. The output of this sensor tends to resemble a square wave, rather than a sinusoid, but still increases in magnitude as wheel speed increases.
An active sensor is a passive sensor with signal conditioning circuitry built into the device. This signal conditioning may be amplifying the signal's magnitude; changing the signal's form to PWM, square wave, or others; or encoding the value into a communication protocol before transmission.
Variations
The vehicle speed sensor (VSS) may be, but is not always, a true wheel speed sensor. For example, in the Ford AOD transmission, the VSS is mounted to the tailshaft extension housing and is a self-contained tone ring and sensor. Though this does not give wheel speed (as each wheel in an axle with a differential is able to turn at differing speeds, and neither is solely dependent on the driveshaft for its final speed), under typical driving conditions this is close enough to provide the speedometer signal, and was used for the rear wheel ABS systems on 1987 and newer Ford F-Series, the first pickups with ABS.
Special purpose speed sensors
Road vehicles
Wheel speed sensors are a critical component of anti-lock braking systems.
Rotary speed sensors for rail vehicles
Many of the subsystems in a rail vehicle, such as a locomotive or multiple unit, depend on a reliable and precise rotary speed signal, in some cases as a measure of the speed or changes in the speed. This applies in particular to traction control, but also to wheel slide protection, registration, train control, door control and so on. These tasks are performed by a number of rotary speed sensors that may be found in various parts of the vehicle.
Speed sensor failures are frequent, and are mainly due to the extremely harsh operating conditions encountered in rail vehicles. The relevant standards specify detailed test criteria, but in practical operation the conditions encountered are often even more extreme (such as shock/vibration and especially electromagnetic compatibility (EMC)).
Rotary speed sensors for motors
Although rail vehicles occasionally do use drives without sensors, most need a rotary speed sensor for their regulator system. The most common type is a two-channel sensor that scans a toothed wheel on the motor shaft or gearbox which may be dedicated to this purpose or may be already present in the drive system.
Modern Hall effect sensors of this type make use of the principle of magnetic field modulation and are suitable for ferromagnetic target wheels with a module between m =1 and m = 3.5 (D.P.=25 to D.P.=7). The form of the teeth is of secondary importance; target wheels with involute or rectangular toothing can be scanned. Depending on the diameter and teeth of the wheel it is possible to get between 60 and 300 pulses per revolution, which is sufficient for drives of lower and medium traction performance.
This type of sensor normally consists of two hall effect sensors, a rare-earth magnet and appropriate evaluation electronics. The field of the magnet is modulated by the passing target teeth. This modulation is registered by the Hall sensors, converted by a comparator stage to a square wave signal and amplified in a driver stage.
The Hall effect varies greatly with temperature. The sensors’ sensitivity and also the signal offset therefore depend not only on the air gap but also on the temperature. This also very much reduces the maximum permissible air gap between the sensor and the target wheel. At room temperature an air gap of 2 to 3 mm can be tolerated without difficulty for a typical target wheel of module m = 2, but in the required temperature range of from −40 °C to 120 °C the maximum gap for effective signal registration drops to 1.3 mm.
Smaller pitch target wheels with module m = 1 are often used to get a higher time resolution or to make the construction more compact. In this case the maximum possible air gap is only 0.5 to 0.8 mm.
For the design engineer, the visible air gap that the sensor ends up with is primarily the result of the specific machine design, but is subject to whatever constraints are needed to register the rotary speed. If this means that the possible air gap has to lie within a very small range, then this will also restrict the mechanical tolerances of the motor housing and target wheels to prevent signal dropouts during operation. This means that in practice there may be problems, particularly with smaller pitched target wheels of module m = 1 and disadvantageous combinations of tolerances and extreme temperatures. From the point of view of the motor manufacturer, and even more so the operator, it is therefore better to look for speed sensors with a wider range of air gap.
The primary signal from a Hall sensor loses amplitude sharply as the air gap increases. For Hall sensor manufacturers this means that they need to provide maximum possible compensation for the Hall signal's physically induced offset drift. The conventional way of doing this is to measure the temperature at the sensor and use this information to compensate the offset, but this fails for two reasons: firstly because the drift does not vary linearly with the temperature, and secondly because not even the sign of the drift is the same for all sensors.
Some sensors now offer an integrated signal processor that attempts to correct the offset and amplitude of the Hall sensor signals. This correction enables a larger maximum permissible air gap at the speed sensor. On a module m = 1 target wheel these new sensors can tolerate an air gap of 1.4 mm, which is wider than that for conventional speed sensors on module m = 2 target wheels. On a module m = 2 target wheel the new speed sensors can tolerate gap of as much as 2.2 mm. It has also been possible to markedly increase the signal quality. Both the duty cycle and the phase displacement between the two channels is at least three times as stable in the face of fluctuating air gap and temperature drift. In addition, in spite of the complex electronics it has also been possible to increase the mean time between failures for the new speed sensors by a factor of three to four. So they not only provide more precise signals, their signal availability is also significantly better.
An alternative to Hall effect sensors with gears are sensors or encoders which use magnetoresistance. Because the target wheel is an active, multipole magnet, air gaps can be even larger, up to 4.0 mm. Because magnetoresistive sensors are angle-sensitive and amplitude-insensitive, signal quality is increased over Hall sensors in fluctuating gap applications. Also the signal quality is much higher, enabling interpolation within the sensor/encoder or by an external circuit.
Motor encoders with integrated bearings
There is a limit on the number of pulses achievable by Hall sensors without integrated bearings: with a 300 mm diameter target wheel it is normally not possible to get beyond 300 pulses per revolution. But many locomotives and electric multiple units (EMUs) need higher numbers of pulses for proper operation of the traction converter, for instance when there are tight constraints on the traction regulator at low speeds.
Such Hall effect sensor applications may benefit from built-in bearings, which can tolerate an air gap many orders of magnitude smaller because of the greatly reduced play on the actual sensor as opposed to that of the motor bearing. This makes it possible to choose a much smaller pitch for the measuring scale, right down to module m = 0.22. Likewise, the magnetoresistive sensors offer even higher resolution and accuracy than Hall sensors when implemented in motor encoders with integrated bearings.
For even greater signal accuracy a precision encoder can be used.
The functional principles of the two encoders are similar: a multichannel magneto-resistive sensor scans a target wheel with 256 teeth, generating sine and cosine signals. Arctangent interpolation is used to generate rectangular pulses from the sine/cosine signal periods. The precision encoder also possesses amplitude and offset correction functions. This makes it possible to further improve the signal quality, which greatly improves traction regulation.
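The principle of arctangent interpolation can be sketched as follows (an illustration of the idea, not a vendor's algorithm); the phase within one sine/cosine period is recovered with atan2 and then quantized into a chosen number of interpolation steps:

```python
import math

STEPS = 16   # interpolation factor per sine/cosine period (assumed)

def interpolated_position(sin_ch, cos_ch, period_index):
    """Position in fractional signal periods from one sine/cosine sample pair."""
    phase = math.atan2(sin_ch, cos_ch)           # phase within the period, -pi .. +pi
    fraction = (phase / (2 * math.pi)) % 1.0     # 0 .. 1 of a period
    step = int(fraction * STEPS)                 # quantized interpolation step
    return period_index + step / STEPS

# Example: a sample taken one quarter of the way through the 42nd period
print(interpolated_position(math.sin(math.pi / 2), math.cos(math.pi / 2), 42))  # 42.25
```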
Speed sensors on the wheelset
Bearingless wheelset speed sensors
Bearingless speed sensors may be found in almost every wheelset of a rail vehicle. They are principally used for wheel slide protection and usually supplied by the manufacturer of the wheel slide protection system. These sensors require a sufficiently small air gap and need to be particularly reliable.
One special feature of rotary speed sensors that are used for wheel slide protection is their integrated monitoring functions. Two-wire sensors with a current output of 7 mA/14 mA are used to detect broken cables. Other designs provide for an output voltage of around 7 V as soon as the signal frequency drops below 1 Hz. Another method used is to detect a 50 MHz output signal from the sensor when the power supply is periodically modulated at 50 MHz. It is also common for two-channel sensors to have electrically isolated channels.
Occasionally it is necessary to take off the wheel slide protection signal at the traction motor, and the output frequency is then often too high for the wheel slide protection electronics. For this application a speed sensor with an integrated frequency divider or encoder can be utilized.
Wheelset pulse generator with integrated bearing
A rail vehicle, particularly a locomotive, possesses numerous subsystems that require separate, electrically isolated speed signals. There usually are neither enough mounting places nor is there sufficient space where separate pulse generators could be installed. Multi-channel pulse generators that are flange-mounted onto the bearing shells or covers of wheelsets offer a solution. Using a number of bearingless speed sensors would also involve additional cables, which should preferably be avoided for outdoor equipment because they are so susceptible to damage, for instance from flying track ballast.
Optical sensor
From one to four channels can be implemented, each channel having a photosensor that scans one of at most two signal tracks on a slotted disk. Experience shows that the possible number of channels achievable by this technique is still not enough. A number of subsystems therefore have to make do with looped-through signals from the wheel slide protection electronics and are therefore forced to accept, for instance, the available number of pulses, although a separate speed signal might well have some advantages.
The use of optical sensors is widespread in industry. They have two fundamental problems in functioning reliably for years, the optical components are extremely susceptible to dirt, and the light source ages too quickly.
Traces of dirt greatly reduce the amount of light that passes through the lens and can cause signal dropout. These encoders are therefore required to be very well sealed. Further problems are encountered when the pulse generators are used in environments in which the dew point is passed: the lenses fog and the signal is frequently interrupted.
The light sources used are light-emitting diodes (LEDs). But LEDs are always subject to aging, which over a few years leads to a noticeably reduced beam. Attempts are made to compensate for this by using special regulators that gradually increase the current through the LED, but unfortunately this further accelerates the aging process.
Magnetic sensor
The principle used in scanning a ferromagnetic measuring scale magnetically does not exhibit these deficiencies. During many years’ experience of using magnetic encoders there have been occasions when a seal has failed and a pulse generator has been found to be completely covered in a thick layer of brake dust and other dirt, but such pulse generators still functioned perfectly.
Historically, magnetic sensor systems cost more than optical systems, but this difference is narrowing rapidly. Magnetic Hall and magnetoresistive sensor systems can be imbedded in plastic or potting material, which increases mechanical reliability and eliminates damage from water and grease.
Wheel speed sensors can also include hysteresis. This suppresses any extraneous pulses while the vehicle is at a standstill.
Pulse generators constructed in accordance with this principle have been successfully field tested by several rail operators since the beginning of 2005. The type test specified in EN 50155 has also been successfully completed, so that these pulse generators can now be delivered.
Wheelset pulse generators with integrated bearings for inside-journal bogies
Inside-journal bogies make particular demands on the pulse generator designer because they have no bearing cover on the end to serve as the basis from which the rotation of the wheelset shaft could be registered. In this case the pulse generator has to be mounted on a shaft stub attached to the wheelset and fitted with a torque converter connected to the bogie frame to prevent it from rotating.
The extreme vibration in this location leads to a considerable load on the pulse generator bearing, which, with this method of installation, has to carry not only the relatively small mass of the pulse generator shaft but that of the entire pulse generator. When we consider that bearing life reduces with at least the third power of the load, we can see that a reliable and durable pulse generator for such a situation cannot simply be adapted from the more common standard pulse generator for outside-journal bogies by fitting an intermediate flange or similar construction. It really is necessary to have a pulse generator with a design adapted to the requirements of such a location.
Speed sensors for non-magnetic target wheels or applications that produce swarf
Some transport companies are faced with a special problem: the circulating air that keeps the motors cool carries swarf abraded from the wheels and rails. This collects on the heads of magnetic sensors.
Increasingly, there are also motors in which sensors have to scan aluminium target wheels, for instance because the impellers are made of an aluminium alloy and the manufacturer does not wish to shrink on a separate ferromagnetic gear rim.
For these applications there are speed sensors available that do not require a target magnet. A number of transmitting and receiving coils are used to generate an alternating electric field with a frequency of the order of 1 MHz, and the modulation of the coupling between senders and receivers is then evaluated. This sensor is installation- and signal-compatible with the magnetic sensors; for most common target wheel modules the units can simply be replaced without any other measures being necessary.
Speed sensors with interpolation
Customers often want a higher number of pulses per revolution than can be achieved in the space available and with the smallest module m = 1. To achieve this, sensors are available which offer interpolation, outputting 2 to 64 times the original number of gear teeth or magnetic poles on the target wheel. Accuracy depends on the quality of the sensor input: Hall sensors are lower in cost but less accurate, while magnetoresistive sensors cost more but are more accurate. The effect of interpolation on resolution is illustrated in the sketch below.
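As a rough illustration of what interpolation buys, the sketch below converts a measured pulse frequency into a vehicle speed. The tooth count, interpolation factor, and wheel diameter are assumed example values, not figures from any particular sensor.

```python
import math

# Assumed example: a 60-tooth target wheel read with 16x interpolation on a
# wheel of 0.92 m diameter (all values illustrative, not from a real sensor).
TEETH = 60
INTERPOLATION = 16
PULSES_PER_REV = TEETH * INTERPOLATION          # 960 pulses per wheel revolution
WHEEL_CIRCUMFERENCE_M = math.pi * 0.92

def speed_kmh(pulse_frequency_hz: float) -> float:
    """Vehicle speed implied by the measured pulse frequency."""
    revs_per_second = pulse_frequency_hz / PULSES_PER_REV
    return revs_per_second * WHEEL_CIRCUMFERENCE_M * 3.6

print(round(speed_kmh(9600.0), 1))  # 10 rev/s, roughly 104 km/h
```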
References
External links
Wheel speed sensors in motor vehicles: Function, Diagnosis, and Troubleshooting, Hella
Vehicle Safety Equipment "Drive Safer America"
Vehicle safety technologies
Railway safety
Speed sensors | Wheel speed sensor | [
"Technology",
"Engineering"
] | 3,378 | [
"Speed sensors",
"Measuring instruments"
] |
2,137,712 | https://en.wikipedia.org/wiki/Connection-oriented%20communication | In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol (UDP), where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths.
Connection-oriented communication may be implemented with a circuit switched connection, or a packet-mode virtual circuit connection. In the latter case, it may use either a transport layer virtual circuit protocol such as the Transmission Control Protocol (TCP), allowing data to be delivered in order although the lower-layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path, and traffic flows are identified by some connection identifier, reducing the overhead of routing decisions on a packet-by-packet basis for the network.
Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request functions in case of missing or corrupted data. Asynchronous Transfer Mode (ATM), Frame Relay and Multiprotocol Label Switching (MPLS) are examples of connection-oriented unreliable protocols. Simple Mail Transfer Protocol (SMTP) is an example of a connection-oriented protocol in which, if a message is not delivered, an error report is sent to the sender, making it a reliable protocol. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful.
Circuit switching
Circuit switched communication, for example the public switched telephone network, ISDN, SONET/SDH and optical mesh networks, are intrinsically connection-oriented communication systems. Circuit-mode communication provides guarantees that constant bandwidth will be available, and bit stream or byte stream data will arrive in order with constant delay. The switches are reconfigured during a circuit establishment phase.
Virtual circuit switching
Packet switched communication may also be connection-oriented, which is called virtual circuit mode communication. Due to the packet switching, the communication may suffer from variable bit rate and delay, due to varying traffic load and packet queue lengths. Connection-oriented communication does not necessarily imply reliability.
Transport layer
Connection-oriented transport-layer protocols provide connection-oriented communications over connectionless communication systems. A connection-oriented transport layer protocol, such as TCP, may be based on a connectionless network-layer protocol such as IP, but still achieves in-order delivery of a byte-stream by means of segment sequence numbering on the sender side, packet buffering, and data packet reordering on the receiver side.
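The receiver-side reordering that produces an in-order byte stream can be sketched as follows. This is a simplified toy model, not the actual TCP implementation: acknowledgments, retransmission, and window management are all omitted.

```python
# Toy sketch of receiver-side reordering: segments may arrive out of order over
# the connectionless network layer, but are buffered and released to the
# application only in sequence. Simplified illustration, not real TCP.

def deliver_in_order(segments, initial_seq=0):
    buffer = {}                      # out-of-order segments held back
    expected = initial_seq
    delivered = []
    for seq, data in segments:       # each segment is (sequence number, bytes)
        buffer[seq] = data
        while expected in buffer:    # release any contiguous run of data
            payload = buffer.pop(expected)
            delivered.append(payload)
            expected += len(payload)
    return b"".join(delivered)

arrivals = [(6, b"world"), (0, b"hello "), (11, b"!")]   # arrived out of order
print(deliver_in_order(arrivals))                        # b'hello world!'
```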
Datalink and network layer
In a connection-oriented packet-switched data-link or network-layer protocol, all data is sent over the same path during a communication session. Rather than using complete routing information for each packet (source and destination addresses) as in connectionless datagram switching such as conventional IP routers, a connection-oriented protocol identifies traffic flows only by a channel or data stream number, often denoted virtual circuit identifier (VCI). Routing information may be provided to the network nodes during the connection establishment phase, where the VCI is defined in tables within each node. Thus, the actual packet switching and data transfer can be taken care of by fast hardware, as opposed to slower software-based routing. Typically, this connection identifier is a small integer (for example, 10 bits for Frame Relay and 24 bits for ATM). This makes network switches substantially faster.
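The table-driven forwarding described above can be sketched as a dictionary lookup. The port numbers and VCI values in the example are arbitrary illustrative assumptions, and real switches perform this lookup in hardware.

```python
# Minimal sketch of VCI/label-based forwarding: connection setup installs
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI) entries, so
# per-packet forwarding is a single table lookup rather than a routing decision.

class VirtualCircuitSwitch:
    def __init__(self):
        self.table = {}

    def setup(self, in_port, in_vci, out_port, out_vci):
        """Install one entry during the connection establishment phase."""
        self.table[(in_port, in_vci)] = (out_port, out_vci)

    def forward(self, in_port, in_vci, payload):
        """Per-packet forwarding: rewrite the VCI and choose the output port."""
        out_port, out_vci = self.table[(in_port, in_vci)]
        return out_port, out_vci, payload

switch = VirtualCircuitSwitch()
switch.setup(in_port=1, in_vci=42, out_port=3, out_vci=17)   # illustrative values
print(switch.forward(1, 42, b"data"))                        # (3, 17, b'data')
```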
ATM and Frame Relay, for example, are both examples of connection-oriented, unreliable data link layer protocols. Reliable connectionless protocols exist as well, for example AX.25 network layer protocol when it passes data in I-frames, but this combination is rare, and reliable-connectionless is uncommon in modern networks.
Some connection-oriented protocols have been designed or altered to accommodate both connection-oriented and connectionless data.
Examples
Examples of connection-oriented packet-mode communication, i.e. virtual circuit mode communication:
Asynchronous Transfer Mode
Connection-oriented Ethernet
Datagram Congestion Control Protocol
Frame Relay
General Packet Radio Service
IPX/SPX
Multiprotocol Label Switching
Stream Control Transmission Protocol
Transmission Control Protocol
Transparent Inter-process Communication
X.25
References
Computer networking
Internet architecture
Internet protocols
Network protocols | Connection-oriented communication | [
"Technology",
"Engineering"
] | 940 | [
"Computer networking",
"IT infrastructure",
"Internet architecture",
"Computer engineering",
"Computer science"
] |
2,137,772 | https://en.wikipedia.org/wiki/Air%20suspension | Air suspension is a type of vehicle suspension powered by an electric or engine-driven air pump or compressor. This compressor pumps the air into a flexible bellows, usually made from textile-reinforced rubber. Unlike hydropneumatic suspension, which offers many similar features, air suspension does not use pressurized liquid, but pressurized air. The air pressure inflates the bellows, and raises the chassis from the axle.
Overview
Air suspension is used in place of conventional steel springs in heavy vehicle applications such as buses and trucks, and in some passenger cars. It is widely used on semi trailers and trains (primarily passenger trains).
The purpose of air suspension is to provide a smooth, constant ride quality, but in some cases is used for sports suspension. Modern electronically controlled systems in automobiles and light trucks almost always feature self-leveling along with raising and lowering functions. Although traditionally called air bags or air bellows, the correct term is air spring (although these terms are also used to describe just the rubber bellows element with its end plates).
History
On 7 January 1901 the British engineer Archibald Sharp patented a method for making a seal allowing pneumatic or hydraulic apparatus described as a "rolling mitten seal", and on 11 January 1901 he applied for a patent for the use of the device to provide air suspension on bicycles. Further developments using this 1901 seal followed. A company called Air Springs Ltd started producing the A.S.L. motorcycle in 1909. This was unusual in having pneumatic suspension at front and rear - rear suspension being unusual in any form of motorcycle at that time. The suspension units were similar to the normal girder forks with the spring replaced by a telescopic air unit which could be pressurised to suit the rider. Production of the motorcycles ceased in 1914.
On 22 January 1901 an American, William W. Humphreys, patented an idea - a 'Pneumatic Spring for Vehicles'. The design consisted of a left and right air spring longitudinally channeled nearly the length of the vehicle. The channels were concaved to receive two long pneumatic cushions. Each one was closed at one end and provided with an air valve at the other end.
From 1920, Frenchman George Messier provided aftermarket pneumatic suspension systems. His own 1922-1930 Messier automobiles featured a suspension "to hold the car aloft on four gas bubbles."
During World War II, the U.S. developed the air suspension for heavy aircraft in order to save weight with compact construction. Air systems were also used in heavy trucks and aircraft to attain self-levelling suspension. With adjustable air pressure, the axle height was independent of vehicle load.
In 1946, American William Bushnell Stout built a non-production prototype Stout Scarab that featured numerous innovations, including a four-wheel independent air suspension system.
In 1950, Air Lift Company patented a rubber air spring that is inserted into a car's factory coil spring. The air spring expanded into the spaces in the coil spring, keeping the factory spring from fully compressing, and the vehicle from sagging. The air springs were also commonly used on NASCAR race cars for many years.
In 1954, Frenchman Paul Magès developed a functioning air/oil hydropneumatic suspension, incorporating the advantages of earlier air suspension concepts, but with hydraulic fluid rather than air under pressure. Citroën replaced the conventional steel springs on the rear axle of their top-of-range model, the Traction Avant 15 Hydraulique. In 1955, the Citroën DS incorporated four wheel hydropneumatic suspension. This combined a very soft, comfortable suspension, with controlled movements, for sharp handling, together with a self-levelling suspension.
In 1956 air suspension was used on EMD's experimental Aerotrain.
In the U.S., General Motors built on its World War II experience with air suspension for trucks and airplanes. It introduced air suspension as standard equipment on the new 1957 Cadillac Eldorado Brougham. An "Air Dome" assembly at each wheel included sensors to compensate for uneven road surfaces and to automatically maintain the car's height. For 1958 and 1959, the system continued on the Eldorado Brougham, and was offered as an extra cost option on other Cadillacs.
In 1958, Buick introduced an optional "Air-Poised Suspension" with four cylinders of air (instead of conventional coil springs) for automatic leveling, as well as a "Bootstrap" control on the dashboard to raise the car for use on steep ramps or rutted country roads, as well as for facilitating tire changes or to clean the whitewall tires. For 1959, Buick offered an optional "Air Ride" system on all models that combined "soft-rate" steel coil springs in the front with air springs in the rear.
An optional air suspension system was available on the 1958 and 1959 Rambler Ambassadors, as well as on all American Motors "Cross Country" station wagon models. The "Air-Coil Ride" utilized an engine-driven compressor, reservoir, air bags within the coil springs, and a ride-height control, but the $99 optional system was not popular among buyers and American Motors (AMC) discontinued it for 1960.
Only Cadillac continued to offer air suspension through the 1960 model year, where it was standard equipment on the Eldorado Seville, Biarritz, and Brougham.
In 1960, the Borgward P 100 was the first German car with self-levelling air suspension.
In 1962, the Mercedes-Benz W112 platform featured an air suspension on the 300SE models. The system used a Bosch main valve with two axle valves on the front and one on the rear. These controlled a cone-shaped air spring on each wheel axle. The system maintained a constant ride height utilizing an air reservoir that was filled by a single-cylinder air compressor powered by the engine. In 1964, the Mercedes-Benz 600 used larger air springs and the compressed air system also powered the brake servo.
Rolls-Royce incorporated self-levelling suspension on the 1965 Rolls-Royce Silver Shadow, a system built under license from Citroën.
In 1975, the Mercedes-Benz 450SEL 6.9 incorporated a hydropneumatic suspension when the patents on the technology had expired. This design replaced the expensive, complex, and problematic compressed air system that was still used on the 600 models until 1984.
Air suspension was not included in standard production American-built cars between 1960 and 1983. In 1984, Ford Motor Company incorporated a new design as a feature on the Lincoln Continental Mark VII.
In 1986, Nissan installed an airbag modification to MacPherson Struts on the Cedric and Gloria.
Dunlop Systems of Coventry, UK, were also pioneers of Electronically Controlled Air Suspension (ECAS) for off-road vehicles; the term ECAS was successfully trademarked. The system was first fitted to the 1993 model year Land Rover Range Rover.
In 2005 the GM Hummer H2 featured an optional rear air suspension system with a dual compressor control system from Dunlop to support tire inflation for off-road applications.
Modern automobiles
Vehicle marques that have used air suspension on their models include: Audi, Acura, Bentley, BMW, Cadillac, Citroën, Ford, Genesis, Hummer, Hyundai, Jaguar, Jeep, Land Rover, Lamborghini, Lexus, Lincoln, Mercedes-Benz, Mercedes-Maybach, Porsche, Ram, Rivian, Rolls-Royce, SsangYong, Subaru, Tesla, Volkswagen, Volvo, and more.
Companies such as Jaguar and Porsche have introduced systems on some of their models that change the spring rate and damping settings of the suspension, among other changes, for their sport/track modes. The Lincoln Mark VIII had suspension settings which were linked to the memory seat system, meaning that the car would automatically adjust the suspension to individual drivers.
Most air suspension designs are height adjustable, making it easier to enter the vehicle, clear bumps, or clear rough terrain. Since a car with lower ground clearance has different aerodynamic characteristics, automakers can use active suspension technology to improve efficiency or handling. Tesla, for instance, uses "Active Air Suspension" on the Model S and Model X to lower or raise the vehicle for aerodynamics and increased range.
In 2014 the new Mercedes S-Class Coupe introduced an update to Magic Body Control, called Active Curve Tilting. This new system allows the vehicle to lean up to 2.5 degrees into a turn, similar to a tilting train. The leaning is intended to counter the effect of centrifugal force on the occupants and is available only on rear-wheel drive models.
Custom applications
Air suspension has become popular in the custom automobile culture: street rods, trucks, cars, and even motorcycles may have air springs. They are used in these applications to provide an adjustable suspension which allows vehicles to sit extremely low, yet be able to rise high enough to maneuver over obstacles and inconsistencies on paved surfaces. These systems generally employ small, electric or engine-driven air compressors which sometimes fill an on-board air receiver tank that stores compressed air for later use without delay. The tank must be sized for the task; the required volume can be estimated from the compressor output, standard atmospheric pressure, and the storage pressure, as in the sketch below.
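A first-order estimate of the usable air a tank can supply follows from Boyle's law, assuming isothermal, ideal-gas behaviour. The tank size and pressures below are assumed example figures, not recommendations.

```python
# First-order tank sizing sketch (Boyle's law, isothermal ideal gas): the
# "free air" a tank releases when bled from storage pressure down to a working
# pressure. All figures are assumed examples, not recommendations.

ATMOSPHERE_PSI = 14.7   # standard atmospheric pressure (absolute)

def usable_free_air_litres(tank_volume_l, storage_gauge_psi, working_gauge_psi):
    """Litres of free (atmospheric-pressure) air released between two gauge pressures."""
    storage_abs = storage_gauge_psi + ATMOSPHERE_PSI
    working_abs = working_gauge_psi + ATMOSPHERE_PSI
    return tank_volume_l * (storage_abs - working_abs) / ATMOSPHERE_PSI

# A 19 L (about 5 US gal) tank bled from 150 psi down to 100 psi gauge:
print(round(usable_free_air_litres(19.0, 150.0, 100.0), 1), "L of free air")  # ~64.6 L
```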
High-pressured industrial gas bottles (such as nitrogen or carbon dioxide tanks used to store shielding gases for welding) are sometimes used in more radical air suspension setups. Either of these reservoir systems may be fully adjustable, being able to adjust each wheel's air pressure individually. This allows the user to tilt the vehicle side-to-side, front-to-back, in some instances "hit a 3-wheel" (contort the vehicle so one wheel lifts up from the ground) or even "hop" the entire vehicle into the air. When a pressure reservoir is present, the flow of air or gas is commonly controlled with pneumatic solenoid valves. This allows the user to make adjustments by simply pressing a momentary-contact electric button or switch.
The installation and configuration of these systems varies for different makes and models but the underlying principle remains the same. The metal spring (coil or leaf) is removed, and an air bag, also referred to as an air spring, is inserted or fabricated to fit in the place of the factory spring. When air pressure is supplied to the air bag, the suspension can be adjusted either up or down (lifted or lowered).
For vehicles with leaf spring suspension such as pickup trucks, the leaf spring is sometimes eliminated and replaced with a multiple-bar linkage. These bars are typically in a trailing arm configuration and the air spring may be situated vertically between a link bar or the axle housing and a point on the vehicle's frame. In other cases, the air bag is situated on the opposite side of the axle from the main link bars on an additional cantilever member. If the main linkage bars are oriented parallel to the longitudinal (driving) axis of the car, the axle housing may be constrained laterally with either a Panhard rod or Watt's linkage. In some cases, two of the link bars may be combined into a triangular shape which effectively constrains the vehicles axle laterally.
Often, owners may desire to lower their vehicle to such an extent that they must cut away portions of the frame for more clearance. A reinforcement member commonly referred to as a C-notch is then bolted or welded to the vehicle frame in order to maintain structural integrity. Specifically on pickup trucks, this process is termed "notching" because a portion (notch) of the cargo bed may also be removed, along with the wheel wells, to provide maximum axle clearance. For some, it is desirable to have the vehicle so low that the frame rests on the ground when the air bags are fully deflated. Owners generally choose between having their cars 'tuck' the wheels into the arches when the air suspension is fully lowered, or going for 'fitment', where, in combination with stretched tyres, the arch itself sits between the tyre and rim.
Air suspension is also a common suspension upgrade for those who tow or haul heavy loads with their pick-up truck, SUV, van or car. Air springs, also called "air helper springs," are placed on existing suspension components on the rear or front of the vehicle in order to increase the load capacity. One of the advantages of using air suspension as a load support enhancement is the air springs can be deflated when not towing or hauling and therefore maintaining the factory ride quality.
Electronic Air Suspension
Electronic Controlled Air Suspension (ECAS) is the name of the air suspension system installed on the Range Rover Classic in 1993 and later on the Range Rover P38A. It was developed in the early 1990s by the company now known as Dunlop Systems and Components Ltd in Coventry, UK.
ECAS provides variable-height suspension for on- and off-road applications. The five suspension heights typically offered by ECAS are (from lowest to highest in terms of height) "Loading," "Highway," "Standard," "Off-Road," and "Off-Road Extended." Height is controlled automatically based on speed and undercarriage sensors, but a manual ride height switch allows control over the suspension by the driver. The "Loading" and "Off-Road" heights are available only at speeds typically less than . The "Highway" setting is not available manually; it is set when the vehicle moves at over typically for over 30 seconds. Unlike a mechanical spring system (where deflection is proportional to load), height may be varied independently from the load by altering the pressure in the air springs.
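The speed-dependent behaviour described above can be reduced to a simple selection rule. The speed thresholds in the sketch are assumed placeholders, since the actual limits are not given here, and the real ECAS logic also involves undercarriage sensors, timers, and other interlocks.

```python
# Simplified sketch of ECAS ride-height selection. The speed thresholds are
# assumed placeholder values; real ECAS logic also uses timers, undercarriage
# sensors, and further interlocks.

LOW_SPEED_LIMIT_KMH = 35    # assumed limit for "Loading" / "Off-Road" heights
HIGHWAY_SPEED_KMH = 80      # assumed speed that triggers the "Highway" height
HIGHWAY_HOLD_S = 30         # held for over 30 seconds, per the description above

def select_height(requested, speed_kmh, seconds_above_highway_speed):
    """Return the ride height actually granted for a driver request."""
    if speed_kmh > HIGHWAY_SPEED_KMH and seconds_above_highway_speed > HIGHWAY_HOLD_S:
        return "Highway"                    # set automatically, not manually selectable
    if requested in ("Loading", "Off-Road") and speed_kmh >= LOW_SPEED_LIMIT_KMH:
        return "Standard"                   # extreme heights refused at speed
    return requested

print(select_height("Off-Road", speed_kmh=20, seconds_above_highway_speed=0))    # Off-Road
print(select_height("Loading", speed_kmh=60, seconds_above_highway_speed=0))     # Standard
print(select_height("Standard", speed_kmh=110, seconds_above_highway_speed=45))  # Highway
```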
The air springs were designed to provide a smooth ride, with the additional ability to raise the body of the vehicle for off-road clearance and lower it for higher-speeds road driving. Mechanical springs, for which deflection is proportional to load, cannot do this; with ECAS height is largely independent of load. The developers of ECAS also designed LoadSafe, a related system to ascertain load and change in load on an LCV type vehicle fitted with air springs.
Components
The system comprises:
a vulcanised rubber air spring at each wheel
an air compressor, which is typically located in the trunk (boot) or under the bonnet
a compressed air storage tank may be included for rapid "kneel", storing air at about 150 psi (roughly 1,000 kPa; 1 psi = 6.89 kPa)
a valve block which routes air from the storage tank to the four air springs via a series of solenoids, valves and many o-rings
an ECAS computer which communicates with the car's main computer, the BeCM, and decides where to route air pressure
a series of 6 mm air pipes which channel air throughout the system (mainly from the storage tank to the air springs via the valve block)
an air drier canister containing desiccant
height sensors, ideally on all four corners of the vehicle, typically based on resistive contact sensing, giving an absolute height reference for each corner.
Dunlop Systems and Components Ltd have continued to develop the products to the point where the Electronic Control Unit (ECU) is now able to fit under the vehicle floor. The control valves are much smaller and lighter and they produce their own range of compressors.
Multi-Chamber air suspension
Multi-chamber air suspension is an air suspension that can adjust its spring characteristics in steps.
Application
Genesis G90
The multi-chamber air suspension used on the Genesis G90 consists of three chambers. Three chambers are used for a smooth ride, and one chamber is used for a dynamic driving feel. A solenoid valve located between each chamber and a separate electronic control unit oversees the control process. In addition, the basic minimum ground clearance of 148 mm is divided into four stages: high, normal, low, and ultra-low, according to the driving mode, driving speed, and driving environment, and the driver is informed of the ride height adjustments through the infotainment screen. Speed bump control, hump control, slope control, and high-speed driving control functions are activated under the air suspension control.
Common air suspension problems
Air bag or air strut failure is usually caused by wet rust, due to old age, or moisture within the air system that damages it from the inside. Air ride suspension parts may fail because rubber dries out. Punctures to the air bag may be caused from debris on the road. With custom applications, improper installation may cause the air bags to rub against the vehicle's frame or other surrounding parts, damaging it. The over-extension of an air spring which is not sufficiently constrained by other suspension components, such as a shock absorber, may also lead to the premature failure of an air spring through the tearing of the flexible layers. Failure of an air spring may also result in complete immobilization of the vehicle, since the vehicle will rub against the ground or be too high to move. However, most modern automotive systems have overcome many of these problems.
Air line failure is a failure of the tubing which connects the air bags or struts to the rest of the air system, and is typically DOT-approved nylon air brake line. This usually occurs when the air lines, which must be routed to the air bags through the chassis of the vehicle, rub against a sharp edge of a chassis member or a moving suspension component, causing a hole to form. This mode of failure will typically take some time to occur after the initial installation of the system, as the integrity of a section of air line is compromised to the point of failure due to the rubbing and resultant abrasion of the material. An air line failure may also occur if a piece of road debris hits an air line and punctures or tears it, although this is unlikely to occur in normal road use. It does occur in harsh off-road conditions, but is still not common if the lines are correctly installed.
Air fitting failures usually occur when the fittings are first installed, and only very rarely in service. Cheap, low-quality components tend to be very unreliable. Air fittings are used to connect components such as bags, valves, and solenoids to the air line that transfers the air. They are screwed into the component and, for the most part, push-in or push-to-fit DOT air line is then inserted into the fitting.
Compressor failure is primarily due to leaking air springs or air struts. The compressor will burn out trying to maintain the correct air pressure in a leaking air system. Compressor burnout may also be caused by moisture from within the air system coming into contact with its electronic parts. This is far more likely to occur with low specification compressors with insufficient duty cycle which are often purchased due to low cost. For redundancy in the system two compressors are often a better option.
Dryer failure occurs when the dryer, which removes moisture from the air system, becomes saturated and can no longer perform that function. This causes moisture to build up in the system and can result in damaged air springs and/or a burned-out compressor.
ECAS problems
The ECAS computer can, using pre-programmed criteria to detect a fault, put the system into "Hard Fault Mode", which lowers the vehicle onto the suspension bump-stops, leaving it usable with radically reduced performance until repaired.
Many enthusiasts use diagnostic devices such as laptop and handheld computers running specially developed software to clear spurious faults and avoid the need for repair. Some manipulate the sensors to set the vehicle to a particular ride height at all times by adjusting the lever ratio on the height-sensing devices, or fit a supplementary ECU to "fool" the system.
Leaks in the system, often due to main seal wear caused by excessive duty cycle, can cause premature compressor failure.
Use on coaches and buses
Air springs are used in bus suspensions due to a wide range of advantages over mechanical springs. Compared to a mechanical spring, air suspension can adjust to different vehicle weights by increasing the pressure in the air bag, allowing vehicle height to be maintained at a particular value. Standard coaches also have a system called ferry lift, which raises the vehicle and increases its breakover angle. This system aids loading and unloading the coach on and off ferries due to their steep ramps and risk of grounding out, but can also be used on rough ground or on steep crests. Although the ferry lift may be installed on some buses, the Kneel Down facility is more common on public transport buses. This helps reduce the step height for easy passenger ingress. The Kneel Down facility is also used when using the built-in wheelchair ramps. Due to several advantages, air suspension has been extensively used in commercial vehicles since 1980.
See also
Active suspension
Arnott Air Suspension Products
Automotive suspension design
Coil spring
Dashpot
Double wishbone suspension
Height adjustable suspension
Hydropneumatic suspension
Leaf spring
Self-levelling suspension
Strut bar
Sway bar
Torsion bar suspension
References
External links
Automotive suspension technologies
Automotive safety technologies
Auto parts
Mechanical power control | Air suspension | [
"Physics"
] | 4,295 | [
"Mechanics",
"Mechanical power control"
] |
2,137,828 | https://en.wikipedia.org/wiki/Cold%20inflation%20pressure | Cold inflation pressure is the inflation pressure of tires as measured before a car is driven and the tires warmed up. Recommended cold inflation pressure is displayed in the owner's manual and on the Tire Information Placard attached to the vehicle door edge, pillar, glovebox door or fuel filler flap.
Cold inflation pressure is a gauge pressure and not an absolute pressure.
This article focuses on cold inflation pressures for passenger vehicles and trucks. The general principles are, of course, applicable to bicycle tires, tractor tires, and any other kind of tire with an internal structure that gives it a defined size and shape (as opposed to something that might resemble a very flexible balloon).
A 2001 NHTSA study found that 40% of passenger cars have at least one tire under-inflated by or more. The number one cause of tire failure was determined to be under-inflation. Drivers are encouraged to make sure their tires are adequately inflated at all times.
Under-inflated tires can greatly reduce fuel economy, increase emissions, cause increased wear on the edges of the tread surface, and can lead to overheating and premature failure of the tire.
Excessive pressure, on the other hand, will lead to impact-breaks, decreased braking performance, and increased wear on the center part of the tread surface.
Tire pressure is commonly measured in psi in the imperial and US customary systems, bar, which is deprecated but accepted for use with SI, or the kilopascal (kPa), which is an SI unit.
Variation of tire pressure with temperature
Daily temperature fluctuations can result in appreciable changes in tire pressure. Cold inflation pressure should therefore be measured in the morning, as this is the coldest time of day. This will ensure a tire meets or exceeds the required inflation pressure at any time of day.
Seasonal temperature fluctuations can also result in appreciable changes in tire pressure, and a tire that is properly inflated in the summer is likely to become underinflated in the winter. Because of this, it is important to check tire pressures whenever the local seasons change.
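The effect of a temperature change on the gauge reading can be estimated by assuming a rigid tire volume and ideal-gas behaviour, so that absolute pressure scales with absolute temperature. The figures in the example are illustrative assumptions.

```python
# Worked example of pressure change with temperature, assuming a fixed tire
# volume and ideal-gas behaviour (absolute pressure proportional to absolute
# temperature). The numbers are illustrative assumptions.

ATMOSPHERE_PSI = 14.7

def gauge_after_temperature_change(gauge_psi, temp_c_before, temp_c_after):
    absolute_before = gauge_psi + ATMOSPHERE_PSI
    ratio = (temp_c_after + 273.15) / (temp_c_before + 273.15)
    return absolute_before * ratio - ATMOSPHERE_PSI

# A tire set to 32 psi on a 25 C afternoon, read again at 5 C the next morning:
print(round(gauge_after_temperature_change(32.0, 25.0, 5.0), 1))  # about 28.9 psi
```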
Variation of tire pressure with altitude
Atmospheric pressure will decrease by around 0.5 psi for every 1000 feet above sea level. As a vehicle descends from a high-altitude location, the absolute pressure inside the tire remains the same, but the atmospheric pressure increases; therefore the gauge pressure will decrease.
Take for example a vehicle which had its cold inflation tire pressure set near Denver (altitude 5300 feet), and is descending towards Los Angeles (altitude 300 feet). The tires could become underinflated by as much as 2.5 psi.
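The Denver-to-Los Angeles example can be re-computed directly from the 0.5 psi per 1000 feet figure given above; the tire's absolute pressure is unchanged, so only the gauge reading shifts.

```python
# Re-computation of the altitude example, using the approximate 0.5 psi drop in
# atmospheric pressure per 1000 ft quoted above. The absolute pressure in the
# tire stays the same; only the gauge reading changes.

PSI_PER_1000_FT = 0.5

def gauge_change_on_descent(altitude_start_ft, altitude_end_ft):
    """Change in gauge reading (negative means it reads lower) after descending."""
    atmospheric_rise = (altitude_start_ft - altitude_end_ft) / 1000.0 * PSI_PER_1000_FT
    return -atmospheric_rise

print(gauge_change_on_descent(5300, 300))   # -2.5, i.e. the tire reads 2.5 psi lower
```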
Cold inflation pressure should therefore be readjusted after any significant changes in altitude.
See also
Direct TPMS
Tire-pressure gauge
Tire-pressure monitoring system
References
Tire inflation
Pressure
Motor vehicle maintenance | Cold inflation pressure | [
"Physics"
] | 565 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
2,137,836 | https://en.wikipedia.org/wiki/Cornering%20force | Cornering force or side force is the lateral (i.e., parallel to wheel axis) force produced by a vehicle tire during cornering.
Cornering force is generated by tire slip and is proportional to slip angle at low slip angles. The rate at which cornering force builds up is described by relaxation length. Slip angle describes the deformation of the tire contact patch, and this deflection of the contact patch deforms the tire in a fashion akin to a spring.
As with deformation of a spring, deformation of the tire contact patch generates a reaction force in the tire; the cornering force. Integrating the force generated by every tread element along the contact patch length gives the total cornering force. Although the term, "tread element" is used, the compliance in the tire that leads to this effect is actually a combination of sidewall deflection and deflection of the rubber within the contact patch. The exact ratio of sidewall compliance to tread compliance is a factor in tire construction and inflation pressure.
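In the small-slip-angle region this behaviour is often summarised by a linear tire model, with the lateral force proportional to the slip angle through a cornering stiffness. The stiffness value below is an assumed example; real tires depart from the line and saturate as the slip angle grows.

```python
import math

# Minimal linear tire model: at small slip angles the cornering (lateral) force
# is proportional to slip angle. The cornering stiffness is an assumed example
# value; real tires saturate well before large slip angles.

CORNERING_STIFFNESS_N_PER_RAD = 80_000.0   # assumed example value

def cornering_force_n(slip_angle_deg: float) -> float:
    """Lateral force in newtons, valid only in the linear (small-angle) region."""
    return CORNERING_STIFFNESS_N_PER_RAD * math.radians(slip_angle_deg)

for angle in (0.5, 1.0, 2.0):
    print(angle, "deg ->", round(cornering_force_n(angle)), "N")
```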
Because the tire deformation tends to reach a maximum behind the center of the contact patch, by a distance known as pneumatic trail, it tends to generate a torque about a vertical axis known as self aligning torque.
The diagram is misleading because the reaction force would appear to be acting in the wrong direction. It is simply a matter of convention to quote positive cornering force as acting in the opposite direction to positive tire slip so that calculations are simplified, since a vehicle cornering under the influence of a cornering force to the left will generate a tire slip to the right.
The same principles can be applied to a tire being deformed longitudinally, or in a combination of both longitudinal and lateral directions. The behaviour of a tire under combined longitudinal and lateral deformation can be described by a traction circle.
See also
Camber thrust
Lateral force variation
Circle of forces
Skidpad
References
Tires
Automotive steering technologies
Force
Motorcycle dynamics | Cornering force | [
"Physics",
"Mathematics"
] | 392 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
2,138,108 | https://en.wikipedia.org/wiki/Falowiec | Falowiec (plural: falowce; from the Polish word fala, wave) is a block of flats characterised by its length and wavy shape. This type of building was built in Poland in the late 1960s and 1970s in the city of Gdańsk, where there are eight buildings of this type. It is an example of post-war modernism in the PRL.
The best-known falowiec in Gdańsk, located on Obrońców Wybrzeża street, is the second longest housing block in Europe. It has:
11 stories (10 plus the ground floor)
nearly 6,000 occupants
1,792 flats
a length of around
It was featured in the 5th episode of The Amazing Race 23 as part of a roadblock.
References
Similar developments
Byker Wall, Newcastle upon Tyne, UK
, Rome, Italy
Prora, Rügen, Germany
Park Hill, Sheffield, UK
Karl Marx-Hof, Vienna, Austria
Architecture in Poland
House types
Culture in Gdańsk | Falowiec | [
"Engineering"
] | 201 | [
"Architecture stubs",
"Architecture"
] |
2,138,217 | https://en.wikipedia.org/wiki/Low-density%20polyethylene | Low-density polyethylene (LDPE) is a thermoplastic made from the monomer ethylene. It was the first grade of polyethylene, produced in 1933 by John C. Swallow and M.W Perrin who were working for Imperial Chemical Industries (ICI) using a high pressure process via free radical polymerization. Its manufacture employs the same method today. The EPA estimates 5.7% of LDPE (resin identification code 4) is recycled in the United States. Despite competition from more modern polymers, LDPE continues to be an important plastic grade. In 2013 the worldwide LDPE market reached a volume of about US$33 billion.
Despite its designation with the recycling symbol, it cannot be as commonly recycled as No. 1 (polyethylene terephthalate) or 2 plastics (high-density polyethylene).
Properties
LDPE is defined by a density range of 917–930 kg/m3. At room temperature it is not reactive, except to strong oxidizers; some solvents cause it to swell. It can withstand temperatures of continuously and for a short time. Made in translucent and opaque variations, it is quite flexible and tough.
LDPE has more branching (on about 2% of the carbon atoms) than HDPE, so its intermolecular forces (instantaneous-dipole induced-dipole attraction) are weaker, its tensile strength is lower, and its resilience is higher. The side branches mean that its molecules are less tightly packed and less crystalline, and therefore its density is lower.
When exposed to consistent sunlight, the plastic produces significant amounts of two greenhouse gases: methane and ethylene. Because of its lower density (high branching), it breaks down more easily than other plastics; as this happens, the surface area increases. Production of these trace gases from virgin plastics increases with surface area and with time, so that LDPE emits greenhouse gases at a more unsustainable rate than other plastics. In a test at the end of 212 days' incubation, emissions recorded were 5.8 nmol g−1 d−1 of methane, 14.5 nmol g−1 d−1 of ethylene, 3.9 nmol g−1 d−1 of ethane, and 9.7 nmol g−1 d−1 of propylene. When incubated in air, LDPE emits methane and ethylene at rates about 2 times and about 76 times, respectively, more than in water.
Chemical resistance
Excellent resistance (no attack/no chemical reaction) to dilute and concentrated acids, alcohols, bases, and esters
Good resistance (minor attack/very low chemical reactivity) to aldehydes, ketones, and vegetable oils
Limited resistance (moderate attack/significant chemical reaction, suitable for short-term use only) to aliphatic and aromatic hydrocarbons, mineral oils, and oxidizing agents
Poor resistance, and not recommended for use with halogenated hydrocarbons.
Applications
Polyolefins (LDPE, HDPE, PP) are a major type of thermoplastic. LDPE is widely used for manufacturing various containers, dispensing bottles, wash bottles, tubing, plastic parts for computer components, and various molded laboratory equipment. Its most common use is in plastic bags. Other products made from it include:
Trays and general purpose containers
Corrosion-resistant work surfaces
Parts that need to be weldable and machinable
Parts that require flexibility, for which it serves very well
Very soft and pliable parts such as snap-on lids
Six-pack rings
Juice and milk cartons are made of liquid packaging board, a laminate of paperboard and LDPE (as the waterproof inner and outer layer), and often with a layer of aluminum foil (thus becoming aseptic packaging).
Packaging for computer hardware, such as hard disk drives, screen cards, and optical disc drives
Playground slides
Plastic wraps
Plastic bags
Plastic containers
Pipes
Housewares
Battery cases
Automotive parts
Electrical components
See also
Film blowing machine
High-density polyethylene (HDPE)
Linear low-density polyethylene (LLDPE)
Medium-density polyethylene (MDPE)
Polyethylene terephthalate (PET/PETE)
Stretch wrap
Ultra-high-molecular-weight polyethylene (UHMWPE)
References
External links
2010_MSW_Tables_and_Figures_508.pdf. epa.gov
Polyolefins
Plastics
Packaging materials
British inventions
Food packaging | Low-density polyethylene | [
"Physics"
] | 935 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,138,223 | https://en.wikipedia.org/wiki/Aeronomy | Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena.
History
The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets.
Branches
Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy.
Terrestrial aeronomy
Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology.
Terrestrial aeronomers study atmospheric tides and upper-atmospheric lightning discharges such as red sprites, sprite halos, blue jets, and ELVES. They also investigate the causes of dissociation and ionization processes in the Earth's upper atmosphere. Terrestrial aeronomers use ground-based telescopes, balloons, satellites, and sounding rockets to gather data from the upper atmosphere.
Atmospheric tides
Atmospheric tides are global-scale periodic oscillations of the Earth′s atmosphere, analogous in many ways to ocean tides. Atmospheric tides dominate the dynamics of the mesosphere and lower thermosphere, serving as an important mechanism for transporting energy from the upper atmosphere into the lower atmosphere. Terrestrial aeronomers study atmospheric tides because an understanding of them is essential to an understanding of the atmosphere as a whole and of benefit in improving the understanding of meteorology. Modeling and observations of atmospheric tides allow researchers to monitor and predict changes in the Earth's atmosphere.
Upper-atmospheric lightning
"Upper-atmospheric lightning" or "upper-atmospheric discharge" are terms aeronomers sometimes use to refer to a family of electrical-breakdown phenomena in the Earth's upper atmosphere that occur well above the altitudes of the tropospheric lightning observed in the lower atmosphere. Currently, the preferred term for an electrical-discharge phenomenon induced in the upper atmosphere by tropospheric lightning is "transient luminous event" (TLE). There are various types of TLEs including red sprites, sprite halos, blue jets, and ELVES (an acronym for "Emission of Light and Very-Low-Frequency perturbations due to Electromagnetic Pulse Sources").
Planetary aeronomy
Planetary aeronomy studies the regions of the atmospheres of other planets that correspond to the Earth's mesosphere, thermosphere, exosphere, and ionosphere. In some cases, a planet's entire atmosphere may consist only of what on Earth constitutes the upper atmosphere, or only a portion of it. Planetary aeronomers use ground-based telescopes, space telescopes, and space probes which fly by, orbit, or land on other planets to gain knowledge of the atmospheres of those planets through the use of instruments such as interferometers, optical spectrometers, magnetometers, and plasma detectors and techniques such as radio occultation. Although planetary aeronomy originally was confined to the study of the atmospheres of the other planets in the Solar System, the discovery since 1995 of exoplanets has allowed planetary aeronomers to expand their field to include the atmospheres of those planets as well.
Comparative aeronomy
Comparative aeronomy uses the findings of terrestrial and planetary aeronomy — traditionally separate scientific fields — to compare the characteristics and behaviors of the atmospheres of other planets with one another and with the upper atmosphere of Earth. It seeks to identify and describe the ways in which differing chemistry, magnetic fields, and thermodynamics on various planets affect the creation, evolution, diversity, and disappearance of atmospheres.
Notes
See also
Atmospheric chemistry
Atmospheric physics
Exosphere
Ionosphere
Mesosphere
Meteorology
Space physics
Thermosphere
References
External links
The NOAA Aeronomy Laboratory
Atmospheric chemistry
Atmospheric physics
Electrical phenomena
Lightning
Space physics
Atmospheric sciences | Aeronomy | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,043 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Outer space",
"Atmospheric physics",
"Electrical phenomena",
"Lightning",
"nan",
"Space physics"
] |
2,138,454 | https://en.wikipedia.org/wiki/Linear%20low-density%20polyethylene | Linear low-density polyethylene (LLDPE) is a substantially linear polymer (polyethylene), with significant numbers of short branches, commonly made by copolymerization of ethylene with longer-chain olefins. Linear low-density polyethylene differs structurally from conventional low-density polyethylene (LDPE) because of the absence of long chain branching. The linearity of LLDPE results from the different manufacturing processes of LLDPE and LDPE. In general, LLDPE is produced at lower temperatures and pressures by copolymerization of ethylene and such higher alpha-olefins as butene, hexene, or octene. The copolymerization process produces an LLDPE polymer that has a narrower molecular weight distribution than conventional LDPE and in combination with the linear structure, significantly different rheological properties.
Production and properties
The production of LLDPE is initiated by transition metal catalysts, particularly Ziegler or Philips types of catalyst. The actual polymerization process can be done either in solution phase or in gas phase reactors. Usually, octene is the comonomer in solution phase while butene and hexene are copolymerized with ethylene in a gas phase reactor. LLDPE has higher tensile strength and higher impact and puncture resistance than does LDPE. It is very flexible and elongates under stress. It can be used to make thinner films, with better environmental stress cracking resistance. It has good resistance to chemicals. It has good electrical properties. However, it is not as easy to process as LDPE, has lower gloss, and narrower range for heat sealing.
Processing
LDPE and LLDPE have unique rheological or melt flow properties. LLDPE is less shear sensitive because of its narrower molecular weight distribution and shorter chain branching. During a shearing process, such as extrusion, LLDPE remains more viscous and, therefore, harder to process than an LDPE of equivalent melt index. The lower shear sensitivity of LLDPE allows for a faster stress relaxation of the polymer chains during extrusion, and, therefore, the physical properties are susceptible to changes in blow-up ratios. In melt extension, LLDPE has lower viscosity at all strain rates. This means it will not strain harden the way LDPE does when elongated. As the deformation rate of the polyethylene increases, LDPE demonstrates a dramatic rise in viscosity because of chain entanglement. This phenomenon is not observed with LLDPE because the lack of long-chain branching in LLDPE allows the chains to slide by one another upon elongation without becoming entangled. This characteristic is important for film applications because LLDPE films can be downgauged easily while maintaining high strength and toughness. The rheological properties of LLDPE are summarized as "stiff in shear" and "soft in extension". LLDPE can be recycled, though usually into other products such as trash can liners, lumber, landscaping ties, floor tiles, compost bins, and shipping envelopes.
Application
LLDPE has penetrated almost all traditional markets for polyethylene; it is used for plastic bags and sheets (where it allows using lower thickness than comparable LDPE), plastic wrap, stretch wrap, pouches, toys, covers, lids, pipes, buckets and containers, covering of cables, geomembranes, and mainly flexible tubing. In 2013, the world market for LLDPE reached a volume of $40 billion.
LLDPE manufactured by using metallocene catalysts is labeled mLLDPE.
See also
Cross-linked polyethylene (XLPE/PEX)
High-density polyethylene (HDPE)
Low-density polyethylene (LDPE)
Medium-density polyethylene (MDPE)
Plastic recycling
Stretch wrap
Ultra-high-molecular-weight polyethylene (UHMWPE)
References
External links
Example of LLDPE Physical Properties
Polyolefins
Plastics
Packaging materials
Food packaging | Linear low-density polyethylene | [
"Physics"
] | 856 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,138,501 | https://en.wikipedia.org/wiki/Ultra-high-molecular-weight%20polyethylene | Ultra-high-molecular-weight polyethylene (UHMWPE, UHMW) is a subset of the thermoplastic polyethylene. Also known as high-modulus polyethylene (HMPE), it has extremely long chains, with a molecular mass usually between 3.5 and 7.5 million amu. The longer chain serves to transfer load more effectively to the polymer backbone by strengthening intermolecular interactions. This results in a very tough material, with the highest impact strength of any thermoplastic presently made.
UHMWPE is odorless, tasteless, and nontoxic. It embodies all the characteristics of high-density polyethylene (HDPE) with the added traits of being resistant to concentrated acids and alkalis, as well as numerous organic solvents. It is highly resistant to corrosive chemicals except oxidizing acids; has extremely low moisture absorption and a very low coefficient of friction; is self-lubricating (see boundary lubrication); and is highly resistant to abrasion, in some forms being 15 times more resistant to abrasion than carbon steel. Its coefficient of friction is significantly lower than that of nylon and acetal and is comparable to that of polytetrafluoroethylene (PTFE, Teflon), but UHMWPE has better abrasion resistance than PTFE.
Development
Polymerization of UHMWPE was commercialized in the 1950s by Ruhrchemie AG, which has changed names over the years. Today UHMWPE powder materials, which may be directly molded into a product's final shape, are produced by Ticona, Braskem, Teijin (Endumax), Celanese, and Mitsui. Processed UHMWPE is available commercially either as fibers or in consolidated form, such as sheets or rods. Because of its resistance to wear and impact, UHMWPE continues to find increasing industrial applications, including the automotive and bottling sectors. Since the 1960s, UHMWPE has also been the material of choice for total joint arthroplasty in orthopedic and spine implants.
UHMWPE fibers branded as Dyneema, commercialized in the late 1970s by the Dutch chemical company DSM, and as Spectra, commercialized by Honeywell (then AlliedSignal), are widely used in ballistic protection, defense applications, and increasingly in medical devices, sailing, hiking equipment, climbing, and many other industries.
Structure and properties
UHMWPE is a type of polyolefin. It is made up of extremely long chains of polyethylene, which all align in the same direction. It derives its strength largely from the length of each individual molecule (chain). Van der Waals forces between the molecules are relatively weak for each atom of overlap between the molecules, but because the molecules are very long, large overlaps can exist, adding up to the ability to carry larger shear forces from molecule to molecule. Each chain is attracted to the others with so many van der Waals forces that the whole of the inter-molecular strength is high. In this way, large tensile loads are not limited as much by the comparative weakness of each localized van der Waals force.
When formed into fibers, the polymer chains can attain a parallel orientation greater than 95% and a level of crystallinity from 39% to 75%. In contrast, Kevlar derives its strength from strong bonding between relatively short molecules.
The weak bonding between olefin molecules allows local thermal excitations to disrupt the crystalline order of a given chain piece-by-piece, giving it much poorer heat resistance than other high-strength fibers. Its melting point is around , and, according to DSM, it is not advisable to use UHMWPE fibres at temperatures exceeding for long periods of time. It becomes brittle at temperatures below .
The simple structure of the molecule also gives rise to surface and chemical properties that are rare in high-performance polymers. For example, the polar groups in most polymers easily bond to water. Because olefins have no such groups, UHMWPE does not absorb water readily, nor wet easily, which makes bonding it to other polymers difficult. For the same reasons, skin does not interact with it strongly, making the UHMWPE fiber surface feel slippery. In a similar manner, aromatic polymers are often susceptible to aromatic solvents due to aromatic stacking interactions, an effect aliphatic polymers like UHMWPE are immune to. Since UHMWPE does not contain chemical groups (such as esters, amides, or hydroxylic groups) that are susceptible to attack from aggressive agents, it is very resistant to water, moisture, most chemicals, UV radiation, and micro-organisms.
Under tensile load, UHMWPE will deform continually as long as the stress is present—an effect called creep.
When UHMWPE is annealed, the material is heated to between in an oven or a liquid bath of silicone oil or glycerine. The material is then cooled down to at a rate of or less. Finally, the material is wrapped in an insulating blanket for 24 hours to bring to room temperature.
Production
Ultra-high-molecular-weight polyethylene (UHMWPE) is synthesized from its monomer, ethylene, whose units are bonded together to form the base polyethylene product. These molecules are several orders of magnitude longer than those of familiar high-density polyethylene (HDPE) because of a synthesis process based on metallocene catalysts, resulting in UHMWPE molecules typically having 100,000 to 250,000 monomer units each, compared to HDPE's 700 to 1,800.
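A rough consistency check of these figures: each ethylene repeat unit contributes about 28 g/mol, so the quoted repeat-unit counts imply molecular masses of the same order as the 3.5 to 7.5 million amu cited above.

```python
# Rough consistency check: each ethylene repeat unit (C2H4) contributes about
# 28 g/mol, so the quoted chain lengths imply molecular masses of the order
# given at the top of the article.

REPEAT_UNIT_G_PER_MOL = 28.05   # molar mass of an ethylene (C2H4) unit

for units in (100_000, 250_000, 1_800):
    millions = units * REPEAT_UNIT_G_PER_MOL / 1e6
    print(f"{units:>7} units -> about {millions:.2f} million g/mol")
# 100,000 -> ~2.81 million; 250,000 -> ~7.01 million; HDPE's 1,800 -> ~0.05 million
```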
UHMWPE is processed variously by compression moulding, ram extrusion, gel spinning, and sintering. Several European companies began compression molding UHMWPE in the early 1960s. Gel-spinning arrived much later and was intended for different applications.
In gel spinning a precisely heated gel (of a low concentration of UHMWPE in an oil) is extruded through a spinneret. The extrudate is drawn through the air, the oil extracted with a solvent which does not affect the UHMWPE, and then dried removing the solvent. The end-result is a fiber with a high degree of molecular orientation, and therefore exceptional tensile strength. Gel spinning depends on isolating individual chain molecules in the solvent so that intermolecular entanglements are minimal. Entanglements make chain orientation more difficult, and lower the strength of the final product.
Applications
Fiber
Dyneema and Spectra are brands of lightweight high-strength oriented-strand fibers, gel-spun through a spinneret. They have yield strengths as high as and density as low as (for Dyneema SK75). High-strength steels have comparable yield strengths, and low-carbon steels have yield strengths much lower (around ). Since steel has a specific gravity of roughly 7.8, these materials have strength-to-weight ratios eight times that of high-strength steels. Strength-to-weight ratios for UHMWPE are about 40% higher than for aramid. The exceptional qualities of UHMWPE filament were first demonstrated by Albert Pennings in 1968, but commercially viable products were made available by DSM in 1990 and Southern Ropes soon after.
Derivatives of UHMWPE yarn are used in composite plates in armor, in particular, personal armor and on occasion as vehicle armor. Civil applications containing UHMWPE fibers are cut-resistant gloves, tear-resistant hosiery, bow strings, climbing equipment, automotive winching, fishing line, spear lines for spearguns, high-performance sails, suspension lines on sport parachutes and paragliders, rigging in yachting, kites, and kite lines for kites sports.
For personal armor, the fibers are, in general, aligned and bonded into sheets, which are then layered at various angles to give the resulting composite material strength in all directions. Recently developed additions to the US Military's Interceptor body armor, designed to offer arm and leg protection, are said to utilize a form of UHMWPE fabric. A multitude of UHMWPE woven fabrics are available in the market and are used as shoe liners, pantyhose, fencing clothing, stab-resistant vests, and composite liners for vehicles.
The use of UHMWPE rope for automotive winching offers several advantages over the more common steel wire rope. The key reason for changing to UHMWPE rope is improved safety. The lower mass of UHMWPE rope, coupled with significantly lower elongation at breaking, carries far less energy than steel or nylon, which leads to almost no snap-back. UHMWPE rope does not develop kinks that can cause weak spots, and any frayed areas that may develop along the surface of the rope cannot pierce the skin like broken steel wire strands can. UHMWPE rope is less dense than water, making water recoveries easier as the recovery cable is easier to locate than wire rope. The bright colours available also aid with visibility should the rope become submerged or dirty. Another advantage in automotive applications is the reduced weight of UHMWPE rope over steel cables. A typical UHMWPE rope of can weigh around , while the equivalent steel wire rope would weigh around . One notable drawback of UHMWPE rope is its susceptibility to UV damage, so many users will fit winch covers in order to protect the cable when not in use. It is also vulnerable to heat damage from contact with hot components.
Spun UHMWPE fibers excel as fishing line, as they have less stretch, are more abrasion-resistant, and are thinner than the equivalent monofilament line.
In climbing, cord and webbing made of combinations of UHMWPE and nylon yarn have gained popularity for their low weight and bulk. They exhibit very low elasticity compared to their nylon counterparts, which translates to low toughness. The fiber's very high lubricity causes poor knot-holding ability, and it is mostly used in pre-sewn 'slings' (loops of webbing)—relying on knots to join sections of UHMWPE is generally not recommended, and if necessary it is recommended to use the triple fisherman's knot rather than the traditional double fisherman's knot.
Ships' hawsers and cables made from the fiber (0.97 specific gravity) float on sea water. "Spectra wires", as they are called in the towing boat community, are commonly used for face wires as a lighter alternative to steel wires.
It is used in skis and snowboards, often in combination with carbon fiber, reinforcing the fiberglass composite material, adding stiffness and improving its flex characteristics. The UHMWPE is often used as the base layer, which contacts the snow, and includes abrasives to absorb and retain wax.
It is also used in lifting applications, for manufacturing low weight, and heavy duty lifting slings. Due to its extreme abrasion resistance it is also used as an excellent corner protection for synthetic lifting slings.
High-performance lines (such as backstays) for sailing and parasailing are made of UHMWPE, due to their low stretch, high strength, and low weight. Similarly, UHMWPE is often used for winch-launching gliders from the ground, as, in comparison with steel cable, its superior abrasion resistance results in less wear when running along the ground and into the winch, increasing the time between failures. The lower weight on the mile-long cables used also results in higher winch launches.
UHMWPE was used for the long, thick space tether in the ESA/Russian Young Engineers' Satellite 2 of September, 2007.
Dyneema composite fabric (DCF) is a laminated material consisting of a grid of Dyneema threads sandwiched between two thin transparent polyester membranes. This material is very strong for its weight, and was originally developed for use in racing yacht sails under the name 'Cuben Fiber'. More recently it has found new applications, most notably in the manufacture of lightweight and ultralight camping and backpacking equipment such as tents, backpacks, and bear-proof food bags.
In archery, UHMWPE is widely used as a material for bowstrings because of its low creep and stretch compared to, for example, Dacron (PET). Besides pure UHMWPE fibers, most manufacturers use blends to further reduce the creep and stretch of the material. In these blends, the UHMWPE fibers are blended with, for example, Vectran.
In skydiving, UHMWPE is one of the most common materials used for suspension lines, largely supplanting the earlier-used Dacron, being lighter and less bulky. UHMWPE has excellent strength and wear-resistance, but is not dimensionally stable (i.e. shrinks) when exposed to heat, which leads to gradual and uneven shrinkage of different lines as they are subject to differing amounts of friction during canopy deployment, necessitating periodic line replacement. It is also almost completely inelastic, which can exacerbate the opening shock. For that reason, Dacron lines continue to be used in student and some tandem systems, where the added bulk is less of a concern than the potential for an injurious opening. In turn, in high-performance parachutes used for swooping, UHMWPE is replaced with Vectran and HMA (high-modulus aramid), which are even thinner and dimensionally stable, but exhibit greater wear and require much more frequent maintenance to prevent catastrophic failure. UHMWPE lines are also used for reserve parachute closing loops on rigs with automatic activation devices, where the material's extremely low coefficient of friction is critical for proper operation in the event of cutter activation.
Medical
UHMWPE has a clinical history as a biomaterial for use in hip, knee, and (since the 1980s), for spine implants. An online repository of information and review articles related to medical grade UHMWPE, known as the UHMWPE Lexicon, was started online in 2000.
Joint replacement components have historically been made from "GUR" resins. These powder materials are produced by Ticona, typically converted into semi-forms by companies such as Quadrant and Orthoplastics, and then machined into implant components and sterilized by device manufacturers.
UHMWPE was first used clinically in 1962 by Sir John Charnley and emerged as the dominant bearing material for total hip and knee replacements in the 1970s. Throughout its history, there were unsuccessful attempts to modify UHMWPE to improve its clinical performance until the development of highly cross-linked UHMWPE in the late 1990s.
One unsuccessful attempt to modify UHMWPE was by blending the powder with carbon fibers. This reinforced UHMWPE was released clinically as "Poly Two" by Zimmer in the 1970s. The carbon fibers had poor compatibility with the UHMWPE matrix and its clinical performance was inferior to virgin UHMWPE.
A second attempt to modify UHMWPE was by high-pressure recrystallization. This recrystallized UHMWPE was released clinically as "Hylamer" by DePuy in the late 1980s. When gamma irradiated in air, this material exhibited susceptibility to oxidation, resulting in inferior clinical performance relative to virgin UHMWPE. Today, the poor clinical history of Hylamer is largely attributed to its sterilization method, and there has been a resurgence of interest in studying this material (at least among certain research circles). Hylamer fell out of favor in the United States in the late 1990s with the development of highly cross-linked UHMWPE materials, however negative clinical reports from Europe about Hylamer continue to surface in the literature.
Highly cross-linked UHMWPE materials were clinically introduced in 1998 and have rapidly become the standard of care for total hip replacements, at least in the United States. These new materials are cross-linked with gamma or electron beam radiation (50–105 kGy) and then thermally processed to improve their oxidation resistance. Five-year clinical data, from several centers, are now available demonstrating their superiority relative to conventional UHMWPE for total hip replacement (see arthroplasty). Clinical studies are still underway to investigate the performance of highly cross-linked UHMWPE for knee replacement.
In 2007, manufacturers started incorporating anti-oxidants into UHMWPE for hip and knee arthroplasty bearing surfaces. Vitamin E (a-tocopherol) is the most common anti-oxidant used in radiation-cross-linked UHMWPE for medical applications. The anti-oxidant helps quench free radicals that are introduced during the irradiation process, imparting improved oxidation resistance to the UHMWPE without the need for thermal treatment. Several companies have been selling antioxidant-stabilized joint replacement technologies since 2007, using both synthetic vitamin E as well as hindered phenol-based antioxidants.
Another important medical advancement for UHMWPE in the past decade has been the increase in use of fibers for sutures. Medical-grade fibers for surgical applications are produced by DSM under the "Dyneema Purity" trade name.
Manufacturing
UHMWPE is used in the manufacture of PVC (vinyl) windows and doors, as it can endure the heat required to soften the PVC-based materials and is used as a form/chamber filler for the various PVC shape profiles in order for those materials to be 'bent' or shaped around a template.
UHMWPE is also used in the manufacture of hydraulic seals and bearings. It is best suited for medium mechanical duties in water, oil hydraulics, pneumatics, and unlubricated applications. It has a good abrasion resistance but is better suited to soft mating surfaces.
Wire and cable
Fluoropolymer / HMWPE insulation cathodic protection cable is typically made with dual insulation. It features a primary layer of a fluoropolymer such as ethylene-chlorotrifluoroethylene (ECTFE) which is chemically resistant to chlorine, sulfuric acid, and hydrochloric acid. Following the primary layer is an HMWPE insulation layer, which provides pliable strength and allows considerable abuse during installation. The HMWPE jacketing provides mechanical protection as well.
Marine infrastructure
UHMWPE is used in marine structures for the mooring of ships and floating structures in general. The UHMWPE forms the contact surface between the floating structure and the fixed one. Timber was and is used for this application also. UHMWPE is chosen as facing of fender systems for berthing structures because of the following characteristics:
Wear resistance: best among plastics, better than steel
Impact resistance: best among plastics, similar to steel
Low friction (wet and dry conditions): self-lubricating material
See also
Low-density polyethylene (LDPE)
Medium-density polyethylene (MDPE)
Twaron
IPX Ultra-high-molecular-weight polyethylene
References
Further reading
Southern et al., The Properties of Polyethylene Crystallized Under the Orientation and Pressure Effects of a Pressure Capillary Viscometer, Journal of Applied Polymer Science vol. 14, pp. 2305–2317 (1970).
Kanamoto, On Ultra-High Tensile by Drawing Single Crystal Mats of High Molecular Weight Polyethylene, Polymer Journal vol. 15, No. 4, pp. 327–329 (1983).
External links
US Patent 5342567 Process for producing high tenacity and high modulus polyethylene fibers, issued 1994-08-30
Polymer Gel Spinning Machine Christine A. Odero, MIT, 1994
Patent application 20070148452 High strength polyethylene fiber, 2007-06-28
Analytical techniques to characterize radiation effects on UHMWPE
Next generation orthopedic implants using UHMWPE
Highly crosslinked VE-UHMWPE for hip and knee replacements
UHMWPE Characteristics, Processing Methods, Applications
Polyethylene UHMWPE HDPE LDPE LLDPE – What are the differences?
HMPE Fibre – How is it made?
Brand name materials
Body armor
Polyolefins
Plastics
Synthetic fibers | Ultra-high-molecular-weight polyethylene | [
"Physics",
"Chemistry"
] | 4,297 | [
"Synthetic fibers",
"Synthetic materials",
"Unsolved problems in physics",
"Amorphous solids",
"Plastics"
] |
2,138,731 | https://en.wikipedia.org/wiki/Drug%20reaction%20testing | Drug reaction testing uses a genetic test to predict how a particular person will respond to various prescription and non-prescription medications. It checks for genes that code for specific liver enzymes which activate, deactivate, or are influenced by various drugs.
There are currently four genetic markers commonly tested for: 2D6, 2C9, 2C19, and 1A2.
This testing has been done for some time by drug companies working on new drugs, but is relatively newly available to the general public. Strattera is the first drug to mention the test in the official documentation, although it doesn't specifically recommend that patients get the test before taking the medication.
There are four possible categories for each marker: poor metabolizer, intermediate metabolizer, extensive metabolizer, or ultra-extensive metabolizer. Different testing companies may call these by different names. Extensive metabolizers (that is, people who are extensive metabolizers of a given type) are the most common, and are the type of people for which drugs are designed. Up to 7% of Caucasians are poor metabolizers of drugs metabolized by the CYP2D6 enzyme.
People who cannot metabolize a drug will require a much lower dose than is recommended by the manufacturer, and those who metabolize it quickly may require a higher dose. Some drugs, such as codeine, will not be effective in people without the requisite enzymes to activate them.
People who are poor metabolizers of a drug may overdose while taking less than the recommended dose.
See also
Medical prescription
Contraindication
Cytochrome P450
Drug metabolism
References and end notes
Pharmacy
Clinical pharmacology | Drug reaction testing | [
"Chemistry"
] | 346 | [
"Pharmacology",
"Pharmacy",
"Clinical pharmacology"
] |
2,138,745 | https://en.wikipedia.org/wiki/Jump%20and%20Smile | The Jump & Smile is a type of fairground ride which consists of gondolas arranged on a number of radial arms around a central axis. As the central axis rotates, the arms are lifted into the air using compressed air cylinders at pseudo-random intervals, providing an erratic jumping motion. Most versions of the ride have preloaded patterns which the arms can move in, leading to an eye-catching display. Notable Jump & Smile manufactures include Sartori, Safeco, PWS, SBF Visa and Fabbri.
Variants
Standard
A standard Jump & Smile has 12 arms each holding one three-person gondola. Riders are secured by a simple lap bar. These rides have become extremely popular in Europe, particularly the models offered by Safeco, Sartori and PWS.
Floorless
A floorless Jump & Smile has 12 arms each holding one two-person floorless gondola. Unlike the regular Jump & Smile, the arms are at a higher angle so that the gondolas have enough room to not crash into the floor. The gondolas on these rides can also usually rotate freely. Riders are secured by over-the-shoulder restraints. Examples of these rides are the Fabbri Smashing Jump, the Sartori Roto Techno and the Safeco Hang-Jump.
References
Amusement rides | Jump and Smile | [
"Physics",
"Technology"
] | 266 | [
"Physical systems",
"Machines",
"Amusement rides"
] |
2,139,132 | https://en.wikipedia.org/wiki/LunaCorp | LunaCorp, was a small but ambitious private company headed by its former president David Gump, established in 1989. It was designed around a privately funded mission, using Russian technology, to put a rover on the Moon. The aim for the company was to fund the mission by the entertainment value of having customers drive the rover. The program's advisor was Dr. Buzz Aldrin, who, together with Neil Armstrong, walked on the surface of the Moon in 1969 during the first crewed lunar mission.
After producing no tangible results the company was dissolved in 2003.
The details of the mission evolved with time. Because the Moon is hotter than boiling water at noon and colder than liquid nitrogen at night, in the final version of the design the robot would avoid those extremes by circumnavigating the Moon every 29.5 days (the length of a lunar day) to stay in sunlight, a strategy originally proposed by Geoffrey Landis. "Our robot, by driving completely around the Moon at a high latitude at only a few kilometers per hour, will enjoy lunar morning temperatures all the time by staying in sync with the sun", said the mission's controller.
References
External links
LunaCorp press release (2000) from Space Frontiers.org
Snapshots of LunaCorp History, 2007.
Interview: LunaCorp and Orbital Outfitters, Daily Spaceflight News, 15 December 2010.
Private spaceflight companies
Companies disestablished in 2003
Defunct spaceflight companies | LunaCorp | [
"Astronomy"
] | 295 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
2,139,185 | https://en.wikipedia.org/wiki/Two-wire%20circuit | In telecommunication, a two-wire circuit is characterized by supporting transmission in two directions simultaneously, as opposed to four-wire circuits, which have separate pairs for transmit and receive. The subscriber local loop from the telco central office are almost all two wire for analog baseband voice calls (and some digital services like ISDN), and converted to four-wire at the line card back when telephone switching was performed on baseband audio. Today the audio is digitized and processed completely in the digital domain upstream from the local loop.
The reason for using two wires rather than four was simple economics: half the materials cost half as much to purchase and install. The consideration is now largely historical, as installation of two-wire copper local loops for telephony was done primarily during the mid-20th century. In developed countries there is no new infrastructure planning for copper-based technology, and as customers migrate to cellular telephony and high-speed Internet, wireline carriers are abandoning their copper local loops, tearing out the copper and replacing it with fiber-optic cable and/or selling the rights-of-way to third parties for private use. In developing nations, wireless communications are considered to be the most cost-effective from an infrastructure perspective. Two-wire circuits in new installations are limited to intercom and military field telephone applications, though these too are being supplanted by modern digital communication modes.
To communicate in both directions in the same wire pair, conversion between four-wire and two-wire was necessary, both at the telephone and at the central office. A hybrid coil accomplishes the conversion for both. At the central office, it is part of a four-wire terminating set, more often as part of a line card. A modern line card has no two-to-four wire conversion whatsoever; it is strictly an analog/digital interface to a system that has a completely digital and integrated signal path internally. Using actual wires to circuit switch a telephone call became obsolete when the crossbar switch (a mechanical system) was replaced by 4ESS electronic switches in the 1970s by the Bell System in the US. The old telephone hybrids of yore have been replaced by inexpensive IC chip-based components that perform the same functions at greatly reduced cost. When personal computing and the Internet became popular at the end of the 20th century, the inductive load of traditional hybrids became a liability for computer modem users, and remaining loading coils in subscriber lines were scrapped.
Impedance standards
Different countries have different standards for telephone impedance.
The European regulatory requirement CTR 21 has been officially withdrawn. Some manufacturers prefer to continue meeting CTR 21, but there is little reason to do so.
References
A History of engineering and science in the Bell System: Transmission Technology (1925-1975)
A History of engineering and science in the Bell System: Switching Technology (1925-1975)
Communication circuits | Two-wire circuit | [
"Engineering"
] | 595 | [
"Telecommunications engineering",
"Communication circuits"
] |
2,139,226 | https://en.wikipedia.org/wiki/Ordinal%20arithmetic | In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations.
Addition
The sum of two well-ordered sets S and T is the ordinal representing the variant of lexicographical order with least significant position first, on the union of the Cartesian products S × {0} and T × {1}. This way, every element of S is smaller than every element of T, comparisons within S keep the order they already have, and likewise for comparisons within T.
The definition of addition can also be given by transfinite recursion on β. When the right addend β = 0, ordinary addition gives α + 0 = α for any α. For β > 0, the value of α + β is the smallest ordinal strictly greater than the sum of α and δ for all δ < β. Writing the successor and limit ordinals cases separately:
α + S(β) = S(α + β), where S denotes the successor function.
α + β = ⋃ δ<β (α + δ) when β is a limit ordinal.
Ordinal addition on the natural numbers is the same as standard addition. The first transfinite ordinal is ω, the set of all natural numbers, followed by ω + 1, ω + 2, etc. The ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like
0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ...
This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors.
Properties
Ordinal addition is, in general, not commutative. For example, 1 + ω = ω since the order relation for 1 + ω is 0 < 0' < 1' < 2' < ..., which can be relabeled to ω. In contrast ω + 1 is not equal to ω since the order relation ω + 1 has a largest element (namely, ω) and ω does not (ω and ω + 1 are equipotent, but not order isomorphic).
Ordinal addition is still associative; one can see for example that (ω + 4) + ω = ω + (4 + ω) = ω + ω.
Addition is strictly increasing and continuous in the right argument:
β < γ → α + β < α + γ
but the analogous relation does not hold for the left argument; instead we only have:
β < γ → β + α ≤ γ + α
Ordinal addition is left-cancellative: if α + β = α + γ, then β = γ. Furthermore, one can define left subtraction for ordinals β ≤ α: there is a unique γ such that α = β + γ. On the other hand, right cancellation does not work:
3 + ω = 0 + ω = ω
but
3 ≠ 0
Nor does right subtraction, even when β ≤ α: for example, there does not exist any γ such that γ + 42 = ω.
If the ordinals less than α are closed under addition and contain 0, then α is occasionally called a γ-number (see additively indecomposable ordinal). These are exactly the ordinals of the form ω^β.
Multiplication
The Cartesian product, S × T, of two well-ordered sets S and T can be well-ordered by a variant of lexicographical order that puts the least significant position first. Effectively, each element of T is replaced by a disjoint copy of S. The order-type of the Cartesian product is the ordinal that results from multiplying the order-types of S and T.
The definition of multiplication can also be given by transfinite recursion on β. When the right factor β = 0, ordinary multiplication gives α · 0 = 0 for any α. For β > 0, the value of α · β is the smallest ordinal greater than or equal to (α · δ) + α for all δ < β. Writing the successor and limit ordinals cases separately:
α · 0 = 0.
α · S(β) = (α · β) + α, for a successor ordinal S(β).
α · β = ⋃ δ<β (α · δ), when β is a limit ordinal.
As an example, here is the order relation for ω · 2:
0₀ < 1₀ < 2₀ < 3₀ < ... < 0₁ < 1₁ < 2₁ < 3₁ < ...,
which has the same order type as ω + ω. In contrast, 2 · ω looks like this:
0₀ < 1₀ < 0₁ < 1₁ < 0₂ < 1₂ < 0₃ < 1₃ < ...
and after relabeling, this looks just like ω.
Thus, ω · 2 = ω + ω ≠ ω = 2 · ω, showing that multiplication of ordinals is not in general commutative.
As is the case with addition, ordinal multiplication on the natural numbers is the same as standard multiplication.
Properties
α · 0 = 0 · α = 0, and the zero-product property holds: α · β = 0 → α = 0 or β = 0. The ordinal 1 is a multiplicative identity, α · 1 = 1 · α = α. Multiplication is associative, (α · β) · γ = α · (β · γ). Multiplication is strictly increasing and continuous in the right argument: (α < β and γ > 0) → γ · α < γ · β. Multiplication is not strictly increasing in the left argument, for example, 1 < 2 but 1 · ω = 2 · ω = ω. However, it is (non-strictly) increasing, i.e. α ≤ β → α · γ ≤ β · γ.
Multiplication of ordinals is not in general commutative. Specifically, a natural number greater than 1 never commutes with any infinite ordinal, and two infinite ordinals α and β commute if and only if α^m = β^n for some nonzero natural numbers m and n. The relation "α commutes with β" is an equivalence relation on the ordinals greater than 1, and all equivalence classes are countably infinite.
Distributivity holds, on the left: α · (β + γ) = α · β + α · γ. However, the distributive law on the right is not generally true: (1 + 1) · ω = 2 · ω = ω while 1 · ω + 1 · ω = ω + ω, which is different. There is a left cancellation law: If α > 0 and α · β = α · γ, then β = γ. Right cancellation does not work, e.g. 1 · ω = 2 · ω = ω, but 1 and 2 are different. A left division with remainder property holds: for all α and β, if β > 0, then there are unique γ and δ such that α = β · γ + δ and δ < β. Right division does not work: there is no α such that α · ω ≤ ω^ω ≤ (α + 1) · ω.
The ordinal numbers form a left near-semiring, but do not form a ring. Hence the ordinals are not a Euclidean domain, since they are not even a ring; furthermore the Euclidean "norm" would be ordinal-valued using the left division here.
A δ-number (see Multiplicatively indecomposable ordinal) is an ordinal β greater than 1 such that α · β = β whenever 0 < α < β. These consist of the ordinal 2 and the ordinals of the form ω^{ω^γ}.
Exponentiation
The definition of exponentiation via order types is most easily explained using Von Neumann's definition of an ordinal as the set of all smaller ordinals. Then, to construct a set of order type α^β consider the set of all functions f : β → α such that f(x) = 0 for all but finitely many elements x ∈ β (essentially, we consider the functions with finite support). This set is ordered lexicographically with the least significant position first: we write f < g if and only if there exists x ∈ β with f(x) < g(x) and f(y) = g(y) for all y ∈ β with x < y. This is a well-ordering and hence gives an ordinal number.
The definition of exponentiation can also be given by transfinite recursion on the exponent β. When the exponent β = 0, ordinary exponentiation gives α^0 = 1 for any α. For β > 0, the value of α^β is the smallest ordinal greater than or equal to (α^δ) · α for all δ < β. Writing the successor and limit ordinals cases separately:
α^0 = 1.
α^{S(β)} = (α^β) · α, for a successor ordinal S(β).
α^β = ⋃ δ<β (α^δ), when β is a limit ordinal.
Both definitions simplify considerably if the exponent is a finite number: α^β is then just the product of β copies of α; e.g. ω^3 = ω · ω · ω, and the elements of ω^3 can be viewed as triples of natural numbers, ordered lexicographically with least significant position first. This agrees with the ordinary exponentiation of natural numbers.
But for infinite exponents, the definition may not be obvious. For example, α^ω can be identified with a set of finite sequences of elements of α, properly ordered. The equation 2^ω = ω expresses the fact that finite sequences of zeros and ones can be identified with natural numbers, using the binary number system. The ordinal ω^ω can be viewed as the order type of finite sequences of natural numbers; every element of ω^ω (i.e. every ordinal smaller than ω^ω) can be uniquely written in the form ω^{n1}·c1 + ω^{n2}·c2 + ⋯ + ω^{nk}·ck where k, n1, ..., nk are natural numbers, c1, ..., ck are nonzero natural numbers, and n1 > n2 > ⋯ > nk.
The same is true in general: every element of α^β (i.e. every ordinal smaller than α^β) can be uniquely written in the form α^{β1}·c1 + α^{β2}·c2 + ⋯ + α^{βk}·ck where k is a natural number, β1, ..., βk are ordinals smaller than β with β1 > β2 > ⋯ > βk, and c1, ..., ck are nonzero ordinals smaller than α. This expression corresponds to the function f which sends βi to ci for i = 1, ..., k and sends all other elements of β to 0.
While the same exponent-notation is used for ordinal exponentiation and cardinal exponentiation, the two operations are quite different and should not be confused. The cardinal exponentiation is defined to be the cardinal number of the set of all functions from β to α, while the ordinal exponentiation only contains the functions with finite support, typically a set of much smaller cardinality. To avoid confusing ordinal exponentiation with cardinal exponentiation, one can use symbols for ordinals (e.g. ω) in the former and symbols for cardinals (e.g. ℵ0) in the latter.
Properties
α^0 = 1.
If 0 < α, then 0^α = 0.
1^α = 1.
α^1 = α.
α^β · α^γ = α^{β + γ}.
(α^β)^γ = α^{β · γ}.
There are α, β, and γ for which (α · β)^γ ≠ α^γ · β^γ. For instance, (ω · 2)^2 = ω · 2 · ω · 2 = ω^2 · 2 ≠ ω^2 · 4.
Ordinal exponentiation is strictly increasing and continuous in the right argument: If γ > 1 and α < β, then γ^α < γ^β.
If α < β, then α^γ ≤ β^γ. Note, for instance, that 2 < 3 and yet 2^ω = 3^ω = ω.
If α > 1 and α^β = α^γ, then β = γ. If α = 1 or α = 0 this is not the case.
For all α and β, if β > 1 and α > 0 then there exist unique γ, δ, and ρ such that α = β^γ · δ + ρ, where 0 < δ < β and ρ < β^γ.
Jacobsthal showed that the only solutions of α^β = β^α with α ≤ β are given by α = β, or α = 2 and β = 4, or α is any limit ordinal and β = εα where ε is an ε-number larger than α.
Beyond exponentiation
There are ordinal operations that continue the sequence begun by addition, multiplication, and exponentiation, including ordinal versions of tetration, pentation, and hexation. See also Veblen function.
Cantor normal form
Every ordinal number α can be uniquely written as ω^{β1}·c1 + ω^{β2}·c2 + ⋯ + ω^{βk}·ck, where k is a natural number, c1, c2, ..., ck are nonzero natural numbers, and β1 > β2 > ⋯ > βk ≥ 0 are ordinal numbers. The degenerate case α = 0 occurs when k = 0 and there are no βi nor ci. This decomposition of α is called the Cantor normal form of α, and can be considered the base-ω positional numeral system. The highest exponent β1 is called the degree of α, and satisfies β1 ≤ α. The equality β1 = α applies if and only if α = ω^α. In that case Cantor normal form does not express the ordinal in terms of smaller ones; this can happen as explained below.
A minor variation of Cantor normal form, which is usually slightly easier to work with, is to set all the numbers ci equal to 1 and allow the exponents to be equal. In other words, every ordinal number α can be uniquely written as ω^{β1} + ω^{β2} + ⋯ + ω^{βk}, where k is a natural number, and β1 ≥ β2 ≥ ⋯ ≥ βk ≥ 0 are ordinal numbers.
Another variation of the Cantor normal form is the "base δ expansion", where ω is replaced by any ordinal δ > 1, and the numbers ci are nonzero ordinals less than δ.
The Cantor normal form allows us to uniquely express—and order—the ordinals α that are built from the natural numbers by a finite number of arithmetical operations of addition, multiplication and exponentiation base-ω: in other words, assuming β1 < α in the Cantor normal form, we can also express the exponents βi in Cantor normal form, and making the same assumption for the βi as for α and so on recursively, we get a system of notation for these ordinals (for example,
ω^{ω^{ω·7 + 6} + ω} · 3 + ω^{4} · 7 + 13
denotes an ordinal).
The ordinal ε0 (epsilon nought) is the set of ordinal values α of the finite-length arithmetical expressions of Cantor normal form that are hereditarily non-trivial where non-trivial means β1<α when 0<α. It is the smallest ordinal that does not have a finite arithmetical expression in terms of ω, and the smallest ordinal such that ε0 = ω^{ε0}, i.e. in Cantor normal form the exponent is not smaller than the ordinal itself. It is the limit of the sequence
0, 1 = ω^0, ω = ω^1, ω^ω, ω^{ω^ω}, ...
The ordinal ε0 is important for various reasons in arithmetic (essentially because it measures the proof-theoretic strength of the first-order Peano arithmetic: that is, Peano's axioms can show transfinite induction up to any ordinal less than ε0 but not up to ε0 itself).
The Cantor normal form also allows us to compute sums and products of ordinals: to compute the sum, for example, one need merely know (see the properties listed in the Addition and Multiplication sections above) that
ω^β·c + ω^{β′}·c′ = ω^{β′}·c′
if β′ > β (if β′ = β one can apply the distributive law on the left and rewrite this as ω^β·(c + c′), and if β′ < β the expression is already in Cantor normal form); and to compute products, the essential facts are that when α = ω^{β1}·c1 + ⋯ + ω^{βk}·ck is in Cantor normal form and β′ > 0, then
α·ω^{β′} = ω^{β1 + β′}
and
α·n = ω^{β1}·(c1·n) + ω^{β2}·c2 + ⋯ + ω^{βk}·ck
if n is a non-zero natural number.
To compare two ordinals written in Cantor normal form, first compare β1, then c1, then β2, then c2, and so on. At the first occurrence of inequality, the ordinal that has the larger component is the larger ordinal. If they are the same until one terminates before the other, then the one that terminates first is smaller.
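The comparison and addition rules just described are mechanical enough to implement directly. The Python sketch below (an illustration written for this article, not a full ordinal calculator) represents an ordinal below ε0 as a tuple of (exponent, coefficient) pairs with exponents in strictly decreasing order, the exponents themselves being ordinals in the same representation; it implements comparison and the absorption rule for addition, and multiplication is omitted for brevity.

```python
# Ordinals below epsilon_0 in Cantor normal form: a tuple of
# (exponent, coefficient) pairs, exponents strictly decreasing,
# coefficients positive integers; exponents are again such tuples.
ZERO = ()                 # the ordinal 0
ONE = ((ZERO, 1),)        # the ordinal 1 = ω^0 · 1
OMEGA = ((ONE, 1),)       # the ordinal ω = ω^1 · 1

def cmp_ord(a, b):
    """Compare two CNF ordinals term by term: returns -1, 0, or 1."""
    for (e1, c1), (e2, c2) in zip(a, b):
        e_cmp = cmp_ord(e1, e2)       # compare exponents first...
        if e_cmp != 0:
            return e_cmp
        if c1 != c2:                  # ...then coefficients
            return -1 if c1 < c2 else 1
    # if equal so far, the ordinal that terminates first is smaller
    return (len(a) > len(b)) - (len(a) < len(b))

def add(a, b):
    """Ordinal sum: terms of a below b's leading exponent are absorbed."""
    if not b:
        return a
    e_b, c_b = b[0]
    bigger = tuple(t for t in a if cmp_ord(t[0], e_b) > 0)
    equal = [t for t in a if cmp_ord(t[0], e_b) == 0]
    if equal:                         # equal leading exponents: coefficients add
        return bigger + ((e_b, equal[0][1] + c_b),) + b[1:]
    return bigger + b

print(add(OMEGA, ONE) == ((ONE, 1), (ZERO, 1)))  # True: ω + 1 keeps both terms
print(add(ONE, OMEGA) == OMEGA)                  # True: 1 + ω = ω, the 1 is absorbed
```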
Factorization into primes
Ernst Jacobsthal showed that the ordinals satisfy a form of the unique factorization theorem: every nonzero ordinal can be written as a product of a finite number of prime ordinals. This factorization into prime ordinals is in general not unique, but there is a "minimal" factorization into primes that is unique up to changing the order of finite prime factors .
A prime ordinal is an ordinal greater than 1 that cannot be written as a product of two smaller ordinals. Some of the first primes are 2, 3, 5, ... , ω, ω + 1, ω^2 + 1, ..., ω^ω, ω^ω + 1, ω^{ω+1} + 1, ... There are three sorts of prime ordinals:
The finite primes 2, 3, 5, ...
The ordinals of the form ω^{ω^α} for any ordinal α. These are the prime ordinals that are limits, and are the delta numbers, the transfinite ordinals that are closed under multiplication.
The ordinals of the form ω^α + 1 for any ordinal α > 0. These are the infinite successor primes, and are the successors of gamma numbers, the additively indecomposable ordinals.
Factorization into primes is not unique: for example, 2·ω = ω, (ω + 1)·ω = ω·ω, and ω·ω^ω = ω^ω. However, there is a unique factorization into primes satisfying the following additional conditions:
Every limit prime must occur before any successor prime
If two consecutive primes of the prime factorization are both limits or both finite, the second one must be less than or equal to the first one.
This prime factorization can easily be read off using the Cantor normal form as follows:
First write the ordinal as a product α·β where α is the smallest power of ω in the Cantor normal form and β is a successor.
If α = ω^γ then writing γ in Cantor normal form gives an expansion of α as a product of limit primes.
Now look at the Cantor normal form of β. If β = ω^λ·m + ω^μ·n + smaller terms, then β = (ω^μ·n + smaller terms)·(ω^{λ−μ} + 1)·m is a product of a smaller ordinal and a prime and a natural number m. Repeating this and factorizing the natural numbers into primes gives the prime factorization of β.
So the factorization of the Cantor normal form ordinal
α = ω^{α1}·n1 + ⋯ + ω^{αk}·nk (with α1 > ⋯ > αk)
into a minimal product of infinite primes and natural numbers is
ω^{ω^{β1}} ⋯ ω^{ω^{βm}} · nk · (ω^{α_{k−1} − αk} + 1) · n_{k−1} ⋯ (ω^{α1 − α2} + 1) · n1
where each ni should be replaced by its factorization into a non-increasing sequence of finite primes and αk = ω^{β1} + ⋯ + ω^{βm}
with β1 ≥ ⋯ ≥ βm.
Large countable ordinals
As discussed above, the Cantor normal form of ordinals below ε0 can be expressed in an alphabet containing only the function symbols for addition, multiplication and exponentiation, as well as constant symbols for each natural number and for ω. We can do away with the infinitely many numerals by using just the constant symbol 0 and the operation of successor, S (for example, the natural number 4 may be expressed as S(S(S(S(0))))). This describes an ordinal notation: a system for naming ordinals over a finite alphabet. This particular system of ordinal notation is called the collection of arithmetical ordinal expressions, and can express all ordinals below ε0, but cannot express ε0. There are other ordinal notations capable of capturing ordinals well past ε0, but because there are only countably many finite-length strings over any finite alphabet, for any given ordinal notation there will be ordinals below ω1 (the first uncountable ordinal) that are not expressible. Such ordinals are known as large countable ordinals.
The operations of addition, multiplication and exponentiation are all examples of primitive recursive ordinal functions, and more general primitive recursive ordinal functions can be used to describe larger ordinals.
Natural operations
The natural sum and natural product operations on ordinals were defined in 1906 by Gerhard Hessenberg, and are sometimes called the Hessenberg sum (or product). The natural sum of α and β is often denoted by α ⊕ β or α # β, and the natural product by α ⊗ β.
The natural sum and product are defined as follows. Let α = ω^{α1} + ⋯ + ω^{αk} and β = ω^{β1} + ⋯ + ω^{βn} be in Cantor normal form (i.e. α1 ≥ ⋯ ≥ αk and β1 ≥ ⋯ ≥ βn). Let γ1 ≥ γ2 ≥ ⋯ ≥ γ_{k+n} be the exponents α1, ..., αk, β1, ..., βn sorted in nonincreasing order. Then α ⊕ β is defined as
α ⊕ β = ω^{γ1} + ω^{γ2} + ⋯ + ω^{γ_{k+n}}.
The natural product of α and β is defined as the natural sum, over all pairs i, j, of the terms ω^{αi ⊕ βj}:
α ⊗ β = ⨁ over 1 ≤ i ≤ k and 1 ≤ j ≤ n of ω^{αi ⊕ βj}.
For example, suppose α = ω·2 + 3 and β = ω + 5. Then α ⊕ β = ω·3 + 8, whereas α + β = ω·3 + 5. And α ⊗ β = ω^2·2 + ω·13 + 15, whereas α·β = ω^2 + ω·10 + 3.
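For ordinals below ω^ω every exponent in the ω-power-sum form is a natural number, so the natural sum reduces to merging two finite multisets of exponents. The hedged Python sketch below handles only that special case (the representation and helper names are choices made here, not a standard library interface) and contrasts the natural sum with the ordinary sum on the example above.

```python
from collections import Counter

# An ordinal below ω^ω written as ω^e1 + ω^e2 + ... is stored as a Counter
# mapping each natural-number exponent to how many times it occurs.
def natural_sum(a: Counter, b: Counter) -> Counter:
    """Hessenberg (natural) sum: simply merge the exponent multisets."""
    return a + b

def ordinary_sum(a: Counter, b: Counter) -> Counter:
    """Ordinary ordinal sum: exponents of a below b's largest exponent are absorbed."""
    if not b:
        return Counter(a)
    top = max(b)
    kept = Counter({e: c for e, c in a.items() if e >= top})
    # terms of a with exponent equal to top pile up with b's ω^top terms
    return kept + b

# α = ω·2 + 3 → exponents {1,1,0,0,0};  β = ω + 5 → exponents {1,0,0,0,0,0}
alpha = Counter({1: 2, 0: 3})
beta = Counter({1: 1, 0: 5})
print(natural_sum(alpha, beta))   # Counter({0: 8, 1: 3})  i.e. ω·3 + 8
print(ordinary_sum(alpha, beta))  # Counter({0: 5, 1: 3})  i.e. ω·3 + 5
```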
The natural sum and product are commutative and associative, and natural product distributes over natural sum. The operations are also monotonic, in the sense that if α < β then α ⊕ γ < β ⊕ γ; if α ≤ β then α ⊗ γ ≤ β ⊗ γ; and if α < β and γ > 0 then α ⊗ γ < β ⊗ γ.
We have .
We always have and . If both and then . If both and then .
Natural sum and product are not continuous in the right argument, since, for example, the limit of 1 ⊕ n over n < ω is ω, and not ω + 1 = 1 ⊕ ω; and the limit of 2 ⊗ n over n < ω is ω, and not ω·2 = 2 ⊗ ω.
The natural sum and product are the same as the addition and multiplication (restricted to ordinals) of John Conway's field of surreal numbers.
The natural operations come up in the theory of well partial orders; given two well partial orders S and T, of types (maximum linearizations) o(S) and o(T), the type of the disjoint union is o(S) ⊕ o(T), while the type of the direct product is o(S) ⊗ o(T). One may take this relation as a definition of the natural operations by choosing S and T to be ordinals α and β; so α ⊕ β is the maximum order type of a total order extending the disjoint union (as a partial order) of α and β; while α ⊗ β is the maximum order type of a total order extending the direct product (as a partial order) of α and β. A useful application of this is when α and β are both subsets of some larger total order; then their union has order type at most α ⊕ β. If they are both subsets of some ordered abelian group, then their sum has order type at most α ⊗ β.
We can also define the natural sum α ⊕ β by simultaneous transfinite recursion on α and β, as the smallest ordinal strictly greater than the natural sum of α and β′ for all β′ < β and of α′ and β for all α′ < α. Similarly, we can define the natural product α ⊗ β by simultaneous transfinite recursion on α and β, as the smallest ordinal γ such that (α′ ⊗ β) ⊕ (α ⊗ β′) < γ ⊕ (α′ ⊗ β′) for all α′ < α and β′ < β. Also, see the article on surreal numbers for the definition of natural multiplication in that context; however, it uses surreal subtraction, which is not defined on ordinals.
The natural sum is associative and commutative. It is always greater or equal to the usual sum, but it may be strictly greater. For example, the natural sum of ω and 1 is ω + 1 (the usual sum), but this is also the natural sum of 1 and ω. The natural product is associative and commutative and distributes over the natural sum. The natural product is always greater or equal to the usual product, but it may be strictly greater. For example, the natural product of ω and 2 is ω·2 (the usual product), but this is also the natural product of 2 and ω.
Under natural addition, the ordinals can be identified with the elements of the free commutative monoid generated by the gamma numbers ω^α. Under natural addition and multiplication, the ordinals can be identified with the elements of the free commutative semiring generated by the delta numbers ω^{ω^α}.
The ordinals do not have unique factorization into primes under the natural product. While the full polynomial ring does have unique factorization, the subset of polynomials with non-negative coefficients does not: for example, if x is any delta number, then
x^5 + x^4 + x^3 + x^2 + x + 1 = (x + 1)(x^4 + x^2 + 1) = (x^2 + x + 1)(x^3 + 1)
has two incompatible expressions as a natural product of polynomials with non-negative coefficients that cannot be decomposed further.
Nimber arithmetic
There are arithmetic operations on ordinals by virtue of the one-to-one correspondence between ordinals and nimbers. Three common operations on nimbers are nimber addition, nimber multiplication, and minimum excludance (mex). Nimber addition is a generalization of the bitwise exclusive or operation on natural numbers. The mex of a set of ordinals is the smallest ordinal not present in the set.
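Restricted to the natural numbers, two of these operations take only a couple of lines. The following Python sketch (for illustration only) computes the nimber sum of two finite nimbers as bitwise exclusive or, and the mex of a finite set of natural numbers.

```python
def nimber_add(a: int, b: int) -> int:
    """Nimber addition of finite nimbers is bitwise exclusive or."""
    return a ^ b

def mex(s: set[int]) -> int:
    """Minimum excludance: the smallest natural number not in the set."""
    n = 0
    while n in s:
        n += 1
    return n

print(nimber_add(5, 3))   # 6, since 101 XOR 011 = 110
print(mex({0, 1, 2, 4}))  # 3
```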
Notes
References
Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. .
External links
ordCalc ordinal calculator
Set theory
Ordinal numbers | Ordinal arithmetic | [
"Mathematics"
] | 4,367 | [
"Ordinal numbers",
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Order theory",
"Numbers"
] |
2,139,357 | https://en.wikipedia.org/wiki/Finite%20morphism | In algebraic geometry, a finite morphism between two affine varieties is a dense regular map which induces isomorphic inclusion between their coordinate rings, such that is integral over . This definition can be extended to the quasi-projective varieties, such that a regular map between quasiprojective varieties is finite if any point has an affine neighbourhood V such that is affine and is a finite map (in view of the previous definition, because it is between affine varieties).
Definition by schemes
A morphism f: X → Y of schemes is a finite morphism if Y has an open cover by affine schemes
Vi = Spec Bi
such that for each i,
Ui = f⁻¹(Vi) is an open affine subscheme Spec Ai, and the restriction of f to Ui, which induces a ring homomorphism
Bi → Ai,
makes Ai a finitely generated module over Bi. One also says that X is finite over Y.
In fact, f is finite if and only if for every open affine subscheme V = Spec B in Y, the inverse image of V in X is affine, of the form Spec A, with A a finitely generated B-module.
For example, for any field k, the map A1 → A1, x ↦ x^n is a finite morphism since k[x] ≅ k[y]^{⊕n} as k[y]-modules (with y acting as x^n). Geometrically, this is obviously finite since this is a ramified n-sheeted cover of the affine line which degenerates at the origin. By contrast, the inclusion of A1 − 0 into A1 is not finite. (Indeed, the Laurent polynomial ring k[y, y−1] is not finitely generated as a module over k[y].) This restricts our geometric intuition to surjective families with finite fibers.
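To spell out the module-theoretic reason this map is finite, one can exhibit an explicit set of generators; the display below is a sketch of that bookkeeping (the notation is chosen here rather than quoted from a reference).

```latex
\[
  k[x] \;=\; k[y]\cdot 1 \,\oplus\, k[y]\cdot x \,\oplus\, \cdots \,\oplus\, k[y]\cdot x^{\,n-1},
  \qquad \text{where } y \text{ acts as } x^{n},
\]
% so k[x] is free of rank n as a k[y]-module, hence finitely generated, and the
% morphism x \mapsto x^n is finite.  By contrast, k[y, y^{-1}] would need the
% infinitely many generators 1, y^{-1}, y^{-2}, \ldots over k[y], which is why
% the inclusion of the punctured line is not finite.
```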
Properties of finite morphisms
The composition of two finite morphisms is finite.
Any base change of a finite morphism f: X → Y is finite. That is, if g: Z → Y is any morphism of schemes, then the resulting morphism X ×Y Z → Z is finite. This corresponds to the following algebraic statement: if A and C are (commutative) B-algebras, and A is finitely generated as a B-module, then the tensor product A ⊗B C is finitely generated as a C-module. Indeed, the generators can be taken to be the elements ai ⊗ 1, where ai are the given generators of A as a B-module.
Closed immersions are finite, as they are locally given by A → A/I, where I is the ideal corresponding to the closed subscheme.
Finite morphisms are closed, hence (because of their stability under base change) proper. This follows from the going up theorem of Cohen-Seidenberg in commutative algebra.
Finite morphisms have finite fibers (that is, they are quasi-finite). This follows from the fact that for a field k, every finite k-algebra is an Artinian ring. A related statement is that for a finite surjective morphism f: X → Y, X and Y have the same dimension.
By Deligne, a morphism of schemes is finite if and only if it is proper and quasi-finite. This had been shown by Grothendieck if the morphism f: X → Y is locally of finite presentation, which follows from the other assumptions if Y is Noetherian.
Finite morphisms are both projective and affine.
See also
Glossary of algebraic geometry
Finite algebra
Morphism of finite type
Notes
References
External links
Algebraic geometry
Morphisms | Finite morphism | [
"Mathematics"
] | 727 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Algebraic geometry",
"Morphisms"
] |
2,139,436 | https://en.wikipedia.org/wiki/Snow%20chains | Snow chains, or tire chains, are devices fitted to the tires of vehicles to provide increased traction when driving through snow and ice.
Snow chains attach to the drive wheels of a vehicle or special systems deploy chains which swing under the tires automatically. Although named after steel chain, snow chains may be made of other materials and in a variety of patterns and strengths. Chains are usually sold in pairs and often must be purchased to match a particular tire size (tire diameter and tread width), although some designs can be adjusted to fit various sizes of tire. Driving with chains reduces fuel efficiency, and can reduce the allowable speed of the automobile to approximately , but increase traction and braking on snowy or icy surfaces. Some regions require chains to be used under some weather conditions, but other areas prohibit the use of chains, as they can damage road surfaces.
History
Snow chains were invented in 1904 by Harry D. Weed in Canastota, New York. Weed received for his "Grip-Tread for Pneumatic Tires" on August 23, 1904. Weed's great-grandson, James Weed, said that Harry got the idea of creating chains for tires when he saw drivers wrap rope, or even vines, around their tires to increase traction on muddy or snowy roads. At this time, most people in rural Northern regions wouldn't bother driving automobiles in the winter at all, since roads were usually rolled for use with horse-drawn sleighs, rather than plowed. Automobiles were generally not winter vehicles, for a variety of reasons until the 1930s or 1940s in some areas. Only in urban areas was it possible to remove snow from streets. He sought to make a traction device that was more durable and would work with snow as well as mud.
In January 1923, American inventor Oscar E. Brown obtained for his “Nonskid Attachment for Vehicle Tires”.
In July 1935, the Canadian Auguste Trudeau obtained a patent for his tread and anti-skidding chain.
Deployment
In snowy conditions, transportation authorities may require that snow chains or other traction aids be installed on vehicles, or at least supplied for them. This can apply to all vehicles, or only those without other traction aids, such as four-wheel drive or special tires. Local requirements may be enforced at checkpoints or by other type of inspection. Snow chains should be installed on one or more drive axles of the vehicle, with requirements varying for dual-tire or multi-driven-axle vehicles that range from "one pair of tires on a driven axle" to "all tires on all driven axles", possibly also one or both steering (front) wheels, requiring snow chains whenever required by signage or conditions.
In case of running wheel loaders, it is recommended to use special protection chains due to intense pressure on tires during work.
United States
Tires come with standardized tire code sizing information, found on the sidewalls of the tires. The first letter(s) indicate the vehicle type (P for passenger, LT for light truck). The next three digits indicate the tire's width in millimeters. The following two-digit number indicates the tire's height-to-width (aspect) ratio. The next character is the letter "R", which indicates radial-ply construction (not "radius"), followed by a final two-digit number indicating the rim diameter, in inches, of the vehicle's wheels.
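Decoding the size information is mechanical, so it can be scripted; the Python sketch below parses codes of the common passenger form described above (for example "P215/65R15"). The regular expression and field names are illustrative assumptions made here, not taken from the SAE or any tire-industry standard.

```python
import re

# Parses common passenger-car codes of the form P215/65R15 described above.
TIRE_CODE = re.compile(
    r"^(?P<type>[A-Z]{1,2})?"   # vehicle type, e.g. P or LT (optional on metric codes)
    r"(?P<width>\d{3})"         # section width in millimetres
    r"/(?P<aspect>\d{2})"       # aspect ratio (height as % of width)
    r"R(?P<rim>\d{2})$"         # radial construction and rim diameter in inches
)

def parse_tire_code(code: str) -> dict:
    m = TIRE_CODE.match(code.upper())
    if not m:
        raise ValueError(f"unrecognized tire code: {code!r}")
    d = m.groupdict()
    return {
        "vehicle_type": d["type"] or "metric",
        "width_mm": int(d["width"]),
        "aspect_ratio_percent": int(d["aspect"]),
        "rim_diameter_in": int(d["rim"]),
    }

print(parse_tire_code("P215/65R15"))
# {'vehicle_type': 'P', 'width_mm': 215, 'aspect_ratio_percent': 65,
#  'rim_diameter_in': 15}
```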
Additionally, the correct Society of Automotive Engineers (SAE) class of snow chains must be installed, based on the wheel clearance of the vehicle.
The SAE Class "S" well clearance is a common requirement on newer cars, especially if after-market wider, low-profile, or larger tires and/or wheels are fitted.
The classes are defined as follows:
SAE Class S: Regular (non-reinforced) passenger tire traction devices for vehicles with restricted wheel well clearance.
SAE Class U: Regular (non-reinforced) and lug-reinforced passenger tire traction devices for vehicles with regular (non-restricted) wheel well clearances.
SAE Class W: Passenger tire traction devices that use light truck components, as well as some light truck traction devices.
Common chain failures
Driving too fast with chains. Recommended maximum speeds in the owners' manual of the chains – generally – maximum.
Driving on dry roads with chains for extended periods of time.
Driving on dry roads with chains can cause a vehicle to slide when braking.
Driving on dry roads with chains will rapidly wear the chains.
Not securing the chains tightly enough. Owners' manual of the chains recommends tightening a second time after driving a short distance and checking for tightness from time to time. If a chain comes loose, it should either be refastened or removed before it wraps around the drive axle of the vehicle.
Tensioners or adjusters may be required. (Some chains have automatic tensioners and may be damaged if tensioners are used.)
Installing chains on non-drive wheels.
Accelerating too rapidly causing tire spin and stress on chains
If a chain does break, it can cause vehicle damage by slapping around inside the wheel well, possibly wrapping around the axle and severing brake lines
Varieties and alternatives
Tire chains are available in a variety of types that have different advantages of cost, ride smoothness, traction, durability, ease of installation, and recommended travel speed.
Materials include steel (in the form of links or cables), polyurethane, rubber, and fabric. The original-style steel-link chains are also available in a variety of carbon steel and steel alloys and link shapes. Link shapes include standard, twisted, square, and reinforced. The shape of the links changes the flexibility, grip, and strength of the chain. The links can also have added studs or V-bars for an even more aggressive traction. The use of alloy steel and hardened steel adds durability. Traction cables (cable chains, snow cables) attach like chains but are made from cable rather than chain.
Chain patterns include the ladder, diagonal, or pattern types. Ladder-type chains have cross chains perpendicular to the road and look like a ladder when carefully laid on the ground. With diagonal chains, the cross chains are diagonal to the road. Pattern types form a "net" over the tire such as a diamond or multiple diamond pattern. Some industrial pattern types also include studded, metal rings to which the chains attach and thus are called ring chains.
Most tire chains are wrapped around the circumference of the tires and held in place with rim chains, which may be chain or cable, elastic or adjustable tensioners. Automatic chains do not wrap around the tire but swing under the tire from devices permanently mounted under the vehicle which deploy via an electronic solenoid activated in the cab. Some tire chains mount onto the tires from only one side. Others use a ratcheting system for easier installation.
Alternatives include studded tires, which are snow tires with metal studs individually mounted into holes in the treads; emergency traction devices which may be similar to tire chains but mount around the tire through openings in the rim; and snow socks, which are fabric rather than chain or cable. These alternatives allow higher operating speeds than snow chains and, in the case of studs, do not require installation by the vehicle operator, but chains generally give the best traction in severe conditions.
Mud chains are similar to snow chains but for off-road, four-wheel drive applications, and generally they are larger than snow chains; they are often seen on heavy off-road equipment like log skidders, which have to operate over very rough, muddy terrain.
Wheel tracks are heavy duty assemblies similar to chains but with rigid cross links such as sometimes used on logging equipment.
Legality of use
Laws vary considerably regarding the legality of snow chain use. Some countries require them in certain snowy conditions or during certain months of the year, while other countries prohibit their use altogether to preserve road surfaces.
See also
Crampons
Ice cleat
Snow socks
Snow tires
References
External links
Yosemite National Park Tire Chains Page
Chains
Vehicle safety technologies
Inclement weather management | Snow chains | [
"Physics"
] | 1,630 | [
"Weather",
"Inclement weather management",
"Physical phenomena"
] |
2,139,612 | https://en.wikipedia.org/wiki/A%20Course%20of%20Modern%20Analysis | A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1915. The first edition was Whittaker's alone, but later editions were co-authored with Watson.
History
Its first, second, third, and the fourth edition were published in 1902, 1915, 1920, and 1927, respectively. Since then, it has continuously been reprinted and is still in print today. A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021.
The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians including Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University. But its reach was much further than just the Cambridge school; André Weil in his obituary of the French mathematician Jean Delsarte noted that Delsarte always had a copy on his desk. In 1941, the book was included among a "selected list" of mathematical analysis books for use in universities in an article for that purpose published by American Mathematical Monthly.
Notable features
Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises.
The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano.
Contents
Below are the contents of the fourth edition:
Part I. The Process of Analysis
Part II. The Transcendental Functions
Reception
Reviews of the first edition
George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying the book is "sure of a favorable reception" because of its "attractive account of some of the most valuable and interesting results of recent analysis". He notes that Part I deals mainly with infinite series, focusing on power series and Fourier expansions while including the "elements of" complex integration and the theory of residues. Part II, in contrast, has chapters on the gamma function, Legendre functions, the hypergeometric series, Bessel functions, elliptic functions, and mathematical physics.
Arthur S. Hathaway, in another 1903 review published in the Journal of the American Chemical Society, notes that the book centers around complex analysis, but that topics such as infinite series are "considered in all their phases" along with "all those important series and functions" developed by mathematicians such as Joseph Fourier, Friedrich Bessel, Joseph-Louis Lagrange, Adrien-Marie Legendre, Pierre-Simon Laplace, Carl Friedrich Gauss, Niels Henrik Abel, and others in their respective studies of "practice problems". He goes on to say it "is a useful book for those who wish to make use of the most advanced developments of mathematical analysis in theoretical investigations of physical and chemical questions."
In a third review of the first edition, Maxime Bôcher, in a 1904 review published in the Bulletin of the American Mathematical Society notes that while the book falls short of the "rigor" of French, German, and Italian writers, it is a "gratifying sign of progress to find in an English book such an attempt at rigorous treatment as is here made". He notes that important parts of the book were otherwise non-existent in the English language.
See also
Bateman Manuscript Project
References
Further reading
1902 non-fiction books
Cambridge University Press books
Mathematics textbooks
Mathematical analysis
Complex analysis
Books by E. T. Whittaker | A Course of Modern Analysis | [
"Mathematics"
] | 814 | [
"Mathematical analysis"
] |