id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
324681 | https://en.wikipedia.org/wiki/Metatarsal%20bones | Metatarsal bones | The metatarsal bones or metatarsus (plural: metatarsi) are a group of five long bones in the midfoot, located between the tarsal bones (which form the heel and the ankle) and the phalanges (toes). Lacking individual names, the metatarsal bones are numbered from the medial side (the side of the great toe): the first, second, third, fourth, and fifth metatarsal (often depicted with Roman numerals). The metatarsals are analogous to the metacarpal bones of the hand. The lengths of the metatarsal bones in humans are, in descending order, second, third, fourth, fifth, and first. A bovine hind leg has two metatarsals.
Structure
The five metatarsals are dorsal convex long bones consisting of a shaft or body, a base (proximally), and a head (distally). The body is prismoid in form, tapers gradually from the tarsal to the phalangeal extremity, and is curved longitudinally, so as to be concave below, slightly convex above. The base or posterior extremity is wedge-shaped, articulating proximally with the tarsal bones, and by its sides with the contiguous metatarsal bones: its dorsal and plantar surfaces are rough for the attachment of ligaments. The head or distal extremity presents a convex articular surface, oblong from above downward, and extending farther backward below than above. Its sides are flattened, and on each is a depression, surmounted by a tubercle, for ligamentous attachment. Its plantar surface is grooved antero-posteriorly for the passage of the flexor tendons, and marked on either side by an articular eminence continuous with the terminal articular surface.
During growth, the growth plates are located distally on the metatarsals, except on the first metatarsal where it is located proximally. Yet it is quite common to have an accessory growth plate on the distal first metatarsal.
Articulations
The base of each metatarsal bone articulates with one or more of the tarsal bones at the tarsometatarsal joints, and the head with one of the first row of phalanges at the metatarsophalangeal joints. Their bases also articulate with each other at the intermetatarsal joints.
The first metatarsal articulates with the medial cuneiform, and to a small extent with the intermediate cuneiform;
the second with all three cuneiforms;
the third with the lateral cuneiform;
the fourth with the lateral cuneiform and the cuboid;
the fifth with the cuboid.
Muscle attachments
Clinical significance
Injuries
The metatarsal bones are often broken by association football (soccer) players. Such injuries have been attributed to the lightweight design of modern football boots, which provide less protection to the foot. In 2010 some football players began testing a new sock that incorporated a rubber silicone pad over the foot to provide protection to the top of the foot. Stress fractures are thought to account for 16% of injuries related to sports participation, and the metatarsals are the bones most often involved. These fractures are sometimes called march fractures, based on their traditional association with military recruits after long marches. The second and third metatarsals are fixed while walking, making these metatarsals common sites of injury. The fifth metatarsal may be fractured if the foot is oversupinated during locomotion.
Protection from injuries can be given by the use of safety footwear which can use built-in or removable metatarsal guards.
Additional images
| Biology and health sciences | Skeletal system | Biology |
324697 | https://en.wikipedia.org/wiki/Metacarpal%20bones | Metacarpal bones | In human anatomy, the metacarpal bones or metacarpus, also known as the "palm bones", are the appendicular bones that form the intermediate part of the hand between the phalanges (fingers) and the carpal bones (wrist bones), which articulate with the forearm. The metacarpal bones are homologous to the metatarsal bones in the foot.
Structure
The metacarpals form a transverse arch to which the rigid row of distal carpal bones are fixed. The peripheral metacarpals (those of the thumb and little finger) form the sides of the cup of the palmar gutter and as they are brought together they deepen this concavity. The index metacarpal is the most firmly fixed, while the thumb metacarpal articulates with the trapezium and acts independently from the others. The middle metacarpals are tightly united to the carpus by intrinsic interlocking bone elements at their bases. The ring metacarpal is somewhat more mobile while the fifth metacarpal is semi-independent.
Each metacarpal bone consists of a body or shaft, and two extremities; the head at the distal or digital end (near the fingers), and the base at the proximal or carpal end (close to the wrist).
Body
The body (shaft) is prismoid in form, and curved, so as to be convex in the longitudinal direction behind, concave in front. It presents three surfaces: medial, lateral, and dorsal.
The medial and lateral surfaces are concave, for the attachment of the interosseous muscles, and separated from one another by a prominent anterior ridge.
The dorsal surface presents in its distal two-thirds a smooth, triangular, flattened area which is covered by the tendons of the extensor muscles. This surface is bounded by two lines, which commence in small tubercles situated on either side of the digital extremity, and, passing upward, converge and meet some distance above the center of the bone to form a ridge which runs along the rest of the dorsal surface to the carpal extremity. This ridge separates two sloping surfaces for the attachment of the interossei dorsales.
To the tubercles on the digital extremities are attached the collateral ligaments of the metacarpophalangeal joints.
Base
The base (basis) or carpal extremity is of a cuboidal form, and broader behind than in front. It articulates with the carpal bones and with the adjoining metacarpal bones while its dorsal and volar surfaces are rough, for the attachment of ligaments.
Head
The head (caput) or digital extremity presents an oblong surface markedly convex from before backward, less so transversely, and flattened from side to side; it articulates with the proximal phalanx. It is broader, and extends farther upward, on the volar than on the dorsal aspect, and is longer in the antero-posterior than in the transverse diameter. On either side of the head is a tubercle for the attachment of the collateral ligament of the metacarpophalangeal joint.
The dorsal surface, broad and flat, supports the tendons of the extensor muscles.
The volar surface is grooved in the middle line for the passage of the flexor tendons, and marked on either side by an articular eminence continuous with the terminal articular surface.
Neck
The neck, or subcapital segment, is the transition zone between the body and the head.
Articulations
Besides the metacarpophalangeal joints, the metacarpal bones articulate by carpometacarpal joints as follows:
the first with the trapezium;
the second with the trapezium, trapezoid, capitate and third metacarpal;
the third with the capitate and second and fourth metacarpals;
the fourth with the capitate, hamate, and third and fifth metacarpals;
and the fifth with the hamate and fourth metacarpal.
Insertions
Extensor Carpi Radialis Longus/Brevis: Both insert on the base of metacarpal II; Assist with wrist extension and radial flexion of the wrist
Extensor Carpi Ulnaris: Inserts on the base of metacarpal V; Extends and fixes wrist when digits are being flexed; assists with ulnar flexion of wrist
Abductor Pollicis Longus: Inserts on the trapezium and base of metacarpal I; Abducts thumb in frontal plane; extends thumb at carpometacarpal joint
Opponens Pollicis: Inserts on metacarpal I; flexes metacarpal I to oppose the thumb to the fingertips
Opponens Digiti Minimi: Inserts on the medial surface of metacarpal V; Flexes metacarpal V at carpometacarpal joint when little finger is moved into opposition with tip of thumb; deepens palm of hand.
Clinical significance
Congenital disorders
The fourth and fifth metacarpal bones are commonly "blunted", or shortened, in pseudohypoparathyroidism and pseudopseudohypoparathyroidism.
A blunted fourth metacarpal, with normal fifth metacarpal, can signify Turner syndrome.
Blunted metacarpals (particularly the fourth metacarpal) are a symptom of nevoid basal-cell carcinoma syndrome.
Fracture
The neck of a metacarpal is a common location for a boxer's fracture, but all parts of the metacarpal bone (including head, body and base) are susceptible to fracture. During their lifetime, 2.5% of individuals will experience at least one metacarpal fracture. Bennett's fracture (base of the thumb) is the most common. Several types of treatment exist ranging from non-operative techniques, with or without immobilization, to operative techniques using closed or open reduction and internal fixation (ORIF). Generally, most fractures showing little or no displacement can be treated successfully without surgery. Intraarticular fracture-dislocations of the metacarpal head or base may require surgical fixation, as fragment displacement affecting the joint surface is rarely tolerated well.
Other animals
In four-legged animals, the metacarpals form part of the forefeet, and are frequently reduced in number, appropriate to the number of toes. In digitigrade and unguligrade animals, the metacarpals are greatly extended and strengthened, forming an additional segment to the limb, a feature that typically enhances the animal's speed. In both birds and bats, the metacarpals form part of the wing.
History
Etymology
The Greek physician Galen referred to the metacarpus as μετακάρπιον. The Latin form metacarpium more closely resembles its Ancient Greek predecessor μετακάρπιον than does metacarpus. Meta– is Greek for "beyond", and carpal derives from Ancient Greek καρπός ("wrist").
In anatomic Latin, several adjectival forms of the word can be found; one is more true to the later Greek form μετακάρπιος. The adjective used in the current official Latin nomenclature, Terminologia Anatomica (as in ossa metacarpalia), is a compound consisting of Latin and Greek parts. The usage of such hybrids in anatomic Latin is disapproved of by some.
Additional images
| Biology and health sciences | Skeletal system | Biology |
325019 | https://en.wikipedia.org/wiki/Modular%20group | Modular group | In mathematics, the modular group is the projective special linear group $\operatorname{PSL}(2,\mathbb{Z})$ of $2 \times 2$ matrices with integer coefficients and determinant 1, in which the matrices $A$ and $-A$ are identified. The modular group acts on the upper half of the complex plane by fractional linear transformations, and the name "modular group" comes from the relation to moduli spaces and not from modular arithmetic.
Definition
The modular group is the group of linear fractional transformations of the upper half of the complex plane, which have the form

$$z \mapsto \frac{az + b}{cz + d},$$

where $a$, $b$, $c$, $d$ are integers and $ad - bc = 1$. The group operation is function composition.

This group of transformations is isomorphic to the projective special linear group $\operatorname{PSL}(2,\mathbb{Z})$, which is the quotient of the 2-dimensional special linear group $\operatorname{SL}(2,\mathbb{Z})$ over the integers by its center $\{I, -I\}$. In other words, $\operatorname{PSL}(2,\mathbb{Z})$ consists of all matrices

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

where $a$, $b$, $c$, $d$ are integers, $ad - bc = 1$, and pairs of matrices $A$ and $-A$ are considered to be identical. The group operation is the usual multiplication of matrices.

Some authors define the modular group to be $\operatorname{PSL}(2,\mathbb{Z})$, and still others define the modular group to be the larger group $\operatorname{SL}(2,\mathbb{Z})$.

Some mathematical relations require the consideration of the group $\operatorname{GL}(2,\mathbb{Z})$ of matrices with determinant plus or minus one. ($\operatorname{SL}(2,\mathbb{Z})$ is a subgroup of this group.) Similarly, $\operatorname{PGL}(2,\mathbb{Z})$ is the quotient group $\operatorname{GL}(2,\mathbb{Z})/\{I, -I\}$. A $2 \times 2$ matrix with unit determinant is a symplectic matrix, and thus $\operatorname{SL}(2,\mathbb{Z}) = \operatorname{Sp}(2,\mathbb{Z})$, the symplectic group of $2 \times 2$ matrices.
Finding elements
To find an explicit matrix in $\operatorname{SL}(2,\mathbb{Z})$, begin with two coprime integers $a$, $c$, and solve the determinant equation $ad - bc = 1$. (Notice the determinant equation forces $a$, $c$ to be coprime, since otherwise there would be a factor $\ell$ with $\ell \mid a$, $\ell \mid c$, hence $ad - bc = 1$ would have no integer solutions.) For example, if $a = 7$, $c = 6$, then the determinant equation reads $7d - 6b = 1$; taking $d = 1$ and $b = 1$ gives $7 - 6 = 1$, hence $\begin{pmatrix} 7 & 1 \\ 6 & 1 \end{pmatrix}$ is such a matrix. Then, using the projection, these matrices define elements in $\operatorname{PSL}(2,\mathbb{Z})$.
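A minimal sketch of this recipe (Python chosen for illustration; the helper name is hypothetical): the extended Euclidean algorithm produces Bézout coefficients $x$, $y$ with $ax + cy = 1$, and setting $d = x$, $b = -y$ yields $ad - bc = 1$.

```python
from math import gcd

def sl2z_from_coprime(a: int, c: int) -> list[list[int]]:
    """Build a matrix [[a, b], [c, d]] in SL(2, Z) from coprime a and c."""
    if gcd(a, c) != 1:
        raise ValueError("a and c must be coprime")
    # Extended Euclid: find x, y with a*x + c*y = 1, then set d = x and
    # b = -y, so that a*d - b*c = a*x + c*y = 1.
    old_r, r = a, c
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    d, b = old_x, -old_y
    return [[a, b], [c, d]]

m = sl2z_from_coprime(7, 6)
assert m[0][0] * m[1][1] - m[0][1] * m[1][0] == 1
print(m)  # [[7, 1], [6, 1]] for this choice of Bezout coefficients
```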
Number-theoretic properties
The unit determinant of

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

implies that the fractions $\tfrac{a}{b}$, $\tfrac{a}{c}$, $\tfrac{c}{d}$, $\tfrac{b}{d}$ are all irreducible, that is, having no common factors (provided the denominators are non-zero, of course). More generally, if $\tfrac{p}{q}$ is an irreducible fraction, then

$$\frac{ap + bq}{cp + dq}$$

is also irreducible (again, provided the denominator be non-zero). Any pair of irreducible fractions can be connected in this way; that is, for any pair $\tfrac{p}{q}$ and $\tfrac{r}{s}$ of irreducible fractions, there exist elements

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}(2,\mathbb{Z})$$

such that

$$r = ap + bq \quad \text{and} \quad s = cp + dq.$$
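This invariance is easy to spot-check numerically. A small illustrative sketch (the matrix and fractions are arbitrary choices, not from the source):

```python
from math import gcd

# An SL(2, Z) element acts on a fraction p/q by p/q -> (a*p + b*q)/(c*p + d*q).
a, b, c, d = 2, 1, 5, 3           # det = 2*3 - 1*5 = 1
for p, q in [(1, 2), (3, 7), (22, 9)]:
    assert gcd(p, q) == 1         # start from an irreducible fraction
    r, s = a * p + b * q, c * p + d * q
    assert gcd(r, s) == 1         # the image is again irreducible
```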
Elements of the modular group provide a symmetry on the two-dimensional lattice. Let $\omega_1$ and $\omega_2$ be two complex numbers whose ratio is not real. Then the set of points

$$\Lambda(\omega_1, \omega_2) = \{ m\omega_1 + n\omega_2 : m, n \in \mathbb{Z} \}$$

is a lattice of parallelograms on the plane. A different pair of vectors $\alpha_1$ and $\alpha_2$ will generate exactly the same lattice if and only if

$$\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} \omega_1 \\ \omega_2 \end{pmatrix}$$

for some matrix in $\operatorname{GL}(2,\mathbb{Z})$. It is for this reason that doubly periodic functions, such as elliptic functions, possess a modular group symmetry.
The action of the modular group on the rational numbers can most easily be understood by envisioning a square grid, with grid point $(p, q)$ corresponding to the fraction $\tfrac{p}{q}$ (see Euclid's orchard). An irreducible fraction is one that is visible from the origin; the action of the modular group on a fraction never takes a visible (irreducible) fraction to a hidden (reducible) one, and vice versa.
Note that any member of the modular group maps the projectively extended real line one-to-one to itself, and furthermore bijectively maps the projectively extended rational line (the rationals with infinity) to itself, the irrationals to the irrationals, the transcendental numbers to the transcendental numbers, the non-real numbers to the non-real numbers, the upper half-plane to the upper half-plane, et cetera.
If $p_{k-1}/q_{k-1}$ and $p_k/q_k$ are two successive convergents of a continued fraction, then the matrix

$$\begin{pmatrix} p_{k-1} & p_k \\ q_{k-1} & q_k \end{pmatrix}$$

belongs to $\operatorname{GL}(2,\mathbb{Z})$. In particular, if $bc - ad = 1$ for positive integers $a$, $b$, $c$, $d$ with $a < b$ and $c < d$, then $\tfrac{a}{b}$ and $\tfrac{c}{d}$ will be neighbours in the Farey sequence of order $\max(b, d)$. Important special cases of continued fraction convergents include the Fibonacci numbers and solutions to Pell's equation. In both cases, the numbers can be arranged to form a semigroup subset of the modular group.
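The determinant identity for consecutive convergents follows from the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$, and can be checked in a few lines (a sketch with a hypothetical function name; the golden-ratio expansion $[1; 1, 1, \ldots]$ produces the Fibonacci numbers mentioned above):

```python
def convergents(cf):
    """Yield successive convergents (p, q) of a continued fraction [a0; a1, ...]."""
    p_prev, p = 1, cf[0]
    q_prev, q = 0, 1
    yield p, q
    for a in cf[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield p, q

# Consecutive convergent pairs give matrices of determinant +/-1.
cs = list(convergents([1] * 8))   # golden ratio: convergents are Fibonacci ratios
for (p1, q1), (p2, q2) in zip(cs, cs[1:]):
    assert p1 * q2 - p2 * q1 in (-1, 1)
```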
Group-theoretic properties
Presentation
The modular group can be shown to be generated by the two transformations

$$S : z \mapsto -\frac{1}{z}, \qquad T : z \mapsto z + 1,$$

so that every element in the modular group can be represented (in a non-unique way) by the composition of powers of $S$ and $T$. Geometrically, $S$ represents inversion in the unit circle followed by reflection with respect to the imaginary axis, while $T$ represents a unit translation to the right.

The generators $S$ and $T$ obey the relations $S^2 = 1$ and $(ST)^3 = 1$. It can be shown that these are a complete set of relations, so the modular group has the presentation:

$$\Gamma \cong \langle S, T \mid S^2 = 1,\ (ST)^3 = 1 \rangle.$$

This presentation describes the modular group as the rotational triangle group $D(2, 3, \infty)$ (infinity as there is no relation on $T$), and it thus maps onto all triangle groups $(2, 3, n)$ by adding the relation $T^n = 1$, which occurs for instance in the congruence subgroup $\Gamma(n)$.

Using the generators $S$ and $ST$ instead of $S$ and $T$, this shows that the modular group is isomorphic to the free product of the cyclic groups $C_2$ and $C_3$:

$$\Gamma \cong C_2 * C_3.$$
Braid group
The braid group $B_3$ is the universal central extension of the modular group, with these sitting as lattices inside the (topological) universal covering group $\overline{\operatorname{SL}(2,\mathbb{R})} \to \operatorname{PSL}(2,\mathbb{R})$. Further, the modular group has a trivial center, and thus the modular group is isomorphic to the quotient group of $B_3$ modulo its center; equivalently, to the group of inner automorphisms of $B_3$.
The braid group $B_3$ in turn is isomorphic to the knot group of the trefoil knot.
Quotients
The quotients by congruence subgroups are of significant interest.
Other important quotients are the $(2, 3, n)$ triangle groups, which correspond geometrically to descending to a cylinder, quotienting the $x$ coordinate modulo $n$, as $T^n = (z \mapsto z + n)$. $(2, 3, 5)$ is the group of icosahedral symmetry, and the $(2, 3, 7)$ triangle group (and associated tiling) is the cover for all Hurwitz surfaces.
Presenting as a matrix group
The group $\operatorname{SL}(2,\mathbb{Z})$ can be generated by the two matrices

$$S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$

since

$$S^2 = -I, \qquad (ST)^3 = -I.$$

The projection $\operatorname{SL}(2,\mathbb{Z}) \to \operatorname{PSL}(2,\mathbb{Z})$ turns these matrices into generators of $\operatorname{PSL}(2,\mathbb{Z})$, with relations similar to the group presentation.
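These relations are mechanical to verify; a short self-contained check (plain Python, no libraries assumed):

```python
# Verify S^2 = -I and (ST)^3 = -I for the matrix generators, so that both
# relations become the identity after projecting to PSL(2, Z).

def mul(m, n):
    """2x2 integer matrix product."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

S = [[0, -1], [1, 0]]
T = [[1, 1], [0, 1]]
NEG_I = [[-1, 0], [0, -1]]

assert mul(S, S) == NEG_I
ST = mul(S, T)
assert mul(mul(ST, ST), ST) == NEG_I
```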
Relationship to hyperbolic geometry
The modular group is important because it forms a subgroup of the group of isometries of the hyperbolic plane. If we consider the upper half-plane model $\mathbb{H}$ of hyperbolic plane geometry, then the group of all orientation-preserving isometries of $\mathbb{H}$ consists of all Möbius transformations of the form

$$z \mapsto \frac{az + b}{cz + d},$$

where $a$, $b$, $c$, $d$ are real numbers with $ad - bc > 0$. In terms of projective coordinates, the group $\operatorname{PSL}(2,\mathbb{R})$ acts on the upper half-plane $\mathbb{H}$ by projectivity:

$$[z, 1] \mapsto [az + b, cz + d].$$

This action is faithful. Since $\operatorname{PSL}(2,\mathbb{Z})$ is a subgroup of $\operatorname{PSL}(2,\mathbb{R})$, the modular group is a subgroup of the group of orientation-preserving isometries of $\mathbb{H}$.
Tessellation of the hyperbolic plane
The modular group acts on $\mathbb{H}$ as a discrete subgroup of $\operatorname{PSL}(2,\mathbb{R})$, that is, for each $z$ in $\mathbb{H}$ we can find a neighbourhood of $z$ which does not contain any other element of the orbit of $z$. This also means that we can construct fundamental domains, which (roughly) contain exactly one representative from the orbit of every $z$ in $\mathbb{H}$. (Care is needed on the boundary of the domain.)

There are many ways of constructing a fundamental domain, but a common choice is the region

$$R = \left\{ z \in \mathbb{H} : |z| > 1,\ |\operatorname{Re}(z)| < \tfrac{1}{2} \right\},$$

bounded by the vertical lines $\operatorname{Re}(z) = \tfrac{1}{2}$ and $\operatorname{Re}(z) = -\tfrac{1}{2}$, and the circle $|z| = 1$. This region is a hyperbolic triangle. It has vertices at $\tfrac{1 + i\sqrt{3}}{2}$ and $\tfrac{-1 + i\sqrt{3}}{2}$, where the angle between its sides is $\tfrac{\pi}{3}$, and a third vertex at infinity, where the angle between its sides is 0.
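Any point of the upper half-plane can be moved into (the closure of) this region using only the generators $T^{\pm 1}$ and $S$; the following is a minimal sketch of the standard reduction, ignoring the boundary identifications noted above (the function name is illustrative):

```python
def reduce_to_fundamental_domain(z: complex, max_steps: int = 1000) -> complex:
    """Move z (with Im z > 0) into the region |Re(z)| <= 1/2, |z| >= 1
    using T: z -> z + 1 and S: z -> -1/z."""
    assert z.imag > 0
    for _ in range(max_steps):
        z = complex(z.real - round(z.real), z.imag)   # apply a power of T
        if abs(z) >= 1:
            return z
        z = -1 / z                                    # apply S
    raise RuntimeError("did not converge")

print(reduce_to_fundamental_domain(0.3 + 0.01j))
```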
There is a strong connection between the modular group and elliptic curves. Each point $z$ in the upper half-plane gives an elliptic curve, namely the quotient of $\mathbb{C}$ by the lattice generated by $1$ and $z$.
Two points in the upper half-plane give isomorphic elliptic curves if and only if they are related by a transformation in the modular group. Thus, the quotient of the upper half-plane by the action of the modular group is the so-called moduli space of elliptic curves: a space whose points describe isomorphism classes of elliptic curves. This is often visualized as the fundamental domain described above, with some points on its boundary identified.
The modular group and its subgroups are also a source of interesting tilings of the hyperbolic plane. By transforming this fundamental domain in turn by each of the elements of the modular group, a regular tessellation of the hyperbolic plane by congruent hyperbolic triangles, known as the V6.6.∞ infinite-order triangular tiling, is created. Note that each such triangle has one vertex either at infinity or on the real axis $\operatorname{Im}(z) = 0$.
This tiling can be extended to the Poincaré disk, where every hyperbolic triangle has one vertex on the boundary of the disk. The tiling of the Poincaré disk is given in a natural way by the $J$-invariant, which is invariant under the modular group, and attains every complex number once in each triangle of these regions.
This tessellation can be refined slightly, dividing each region into two halves (conventionally colored black and white), by adding an orientation-reversing map; the colors then correspond to orientation of the domain. Adding in $(x, y) \mapsto (-x, y)$ and taking the right half of the region $R$ (where $\operatorname{Re}(z) \geq 0$) yields the usual tessellation. This tessellation first appears in print in Klein (1878/79), where it is credited to Richard Dedekind, in reference to Dedekind (1877).
The map of groups $(2, 3, \infty) \to (2, 3, n)$ (from modular group to triangle group) can be visualized in terms of this tiling, yielding a tiling on the modular curve.
Congruence subgroups
Important subgroups of the modular group $\Gamma$, called congruence subgroups, are given by imposing congruence relations on the associated matrices.
There is a natural homomorphism $\operatorname{SL}(2,\mathbb{Z}) \to \operatorname{SL}(2,\mathbb{Z}/N\mathbb{Z})$ given by reducing the entries modulo $N$. This induces a homomorphism on the modular group $\operatorname{PSL}(2,\mathbb{Z}) \to \operatorname{PSL}(2,\mathbb{Z}/N\mathbb{Z})$. The kernel of this homomorphism is called the principal congruence subgroup of level $N$, denoted $\Gamma(N)$. We have the following short exact sequence:

$$1 \to \Gamma(N) \to \operatorname{PSL}(2,\mathbb{Z}) \to \operatorname{PSL}(2,\mathbb{Z}/N\mathbb{Z}) \to 1.$$

Being the kernel of a homomorphism, $\Gamma(N)$ is a normal subgroup of the modular group $\Gamma$. The group $\Gamma(N)$ is given as the set of all modular transformations

$$z \mapsto \frac{az + b}{cz + d}$$

for which $a \equiv d \equiv \pm 1 \pmod{N}$ and $b \equiv c \equiv 0 \pmod{N}$.
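A membership test follows directly from these congruences. The sketch below (hypothetical helper, written against matrix representatives) allows for the sign ambiguity of $\operatorname{PSL}(2,\mathbb{Z})$, where $A$ and $-A$ represent the same element:

```python
def in_gamma(m, n: int) -> bool:
    """Test whether an SL(2, Z) matrix [[a, b], [c, d]] represents an
    element of the principal congruence subgroup Gamma(n)."""
    (a, b), (c, d) = m
    assert a * d - b * c == 1
    def congruent(sign):
        return (sign * a % n == 1 % n and sign * d % n == 1 % n
                and sign * b % n == 0 and sign * c % n == 0)
    return congruent(1) or congruent(-1)

print(in_gamma([[1, 2], [0, 1]], 2))   # True:  T^2 lies in Gamma(2)
print(in_gamma([[1, 1], [0, 1]], 2))   # False: T does not
```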
It is easy to show that the trace of a matrix representing an element of $\Gamma(N)$ cannot be −1, 0, or 1, so these subgroups are torsion-free groups. (There are other torsion-free subgroups.)
The principal congruence subgroup of level 2, $\Gamma(2)$, is also called the modular group $\Lambda$. Since $\operatorname{PSL}(2,\mathbb{Z}/2\mathbb{Z})$ is isomorphic to $S_3$, $\Lambda$ is a subgroup of index 6. The group $\Lambda$ consists of all modular transformations for which $a$ and $d$ are odd and $b$ and $c$ are even.
Another important family of congruence subgroups are the modular groups $\Gamma_0(N)$, defined as the set of all modular transformations for which $c \equiv 0 \pmod{N}$, or equivalently, as the subgroup whose matrices become upper triangular upon reduction modulo $N$. Note that $\Gamma(N)$ is a subgroup of $\Gamma_0(N)$. The modular curves associated with these groups are an aspect of monstrous moonshine – for a prime number $p$, the modular curve of the normalizer of $\Gamma_0(p)$ is genus zero if and only if $p$ divides the order of the monster group, or equivalently, if $p$ is a supersingular prime.
Dyadic monoid
One important subset of the modular group is the dyadic monoid, which is the monoid of all strings of the form $ST^aST^bST^c\cdots$ for positive integers $a, b, c, \ldots$. This monoid occurs naturally in the study of fractal curves, and describes the self-similarity symmetries of the Cantor function, Minkowski's question mark function, and the Koch snowflake, each being a special case of the general de Rham curve. The monoid also has higher-dimensional linear representations; for example, its three-dimensional representation can be understood to describe the self-symmetry of the blancmange curve.
Maps of the torus
The group $\operatorname{GL}(2,\mathbb{Z})$ is the group of linear maps preserving the standard lattice $\mathbb{Z}^2$, and $\operatorname{SL}(2,\mathbb{Z})$ is the group of orientation-preserving maps preserving this lattice; they thus descend to self-homeomorphisms of the torus (SL mapping to orientation-preserving maps), and in fact map isomorphically to the (extended) mapping class group of the torus, meaning that every self-homeomorphism of the torus is isotopic to a map of this form. The algebraic properties of a matrix as an element of $\operatorname{SL}(2,\mathbb{Z})$ correspond to the dynamics of the induced map of the torus.
Hecke groups
The modular group can be generalized to the Hecke groups, named for Erich Hecke, and defined as follows.
The Hecke group $H_q$ with $q \geq 3$, is the discrete group generated by

$$z \mapsto -\frac{1}{z}, \qquad z \mapsto z + \lambda_q,$$

where $\lambda_q = 2 \cos\left(\tfrac{\pi}{q}\right)$. For small values of $q$, one has:

$$\lambda_3 = 1, \quad \lambda_4 = \sqrt{2}, \quad \lambda_5 = \tfrac{1 + \sqrt{5}}{2}, \quad \lambda_6 = \sqrt{3}.$$

The modular group $\Gamma$ is isomorphic to $H_3$ and they share properties and applications – for example, just as one has the free product of cyclic groups

$$\Gamma \cong C_2 * C_3,$$

more generally one has

$$H_q \cong C_2 * C_q,$$

which corresponds to the triangle group $(2, q, \infty)$. There is similarly a notion of principal congruence subgroups associated to principal ideals in $\mathbb{Z}[\lambda_q]$.
History
The modular group and its subgroups were first studied in detail by Richard Dedekind and by Felix Klein as part of his Erlangen programme in the 1870s. However, the closely related elliptic functions were studied by Joseph Louis Lagrange in 1785, and further results on elliptic functions were published by Carl Gustav Jakob Jacobi and Niels Henrik Abel in 1827.
| Mathematics | Abstract algebra | null |
325030 | https://en.wikipedia.org/wiki/Historical%20geology | Historical geology | Historical geology or palaeogeology is a discipline that uses the principles and methods of geology to reconstruct the geological history of Earth. Historical geology examines the vastness of geologic time, measured in billions of years, and investigates changes in the Earth, gradual and sudden, over this deep time. It focuses on geological processes, such as plate tectonics, that have changed the Earth's surface and subsurface over time and the use of methods including stratigraphy, structural geology, paleontology, and sedimentology to tell the sequence of these events. It also focuses on the evolution of life during different time periods in the geologic time scale.
Historical development
During the 17th century, Nicolas Steno was the first to observe and propose a number of basic principles of historical geology, including three key stratigraphic principles: the law of superposition, the principle of original horizontality, and the principle of lateral continuity.
18th-century geologist James Hutton contributed to an early understanding of the Earth's history by proposing the theory of uniformitarianism, which is now a basic principle in all branches of geology. Uniformitarianism describes an Earth formed by the same natural phenomena that are at work today, the product of slow and continuous geological changes. The theory can be summarized by the phrase "the present is the key to the past." Hutton also described the concept of deep time. The prevailing conceptualization of Earth history in 18th-century Europe, grounded in a literal interpretation of Christian scripture, was that of a young Earth shaped by catastrophic events. Hutton, however, depicted a very old Earth, shaped by slow, continuous change. Charles Lyell further developed the theory of uniformitarianism in the 19th century. Modern geologists have generally acknowledged that Earth's geological history is a product of both sudden, cataclysmic events (such as meteorite impacts and volcanic eruptions) and gradual processes (such as weathering, erosion, and deposition).
The discovery of radioactive decay in the late 19th century and the development of radiometric dating techniques in the 20th century provided a means of deriving absolute ages of events in geological history.
Use and importance
Geology is considered a historical science; accordingly, historical geology plays a prominent role in the field.
Historical geology covers much of the same subject matter as physical geology, the study of geological processes and the ways in which they shape the Earth's structure and composition. Historical geology extends physical geology into the past.
Economic geology, the search for and extraction of fuel and raw materials, is heavily dependent on an understanding of the geological history of an area. Environmental geology, which examines the impacts of natural hazards such as earthquakes and volcanism, must rely on a detailed knowledge of geological history.
Methods
Stratigraphy
Layers of rock, or strata, represent a geologic record of Earth's history. Stratigraphy is the study of strata: their order, position, and age.
Structural geology
Structural geology is concerned with rocks' deformational histories.
Paleontology
Fossils are organic traces of Earth's history. In a historical geology context, paleontological methods can be used to study fossils and their environments, including surrounding rocks, and place them within the geologic time scale.
Sedimentology
Sedimentology is the study of the formation, transport, deposition, and diagenesis of sediments. Sedimentary rocks, including limestone, sandstone, and shale, serve as a record of Earth's history: they contain fossils and are transformed by geological processes, such as weathering, erosion, and deposition, through deep time.
Relative dating
Historical geology makes use of relative dating in order to establish the sequence of geological events in relation to one another, without determining their specific numerical ages or ranges.
Absolute dating
Absolute dating allows geologists to determine a more precise chronology of geological events, based on numerical ages or ranges. Absolute dating includes the use of radiometric dating methods, such as radiocarbon dating, potassium–argon dating, and uranium–lead dating. Luminescence dating, dendrochronology, and amino acid dating are other methods of absolute dating.
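As a rough illustration of the arithmetic behind such methods (an idealized model that assumes a closed system and no initial daughter isotope; not a real laboratory calibration):

```python
import math

def radiometric_age(daughter_to_parent: float, half_life_years: float) -> float:
    """Age from the accumulated daughter/parent ratio D/P:
    t = ln(1 + D/P) / lambda, with lambda = ln(2) / half-life."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent) / decay_constant

# A ratio of 1.0 corresponds to exactly one half-life
# (the half-life shown is that of uranium-238).
print(radiometric_age(1.0, 4.47e9))  # ~4.47e9 years
```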
Plate tectonics
The theory of plate tectonics explains how the movement of lithospheric plates has structured the Earth throughout its geological history.
Weathering, erosion, and deposition
Weathering, erosion, and deposition are examples of gradual geological processes, taking place over large sections of the geologic time scale. In the rock cycle, rocks are continually broken down, transported, and deposited, cycling through three main rock types: sedimentary, metamorphic, and igneous.
Paleoclimatology
Paleoclimatology is the study of past climates recorded in geological time.
Brief geological history
| Physical sciences | Basics | Earth science |
325060 | https://en.wikipedia.org/wiki/Tidal%20power | Tidal power | Tidal power or tidal energy is harnessed by converting energy from tides into useful forms of power, mainly electricity, using various methods.
Although not yet widely used, tidal energy has the potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed and that economic and environmental costs may be brought down to competitive levels.
Historically, tide mills have been used both in Europe and on the Atlantic coast of North America. Incoming water was contained in large storage ponds, and as the tide went out, it turned waterwheels that used the mechanical power to mill grain. The earliest occurrences date from the Middle Ages, or even from Roman times. The process of using falling water and spinning turbines to create electricity was introduced in the U.S. and Europe in the 19th century.
Electricity generation from marine technologies increased an estimated 16% in 2018, and an estimated 13% in 2019. Policies promoting R&D are needed to achieve further cost reductions and large-scale development. The world's first large-scale tidal power plant was France's Rance Tidal Power Station, which became operational in 1966. It was the largest tidal power station in terms of output until Sihwa Lake Tidal Power Station opened in South Korea in August 2011. The Sihwa station uses sea wall defense barriers complete with 10 turbines generating 254 MW.
Principle
Tidal energy is taken from the Earth's oceanic tides. Tidal forces result from periodic variations in gravitational attraction exerted by celestial bodies. These forces create corresponding motions or currents in the world's oceans. This results in periodic changes in sea levels, varying as the Earth rotates. These changes are highly regular and predictable, due to the consistent pattern of the Earth's rotation and the Moon's orbit around the Earth. The magnitude and variations of this motion reflect the changing positions of the Moon and Sun relative to the Earth, the effects of Earth's rotation, and local geography of the seafloor and coastlines.
Tidal power is the only technology that draws on energy inherent in the orbital characteristics of the Earth–Moon system, and to a lesser extent in the Earth–Sun system. Other natural energies exploited by human technology originate directly or indirectly from the Sun, including fossil fuel, conventional hydroelectric, wind, biofuel, wave and solar energy. Nuclear energy makes use of Earth's mineral deposits of fissionable elements, while geothermal power utilizes the Earth's internal heat, which comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%).
A tidal generator converts the energy of tidal flows into electricity. Greater tidal variation and higher tidal current velocities can dramatically increase the potential of a site for tidal electricity generation. On the other hand, tidal energy has high reliability, excellent energy density, and high durability.
Because the Earth's tides are ultimately due to gravitational interaction with the Moon and Sun and the Earth's rotation, tidal power is practically inexhaustible, and is thus classified as a renewable energy resource. Movement of tides causes a loss of mechanical energy in the Earth-Moon system: this results from pumping of water through natural restrictions around coastlines and consequent viscous dissipation at the seabed and in turbulence. This loss of energy has caused the rotation of the Earth to slow in the 4.5 billion years since its formation. During the last 620 million years the period of rotation of the Earth (length of a day) has increased from 21.9 hours to 24 hours; in this period the Earth-Moon system has lost 17% of its rotational energy. While tidal power will take additional energy from the system, the effect is negligible and would not be noticeable in the foreseeable future.
Methods
Tidal power can be classified into four generating methods:
Tidal stream generator
Tidal stream generators make use of the kinetic energy of moving water to power turbines, in a similar way to wind turbines that use the wind to power turbines. Some tidal generators can be built into the structures of existing bridges or are entirely submersed, thus avoiding concerns over aesthetics or visual impact. Land constrictions such as straits or inlets can create high velocities at specific sites, which can be captured using turbines. These turbines can be horizontal, vertical, open, or ducted.
Tidal barrage
Tidal barrages use potential energy in the difference in height (or hydraulic head) between high and low tides. When using tidal barrages to generate power, the potential energy from a tide is seized through the strategic placement of specialized dams. When the sea level rises and the tide begins to come in, the temporary increase in tidal power is channeled into a large basin behind the dam, holding a large amount of potential energy. With the receding tide, this energy is then converted into mechanical energy as the water is released through large turbines that create electrical power through the use of generators. Barrages are essentially dams across the full width of a tidal estuary.
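As a rough first-order illustration (not a design calculation), the energy available from one emptying of a barrage basin is often estimated as $E = \tfrac{1}{2}\rho g A h^2$, since the stored water's centre of mass falls by about half the tidal range $h$. The numbers below are hypothetical:

```python
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def barrage_energy_per_tide(basin_area_m2: float, tidal_range_m: float) -> float:
    """First-order potential-energy estimate for one emptying of the basin:
    E = 1/2 * rho * g * A * h^2."""
    return 0.5 * RHO_SEAWATER * G * basin_area_m2 * tidal_range_m ** 2

# Illustrative only: a 10 km^2 basin with an 8 m tidal range.
e_joules = barrage_energy_per_tide(10e6, 8.0)
print(f"{e_joules / 3.6e9:.0f} MWh per tide")  # ~894 MWh
```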
Tidal lagoon
A new tidal energy design option is to construct circular retaining walls embedded with turbines that can capture the potential energy of tides. The created reservoirs are similar to those of tidal barrages, except that the location is artificial and does not contain a pre-existing ecosystem.
The lagoons can also be in double (or triple) format, without pumping or with pumping that will flatten out the power output. The pumping power could be provided by renewable energy that is surplus to grid demand, for example from wind turbines or solar photovoltaic arrays; surplus renewable energy, rather than being curtailed, could thus be used and stored for a later period of time. Geographically dispersed tidal lagoons with a time delay between peak production would also flatten out peak production, providing near-baseload production, though at a higher cost than some alternatives such as district heating renewable energy storage. The cancelled Tidal Lagoon Swansea Bay in Wales, United Kingdom, would have been the first tidal power station of this type once built.
Dynamic tidal power
Dynamic tidal power (or DTP) is a theoretical technology that would exploit an interaction between potential and kinetic energies in tidal flows. It proposes that very long dams (for example: 30–50 km length) be built from coasts straight out into the sea or ocean, without enclosing an area. Tidal phase differences are introduced across the dam, leading to a significant water-level differential in shallow coastal seas – featuring strong coast-parallel oscillating tidal currents such as found in the UK, China, and Korea.
US and Canadian studies in the 20th century
The first study of large scale tidal power plants was by the US Federal Power Commission in 1924. If built, power plants would have been located in the northern border area of the US state of Maine and the southeastern border area of the Canadian province of New Brunswick, with various dams, powerhouses, and ship locks enclosing the Bay of Fundy and Passamaquoddy Bay (note: see map in reference). Nothing came of the study, and it is unknown whether Canada had been approached about the study by the US Federal Power Commission.
In 1956, utility Nova Scotia Light and Power of Halifax commissioned a pair of studies into commercial tidal power development feasibility on the Nova Scotia side of the Bay of Fundy. The two studies, by Stone & Webster of Boston and by Montreal Engineering Company of Montreal, independently concluded that millions of horsepower (i.e. gigawatts) could be harnessed from Fundy but that development costs would be commercially prohibitive.
There was also a report by the international commission in April 1961, entitled "Investigation of the International Passamaquoddy Tidal Power Project", produced by both the US and Canadian federal governments. According to its benefit-to-cost ratios, the project was beneficial to the US but not to Canada.
A study was commissioned by the Canadian, Nova Scotian, and New Brunswick governments (Reassessment of Fundy Tidal Power) to determine the potential for tidal barrages at Chignecto Bay and Minas Basin – at the end of the Fundy Bay estuary. There were three sites determined to be financially feasible: Shepody Bay (1550 MW), Cumberland Basin (1085 MW), and Cobequid Bay (3800 MW). These were never built despite their apparent feasibility in 1977.
US studies in the 21st century
The Snohomish PUD, a public utility district located primarily in Snohomish County, Washington State, began a tidal energy project in 2007. In April 2009 the PUD selected OpenHydro, a company based in Ireland, to develop turbines and equipment for eventual installation. The project as initially designed was to place generation equipment in areas of high tidal flow and operate that equipment for four to five years. After the trial period the equipment would be removed. The project was initially budgeted at a total cost of $10 million, with half of that funding provided by the PUD out of utility reserve funds, and half from grants, primarily from the US federal government. The PUD paid for part of this project from reserves and received a $900,000 grant in 2009 and a $3.5 million grant in 2010 in addition to using reserves to pay an estimated $4 million of costs. In 2010 the budget estimate was increased to $20 million, half to be paid by the utility, half by the federal government. The utility was unable to control costs on this project, and by October 2014, the costs had ballooned to an estimated $38 million and were projected to continue to increase. The PUD proposed that the federal government provide an additional $10 million towards this increased cost, citing a gentlemen's agreement. When the federal government refused to pay this, the PUD cancelled the project after spending nearly $10 million from reserves and grants. The PUD abandoned all tidal energy exploration after this project was cancelled and does not own or operate any tidal energy sources.
Rance tidal power plant in France
In 1966, Électricité de France opened the Rance Tidal Power Station, located on the estuary of the Rance River in Brittany. It was the world's first tidal power station. The plant was for 45 years the largest tidal power station in the world by installed capacity: Its 24 turbines reach peak output at 240 megawatts (MW) and average 57 MW, a capacity factor of approximately 24%.
Tidal power development in the UK
The world's first marine energy test facility was established in 2003 to start the development of the wave and tidal energy industry in the UK. Based in Orkney, Scotland, the European Marine Energy Centre (EMEC) has supported the deployment of more wave and tidal energy devices than at any other single site in the world. EMEC provides a variety of test sites in real sea conditions. Its grid-connected tidal test site is located at the Fall of Warness, off the island of Eday, in a narrow channel which concentrates the tide as it flows between the Atlantic Ocean and North Sea. This area has a very strong tidal current, which can reach high speeds during spring tides. Tidal energy developers that have tested at the site include: Alstom (formerly Tidal Generation Ltd); ANDRITZ HYDRO Hammerfest; Atlantis Resources Corporation; Nautricity; OpenHydro; Scotrenewables Tidal Power; Voith. The resource could be 4 TJ per year. Elsewhere in the UK, annual energy of 50 TWh can be extracted if 25 GW of capacity is installed with pivotable blades.
Current and future tidal power schemes
The Rance tidal power plant built over a period of six years from 1960 to 1966 at La Rance, France. It has 240 MW installed capacity.
254 MW Sihwa Lake Tidal Power Plant in South Korea is the largest tidal power installation in the world. Construction was completed in 2011.
The Jiangxia Tidal Power Station, south of Hangzhou in China has been operational since 1985, with current installed capacity of 3.2 MW. More tidal power is planned near the mouth of the Yalu River.
The first in-stream tidal current generator in North America (Race Rocks Tidal Power Demonstration Project) was installed at Race Rocks on southern Vancouver Island in September 2006. The Race Rocks project was shut down after operating for five years (2006–2011) because high operating costs produced electricity at a rate that was not economically feasible. The next phase in the development of this tidal current generator will be in Nova Scotia (Bay of Fundy).
A small project was built by the Soviet Union at Kislaya Guba on the Barents Sea. It has 0.4 MW installed capacity. In 2006 it was upgraded with a 1.2 MW experimental advanced orthogonal turbine.
Jindo Uldolmok Tidal Power Plant in South Korea is a tidal stream generation scheme planned to be expanded progressively to 90 MW of capacity by 2013. The first 1 MW was installed in May 2009.
A 1.2 MW SeaGen system became operational in late 2008 on Strangford Lough in Northern Ireland. It was decommissioned and removed in 2016.
The contract for an 812 MW tidal barrage near Ganghwa Island (South Korea) north-west of Incheon has been signed by Daewoo. Completion was planned for 2015, but the project was retracted in 2013.
A 1,320 MW barrage was proposed by the South Korean government in 2009, to be built around islands west of Incheon. The project has been halted since 2012 due to environmental concerns.
The Scottish Government has approved plans for a 10 MW Òran na Mara array of tidal stream generators near Islay, Scotland, costing 40 million pounds and consisting of 10 turbines – enough to power over 5,000 homes. The first turbine was expected to be in operation by 2013, and the project was announced once again in 2021, but as of 2023 no turbines existed.
The Indian state of Gujarat was planning to host South Asia's first commercial-scale tidal power station. The company Atlantis Resources planned to install a 50 MW tidal farm in the Gulf of Kutch on India's west coast, with construction planned to start 2012, later withdrawn due to high costs.
Ocean Renewable Power Corporation was the first company to deliver tidal power to the US grid in September 2012 when its pilot TidGen system was successfully deployed in Cobscook Bay, near Eastport.
In New York City, Verdant Power successfully deployed and operated three tidal turbines in the East River near Roosevelt Island, on a single triangular base system called a TriFrame. The Roosevelt Island Tidal Energy (RITE) Project delivered over 300 MWh of electricity to the local grid, an American marine energy record. The system's performance was independently confirmed by Scotland's European Marine Energy Centre (EMEC) under the new International Electrotechnical Commission (IEC) international standards. This is the first instance of a third-party verification of a tidal energy converter to an international standard.
The largest tidal energy project, entitled MeyGen (398 MW), is currently under construction in the Pentland Firth in northern Scotland, with 6 MW operational since 2018.
Construction of a 320 MW tidal lagoon power plant outside the city of Swansea in the UK was granted planning permission in June 2015, however it was later rejected by the UK government in 2018. If built it would have been the world's first tidal power plant based on a constructed lagoon.
Mersey Tidal Power, a proposed tidal range barrage within the channel of the Mersey Estuary with a capacity of up to 1 GW is undergoing local consultation by the Liverpool City Region Combined Authority.
Up to 240 MW of tidal stream generation is proposed at Morlais, Anglesey, from multiple developers, with the first turbines expected to be installed in 2026. A total of 38 MW of capacity has been awarded Contracts for Difference to supply power to the GB grid.
Issues and challenges
Environmental concerns
Tidal power can affect marine life. The turbines' rotating blades can accidentally kill swimming sea life. Projects such as the one in Strangford include a safety mechanism that turns off the turbine when marine animals approach. However, this feature causes a major loss in energy because of the amount of marine life that passes through the turbines. Some fish may avoid the area if threatened by a constantly rotating or noisy object. Marine life is a huge factor when siting tidal power energy generators, and precautions are taken to ensure that as few marine animals as possible are affected by it. In terms of global warming potential (i.e. carbon footprint), the impact of tidal power generation technologies ranges between 15 and 37 gCO2-eq/kWhe, with a median value of 23.8 gCO2-eq/kWhe. This is in line with the impact of other renewables like wind and solar power, and significantly better than fossil-based technologies. The Tethys database provides access to scientific literature and general information on the potential environmental effects of tidal energy.
Tidal turbines
The main environmental concern with tidal energy is associated with blade strike and entanglement of marine organisms as high-speed water increases the risk of organisms being pushed near or through these devices. As with all offshore renewable energies, there is also a concern about how the creation of electromagnetic fields and acoustic outputs may affect marine organisms. Because these devices are in the water, the acoustic output can be greater than those created with offshore wind energy. Depending on the frequency and amplitude of sound generated by the tidal energy devices, this acoustic output can have varying effects on marine mammals (particularly those who echolocate to communicate and navigate in the marine environment, such as dolphins and whales). Tidal energy removal can also cause environmental concerns such as degrading far-field water quality and disrupting sediment processes. Depending on the size of the project, these effects can range from small traces of sediment building up near the tidal device to severely affecting nearshore ecosystems and processes.
Tidal barrage
Installing a barrage may change the shoreline within the bay or estuary, affecting a large ecosystem that depends on tidal flats. Inhibiting the flow of water in and out of the bay, there may also be less flushing of the bay or estuary, causing additional turbidity (suspended solids) and less saltwater, which may result in the death of fish that act as a vital food source to birds and mammals. Migrating fish may also be unable to access breeding streams, and may attempt to pass through the turbines. The same acoustic concerns apply to tidal barrages. Decreasing shipping accessibility can become a socio-economic issue, though locks can be added to allow slow passage. However, the barrage may improve the local economy by increasing land access as a bridge. Calmer waters may also allow better recreation in the bay or estuary. In August 2004, a humpback whale swam through the open sluice gate of the Annapolis Royal Generating Station at slack tide, ending up trapped for several days before eventually finding its way out to the Annapolis Basin.
Tidal lagoon
Environmentally, the main concerns are blade strike on fish attempting to enter the lagoon, the acoustic output from turbines, and changes in sedimentation processes. However, all these effects are localized and do not affect the entire estuary or bay.
Corrosion
Saltwater causes corrosion in metal parts. It can be difficult to maintain tidal stream generators due to their size and depth in the water. The use of corrosion-resistant materials such as stainless steels, high-nickel alloys, copper-nickel alloys, nickel-copper alloys and titanium can greatly reduce, or eliminate, corrosion damage. Composite materials, which do not corrode, are also being evaluated as a way to provide lightweight, durable structures for tidal power.
Mechanical fluids, such as lubricants, can leak out, which may be harmful to the marine life nearby. Proper maintenance can minimize the number of harmful chemicals that may enter the environment.
Fouling
Placing any structure in an area of high tidal currents and high biological productivity in the ocean ensures that the structure becomes an ideal substrate for the growth of marine organisms.
Cost
Tidal energy has a high initial cost, which may be one of the reasons why it is not a popular source of renewable energy, although research has shown that the public is willing to pay for and support research and development of tidal energy devices. The methods of generating electricity from tidal energy are relatively new technology; tidal energy is still early in the research process, and it may be possible to reduce costs in the future. The cost-effectiveness varies according to the site of the tidal generators. One indication of cost-effectiveness is the Gibrat ratio, which is the length of the barrage in metres divided by the annual energy production in kilowatt hours.
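A trivial worked example of the ratio (the figures are rough, loosely based on published Rance-plant numbers, and should not be read as authoritative data):

```python
def gibrat_ratio(barrage_length_m: float, annual_energy_kwh: float) -> float:
    """Gibrat ratio: barrage length in metres per kWh of annual production;
    lower values suggest a more cost-effective site."""
    return barrage_length_m / annual_energy_kwh

# Roughly Rance-like figures: ~750 m barrage, ~500 GWh/year.
print(gibrat_ratio(750, 500e6))   # 1.5e-06
```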
As tidal energy is reliable, it can reasonably be predicted how long it will take to pay off the high up-front cost of these generators. Due to the success of a greatly simplified design, the orthogonal turbine offers considerable cost savings. As a result, the production period of each generating unit is reduced, lower metal consumption is needed and technical efficiency is greater.
A possible risk is rising sea levels due to climate change, which may alter the characteristics of the local tides reducing future power generation.
Structural health monitoring
The high load factors resulting from the fact that water is around 800 times denser than air, and the predictable and reliable nature of tides compared with the wind, make tidal energy particularly attractive for electric power generation. Condition monitoring is the key for exploiting it cost-efficiently.
| Technology | Power generation | null |
325077 | https://en.wikipedia.org/wiki/Domain%20theory | Domain theory | Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology.
Motivation and intuition
The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f.
To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled by considering, for each domain of computation (e.g. the natural numbers), an additional element that represents an undefined output, i.e. the "result" of a computation that never ends. In addition, the domain of computation is equipped with an ordering relation, in which the "undefined result" is the least element.
The important step to finding a model for the lambda calculus is to consider only those functions (on such a partially ordered set) that are guaranteed to have least fixed points. The set of these functions, together with an appropriate ordering, is again a "domain" in the sense of the theory. But the restriction to a subset of all available functions has another great benefit: it is possible to obtain domains that contain their own function spaces, i.e. one gets functions that can be applied to themselves.
Beside these desirable properties, domain theory also allows for an appealing intuitive interpretation. As mentioned above, the domains of computation are always partially ordered. This ordering represents a hierarchy of information or knowledge. The higher an element is within the order, the more specific it is and the more information it contains. Lower elements represent incomplete knowledge or intermediate results.
Computation then is modeled by applying monotone functions repeatedly on elements of the domain in order to refine a result. Reaching a fixed point is equivalent to finishing a calculation. Domains provide a superior setting for these ideas since fixed points of monotone functions can be guaranteed to exist and, under additional restrictions, can be approximated from below.
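A minimal sketch of this picture (a toy Python encoding, not standard library machinery): starting from the everywhere-undefined function, the least element of the function domain, and repeatedly applying a monotone functional approximates its least fixed point from below.

```python
BOTTOM = None  # the least element: "no information yet"

def F(f):
    """A monotone functional on partial functions: one unfolding of factorial."""
    def g(n):
        if n == 0:
            return 1
        prev = f(n - 1)
        return BOTTOM if prev is BOTTOM else n * prev
    return g

# Kleene iteration: bottom, F(bottom), F(F(bottom)), ... climbs toward the
# least fixed point; the k-th iterate is defined on inputs 0..k-1.
approx = lambda n: BOTTOM          # the everywhere-undefined function
for _ in range(6):
    approx = F(approx)

print([approx(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```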
A guide to the formal definitions
In this section, the central concepts and definitions of domain theory will be introduced. The above intuition of domains being information orderings will be emphasized to motivate the mathematical formalization of the theory. The precise formal definitions are to be found in the dedicated articles for each concept. A list of general order-theoretic definitions, which include domain theoretic notions as well can be found in the order theory glossary. The most important concepts of domain theory will nonetheless be introduced below.
Directed sets as converging specifications
As mentioned before, domain theory deals with partially ordered sets to model a domain of computation. The goal is to interpret the elements of such an order as pieces of information or (partial) results of a computation, where elements that are higher in the order extend the information of the elements below them in a consistent way. From this simple intuition it is already clear that domains often do not have a greatest element, since this would mean that there is an element that contains the information of all other elements—a rather uninteresting situation.
A concept that plays an important role in the theory is that of a directed subset of a domain; a directed subset is a non-empty subset of the order in which any two elements have an upper bound that is an element of this subset. In view of our intuition about domains, this means that any two pieces of information within the directed subset are consistently extended by some other element in the subset. Hence we can view directed subsets as consistent specifications, i.e. as sets of partial results in which no two elements are contradictory. This interpretation can be compared with the notion of a convergent sequence in analysis, where each element is more specific than the preceding one. Indeed, in the theory of metric spaces, sequences play a role that is in many aspects analogous to the role of directed sets in domain theory.
Now, as in the case of sequences, we are interested in the limit of a directed set. According to what was said above, this would be an element that is the most general piece of information that extends the information of all elements of the directed set, i.e. the unique element that contains exactly the information that was present in the directed set, and nothing more. In the formalization of order theory, this is just the least upper bound of the directed set. As in the case of the limit of a sequence, the least upper bound of a directed set does not always exist.
Naturally, one has a special interest in those domains of computations in which all consistent specifications converge, i.e. in orders in which all directed sets have a least upper bound. This property defines the class of directed-complete partial orders, or dcpo for short. Indeed, most considerations of domain theory only consider orders that are at least directed complete.
From the underlying idea of partially specified results as representing incomplete knowledge, one derives another desirable property: the existence of a least element. Such an element models the state of no information, the place where most computations start. It also can be regarded as the output of a computation that does not return any result at all.
Computations and domains
Now that we have some basic formal descriptions of what a domain of computation should be, we can turn to the computations themselves. Clearly, these have to be functions, taking inputs from some computational domain and returning outputs in some (possibly different) domain. However, one would also expect that the output of a function will contain more information when the information content of the input is increased. Formally, this means that we want a function to be monotonic.
When dealing with dcpos, one might also want computations to be compatible with the formation of limits of a directed set. Formally, this means that, for some function f, the image f(D) of a directed set D (i.e. the set of the images of each element of D) is again directed and has as a least upper bound the image of the least upper bound of D. One could also say that f preserves directed suprema. Also note that, by considering directed sets of two elements, such a function also has to be monotonic. These properties give rise to the notion of a Scott-continuous function. Since this usually causes no ambiguity, one may also speak simply of continuous functions.
Approximation and finiteness
Domain theory is a purely qualitative approach to modeling the structure of information states. One can say that something contains more information, but the amount of additional information is not specified. Yet, there are some situations in which one wants to speak about elements that are in a sense much simpler (or much more incomplete) than a given state of information. For example, in the natural subset-inclusion ordering on some powerset, any infinite element (i.e. set) is much more "informative" than any of its finite subsets.
If one wants to model such a relationship, one may first want to consider the induced strict order < of a domain with order ≤. However, while this is a useful notion in the case of total orders, it does not tell us much in the case of partially ordered sets. Considering again inclusion-orders of sets, a set is already strictly smaller than another, possibly infinite, set if it contains just one less element. One would, however, hardly agree that this captures the notion of being "much simpler".
Way-below relation
A more elaborate approach leads to the definition of the so-called order of approximation, which is more suggestively also called the way-below relation. An element x is way below an element y if, for every directed set D with a supremum such that

y ≤ sup D,

there is some element d in D such that

x ≤ d.

Then one also says that x approximates y and writes

x ≪ y.

This does imply that

x ≤ y,
since the singleton set {y} is directed. For an example, in an ordering of sets, an infinite set is way above any of its finite subsets. On the other hand, consider the directed set (in fact, the chain) of finite sets

∅ ⊆ {0} ⊆ {0, 1} ⊆ {0, 1, 2} ⊆ ⋯

Since the supremum of this chain is the set of all natural numbers N, this shows that no infinite set is way below N.
However, being way below some element is a relative notion and does not reveal much about an element alone. For example, one would like to characterize finite sets in an order-theoretic way, but even infinite sets can be way below some other set. The special property of these finite elements x is that they are way below themselves, i.e.

x ≪ x.
An element with this property is also called compact. Yet, such elements do not have to be "finite" or "compact" in any other mathematical usage of the terms. The notation is nonetheless motivated by certain parallels to the respective notions in set theory and topology. The compact elements of a domain have the important special property that they cannot be obtained as a limit of a directed set in which they did not already occur.
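On a very small finite poset, the way-below relation can be computed by brute force directly from the definition. The following Python sketch is purely illustrative (it enumerates every subset, so it is feasible only for tiny examples); it works on the powerset of {0, 1} ordered by inclusion and confirms that in a finite poset every element is compact.

```python
from itertools import chain, combinations

# The powerset of {0, 1}, ordered by subset inclusion.
elements = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
leq = lambda x, y: x <= y

def nonempty_subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

def is_directed(d):
    return all(any(leq(a, c) and leq(b, c) for c in d) for a in d for b in d)

def supremum(d):
    upper = [u for u in elements if all(leq(x, u) for x in d)]
    least = [u for u in upper if all(leq(u, v) for v in upper)]
    return least[0] if least else None

def way_below(x, y):
    """x is way below y: every directed set whose supremum dominates y
    already contains an element dominating x."""
    for d in map(list, nonempty_subsets(elements)):
        if is_directed(d):
            s = supremum(d)
            if s is not None and leq(y, s) and not any(leq(x, e) for e in d):
                return False
    return True

# In a finite poset, every element is way below itself (compact):
print(all(way_below(x, x) for x in elements))  # True
```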
Many other important results about the way-below relation support the claim that this definition is appropriate to capture many important aspects of a domain.
Bases of domains
The previous thoughts raise another question: is it possible to guarantee that all elements of a domain can be obtained as a limit of much simpler elements? This is quite relevant in practice, since we cannot compute infinite objects but we may still hope to approximate them arbitrarily closely.
More generally, we would like to restrict to a certain subset of elements as being sufficient for getting all other elements as least upper bounds. Hence, one defines a base of a poset P as being a subset B of P, such that, for each x in P, the set of elements in B that are way below x contains a directed set with supremum x. The poset P is a continuous poset if it has some base. Especially, P itself is a base in this situation. In many applications, one restricts to continuous (d)cpos as a main object of study.
Finally, an even stronger restriction on a partially ordered set is given by requiring the existence of a base of finite elements. Such a poset is called algebraic. From the viewpoint of denotational semantics, algebraic posets are particularly well-behaved, since they allow for the approximation of all elements even when restricting to finite ones. As remarked before, not every finite element is "finite" in a classical sense and it may well be that the finite elements constitute an uncountable set.
In some cases, however, the base for a poset is countable. In this case, one speaks of an ω-continuous poset. Accordingly, if the countable base consists entirely of finite elements, we obtain an order that is ω-algebraic.
Special types of domains
A simple special case of a domain is known as an elementary or flat domain. This consists of a set of incomparable elements, such as the integers, along with a single "bottom" element considered smaller than all other elements.
One can obtain a number of other interesting special classes of ordered structures that could be suitable as "domains". We already mentioned continuous posets and algebraic posets. More special versions of both are continuous and algebraic cpos. Adding even further completeness properties one obtains continuous lattices and algebraic lattices, which are just complete lattices with the respective properties. For the algebraic case, one finds broader classes of posets that are still worth studying: historically, the Scott domains were the first structures to be studied in domain theory. Still wider classes of domains are constituted by SFP-domains, L-domains, and bifinite domains.
All of these classes of orders can be cast into various categories of dcpos, using functions that are monotone, Scott-continuous, or even more specialized as morphisms. Finally, note that the term domain itself is not exact and thus is only used as an abbreviation when a formal definition has been given before or when the details are irrelevant.
Important results
A poset D is a dcpo if and only if each chain in D has a supremum. (The 'if' direction relies on the axiom of choice.)
If f is a continuous function on a domain D then it has a least fixed point, given as the least upper bound of all finite iterations of f on the least element ⊥:

fix(f) = ⨆ { fⁿ(⊥) | n ∈ ℕ }.

This is the Kleene fixed-point theorem. The symbol ⨆ is the directed join.
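The theorem translates directly into the familiar iterate-until-stable loop. The following Python sketch (an illustration on a finite domain, where the ascending chain ⊥ ≤ f(⊥) ≤ f(f(⊥)) ≤ ⋯ must become stationary; the function names are invented here) computes a least fixed point on a powerset, in this case the set of graph nodes reachable from a start node.

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: form bottom, f(bottom), f(f(bottom)), ...
    and stop when the chain becomes stationary, as it must in a
    finite domain."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Domain: the powerset of graph nodes ordered by inclusion, with
# bottom = frozenset(). The function below is monotone (indeed
# Scott-continuous), so its least fixed point is the set of nodes
# reachable from node 1.
edges = {1: [2], 2: [3], 3: [1], 4: [5], 5: []}
f = lambda s: frozenset({1}) | frozenset(t for n in s for t in edges[n])
print(sorted(least_fixed_point(f, frozenset())))  # [1, 2, 3]
```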
Generalizations
A continuity space is a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
Pollination management

Pollination management is the set of horticultural practices that accomplish or enhance pollination of a crop, to improve yield or quality, through understanding of the particular crop's pollination needs and knowledgeable management of pollenizers, pollinators, and pollination conditions.
While people think first of the European honey bee when pollination comes up, many other means of pollination management are in use, involving both other insects and other mechanisms. Other commercially available insects are more efficient for particular crops, such as the blue orchard bee for fruit and nut trees and local bumblebees better specialized for some other crops. Hand pollination is essential for the production of hybrid seeds and in some greenhouse situations, and there are even pollination machines.
Pollinator decline
With the decline of both wild and domestic pollinator populations, pollination management is becoming an increasingly important part of horticulture. Factors that cause the loss of pollinators include pesticide misuse, unprofitability of beekeeping for honey, rapid transfer of pests and diseases to new areas of the globe, urban/suburban development, changing crop patterns, clearcut logging (particularly when mixed forests are replaced by monoculture pine), clearing of hedgerows and other wild areas, poor diet caused by loss of floral biodiversity, and a loss of nectar corridors for migratory pollinators. As the habitat and resources available to sustain bees decline, so do their populations.
Importance
The increasing size of fields and orchards (monoculture) increases the importance of pollination management. Monoculture can cause a brief period when pollinators have more food resources than they can use (though a monofloral diet can weaken their immune systems), while other periods of the year can bring starvation or pesticide contamination of food sources. Pollinators need a steady nectar source and pollen source throughout the growing season to build up their numbers.
Crops that traditionally have had managed pollination include apple, almonds, pears, some plum and cherry varieties, blueberries, cranberries, cucumbers, cantaloupe, watermelon, alfalfa seeds, onion seeds, and many others. Some crops that have traditionally depended entirely on chance pollination by wild pollinators now need pollination management to make a profitable crop. Growers at one time turned almost universally to honeybees for many of these crops, but as research has shown honeybees to be relatively inefficient pollinators, demand for other managed pollinators has risen. While honeybees may visit dozens of different kinds of flowers, diluting the orchard pollen they carry, the blue orchard bee will visit only the intended tree, producing a much higher fertilization rate. The focus on the specific tree also makes the orchard bee 100 times more efficient at pollinating, per bee.
Some crops, especially when planted in a monoculture situation, require a very high level of pollinators to produce economically viable crops, especially if depending on the more generalized honeybee. This may be because of lack of attractiveness of the blossoms, or from trying to pollinate with an alternative when the native pollinator is extinct or rare. These include crops such as alfalfa, cranberries, and kiwifruit. This technique is known as saturation pollination. In many such cases, various native bees are vastly more efficient at pollination (e.g., with blueberries), but the inefficiency of the honey bees is compensated for by using large numbers of hives, the total number of foragers thereby far exceeding the local abundance of native pollinators. In a very few cases, it has been possible to develop commercially viable pollination techniques that use the more efficient pollinators, rather than continued reliance on honey bees, as in the management of the alfalfa leafcutter bee.
In the case of the kiwifruit, its flowers do not even produce nectar, so that honeybees are reluctant to even visit them, unless present in such overwhelming numbers that they do so incidentally. This has led bumblebee pollination companies to begin offering their services for kiwifruit, as they appear to be far more efficient at the job than honeybees, even more efficient than hand pollination.
It is estimated that about one hive per acre will sufficiently pollinate watermelons. In the 1950s when the woods were full of wild bee trees, and beehives were normally kept on most South Carolina farms, a farmer who grew ten acres (4 ha) of watermelons would be a large grower and probably had all the pollination needed. But today's grower may grow 200 acres (80 ha), and, if lucky, there might be one bee tree left within range. The only option in the current economy is to bring beehives to the field during blossom time.
Types of pollinators
Organisms that are currently being used as pollinators in managed pollination are honey bees, bumblebees, alfalfa leafcutter bees, and orchard mason bees. Other species are expected to be added to this list as the field develops. Humans also can be pollinators, as with the gardener who hand-pollinates her squash blossoms or the Middle Eastern farmer who climbs his date palms to pollinate them.
The Cooperative extension service recommends one honey bee hive per acre (2.5 hives per hectare) for standard watermelon varieties to meet this crop's pollination needs. In the past, when fields were small, pollination was accomplished by a mix of bees kept on farms, bumblebees, carpenter bees, feral honey bees in hollow trees and other insects. Today, with melons planted in large tracts, the grower may no longer have hives on the farm; he may have poisoned many of the pollinators by spraying blooming cotton; he may have logged off the woods, removing hollow trees that provided homes for bees, and pushed out the hedgerows that were home for solitary native bees and other pollinating insects.
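As a back-of-envelope illustration (not agronomic advice; the constants simply restate the stocking rate quoted above, and the function name is our own), the recommendation translates into a simple calculation.

```python
import math

ACRES_PER_HECTARE = 2.471
HIVES_PER_ACRE = 1.0  # extension-service guideline for standard watermelon

def hives_needed(area, unit="acre"):
    """Estimate beehive rentals for a watermelon field, using the
    rule of thumb of one hive per acre (about 2.5 hives per hectare)."""
    acres = area * ACRES_PER_HECTARE if unit == "hectare" else area
    return math.ceil(acres * HIVES_PER_ACRE)

print(hives_needed(200))            # 200 hives for a 200-acre planting
print(hives_needed(80, "hectare"))  # 198 hives for the same field in hectares
```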
Planning for improved pollination
Before pollination needs were understood, orchardists often planted entire blocks of apples of a single variety. Because apples are self-sterile, and different members of a single variety are genetic clones (equivalent to a single plant), this is not a good idea. Growers now supply pollenizers by planting crab apples interspersed in the rows, or by grafting crab apple limbs onto some trees. Pollenizers can also be supplied by placing drums of crab apple bouquets, or bouquets of a compatible apple variety, in the orchard blocks.
The field of pollination management cannot be placed wholly within any other field, because it bridges several fields. It draws from horticulture, apiculture, zoology (especially entomology), ecology, and botany.
Improving pollination with suboptimal bee densities
Growers’ demand for beehives far exceeds the available supply. The number of managed beehives in the US has steadily declined from close to 6 million after WWII to fewer than 2.5 million today. In contrast, the area dedicated to growing bee-pollinated crops has grown over 300% in the same period. To make matters worse, winter losses of managed beehives in the past five years have reached an unprecedented rate of nearly 30%. At present, there is enormous demand for beehive rentals that cannot always be met. There is a clear need across the agricultural industry for management tools that draw pollinators into cultivated areas and encourage them to preferentially visit and pollinate the flowering crop. Attracting pollinators such as honeybees and increasing their foraging behavior, particularly in the centre of large plots, can increase grower returns and optimize yield from their plantings.
Graph (discrete mathematics)

In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called link or line). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges.
The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. In contrast, if an edge from a person A to a person B means that A owes money to B, then this graph is directed, because owing money is not necessarily reciprocated.
Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image).
Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph
A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph) is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E ⊆ {{x, y} | x, y ∈ V and x ≠ y} is a set of unordered pairs of distinct vertices, whose elements are called edges (sometimes links or lines).
The vertices x and y of an edge {x, y} are called the edge's endpoints. The edge is said to join x and y and to be incident on them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is called isolated. When an edge {x, y} exists, the vertices x and y are called adjacent.
A multigraph is a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs.
Sometimes, graphs are allowed to contain loops, which are edges that join a vertex to itself. To allow loops, the pairs of vertices in E must be allowed to have the same vertex twice. Such generalized graphs are called graphs with loops or simply graphs when it is clear from the context that loops are allowed.
Generally, the vertex set is taken to be finite (which implies that the edge set is also finite). Sometimes infinite graphs are considered, but they are usually viewed as a special kind of binary relation, because most results on finite graphs either do not extend to the infinite case or need a rather different proof.
An empty graph is a graph that has an empty set of vertices (and thus an empty set of edges). The order of a graph is its number of vertices, usually denoted by |V|. The size of a graph is its number of edges, typically denoted by |E|. However, in some contexts, such as for expressing the computational complexity of algorithms, the term size is used for the quantity |V| + |E| (otherwise, a non-empty graph could have size 0). The degree or valency of a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice.
In a graph of order n, the maximum degree of each vertex is n − 1 (or n + 1 if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges is n(n − 1)/2 (or n(n + 1)/2 if loops are allowed).
The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph is fully determined by its adjacency matrix A, which is an n × n square matrix, with Aij specifying the number of connections from vertex i to vertex j. For a simple graph, Aij is either 0, indicating disconnection, or 1, indicating connection; moreover Aii = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all Aii being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all Aij being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning Aij = Aji).
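The following Python sketch (a hypothetical helper, not a standard library function) builds the adjacency matrix of a simple undirected graph from its vertex and edge sets; as described above, the result is symmetric and has a zero diagonal.

```python
def adjacency_matrix(vertices, edges):
    """Return the n x n adjacency matrix of a simple undirected graph:
    A[i][j] = 1 if {vertices[i], vertices[j]} is an edge, else 0."""
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    a = [[0] * n for _ in range(n)]
    for x, y in edges:
        i, j = index[x], index[y]
        a[i][j] = 1
        a[j][i] = 1  # undirected: the matrix is symmetric
    return a

v = [1, 2, 3, 4]
e = [(1, 2), (2, 3), (3, 4), (4, 1)]  # the cycle graph C4
for row in adjacency_matrix(v, e):
    print(row)
```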
Directed graph
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, a directed graph is a pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}, a set of edges (also called directed edges, directed links, directed lines, arrows, or arcs), which are ordered pairs of distinct vertices.
To avoid ambiguity, this type of object may be called precisely a directed simple graph.
In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, a directed graph is sometimes defined to be an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs);
φ: E → {(x, y) | (x, y) ∈ V² and x ≠ y}, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed multigraph.
A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x) (for a directed simple graph) or is incident on (x, x) (for a directed multigraph), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ {(x, y) | (x, y) ∈ V²}. For directed multigraphs, the definition of φ should be modified to φ: E → {(x, y) | (x, y) ∈ V²}. To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively.
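A minimal Python sketch may help contrast the two formalizations; the class names are invented for exposition. The first type stores edges as a set of ordered pairs of distinct vertices (so parallel edges and loops are impossible); the second stores abstract edge names together with an incidence function, so parallel edges are allowed.

```python
from dataclasses import dataclass

@dataclass
class DirectedSimpleGraph:
    """Edges are ordered pairs of distinct vertices: at most one edge
    per (tail, head) pair, and no loops."""
    vertices: set
    edges: set  # of (tail, head) tuples

    def __post_init__(self):
        assert all(t in self.vertices and h in self.vertices and t != h
                   for t, h in self.edges)

@dataclass
class DirectedMultigraph:
    """Edges are abstract names; the incidence function phi maps each
    edge to its (tail, head) pair, so parallel edges are allowed."""
    vertices: set
    edges: set
    phi: dict  # edge name -> (tail, head)

g = DirectedSimpleGraph({1, 2, 3}, {(1, 2), (2, 3), (3, 1)})
m = DirectedMultigraph({1, 2}, {"e1", "e2"}, {"e1": (1, 2), "e2": (1, 2)})
```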
The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
Mixed graph
A mixed graph is a graph in which some edges may be directed and some may be undirected. It is an ordered triple G = (V, E, A) for a mixed simple graph and G = (V, E, A, φE, φA) for a mixed multigraph, with V, E (the undirected edges), A (the directed edges), and the incidence functions φE and φA defined as above. Directed and undirected graphs are special cases.
Weighted graph
A weighted graph or a network is a graph in which a number (the weight) is assigned to each edge. Such weights might represent for example costs, lengths or capacities, depending on the problem at hand. Such graphs arise in many contexts, for example in shortest path problems and in the travelling salesman problem.
Types of graphs
Oriented graph
One definition of an oriented graph is that it is a directed graph in which at most one of (x, y) and (y, x) may be edges of the graph. That is, it is a directed graph that can be formed as an orientation of an undirected (simple) graph.
Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph.
Regular graph
A regular graph is a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k.
Complete graph
A complete graph is a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges.
Finite graph
A finite graph is a graph in which the vertex set and the edge set are finite sets. Otherwise, it is called an infinite graph.
Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated.
Connected graph
In an undirected graph, an unordered pair of vertices {x, y} is called connected if a path leads from x to y. Otherwise, the unordered pair is called disconnected.
A connected graph is an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called a disconnected graph.
In a directed graph, an ordered pair of vertices (x, y) is called strongly connected if a directed path leads from x to y. Otherwise, the ordered pair is called weakly connected if an undirected path leads from x to y after replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is called disconnected.
A strongly connected graph is a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called a weakly connected graph if every ordered pair of vertices in the graph is weakly connected. Otherwise it is called a disconnected graph.
A k-vertex-connected graph or k-edge-connected graph is a graph in which no set of k − 1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called simply a k-connected graph.
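Connectivity of an undirected graph is easily tested by graph search. The following Python sketch (an illustration; the function name is our own) runs a breadth-first search from an arbitrary vertex and reports the graph as connected exactly when every vertex is reached.

```python
from collections import deque

def is_connected(vertices, edges):
    """Breadth-first search from an arbitrary start vertex; the graph
    is connected iff the search reaches every vertex."""
    if not vertices:
        return True  # conventions vary for the graph with no vertices
    adj = {v: [] for v in vertices}
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)  # undirected: record both directions
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

print(is_connected({1, 2, 3}, [(1, 2), (2, 3)]))  # True
print(is_connected({1, 2, 3}, [(1, 2)]))          # False
```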
Bipartite graph
A bipartite graph is a simple graph in which the vertex set can be partitioned into two sets, W and X, so that no two vertices in W share a common edge and no two vertices in X share a common edge. Alternatively, it is a graph with a chromatic number of 2.
In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X.
Path graph
A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph.
Planar graph
A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect.
Cycle graph
A cycle graph or circular graph of order n ≥ 3 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1, plus the edge {vn, v1}. Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.
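Both degree characterizations are easy to check computationally. The following Python sketch (illustrative helpers, not a standard API) constructs path and cycle graphs and verifies their degree sequences: every vertex of a cycle has degree 2, while a path has exactly two vertices of degree 1.

```python
def path_graph(n):
    """P_n: vertices 1..n joined in a line."""
    return set(range(1, n + 1)), [(i, i + 1) for i in range(1, n)]

def cycle_graph(n):
    """C_n: a path graph plus the closing edge {n, 1}."""
    v, e = path_graph(n)
    return v, e + [(n, 1)]

def degrees(vertices, edges):
    d = {v: 0 for v in vertices}
    for x, y in edges:
        d[x] += 1
        d[y] += 1
    return d

print(sorted(degrees(*cycle_graph(5)).values()))  # [2, 2, 2, 2, 2]
print(sorted(degrees(*path_graph(5)).values()))   # [1, 1, 2, 2, 2]
```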
Tree
A tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph.
A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees.
Polytree
A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree.
A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
Advanced classes
More advanced kinds of graphs are:
Petersen graph and its generalizations;
perfect graphs;
cographs;
chordal graphs;
other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs;
strongly regular graphs and their generalizations distance-regular graphs.
Properties of graphs
Two edges of a graph are called adjacent if they share a common vertex. Two edges of a directed graph are called consecutive if the head of the first one is the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if the first one is the tail and the second one is the head of an edge), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident.
The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object.
Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable. (Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (In the literature, the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.)
The category of all graphs is the comma category Set ↓ D where D: Set → Set is the functor taking a set s to s × s.
Examples
The diagram is a schematic representation of the graph with vertices V = {1, 2, 3, 4, 5, 6} and edges E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}.
In computer science, directed graphs are used to represent knowledge (e.g., conceptual graph), finite-state machines, and many other discrete structures.
A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X if and only if xRy.
A directed graph can model information networks such as Twitter, with one user following another.
Particularly regular examples of directed graphs are given by the Cayley graphs of finitely-generated groups, as well as Schreier coset graphs
In category theory, every small category has an underlying directed multigraph whose vertices are the objects of the category, and whose edges are the arrows of the category. In the language of category theory, one says that there is a forgetful functor from the category of small categories to the category of quivers.
Graph operations
There are several operations that produce new graphs from initial ones, which might be classified into the following categories:
unary operations, which create a new graph from an initial one, such as:
edge contraction,
line graph,
dual graph,
complement graph,
graph rewriting;
binary operations, which create a new graph from two initial ones, such as:
disjoint union of graphs,
cartesian product of graphs,
tensor product of graphs,
strong product of graphs,
lexicographic product of graphs,
series–parallel graphs.
Generalizations
In a hypergraph, an edge can join any positive number of vertices.
An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices.
Every graph gives rise to a matroid.
In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph.
In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs.
In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids.
Injection moulding

Injection moulding (U.S. spelling: injection molding) is a manufacturing process for producing parts by injecting molten material into a mould, or mold. Injection moulding can be performed with a host of materials mainly including metals (for which the process is called die-casting), glasses, elastomers, confections, and most commonly thermoplastic and thermosetting polymers. Material for the part is fed into a heated barrel, mixed (using a helical screw), and injected into a mould cavity, where it cools and hardens to the configuration of the cavity. After a product is designed, usually by an industrial designer or an engineer, moulds are made by a mould-maker (or toolmaker) from metal, usually either steel or aluminium, and precision-machined to form the features of the desired part. Injection moulding is widely used for manufacturing a variety of parts, from the smallest components to entire body panels of cars. Advances in 3D printing technology, using photopolymers that do not melt during the injection moulding of some lower-temperature thermoplastics, can be used for some simple injection moulds.
Injection moulding uses a special-purpose machine that has three parts: the injection unit, the mould and the clamp. Parts to be injection-moulded must be very carefully designed to facilitate the moulding process; the material used for the part, the desired shape and features of the part, the material of the mould, and the properties of the moulding machine must all be taken into account. The versatility of injection moulding is facilitated by this breadth of design considerations and possibilities.
Applications
Injection moulding is used to create many things such as wire spools, packaging, bottle caps, automotive parts and components, toys, pocket combs, some musical instruments (and parts of them), one-piece chairs and small tables, storage containers, mechanical parts (including gears), and most other plastic products available today. Injection moulding is the most common modern method of manufacturing plastic parts; it is ideal for producing high volumes of the same object.
Process characteristics
Injection moulding uses a ram or screw-type plunger to force molten plastic or rubber material into a mould cavity; this solidifies into a shape that has conformed to the contour of the mould. It is most commonly used to process both thermoplastic and thermosetting polymers, with thermoplastics processed in considerably higher volumes. Thermoplastics are prevalent due to characteristics that make them highly suitable for injection moulding, such as ease of recycling, versatility for a wide variety of applications, and ability to soften and flow on heating. Thermoplastics also have an element of safety over thermosets; if a thermosetting polymer is not ejected from the injection barrel in a timely manner, chemical crosslinking may occur, causing the screw and check valves to seize and potentially damaging the injection moulding machine.
Injection moulding consists of the high pressure injection of the raw material into a mould, which shapes the polymer into the desired form. Moulds can be of a single cavity or multiple cavities. In multiple cavity moulds, each cavity can be identical and form the same parts or can be unique and form multiple different geometries during a single cycle. Moulds are generally made from tool steels, but stainless steels and aluminium moulds are suitable for certain applications. Aluminium moulds are typically ill-suited for high volume production or parts with narrow dimensional tolerances, as they have inferior mechanical properties and are more prone to wear, damage, and deformation during the injection and clamping cycles; however, aluminium moulds are cost-effective in low-volume applications, as mould fabrication costs and time are considerably reduced. Many steel moulds are designed to process well over a million parts during their lifetime and can cost hundreds of thousands of dollars to fabricate.
When thermoplastics are moulded, typically pelletised raw material is fed through a hopper into a heated barrel with a reciprocating screw. Upon entrance to the barrel, the temperature increases and the Van der Waals forces that resist relative flow of individual chains are weakened as a result of increased space between molecules at higher thermal energy states. This process reduces its viscosity, which enables the polymer to flow with the driving force of the injection unit. The screw delivers the raw material forward, mixes and homogenises the thermal and viscous distributions of the polymer, and reduces the required heating time by mechanically shearing the material and adding a significant amount of frictional heating to the polymer. The material feeds forward through a check valve and collects at the front of the screw into a volume known as a shot. A shot is the volume of material that is used to fill the mould cavity, compensate for shrinkage, and provide a cushion (approximately 10% of the total shot volume, which remains in the barrel and prevents the screw from bottoming out) to transfer pressure from the screw to the mould cavity. When enough material has gathered, the material is forced at high pressure and velocity into the part forming cavity. The exact amount of shrinkage is a function of the resin being used, and can be relatively predictable. To prevent spikes in pressure, the process normally uses a transfer position corresponding to a 95–98% full cavity where the screw shifts from a constant velocity to a constant pressure control. Often injection times are well under 1 second. Once the screw reaches the transfer position the packing pressure is applied, which completes mould filling and compensates for thermal shrinkage, which is quite high for thermoplastics relative to many other materials. The packing pressure is applied until the gate (cavity entrance) solidifies. Due to its small size, the gate is normally the first place to solidify through its entire thickness. Once the gate solidifies, no more material can enter the cavity; accordingly, the screw reciprocates and acquires material for the next cycle while the material within the mould cools so that it can be ejected and be dimensionally stable. This cooling duration is dramatically reduced by the use of cooling lines circulating water or oil from an external temperature controller. Once the required temperature has been achieved, the mould opens and an array of pins, sleeves, strippers, etc. are driven forward to demould the article. Then, the mould closes and the process is repeated.
For a two-shot mould, two separate materials are incorporated into one part. This type of injection moulding is used to add a soft touch to knobs, to give a product multiple colours, or to produce a part with multiple performance characteristics.
For thermosets, typically two different chemical components are injected into the barrel. These components immediately begin irreversible chemical reactions that eventually crosslinks the material into a single connected network of molecules. As the chemical reaction occurs, the two fluid components permanently transform into a viscoelastic solid. Solidification in the injection barrel and screw can be problematic and have financial repercussions; therefore, minimising the thermoset curing within the barrel is vital. This typically means that the residence time and temperature of the chemical precursors are minimised in the injection unit. The residence time can be reduced by minimising the barrel's volume capacity and by maximising the cycle times. These factors have led to the use of a thermally isolated, cold injection unit that injects the reacting chemicals into a thermally isolated hot mould, which increases the rate of chemical reactions and results in shorter time required to achieve a solidified thermoset component. After the part has solidified, valves close to isolate the injection system and chemical precursors, and the mould opens to eject the moulded parts. Then, the mould closes and the process repeats.
Pre-moulded or machined components can be inserted into the cavity while the mould is open, allowing the material injected in the next cycle to form and solidify around them. This process is known as insert moulding and allows single parts to contain multiple materials. This process is often used to create plastic parts with protruding metal screws so they can be fastened and unfastened repeatedly. This technique can also be used for In-mould labelling and film lids may also be attached to moulded plastic containers.
A parting line, sprue, gate marks, and ejector pin marks are usually present on the final part. None of these features are typically desired, but are unavoidable due to the nature of the process. Gate marks occur at the gate that joins the melt-delivery channels (sprue and runner) to the part forming cavity. Parting line and ejector pin marks result from minute misalignments, wear, gaseous vents, clearances for adjacent parts in relative motion, and/or dimensional differences of the melting surfaces contacting the injected polymer. Dimensional differences can be attributed to non-uniform, pressure-induced deformation during injection, machining tolerances, and non-uniform thermal expansion and contraction of mould components, which experience rapid cycling during the injection, packing, cooling, and ejection phases of the process. Mould components are often designed with materials of various coefficients of thermal expansion. These factors cannot be simultaneously accounted for without astronomical increases in the cost of design, fabrication, processing, and quality monitoring. The skillful mould and part designer positions these aesthetic detriments in hidden areas if feasible.
History
In 1846 the British inventor Charles Hancock, a relative of Thomas Hancock, patented an injection molding machine.
American inventor John Wesley Hyatt, together with his brother Isaiah, patented one of the first injection moulding machines in 1872. This machine was relatively simple compared to machines in use today: it worked like a large hypodermic needle, using a plunger to inject plastic through a heated cylinder into a mould. The industry progressed slowly over the years, producing products such as collar stays, buttons, and hair combs (though plastics, in the modern sense of the word, are a more recent development).
The German chemists Arthur Eichengrün and Theodore Becker invented the first soluble forms of cellulose acetate in 1903, which was much less flammable than cellulose nitrate. It was eventually made available in a powder form from which it was readily injection moulded. Arthur Eichengrün developed the first injection moulding press in 1919. In 1939, Arthur Eichengrün patented the injection moulding of plasticised cellulose acetate.
The industry expanded rapidly in the 1940s because World War II created a huge demand for inexpensive, mass-produced products. In 1946, American inventor James Watson Hendry built the first screw injection machine, which allowed much more precise control over the speed of injection and the quality of articles produced. This machine also allowed material to be mixed before injection, so that coloured or recycled plastic could be added to virgin material and mixed thoroughly before being injected. In the 1970s, Hendry went on to develop the first gas-assisted injection moulding process, which permitted the production of complex, hollow articles that cooled quickly. This greatly improved design flexibility as well as the strength and finish of manufactured parts while reducing production time, cost, weight and waste. By 1979, plastic production overtook steel production, and by 1990, aluminium moulds were widely used in injection moulding. Today, screw injection machines account for the vast majority of all injection machines.
The plastic injection moulding industry has evolved over the years from producing combs and buttons to producing a vast array of products for many industries including automotive, medical, aerospace, consumer products, toys, plumbing, packaging, and construction.
Examples of polymers best suited for the process
Most polymers, sometimes referred to as resins, may be used, including all thermoplastics, some thermosets, and some elastomers. Since 1995, the total number of available materials for injection moulding has increased at a rate of 750 per year; there were approximately 18,000 materials available when that trend began. Available materials include alloys or blends of previously developed materials, so product designers can choose the material with the best set of properties from a vast selection. Major criteria for selection of a material are the strength and function required for the final part, as well as the cost, but also each material has different parameters for moulding that must be taken into account. Other considerations when choosing an injection moulding material include flexural modulus of elasticity, or the degree to which a material can be bent without damage, as well as heat deflection and water absorption. Common polymers like epoxy and phenolic are examples of thermosetting plastics while nylon, polyethylene, and polystyrene are thermoplastic. Until comparatively recently, plastic springs were not possible, but advances in polymer properties make them now quite practical. Applications include buckles for anchoring and disconnecting outdoor-equipment webbing.
Equipment
Injection moulding machines consist of a material hopper, an injection ram or screw-type plunger, and a heating unit. The moulds are mounted on the machine's platens, which hold them in place while the components are shaped. Presses are rated by tonnage, which expresses the amount of clamping force that the machine can exert; this force keeps the mould closed during the injection process. Tonnage can vary from less than 5 tons to over 9,000 tons, with the higher figures used in comparatively few manufacturing operations. The total clamp force needed is determined by the projected area of the part being moulded, multiplied by a clamp force of 1.8 to 7.2 tons for each square centimetre of projected area. As a rule of thumb, 4 or 5 tons/in² can be used for most products. If the plastic material is very stiff, it requires more injection pressure to fill the mould, and thus more clamp tonnage to hold the mould closed. The required force is also determined by the material used and the size of the part; larger parts require higher clamping force.
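As a rough illustration of the sizing rule above (a sketch only; real press selection also depends on material stiffness, wall thickness, and machine-specific factors, and the function name is invented here), the clamp force estimate can be written as:

```python
def clamp_tonnage(projected_area_cm2, tons_per_cm2=1.8):
    """Rule-of-thumb clamp force: projected part area multiplied by a
    material-dependent factor (the text quotes 1.8 to 7.2 tons per
    square centimetre; stiffer plastics sit at the high end)."""
    return projected_area_cm2 * tons_per_cm2

# e.g. a part with a 50 cm2 projected area in an easy-flowing resin:
print(f"{clamp_tonnage(50):.0f} tons minimum clamp force")  # 90 tons
```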
Mould
Mould or die are the common terms used to describe the tool used to produce plastic parts in moulding.
Since moulds have been expensive to manufacture, they were usually only used in mass production where thousands of parts were being produced. Typical moulds are constructed from hardened steel, pre-hardened steel, aluminium, and/or beryllium-copper alloy. The choice of material for the mold is not only based on cost considerations, but also has a lot to do with the product life cycle. In general, steel moulds cost more to construct, but their longer lifespan offsets the higher initial cost over a higher number of parts made before wearing out. Pre-hardened steel moulds are less wear-resistant and are used for lower volume requirements or larger components; their typical steel hardness is 38–45 on the Rockwell-C scale. Hardened steel moulds are heat treated after machining; these are by far superior in terms of wear resistance and lifespan. Typical hardness ranges between 50 and 60 Rockwell-C (HRC). Aluminium moulds can cost substantially less, and when designed and machined with modern computerised equipment can be economical for moulding tens or even hundreds of thousands of parts. Beryllium copper is used in areas of the mould that require fast heat removal or areas that see the most shear heat generated. The moulds can be manufactured either by CNC machining or by using electrical discharge machining processes.
Mould design
The mould consists of two primary components, the injection mould (A plate) and the ejector mould (B plate). These components are also referred to as moulder and mouldmaker. Plastic resin enters the mould through a sprue or gate in the injection mould; the sprue bushing seals tightly against the nozzle of the injection barrel of the moulding machine and allows molten plastic to flow from the barrel into the mould, also known as the cavity. The sprue bushing directs the molten plastic to the cavity images through channels that are machined into the faces of the A and B plates. These channels allow plastic to run along them, so they are referred to as runners. The molten plastic flows through the runner and enters one or more specialised gates and into the cavity geometry to form the desired part.
The amount of resin required to fill the sprue, runner and cavities of a mould comprises a "shot". Trapped air in the mould can escape through air vents that are ground into the parting line of the mould, or around ejector pins and slides that are slightly smaller than the holes retaining them. If the trapped air is not allowed to escape, it is compressed by the pressure of the incoming material and squeezed into the corners of the cavity, where it prevents filling and can also cause other defects. The air can even become so compressed that it ignites and burns the surrounding plastic material.
To allow for removal of the moulded part from the mould, the mould features must not overhang one another in the direction that the mould opens, unless parts of the mould are designed to move from between such overhangs when the mould opens using components called Lifters.
Sides of the part that appear parallel with the direction of draw (the axis of the cored position (hole) or insert is parallel to the up and down movement of the mould as it opens and closes) are typically angled slightly, called draft, to ease release of the part from the mould. Insufficient draft can cause deformation or damage. The draft required for mould release is primarily dependent on the depth of the cavity; the deeper the cavity, the more draft necessary. Shrinkage must also be taken into account when determining the draft required. If the skin is too thin, then the moulded part tends to shrink onto the cores that form while cooling and cling to those cores, or the part may warp, twist, blister or crack when the cavity is pulled away.
A mould is usually designed so that the moulded part reliably remains on the ejector (B) side of the mould when it opens, and draws the runner and the sprue out of the (A) side along with the parts. The part then falls freely when ejected from the (B) side. Tunnel gates, also known as submarine or mould gates, are located below the parting line or mould surface. An opening is machined into the surface of the mould on the parting line. The moulded part is cut (by the mould) from the runner system on ejection from the mould. Ejector pins, also known as knockout pins, are circular pins placed in either half of the mould (usually the ejector half), which push the finished moulded product, or runner system, out of the mould. The ejection of the article using pins, sleeves, strippers, etc., may cause undesirable impressions or distortion, so care must be taken when designing the mould.
The standard method of cooling is passing a coolant (usually water) through a series of holes drilled through the mould plates and connected by hoses to form a continuous pathway. The coolant absorbs heat from the mould (which has absorbed heat from the hot plastic) and keeps the mould at a proper temperature to solidify the plastic at the most efficient rate.
To ease maintenance and venting, cavities and cores are divided into pieces, called inserts, and sub-assemblies, also called inserts, blocks, or chase blocks. By substituting interchangeable inserts, one mould may make several variations of the same part.
More complex parts are formed using more complex moulds. These may have sections called slides, that move into a cavity perpendicular to the draw direction, to form overhanging part features. When the mould is opened, the slides are pulled away from the plastic part by using stationary “angle pins” on the stationary mould half. These pins enter a slot in the slides and cause the slides to move backward when the moving half of the mould opens. The part is then ejected and the mould closes. The closing action of the mould causes the slides to move forward along the angle pins.
A mould can produce several copies of the same parts in a single "shot". The number of "impressions" in the mould of that part is often incorrectly referred to as cavitation. A tool with one impression is often called a single impression (cavity) mould. A mould with two or more cavities of the same parts is usually called a multiple impression (cavity) mould (not to be confused with multi-shot moulding, which is dealt with in the next section). Some extremely high production volume moulds (like those for bottle caps) can have over 128 cavities.
In some cases, multiple cavity tooling moulds a series of different parts in the same tool. Some toolmakers call these moulds family moulds, as all the parts are related—e.g., plastic model kits.
Some moulds allow previously moulded parts to be reinserted to allow a new plastic layer to form around the first part. This is often referred to as overmoulding. This system can allow for production of one-piece tires and wheels.
Moulds for the highly precise and extremely small parts produced by micro injection moulding require extra care in the design stage, as material resins react differently compared to their full-sized counterparts: the resin must quickly fill incredibly small spaces, which puts it under intense shear strain.
Multi-shot moulding
Two-shot, double-shot or multi-shot moulds are designed to "overmould" within a single moulding cycle and must be processed on specialised injection moulding machines with two or more injection units. This process is actually an injection moulding process performed twice and therefore can allow only for a much smaller margin of error. In the first step, the base colour material is moulded into a basic shape, which contains spaces for the second shot. Then the second material, a different colour, is injection-moulded into those spaces. Pushbuttons and keys, for instance, made by this process have markings that cannot wear off, and remain legible with heavy use.
Mould storage
Manufacturers go to great lengths to protect custom moulds due to their high average costs. The perfect temperature and humidity levels are maintained to ensure the longest possible lifespan for each custom mould. Custom moulds, such as those used for rubber injection moulding, are stored in temperature and humidity controlled environments to prevent warping.
Tool materials
Tool steel is often used. Mild steel, aluminium, nickel or epoxy are suitable only for prototype or very short production runs. Modern hard aluminium (7075 and 2024 alloys), with proper mould design and maintenance, can easily yield moulds capable of producing 100,000 or more parts over their life.
Machining
Moulds are built through two main methods: standard machining and EDM. Standard machining, in its conventional form, has historically been the method of building injection moulds. With technological developments, CNC machining became the predominant means of making more complex moulds with more accurate mould details in less time than traditional methods.
The electrical discharge machining (EDM) or spark erosion process has become widely used in mould making. As well as allowing the formation of shapes that are difficult to machine, the process allows pre-hardened moulds to be shaped so that no heat treatment is required. Changes to a hardened mould by conventional drilling and milling normally require annealing to soften the mould, followed by heat treatment to harden it again. EDM is a simple process in which a shaped electrode, usually made of copper or graphite, is very slowly lowered onto the mould surface, which is immersed in paraffin oil (kerosene), over a period of many hours. A voltage applied between tool and mould causes spark erosion of the mould surface in the inverse shape of the electrode.
Cost
The number of cavities incorporated into a mould directly correlates with moulding costs. Fewer cavities require far less tooling work, so limiting the number of cavities lowers the initial manufacturing cost of an injection mould.
Just as the number of cavities plays a vital role in moulding costs, so does the complexity of the part's design. Complexity can arise from many factors, such as surface finishing, tolerance requirements, internal or external threads, fine detailing, or the number of undercuts to be incorporated.
Further details, such as undercuts, or any feature that needs additional tooling, increases mould cost. Surface finish of the core and cavity of moulds further influences cost.
The rubber injection moulding process produces a high yield of durable products, making it an efficient and cost-effective method of moulding. Consistent vulcanisation processes with precise temperature control significantly reduce waste material.
Injection process
Usually, the plastic materials are formed in the shape of pellets or granules and sent from the raw material manufacturers in paper bags. With injection moulding, pre-dried granular plastic is fed by a forced ram from a hopper into a heated barrel. As the granules are slowly moved forward by a screw-type plunger, the plastic is forced into a heated chamber, where it is melted. As the plunger advances, the melted plastic is forced through a nozzle that rests against the mould, allowing it to enter the mould cavity through a gate and runner system. The mould remains cold so the plastic solidifies almost as soon as the mould is filled.
Injection moulding cycle
The sequence of events during the injection moulding of a plastic part is called the injection moulding cycle. The cycle begins when the mould closes, followed by the injection of the polymer into the mould cavity. Once the cavity is filled, a holding pressure is maintained to compensate for material shrinkage. In the next step, the screw turns, feeding the next shot to the front of the screw. This causes the screw to retract as the next shot is prepared. Once the part is sufficiently cool, the mould opens and the part is ejected.
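The ordering of these stages can be summarised in a minimal sketch. The following Python snippet merely enumerates the cycle stages described above in sequence; it is illustrative, not actual machine-control code.

```python
# Minimal sketch of the injection moulding cycle stages, in order.
from enum import Enum, auto

class Stage(Enum):
    MOULD_CLOSE = auto()
    INJECTION = auto()
    HOLDING = auto()                # holding pressure compensates for shrinkage
    COOLING_AND_RECOVERY = auto()   # screw turns, feeding the next shot forward
    MOULD_OPEN = auto()
    EJECTION = auto()

def run_cycle():
    for stage in Stage:             # Enum preserves definition order
        print(f"-> {stage.name}")

run_cycle()
```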
Scientific versus traditional moulding
Traditionally, the injection portion of the moulding process was done at one constant pressure to fill and pack the cavity. This method, however, allowed for a large variation in dimensions from cycle to cycle. More commonly used now is scientific or decoupled moulding, a method pioneered by RJG Inc. In this method the injection of the plastic is "decoupled" into stages to allow better control of part dimensions and more cycle-to-cycle (commonly called shot-to-shot in the industry) consistency. First the cavity is filled to approximately 98% full using velocity (speed) control; in this stage the pressure should be sufficient to allow for the desired speed, and pressure limitations are undesirable. Once the cavity is 98% full, the machine switches from velocity control to pressure control, and the cavity is "packed out" at a constant pressure, with whatever velocity is needed to maintain that pressure. This lets workers control part dimensions to within thousandths of an inch or better.
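A simplified sketch of the velocity-to-pressure switchover may help clarify the decoupling. The code below is illustrative only: the fill rates, pressure value, and stage-2 flow behaviour are idealized stand-ins, not RJG's actual method.

```python
# Simplified sketch of decoupled ("scientific") moulding: fill under velocity
# control to ~98% of cavity volume, then switch to pressure control to pack out.

def fill_cavity(cavity_volume, fill_rate, pack_pressure, dt=0.01):
    filled = 0.0
    t = 0.0
    # Stage 1: velocity (flow-rate) control; pressure is whatever the speed requires.
    while filled < 0.98 * cavity_volume:
        filled += fill_rate * dt
        t += dt
    print(f"switchover at t={t:.2f}s, {filled / cavity_volume:.0%} full")
    # Stage 2: pressure control; the machine holds pack_pressure while the
    # remaining ~2% fills and the material compensates for shrinkage.
    while filled < cavity_volume:
        filled += 0.2 * fill_rate * dt   # slower flow, driven by packing pressure
        t += dt
    print(f"packed out at t={t:.2f}s under {pack_pressure} MPa")

fill_cavity(cavity_volume=100.0, fill_rate=50.0, pack_pressure=40.0)
```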
Different types of injection moulding processes
Although most injection moulding processes are covered by the conventional process description above, there are several important moulding variations including, but not limited to:
Die casting
Metal injection moulding
Thin-wall injection moulding
Injection moulding of liquid silicone rubber
Reaction injection moulding
Micro injection moulding
Gas-assisted injection moulding
Cube mould technology
Multi-material injection moulding
Process troubleshooting
Like all industrial processes, injection moulding can produce flawed parts. In the field of injection moulding, troubleshooting is often performed by examining defective parts for specific defects and addressing these defects with the design of the mould or the characteristics of the process itself. Trials are often performed before full production runs in an effort to predict defects and determine the appropriate specifications to use in the injection process.
When filling a new or unfamiliar mould for the first time, where the shot size for that mould is unknown, a technician or tool setter may perform a trial run before a full production run. They start with a small shot weight and fill gradually until the mould is 95 to 99% full. Once this is achieved, they apply a small amount of holding pressure and increase the holding time until gate freeze-off (solidification time) has occurred. Gate freeze-off time can be determined by increasing the hold time and then weighing the part: when the weight of the part does not change, the gate has frozen and no more material is injected into the part. Gate solidification time is important, as it determines cycle time and the quality and consistency of the product, which itself is an important issue in the economics of the production process. Holding pressure is increased until the parts are free of sinks and the target part weight has been achieved.
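The weigh-and-hold procedure lends itself to a short sketch. In the code below, weigh_moulded_part is a hypothetical stand-in for an actual moulding trial and weighing step; the plateau test mirrors the description above.

```python
# Sketch of a gate-freeze (gate seal) study: increase hold time in steps and
# weigh the part; when part weight stops increasing, the gate has frozen.

def find_gate_freeze_time(weigh_moulded_part, start=1.0, step=0.5, tol=0.001):
    previous = weigh_moulded_part(start)
    hold = start
    while True:
        hold += step
        weight = weigh_moulded_part(hold)
        if abs(weight - previous) < tol:   # weight plateau: gate is frozen
            return hold, weight
        previous = weight

# Fake trial data for demonstration: weight rises until the gate seals at ~4 s.
def weigh_moulded_part(hold_time_s):
    return 10.0 + min(hold_time_s, 4.0) * 0.05

print(find_gate_freeze_time(weigh_moulded_part))
```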
Moulding defects
Injection moulding is a complex technology, and production problems can occur. They can be caused either by defects in the moulds or, more often, by the moulding process itself.
Methods such as industrial CT scanning can help with finding these defects externally as well as internally.
Tolerances
Tolerance depends on the dimensions of the part. An example of a standard tolerance for a 1-inch dimension of an LDPE part with 0.125 inch wall thickness is +/- 0.008 inch (0.2 mm).
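As a quick worked check of the quoted figure, the snippet below converts ±0.008 inch to millimetres and tests a measurement against the tolerance band; the helper function is hypothetical.

```python
# Quick check of the quoted figure: +/- 0.008 inch expressed in millimetres.
tolerance_in = 0.008
print(f"{tolerance_in * 25.4:.3f} mm")   # 0.203 mm, rounded to 0.2 mm in the text

def within_tolerance(measured_in, nominal_in=1.0, tol_in=0.008):
    """True if a measured dimension falls inside nominal +/- tolerance."""
    return abs(measured_in - nominal_in) <= tol_in

print(within_tolerance(1.005))  # True
print(within_tolerance(1.010))  # False
```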
Power requirements
The power required for this process of injection moulding depends on many things and varies between materials used. Manufacturing Processes Reference Guide states that the power requirements depend on "a material's specific gravity, melting point, thermal conductivity, part size, and molding rate." A table on page 243 of the same reference summarises the characteristics relevant to the power required for the most commonly used materials.
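Although the reference's table is not reproduced here, a back-of-envelope model suggests why those properties matter: the heating power scales with the mass of material processed per unit time and the temperature rise needed to melt it. The sketch below is illustrative only, with hypothetical material values, and is not taken from the reference.

```python
# Back-of-envelope heating-power model (hypothetical values, illustration only).
# Specific gravity and part size give the mass per shot; melting point and
# specific heat give the energy per shot; moulding rate converts that to power.

def heating_power_kw(part_volume_cm3, specific_gravity, specific_heat_j_per_gK,
                     melt_temp_c, ambient_c, parts_per_hour):
    mass_g = part_volume_cm3 * specific_gravity          # 1 cm^3 of water = 1 g
    energy_j = mass_g * specific_heat_j_per_gK * (melt_temp_c - ambient_c)
    return energy_j * parts_per_hour / 3600 / 1000       # J/h -> kW

# e.g. a 50 cm^3 part in a generic polymer, 120 shots per hour:
print(f"{heating_power_kw(50, 1.05, 1.9, 220, 25, 120):.2f} kW")   # ~0.65 kW
```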
Robotic moulding
Automation means that the smaller size of parts permits a mobile inspection system to examine multiple parts more quickly. In addition to mounting inspection systems on automatic devices, multiple-axis robots can remove parts from the mould and position them for further processes.
Specific instances include the removal of parts from the mould immediately after the parts are created, as well as applying machine vision systems. A robot grips the part after the ejector pins have been extended to free it from the mould, then moves it into either a holding location or directly onto an inspection system. The choice depends upon the type of product, as well as the general layout of the manufacturing equipment. Vision systems mounted on robots have greatly enhanced quality control for insert moulded parts. A mobile robot can more precisely determine the placement accuracy of the metal component, and inspect faster than a human can.
| Technology | Materials | null |
325880 | https://en.wikipedia.org/wiki/Elephant%20bird | Elephant bird | Elephant birds are extinct flightless birds belonging to the order Aepyornithiformes that were native to the island of Madagascar. They are thought to have gone extinct around AD 1000, likely as a result of human activity. Elephant birds comprised three species, one in the genus Mullerornis, and two in Aepyornis. Aepyornis maximus is possibly the largest bird to have ever lived, with their eggs being the largest known for any amniote. Elephant birds are palaeognaths (whose flightless representatives are often known as ratites), and their closest living relatives are kiwi (found only in New Zealand), suggesting that ratites did not diversify by vicariance during the breakup of Gondwana but instead convergently evolved flightlessness from ancestors that dispersed more recently by flying.
Discovery
Elephant birds have been extinct since at least the 17th century. Étienne de Flacourt, a French governor of Madagascar during the 1640s and 1650s, mentioned an ostrich-like bird, said to inhabit unpopulated regions, although it is unclear whether he was repeating folk tales from generations earlier. In 1659, Flacourt wrote of the "vouropatra – a large bird which haunts the Ampatres and lays eggs like the ostriches; so that the people of these places may not take it, it seeks the most lonely places." There has been speculation, especially popular in the latter half of the 19th century, that the legendary roc from the accounts of Marco Polo was ultimately based on elephant birds, but this is disputed.
Between 1830 and 1840, European travelers in Madagascar saw giant eggs and eggshells. British observers were more willing to believe the accounts of giant birds and eggs because they knew of the moa in New Zealand. In 1851 the genus Aepyornis and species A. maximus were scientifically described in a paper presented to the Paris Academy of Sciences by Isidore Geoffroy Saint-Hilaire, based on bones and eggs recently obtained from the island, which resulted in wide coverage in the popular presses of the time, particularly due to their very large eggs.
Two whole eggs have been found in dune deposits in southern Western Australia, one in the 1930s (the Scott River egg) and one in 1992 (the Cervantes egg); both have been identified as Aepyornis maximus rather than Genyornis newtoni, an extinct giant bird known from the Pleistocene of Australia. It is hypothesized that the eggs floated from Madagascar to Australia on the Antarctic Circumpolar Current. Evidence supporting this is the finding of two fresh penguin eggs that washed ashore on Western Australia but may have originated in the Kerguelen Islands, and an ostrich egg found floating in the Timor Sea in the early 1990s.
Taxonomy and biogeography
Like the ostrich, rhea, cassowary, emu, kiwi and extinct moa, elephant birds were ratites; they could not fly, and their breast bones had no keel. Because Madagascar and Africa separated before the ratite lineage arose, elephant birds have been thought to have dispersed and become flightless and gigantic in situ.
More recently, it has been deduced from DNA sequence comparisons that the closest living relatives of elephant birds are New Zealand kiwi, though the split between the two groups is deep, with the two lineages being estimated to have diverged from each other around 54 million years ago.
Diagram of the placement of elephant birds within Palaeognathae.
The ancestors of elephant birds are thought to have arrived in Madagascar well after Gondwana broke apart. The existence of possible flying palaeognaths in the Miocene, such as Proapteryx, further supports the view that ratites did not diversify in response to vicariance: Gondwana broke apart in the Cretaceous, and the palaeognath phylogenetic tree does not match the process of continental drift. Madagascar has a notoriously poor Cenozoic terrestrial fossil record, with essentially no fossils between the end of the Cretaceous (Maevarano Formation) and the Late Pleistocene. Complete mitochondrial genomes obtained from elephant bird eggshells suggest that Aepyornis and Mullerornis are significantly genetically divergent from each other, with a molecular clock analysis estimating the split at around 27 million years ago. Molecular dating estimates that the divergence between Aepyornithidae and Mullerornithidae occurred approximately 30 Ma, close to the Eocene-Oligocene boundary, a period of marked global cooling and faunal turnover in the Northern Hemisphere.
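As an aside on method, molecular-clock estimates of this kind divide the observed genetic distance between two lineages by an assumed substitution rate. The sketch below is a generic illustration with hypothetical numbers; it is not the study's actual data, rate, or calibration.

```python
# Generic molecular-clock illustration (hypothetical numbers, not the study's data).
def divergence_time_mya(substitutions_per_site: float, rate_per_site_per_myr: float) -> float:
    # Distance accumulates along both diverging branches, hence the factor of 2.
    return substitutions_per_site / (2 * rate_per_site_per_myr)

# E.g. 2.7% sequence divergence with an assumed rate of 0.05% per site per
# million years on each lineage gives an estimate of 27 million years:
print(divergence_time_mya(0.027, 0.0005))  # 27.0
```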
Species
Up to 10 or 11 species in the genus Aepyornis have been described, but the validity of many has been disputed, with numerous authors treating them all as just one species, A. maximus. Up to three species have been described in Mullerornis. Recent work has restricted the number of elephant bird species to three, with two in Aepyornis and one in Mullerornis.
Order Aepyornithiformes Newton 1884 [Aepyornithes Newton 1884]
Genus Aepyornis Geoffroy Saint-Hilaire 1850 (Synonym: Vorombe Hansford & Turvey 2018)
Aepyornis hildebrandti Burckhardt, 1893 (Possibly divided into two subspecies)
Aepyornis maximus Geoffroy Saint-Hilaire, 1851
Genus Mullerornis Milne-Edwards & Grandidier 1894
Mullerornis modestus (Milne-Edwards & Grandidier 1869) Hansford & Turvey 2018
All elephant birds are usually placed in the single family Aepyornithidae, but some authors suggest Aepyornis and Mullerornis should be placed in separate families within the Aepyornithiformes, with the latter placed into Mullerornithidae.
Description
Elephant birds were large birds (the largest reaching tall in normal standing posture) that had vestigial wings, long legs and necks, and small heads relative to body size, which bore straight, thick conical beaks that were not hooked. The tops of elephant bird skulls display punctuated marks, which may have been attachment sites for fleshy structures or head feathers. Mullerornis is the smallest of the elephant birds, with a body mass of around , and its skeleton is much less robustly built than that of Aepyornis. A. hildebrandti is thought to have had a body mass of around . Estimates of the body mass of Aepyornis maximus span from around to , making it one of the largest birds ever, alongside Dromornis stirtoni and Pachystruthio dmanisensis. Females of A. maximus are suggested to have been larger than the males, as is observed in other ratites.
Biology
Examination of brain endocasts has shown that both A. maximus and A. hildebrandti had greatly reduced optic lobes, similar to those of their closest living relatives, the kiwis, and consistent with a similar nocturnal lifestyle. The optic lobes of Mullerornis were also reduced, but to a lesser degree, suggestive of a nocturnal or crepuscular lifestyle. A. maximus had relatively larger olfactory bulbs than A. hildebrandti, suggesting that the former occupied forested habitats where the sense of smell is more useful while the latter occupied open habitats.
Diet
A 2022 isotope analysis study suggested that some specimens of Aepyornis hildebrandti were mixed feeders that had a large (~48%) grazing component to their diets, similar to that of the living Rhea americana, while the other species (A. maximus, Mullerornis modestus) were probably browsers. It has been suggested that Aepyornis straightened its legs and brought its torso into an erect position in order to browse higher vegetation. Some rainforest fruits with thick, highly sculptured endocarps, such as that of the currently undispersed and highly threatened forest coconut palm (Voanioala gerardii), may have been adapted for passage through ratite guts and consumed by elephant birds, and the fruit of some palm species are indeed dark bluish-purple (e.g., Ravenea louvelii and Satranala decussilvae), just like many cassowary-dispersed fruits, suggesting that they too may have been eaten by elephant birds.
Growth and reproduction
Elephant birds are suggested to have grown in periodic spurts rather than having continuous growth. An embryonic skeleton of Aepyornis is known from an intact egg, around 80–90% of the way through incubation before it died. This skeleton shows that even at this early ontogenetic stage the skeleton was robust, much more so than those of comparable hatchling ostriches or rheas, which may suggest that hatchlings were precocial.
The eggs of Aepyornis are the largest known for any amniote, and have a volume of around , a length of approximately and a width of . The largest Aepyornis eggs are on average thick, with an estimated weight of approximately . Eggs of Mullerornis were much smaller, estimated to be only thick, with a weight of about . The large size of elephant bird eggs means that they would have required substantial amounts of calcium, which is usually taken from a reservoir in the medullary bone in the femurs of female birds. Possible remnants of this tissue have been described from the femurs of A. maximus.
Extinction
It is widely believed that the extinction of elephant birds was a result of human activity. The birds were initially widespread, occurring from the northern to the southern tip of Madagascar. The late Holocene also witnessed the extinction of other Malagasy animals, including several species of Malagasy hippopotamus, two species of giant tortoise (Aldabrachelys abrupta and Aldabrachelys grandidieri), the giant fossa, over a dozen species of giant lemurs, the aardvark-like animal Plesiorycteropus, and the crocodile Voay. Several elephant bird bones with incisions, which some authors suggest are cut marks, have been dated to approximately 10,000 BC; these have been proposed as evidence of a long history of coexistence between elephant birds and humans. However, these conclusions conflict with more commonly accepted evidence of a much shorter history of human presence on the island and remain controversial. The oldest securely dated evidence for humans on Madagascar dates to the mid-first millennium AD.
A 2021 study suggested that elephant birds, along with the Malagasy hippopotamus species, became extinct in the interval 800–1050 AD (1150–900 years Before Present), based on the timing of the latest radiocarbon dates. The timing of the youngest radiocarbon dates coincided with major environmental alteration across Madagascar by humans changing forest into grassland, probably for cattle pastoralism, with the environmental change likely induced by the use of fire. This reduction of forested area may have had cascade effects, such as making elephant birds more likely to be encountered by hunters, though there is little evidence of human hunting of elephant birds. Humans may have utilized elephant bird eggs. Introduced diseases (hyperdisease) have been proposed as a cause of extinction, but the plausibility of this is weakened by the evidence of centuries of overlap between humans and elephant birds on Madagascar.
| Biology and health sciences | Palaeognathae | Animals |
326123 | https://en.wikipedia.org/wiki/Windows%207 | Windows 7 | Windows 7 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009. It is the successor to Windows Vista, released nearly three years earlier. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time. It was succeeded by Windows 8 in October 2012.
Extended support ended on January 14, 2020, over ten years after the release of Windows 7, after which the operating system ceased receiving further updates. A paid support program was available for enterprises, providing security updates for Windows 7 for up to three years after the official end of life.
Windows 7 was intended to be an incremental upgrade to Windows Vista, addressing the previous OS's poor reception while maintaining hardware and software compatibility as well as fixing some of Vista's inconsistencies (such as Vista's aggressive User Account Control). Windows 7 continued improvements on the Windows Aero user interface with the addition of a redesigned taskbar that allows pinned applications, and new window management features. Other new features were added to the operating system, including libraries, the new file-sharing system HomeGroup, and support for multitouch input. A new "Action Center" was also added to provide an overview of system security and maintenance information, and tweaks were made to the User Account Control system to make it less intrusive. Windows 7 also shipped with updated versions of several stock applications, including Internet Explorer 8, Windows Media Player, and Windows Media Center.
Unlike Windows Vista, Windows 7 received a warm reception among reviewers and consumers, with critics considering the operating system to be a major improvement over its predecessor because of its improved performance, its more intuitive interface, fewer User Account Control popups, and other improvements made across the platform. Windows 7 was a major success for Microsoft; even before its official release, pre-order sales for the operating system on the online retailer Amazon.com had surpassed previous records. In just six months, over 100 million copies had been sold worldwide, increasing to over 630 million licenses by July 2012. By January 2018, Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide. Windows 11 overtook Windows 7 as the second most popular Windows version on all continents in August 2022. Just 3% of traditional PCs running Windows are running Windows 7, although it remains relatively popular in parts of the world, such as China (where it is tied with Windows 11), and is second most popular in some countries.
It is the final version of Microsoft Windows that supports processors without SSE2 or NX (although an update released in 2018 dropped support for non-SSE2 processors).
Naming
Windows 7 is the successor to Windows Vista, and its version name is Windows NT 6.1, compared to Vista's NT 6.0; its naming caused some confusion when it was announced in 2008. Windows president Steven Sinofsky commented that Windows 95 was the fourth version of Windows, but Windows 7 counts up from Windows NT 4.0 as it is a descendant of NT.
Development history
Originally, a version of Windows codenamed "Blackcomb" was planned as the successor to Windows XP and Windows Server 2003 in 2000. Major features were planned for Blackcomb, including an emphasis on searching and querying data and an advanced storage system named WinFS to enable such scenarios. However, an interim, minor release, codenamed "Longhorn," was announced for 2003, delaying the development of Blackcomb. By the middle of 2003, however, Longhorn had acquired some of the features originally intended for Blackcomb. After three major malware outbreaks—the Blaster, Nachi, and Sobig worms—exploited flaws in Windows operating systems within a short time period in August 2003, Microsoft changed its development priorities, putting some of Longhorn's major development work on hold while developing new service packs for Windows XP and Windows Server 2003. Development of Longhorn (Windows Vista) was also restarted, and thus delayed, in August 2004. A number of features were cut from Longhorn. Blackcomb was renamed Vienna in early 2006, and was later canceled in 2007 due to the scope of the project.
When released, Windows Vista was criticized for its long development time, performance issues, spotty compatibility with existing hardware and software at launch, changes affecting the compatibility of certain PC games, and unclear assurances by Microsoft that certain computers shipping with XP before launch would be "Vista Capable" (which led to a class-action lawsuit), among other critiques. As such, the adoption of Vista in comparison to XP remained somewhat low. In July 2007, following the shelving of the Vienna project and six months following the public release of Vista, it was reported that the next version of Windows would then be codenamed Windows 7, with plans for a final release within three years. Bill Gates, in an interview with Newsweek, suggested that Windows 7 would be more "user-centric". Gates later said that Windows 7 would also focus on performance improvements. Steven Sinofsky later expanded on this point, explaining in the Engineering Windows 7 blog that the company was using a variety of new tracing tools to measure the performance of many areas of the operating system on an ongoing basis, to help locate inefficient code paths and to help prevent performance regressions. Senior Vice President Bill Veghte stated that Windows Vista users migrating to Windows 7 would not find the kind of device compatibility issues they encountered migrating from Windows XP. An estimated 1,000 developers worked on Windows 7. These were broadly divided into "core operating system" and "Windows client experience", in turn organized into 25 teams of around 40 developers on average.
In October 2008, it was announced that Windows 7 would also be the official name of the operating system. There had been some confusion over naming the product Windows 7, while versioning it as 6.1 to indicate its similar build to Windows Vista and increase compatibility with applications that only check major version numbers, similar to Windows 2000 and Windows XP both having 5.x version numbers. The first external release to select Microsoft partners came in January 2008 with Milestone 1, build 6519. Speaking about Windows 7 on October 16, 2008, Microsoft CEO Steve Ballmer confirmed compatibility between Windows Vista and Windows 7, indicating that Windows 7 would be a refined version of Windows Vista.
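The compatibility concern can be illustrated with a toy version check. The snippet below is hypothetical application code, not Microsoft's; it shows why keeping the major version at 6 avoided breaking software that tests only the major version number.

```python
# Hypothetical sketch of the version-check problem that the 6.1 numbering avoids.

def app_runs(major, minor):
    # Naive check found in older software: "requires Windows Vista (6.x)".
    return major == 6

print(app_runs(6, 0))   # Windows Vista  -> True
print(app_runs(6, 1))   # Windows 7      -> True, because the major stayed at 6
print(app_runs(7, 0))   # a hypothetical "7.0" would have broken this check

def app_runs_fixed(major, minor):
    # Robust check: compare (major, minor) tuples lexicographically.
    return (major, minor) >= (6, 0)
```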
At PDC 2008, Microsoft demonstrated Windows 7 with its reworked taskbar. On December 27, 2008, the Windows 7 Beta was leaked onto the Internet via BitTorrent. According to a performance test by ZDNet, Windows 7 Beta beat both Windows XP and Windows Vista in several key areas, including boot and shutdown time and working with files, such as loading documents. Other areas did not beat XP, including PC Pro benchmarks for typical office activities and video editing, which remain identical to Vista and slower than XP. On January 7, 2009, the x64 version of the Windows 7 Beta (build 7000) was leaked onto the web, with some torrents being infected with a trojan. At CES 2009, Microsoft CEO Steve Ballmer announced the Windows 7 Beta, build 7000, had been made available for download to MSDN and TechNet subscribers in the format of an ISO image. The stock wallpaper of the beta version contained a digital image of the Betta fish.
The release candidate, build 7100, became available for MSDN and TechNet subscribers and Connect Program participants on April 30, 2009. On May 5, 2009, it became available to the general public, although it had also been leaked onto the Internet via BitTorrent. The release candidate was available in five languages and expired on June 1, 2010, with shutdowns every two hours starting March 1, 2010. Microsoft stated that Windows 7 would be released to the general public on October 22, 2009, less than three years after the launch of its predecessor. Microsoft released Windows 7 to MSDN and TechNet subscribers on August 6, 2009. Microsoft announced that Windows 7, along with Windows Server 2008 R2, was released to manufacturing in the United States and Canada on July 22, 2009. Windows 7 build 7600.16385.090713-1255, which was compiled on July 13, 2009, was declared the final RTM build after passing all of Microsoft's tests internally.
Features
New and changed
Among Windows 7's new features are advances in touch and handwriting recognition, support for virtual hard disks, improved performance on multi-core processors, improved boot performance, DirectAccess, and kernel improvements. Windows 7 adds support for systems using multiple heterogeneous graphics cards from different vendors (heterogeneous multi-adapter) and includes a new version of Windows Media Center, a gadget for Windows Media Center, improved media features, the XPS Essentials Pack, Windows PowerShell, and a redesigned Calculator with multiline capabilities including Programmer and Statistics modes along with unit conversion for length, weight, temperature, and several others. Many new items have been added to the Control Panel, including ClearType Text Tuner, Display Color Calibration Wizard, Gadgets, Recovery, Troubleshooting, Workspaces Center, Location and Other Sensors, Credential Manager, Biometric Devices, System Icons, and Display. Windows Security Center has been renamed Action Center (Windows Health Center and Windows Solution Center in earlier builds), and encompasses both security and maintenance of the computer. ReadyBoost on 32-bit editions now supports up to 256 gigabytes of extra allocation. Windows 7 also supports images in RAW image format through the addition of Windows Imaging Component-enabled image decoders, which enables raw image thumbnails, previewing and metadata display in Windows Explorer, plus full-size viewing and slideshows in Windows Photo Viewer and Windows Media Center. Windows 7 also has a native TFTP client with the ability to transfer files to or from a TFTP server.
The taskbar has seen the biggest visual changes, where the old Quick Launch toolbar has been replaced with the ability to pin applications to the taskbar. Buttons for pinned applications are integrated with the task buttons. These buttons also enable Jump Lists to allow easy access to common tasks, and files frequently used with specific applications. The revamped taskbar also allows the reordering of taskbar buttons. To the far right of the system clock is a small rectangular button that serves as the Show desktop icon. By default, hovering over this button makes all visible windows transparent for a quick look at the desktop. In touch-enabled displays such as touch screens, tablet PCs, etc., this button is slightly (8 pixels) wider in order to accommodate being pressed by a finger. Clicking this button minimizes all windows, and clicking it a second time restores them.
Window management in Windows 7 has several new features: Aero Snap maximizes a window when it is dragged to the top, left, or right of the screen. Dragging windows to the left or right edges of the screen allows users to snap software windows to either side of the screen, such that the windows take up half the screen. When a user moves windows that were snapped or maximized using Snap, the system restores their previous state. Snap functions can also be triggered with keyboard shortcuts. Aero Shake hides all inactive windows when the active window's title bar is dragged back and forth rapidly.
Windows 7 includes 13 additional sound schemes, titled Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savanna, and Sonata. Internet Spades, Internet Backgammon and Internet Checkers, which were removed in Windows Vista, were restored in Windows 7. Users are able to disable or customize many more Windows components than was possible in Windows Vista. New additions to this list of components include Internet Explorer 8, Windows Media Player 12, Windows Media Center, Windows Search, and Windows Gadget Platform. A new version of Microsoft Virtual PC, newly renamed as Windows Virtual PC, was made available for Windows 7 Professional, Enterprise, and Ultimate editions. It allows multiple Windows environments, including Windows XP Mode, to run on the same machine. Windows XP Mode runs Windows XP in a virtual machine, and displays applications within separate windows on the Windows 7 desktop. Furthermore, Windows 7 supports the mounting of a virtual hard disk (VHD) as normal data storage, and the bootloader delivered with Windows 7 can boot the Windows system from a VHD; however, this ability is only available in the Enterprise and Ultimate editions. The Remote Desktop Protocol (RDP) of Windows 7 is also enhanced to support real-time multimedia applications including video playback and 3D games, thus allowing use of DirectX 10 in remote desktop environments. The three-application limit, previously present in the Windows Vista and Windows XP Starter Editions, has been removed from Windows 7. All editions include some new and improved features, such as Windows Search, Security features, and some features new to Windows 7, that originated within Vista. Optional BitLocker Drive Encryption is included with Windows 7 Ultimate and Enterprise. Windows Defender is included; Microsoft Security Essentials antivirus software is a free download. All editions include Shadow Copy, which—every day or so—System Restore uses to take an automatic "previous version" snapshot of user files that have changed. Backup and restore have also been improved, and the Windows Recovery Environment—installed by default—replaces the optional Recovery Console of Windows XP.
A new system known as "Libraries" was added for file management; users can aggregate files from multiple folders into a "Library." By default, libraries for categories such as Documents, Pictures, Music, and Video are created, consisting of the user's personal folder and the Public folder for each. The system is also used as part of a new home networking system known as HomeGroup; devices are added to the network with a password, and files and folders can be shared with all other devices in the HomeGroup, or with specific users. The default libraries, along with printers, are shared by default, but the personal folder is set to read-only access by other users, and the Public folder can be accessed by anyone.
Windows 7 includes improved globalization support through a new Extended Linguistic Services API to provide multilingual support (particularly in Ultimate and Enterprise editions). Microsoft also implemented better support for solid-state drives, including the new TRIM command, and Windows 7 is able to identify a solid-state drive uniquely. Native support for USB 3.0 is not included because of delays in the finalization of the standard. At WinHEC 2008 Microsoft announced that color depths of 30-bit and 48-bit would be supported in Windows 7 along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB.
For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to simplify development of installation packages and shorten application install times. Windows 7, by default, generates fewer User Account Control (UAC) prompts because it allows digitally signed Windows components to gain elevated privileges without a prompt. Additionally, users can now adjust the level at which UAC operates using a sliding scale.
Removed
Certain capabilities and programs that were a part of Windows Vista are no longer present or have been changed, resulting in the removal of certain functionalities; these include the classic Start Menu user interface, some taskbar features, Windows Explorer features, Windows Media Player features, Windows Ultimate Extras, Search button, and InkBall. Four applications bundled with Windows Vista—Windows Photo Gallery, Windows Movie Maker, Windows Calendar and Windows Mail—are not included with Windows 7 and were replaced by Windows Live-branded versions as part of the Windows Live Essentials suite.
Editions
Windows 7 is available in six different editions, of which the Home Premium, Professional, and Ultimate were available at retail in most countries, and as pre-loaded software on most new computers. Home Premium and Professional were aimed at home users and small businesses respectively, while Ultimate was aimed at enthusiasts. Each edition of Windows 7 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards their market segments; for example, Professional adds additional networking and security features such as Encrypting File System and the ability to join a domain. Ultimate contained a superset of the features from Home Premium and Professional, along with other advanced features oriented towards power users, such as BitLocker drive encryption; unlike Windows Vista, there were no "Ultimate Extras" add-ons created for Windows 7 Ultimate. Retail copies were available in "upgrade" and higher-cost "full" version licenses; "upgrade" licenses require an existing version of Windows to install, while "full" licenses can be installed on computers with no existing operating system.
The remaining three editions were not available at retail, of which two were available exclusively through OEM channels as pre-loaded software. The Starter edition is a stripped-down version of Windows 7 meant for low-cost devices such as netbooks. In comparison to Home Premium, Starter has reduced multimedia functionality, does not allow users to change their desktop wallpaper or theme, disables the "Aero Glass" theme, does not have support for multiple monitors, and can only address 2GB of RAM. Home Basic was sold only in emerging markets, and was positioned in between Home Premium and Starter. The highest edition, Enterprise, is functionally similar to Ultimate, but is only sold through volume licensing via Microsoft's Software Assurance program.
All editions aside from Starter support both IA-32 and x86-64 architectures; Starter only supports 32-bit systems. Retail copies of Windows 7 are distributed on two DVDs: one for the IA-32 version and the other for x86-64. OEM copies include one DVD, depending on the processor architecture licensed. The installation media for consumer versions of Windows 7 are identical; the product key and corresponding license determine the edition that is installed. The Windows Anytime Upgrade service can be used to purchase an upgrade that unlocks the functionality of a higher edition, such as going from Starter to Home Premium, or from Home Premium to Ultimate. Most copies of Windows 7 only contained one license; in certain markets, a "Family Pack" version of Windows 7 Home Premium was also released for a limited time, which allowed upgrades on up to three computers. In certain regions, copies of Windows 7 were only sold in, and could only be activated in, a designated region.
Support lifecycle
Support for the original release of Windows 7 (without a service pack) ended on April 9, 2013, requiring users to update to Windows 7 Service Pack 1 in order to continue receiving updates and support. Microsoft ended the sale of new retail copies of Windows 7 in October 2014, and the sale of new OEM licenses for Windows 7 Home Basic, Home Premium, and Ultimate ended on October 31, 2014. OEM sales of PCs with Windows 7 Professional pre-installed ended on October 31, 2016. The sale of non-Professional OEM licenses was stopped on October 31, 2014.
Mainstream support for Windows 7 ended on January 13, 2015. Extended support for Windows 7 ended on January 14, 2020.
Variants of Windows 7 for embedded systems and thin clients have different support policies: Windows Embedded Standard 7 support ended in October 2020. Windows Thin PC and Windows Embedded POSReady 7 had support until October 2021.
In March 2019, Microsoft announced that it would display notifications informing users of the upcoming end of support and directing them to a website urging them to purchase a Windows 10 upgrade or a new computer.
In August 2019, researchers reported that "all modern versions of Microsoft Windows" may be at risk for "critical" system compromise because of design flaws in hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol and allows for the possibility of remote code execution, may be joined by related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is available.
In September 2019, Microsoft announced that it would provide free security updates for Windows 7 on federally-certified voting machines through the 2020 United States elections.
Extended Security Updates
On September 7, 2018, Microsoft announced a paid "Extended Security Updates" (ESU) service that would offer additional updates for Windows 7 Professional and Enterprise for up to three years after the end of extended support, available via specific volume licensing programs in yearly installments.
Windows 7 Professional for Embedded Systems, Windows Embedded Standard 7, and Windows Embedded POSReady 7 also received Extended Security Updates for up to three years after their end of extended support dates, via OEMs. The Extended Security Updates program for Windows Embedded POSReady 7 ended on October 8, 2024, marking the final end of IA-32 updates on the Windows NT 6.1 product line after more than 15 years.
In August 2019, Microsoft announced it would offer a year of 'free' extended security updates to some business users.
Third-party support
In January 2023, version 109 of the Chromium-based Microsoft Edge became the last version of Edge to support Windows 7, Windows 8/8.1, Windows Server 2012, and Windows Server 2012 R2. Alongside this, several other web browsers based on the Chromium codebase also dropped support for these operating systems after version 109, including Google Chrome and Opera. A fork of Chromium named Supermium is maintained for versions of Windows older than Windows 10, including Windows 7.
Mozilla maintains Firefox 115 Extended Support Release (ESR) to support Windows 7, 8 and 8.1. Mozilla has committed to support it until at least March 2025.
Steam ended support for Windows 7, 8, and 8.1 on January 1, 2024.
Upgradability
Several Windows 7 components can be upgraded to later versions, including versions introduced in later releases of Windows, and later versions of other major Microsoft applications remain available for it. These include:
DirectX 11
Internet Explorer 11
Microsoft Edge (Chromium, version 109)
Windows Virtual PC
.NET Framework 4.8
Visual Studio 2019
Office 2016 was the last version of Microsoft Office to be compatible with Windows 7.
System requirements
Additional requirements to use certain features:
Windows XP Mode (Professional, Ultimate and Enterprise): Requires an additional 1 GB of RAM and additional 15 GB of available hard disk space. As of March 18, 2010, the requirement for a processor capable of hardware virtualization has been lifted.
Windows Media Center (included in Home Premium, Professional, Ultimate and Enterprise), requires a TV tuner to receive and record TV.
Physical memory
The maximum amount of RAM that Windows 7 supports varies depending on the product edition and on the processor architecture, as shown in the following table.
Processor limits
Windows 7 Professional and up support up to two physical processors (CPU sockets), whereas Windows 7 Starter, Home Basic, and Home Premium editions support only one. Physical processors with multiple cores, hyper-threading, or both implement more than one logical processor per physical processor. The x86 editions of Windows 7 support up to 32 logical processors; x64 editions support up to 256 (4 × 64).
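The logical-processor arithmetic works out as follows; the sketch below is a simple illustration of the counting rule, not Windows API code.

```python
# Illustration of the counting rule:
# logical processors = sockets x cores per socket x hardware threads per core.

def logical_processors(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    return sockets * cores_per_socket * threads_per_core

# Two hyper-threaded 8-core CPUs reach the 32-logical-processor x86 ceiling:
print(logical_processors(sockets=2, cores_per_socket=8, threads_per_core=2))  # 32

# The x64 ceiling of 256 corresponds to the 4 x 64 noted above:
print(4 * 64)  # 256
```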
Extent of hardware support
In January 2016, Microsoft announced that it would no longer support Windows platforms older than Windows 10 on any future Intel-compatible processor lines, citing difficulties in reliably allowing the operating system to operate on newer hardware. Microsoft stated that effective July 17, 2017, devices with Intel Skylake CPUs were only to receive the "most critical" updates for Windows 7 and 8.1, and only if they had been judged not to affect the reliability of Windows 7 on older hardware. For enterprise customers, Microsoft issued a list of Skylake-based devices "certified" for Windows 7 and 8.1 in addition to Windows 10, to assist them in migrating to newer hardware that could eventually be upgraded to 10 once they were ready to transition. Microsoft and its hardware partners provided special testing and support for these devices on 7 and 8.1 until the July 2017 date.
On March 18, 2016, in response to criticism from enterprise customers, Microsoft delayed the end of support and non-critical updates for Skylake systems to July 17, 2018, but stated that they would also continue to receive security updates through the end of extended support. In August 2016, citing a "strong partnership with our OEM partners and Intel", Microsoft retracted the decision and stated that it would continue to support Windows 7 and 8.1 on Skylake hardware through the end of their extended support lifecycle. However, the restrictions on newer CPU microarchitectures remain in force.
In March 2017, a Microsoft knowledge base article announced that devices using Intel Kaby Lake, AMD Bristol Ridge, or AMD Ryzen processors would be blocked from using Windows Update entirely. In addition, official Windows 7 device drivers are not available for the Kaby Lake and Ryzen platforms.
Security updates released since March 2018 contained bugs that affect processors that do not support SSE2 extensions, including all Pentium III, Athlon XP, and prior processors. Microsoft initially stated that it would attempt to resolve this issue, and prevented installation of the affected patches on these systems. However, Microsoft retroactively modified its support documents on June 15, 2018 to remove the promise that this bug would be resolved, replacing it with a statement suggesting that users obtain a newer processor. This effectively ends further patch support for Windows 7 on these older systems.
Updates
Service Pack 1
Windows 7 Service Pack 1 (SP1) was announced on March 18, 2010. A beta was released on July 12, 2010. The final version was released to the public on February 22, 2011. At the time of release, it was not made mandatory. It was available via Windows Update, direct download, or by ordering the Windows 7 SP1 DVD. The service pack is on a much smaller scale than those released for previous versions of Windows, particularly Windows Vista.
Windows 7 Service Pack 1 adds support for Advanced Vector Extensions (AVX), a 256-bit instruction set extension for processors, and improves IKEv2 by adding additional identification fields such as E-mail ID to it. In addition, it adds support for Advanced Format 512e as well as additional Identity Federation Services. Windows 7 Service Pack 1 also resolves a bug related to HDMI audio and another related to printing XPS documents.
In Europe, the automatic nature of the BrowserChoice.eu feature was dropped in Windows 7 Service Pack 1 in February 2011 and remained absent for 14 months despite Microsoft reporting that it was still present, subsequently described by Microsoft as a "technical error." As a result, in March 2013, the European Commission fined Microsoft €561 million to deter companies from reneging on settlement promises.
Platform Update
The Platform Update for Windows 7 SP1 and Windows Server 2008 R2 SP1 was released on February 26, 2013 after a pre-release version had been released on November 5, 2012. It is also included with Internet Explorer 10 for Windows 7.
It includes enhancements to Direct2D, DirectWrite, Direct3D, Windows Imaging Component (WIC), Windows Advanced Rasterization Platform (WARP), Windows Animation Manager (WAM), the XPS Document API, the H.264 Video Decoder and the JPEG XR decoder. However, support for Direct3D 11.1 is limited, as the update does not include DXGI/WDDM 1.2 from Windows 8, leaving unavailable many related APIs and significant features such as the stereoscopic frame buffer, feature level 11_1 and optional features for levels 10_0, 10_1 and 11_0.
Disk Cleanup update
In October 2013, a Disk Cleanup Wizard addon was released that lets users delete outdated Windows updates on Windows 7 SP1, thus reducing the size of the WinSxS directory. This update backports some features found in Windows 8.
Windows Management Framework 5.0
Windows Management Framework 5.0 includes updates to Windows PowerShell 5.0, Windows PowerShell Desired State Configuration (DSC), Windows Remote Management (WinRM), Windows Management Instrumentation (WMI). It was released on February 24, 2016 and was eventually superseded by Windows Management Framework 5.1.
Convenience rollup
In May 2016, Microsoft released a "Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1," which contains all patches released between the release of SP1 and April 2016. The rollup is not available via Windows Update, and must be downloaded manually. This package can also be integrated into a Windows 7 installation image.
Since October 2016, all security and reliability updates are cumulative. Downloading and installing updates that address individual problems is no longer possible, but the number of updates that must be downloaded to fully update the OS is significantly reduced.
Monthly update rollups (July 2016 – January 2020)
In June 2018, Microsoft announced that Windows 7 would be moved to a monthly update model beginning with updates released in September 2018, two years after Microsoft switched the rest of their supported operating systems to that model. With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows 7 reached its end of life—one package containing security and quality updates, and a smaller package that contained only the security updates. Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020).
The last non-extended security update rollup packages were released on January 14, 2020, the last day that Windows 7 had extended support.
End of support (after January 14, 2020)
On January 14, 2020, Windows 7 support ended with Microsoft no longer providing security updates or fixes after that date, except for subscribers of the Windows 7 Extended Security Updates (ESU), who were able to receive Windows 7 security updates through January 10, 2023. However, there have been two updates that have been issued to non-ESU subscribers:
In February 2020, Microsoft released an update via Windows Update to fix a black wallpaper issue caused by the January 2020 update for Windows 7.
In June 2020, Microsoft released an update via Windows Update to roll out the new Chromium-based Microsoft Edge to Windows 7 and 8.1 machines that are not connected to Active Directory. Users, e.g. those on Active Directory, can download Edge from Microsoft's website.
In a support document, Microsoft has stated that a full-screen upgrade warning notification would be displayed on Windows 7 PCs on all editions except the Enterprise edition after January 15, 2020. The notification does not appear on machines connected to Active Directory, machines in kiosk mode, or machines subscribed for Extended Security Updates.
ESU rollups
As part of the September 2022 Extended Security Updates (ESU) rollup, Microsoft quietly added Secure Boot support, along with partial UEFI support.
Reception
Critical reception
Windows 7 received critical acclaim, with critics noting the increased usability and functionality when compared with its predecessor, Windows Vista. CNET gave Windows 7 Home Premium a rating of 4.5 out of 5 stars, stating that it "is more than what Vista should have been, [and] it's where Microsoft needed to go". PC Magazine rated it a 4 out of 5 saying that Windows 7 is a "big improvement" over Windows Vista, with fewer compatibility problems, a retooled taskbar, simpler home networking and faster start-up. Maximum PC gave Windows 7 a rating of 9 out of 10 and called Windows 7 a "massive leap forward" in usability and security, and praised the new Taskbar as "worth the price of admission alone." PC World called Windows 7 a "worthy successor" to Windows XP and said that speed benchmarks showed Windows 7 to be slightly faster than Windows Vista. PC World also named Windows 7 one of the best products of the year.
In its review of Windows 7, Engadget said that Microsoft had taken a "strong step forward" with Windows 7 and reported that speed is one of Windows 7's major selling points—particularly for the netbook sets. Laptop Magazine gave Windows 7 a rating of 4 out of 5 stars and said that Windows 7 makes computing more intuitive, offered better overall performance including a "modest to dramatic" increase in battery life on laptop computers. TechRadar gave Windows 7 a rating of 5 out of 5 stars, concluding that "it combines the security and architectural improvements of Windows Vista with better performance than XP can deliver on today's hardware. No version of Windows is ever perfect, but Windows 7 really is the best release of Windows yet." USA Today and The Telegraph also gave Windows 7 favorable reviews.
Nick Wingfield of The Wall Street Journal wrote, "Visually arresting," and "A pleasure." Mary Branscombe of Financial Times wrote, "A clear leap forward." Jesus Diaz of Gizmodo wrote, "Windows 7 Kills Snow Leopard." Don Reisinger of CNET wrote, "Delightful." David Pogue of The New York Times wrote, "Faster." J. Peter Bruzzese and Richi Jennings of Computerworld wrote, "Ready."
Some Windows Vista Ultimate users have expressed concerns over Windows 7 pricing and upgrade options. Windows Vista Ultimate users wanting to upgrade from Windows Vista to Windows 7 had to either pay $219.99 to upgrade to Windows 7 Ultimate or perform a clean install, which requires them to reinstall all of their programs.
The changes to User Account Control on Windows 7 were criticized for being potentially insecure, as an exploit was discovered allowing untrusted software to be launched with elevated privileges by exploiting a trusted component. Peter Bright of Ars Technica argued that "the way that the Windows 7 UAC 'improvements' have been made completely exempts Microsoft's developers from having to do that work themselves. With Windows 7, it's one rule for Redmond, another one for everyone else." Microsoft's Windows kernel engineer Mark Russinovich acknowledged the problem, but noted that malware can also compromise a system when users agree to a prompt.
Sales
In July 2009, in only eight hours, pre-orders of Windows 7 at amazon.co.uk surpassed the demand which Windows Vista had in its first 17 weeks. It became the highest-grossing pre-order in Amazon's history, surpassing sales of the previous record holder, the seventh Harry Potter book. After 36 hours, 64-bit versions of Windows 7 Professional and Ultimate editions sold out in Japan. Two weeks after its release its market share had surpassed that of Snow Leopard, released two months previously as the most recent update to Apple's Mac OS X operating system. According to Net Applications, Windows 7 reached a 4% market share in less than three weeks; in comparison, it took Windows Vista seven months to reach the same mark. As of February 2014, Windows 7 had a market share of 47.49% according to Net Applications; in comparison, Windows XP had a market share of 29.23%.
On March 4, 2010, Microsoft announced that it had sold more than 90 million licenses.
By April 23, 2010, more than 100 million copies had been sold in six months, making it Microsoft's fastest-selling operating system. As of June 23, 2010, Windows 7 had sold 150 million copies, making it the fastest-selling operating system in history, with seven copies sold every second. Based on worldwide data taken during June 2010 from Windows Update, 46% of Windows 7 PCs ran the 64-bit edition of Windows 7. According to Stephen Baker of the NPD Group, during April 2010 in the United States, 77% of PCs sold at retail were pre-installed with the 64-bit edition of Windows 7. As of July 22, 2010, Windows 7 had sold 175 million copies. On October 21, 2010, Microsoft announced that more than 240 million copies of Windows 7 had been sold. Three months later, on January 27, 2011, Microsoft announced total sales of 300 million copies of Windows 7. On July 12, 2011, the sales figure was refined to over 400 million end-user licenses and business installations. As of July 9, 2012, over 630 million licenses had been sold; this number includes licenses sold to OEMs for new PCs.
Antitrust concerns
As with other Microsoft operating systems, Windows 7 was studied by United States federal regulators who oversee the company's operations following the 2001 United States v. Microsoft Corp. settlement. According to status reports filed, the three-member panel began assessing prototypes of the new operating system in February 2008. Michael Gartenberg, an analyst at Jupiter Research, said, "[Microsoft's] challenge for Windows 7 will be how can they continue to add features that consumers will want that also don't run afoul of regulators."
In order to comply with European antitrust regulations, Microsoft proposed the use of a "ballot" screen containing download links to competing web browsers, thus removing the need for a version of Windows completely without Internet Explorer, as previously planned. Microsoft announced that it would discard the separate version for Europe and ship the standard upgrade and full packages worldwide, in response to criticism involving Windows 7 E and concerns from manufacturers about possible consumer confusion if a version of Windows 7 with Internet Explorer were shipped later, after one without Internet Explorer.
As with the previous version of Windows, an N version, which does not come with Windows Media Player, has been released in Europe, but only for sale directly from Microsoft sales websites and selected others.
| Technology | Operating Systems | null |
326177 | https://en.wikipedia.org/wiki/African%20forest%20elephant | African forest elephant | The African forest elephant (Loxodonta cyclotis) is one of the two living species of African elephant, along with the African bush elephant. It is native to humid tropical forests in West Africa and the Congo Basin. It is the smallest of the three living elephant species, reaching a shoulder height of . As with other African elephants, both sexes have straight, down-pointing tusks, which begin to grow once the animals reach 1–3 years old. The forest elephant lives in highly sociable family groups of up to 20 individuals. Since they forage primarily on leaves, seeds, fruit, and tree bark, they have often been referred to as the 'megagardener of the forest'; the species is one of many that contributes significantly to maintaining the composition, diversity and structure of the Guinean Forests of West Africa and the Congolese rainforests. Seeds of various plants will go through the elephant's digestive tract and eventually pass through in the animal's droppings (likely in a new location where they will sprout), thus helping to maintain the spread and biodiversity of the forests.
The first scientific description of the species was published in 1900. During the 20th century, overhunting caused a sharp decline in population, and by 2013 it was estimated that fewer than 30,000 individuals remained. It is threatened by habitat loss, fragmentation, and poaching. The conservation status of populations varies across range countries. Since 2021, the species has been listed as Critically Endangered on the IUCN Red List.
Taxonomy
Elephas (Loxodonta) cyclotis was the scientific name proposed by Paul Matschie in 1900, who described the skulls of a female and a male specimen collected by the Sanaga River in southern Cameroon.
Phylogeny and evolution
The African forest elephant was long considered to be a subspecies of the African elephant, together with the African bush elephant. Morphological and DNA analysis showed that they are two distinct species.
The taxonomic status of the African pygmy elephant (Loxodonta pumilio) was uncertain for a long time. Phylogenetic analysis of the mitochondrial genome of nine specimens from museum collections indicates that it is an African forest elephant whose diminutive size or early maturity is due to environmental conditions.
Phylogenetic analysis of nuclear DNA of African bush and forest elephants, Asian elephants, woolly mammoths, and American mastodons revealed that the African forest elephant and African bush elephant are distinct species that genetically diverged at least 1.9 million years ago. Despite evidence of hybridization where their ranges overlap, there appears to have been little gene flow between the two species since their initial divergence.
DNA from the extinct European straight-tusked elephant (Palaeoloxodon antiquus) indicates that members of the extinct elephant genus Palaeoloxodon interbred with African forest elephants: over a third of the straight-tusked elephant's nuclear genome, as well as its mitochondrial genome, derives from African forest elephants, and this genomic contribution is more closely related to modern West African populations of the forest elephant than to other populations. Palaeoloxodon carried multiple separate mitochondrial lineages derived from forest elephants.
Diagram of the relationships of elephant mitochondrial genomes, after Lin et al. 2023.
Description
The African forest elephant is considerably smaller than the African bush elephant, though the size of the species has been subject to contradictory estimates. A 2000 study suggested that bulls of the species reached a shoulder height of , and weighed , while females were about tall at the shoulder and . However, a 2003 study of forest elephants at a reserve in Gabon did not find any elephants taller than . A 2015 study alternately suggested that fully grown African forest elephant males in optimal condition were only on average tall and in weight, with the largest individuals (representing less than 1 in 100,000 as a proportion of the total population) no bigger than tall and in weight.
The African forest elephant has grey skin, which looks yellow to reddish after wallowing. It is sparsely covered with black coarse hair, which is long around the tip of the tail. The length of the tail varies between individuals, from half the height of the rump to almost touching the ground. It has five toenails on the fore foot and four on the hind foot. Its back is nearly straight. Its oval-shaped ears have small elliptical tips, and the tip of the trunk has two finger-like processes.
The African forest elephant's tusks are straight and point downwards, and are present in both males and females. The African forest elephant has pink tusks, which are thinner and harder than the tusks of the African bush elephant. The length and diameter vary between individuals. Tusks of bulls grow throughout life, while tusks of cows cease growing once they reach sexual maturity. The tusks are used to push through the dense undergrowth of their habitat. The largest tusk recorded for the species is long and in weight. A larger tusk measuring long and weighing has been recorded, but this may belong to a forest-bush elephant hybrid. The average tusk size is uncertain because measurements have historically been lumped in with those of African bush elephants, but based on the sizes of the largest known tusks it may be in the region of and .
Distribution and habitat
Populations of the African forest elephant in Central Africa range in large contiguous rainforest tracts from Cameroon to the Democratic Republic of the Congo, with the largest stable population in Gabon, where suitable habitat covers 90% of the country.
Nonetheless, it was estimated that the population of African forest elephants in Central Africa had declined by around 86% (in the 31 years preceding 2021) due to poaching and loss of habitat. In places such as Cameroon, Democratic Republic of the Congo, Republic of the Congo and the Central African Republic, many areas of appropriate forest habitat have been reduced after years of warfare and human conflict. During the first wildlife survey in 30 years (in 2021) by the Wildlife Conservation Society and the National Parks of Gabon, it was reported that an estimated 95,000 forest elephants lived in Gabon. Prior to this the population had been estimated to be as low as 50,000 to 60,000 individuals.
They are also distributed in the evergreen moist deciduous Upper Guinean forests in Ivory Coast and Ghana, in West Africa.
There is a small population of perhaps 10–25 elephants living on the escarpment to the east of Luanda in the Kambondo forest (Hines, pers. comm. 2015), and sightings of these elephants are marked as point records; they are the southernmost forest elephants in Africa.
Behaviour and ecology
The African forest elephant lives in family groups. Groups observed in the rain forest of Gabon's Lopé National Park between 1984 and 1991 comprised between three and eight individuals. Groups of up to 20 individuals were observed in the Dzanga-Sangha Complex of Protected Areas, comprising adult cows, their daughters, and subadult sons. Family members look after calves together, a behaviour called allomothering. Once young bulls reach sexual maturity, they separate from the family group and form loose bachelor groups for a few days, but usually stay alone. Adult bulls associate with family groups only during the mating season. Family groups travel about per day and move in a home range of up to .
Their seasonal movement is related to the availability of ripe fruits in primary rainforests.
They use a complex network of permanent trails that pass through stands of fruit trees and connect forest clearings with mineral licks. These trails are reused by humans and other animals.
In Odzala-Kokoua National Park, groups were observed to frequently meet at forest clearings indicating a fission–fusion society. They stayed longer when other groups were also present. Smaller groups joined large groups, and bulls joined family units.
Diet
The African forest elephant is an herbivore. Elephants observed in Lopé National Park fed mostly on tree bark and leaves, and on at least 72 different fruits.
To supplement their diet with minerals, they congregate at mineral-rich waterholes and mineral licks.
Elephant dung piles collected in Kahuzi-Biéga National Park contained seeds and fruit remains of Omphalocarpum mortehanii, junglesop (Anonidium mannii), Antrocaryon nannanii, Klainedoxa gabonensis, Treculia africana, Tetrapleura tetraptera, Uapaca guineensis, Autranella congolensis, Gambeya africana and G. lacourtiana, Mammea africana, Cissus dinklagei, and Grewia midlbrandii. Dung piles collected in a lowland rain forest in the northern Republic of Congo contained seeds of at least 96 plant species, with a minimum of 30 intact seeds and up to 1102 large seeds of more than in a single pile. Based on the analysis of 855 dung piles, it has been estimated that African forest elephants disperse a daily mean of 346 large seeds per of at least 73 tree species; they transport about a third of the large seeds for more than .
Seeds that have passed through an elephant's gut germinate faster. The African forest elephant is one of the most effective seed dispersers in the tropics and has been referred to as the "megagardener of the forest" due to its significant role in maintaining plant diversity. In the Cuvette Centrale, 14 of 18 megafaunal tree species depend on seed dissemination by African forest elephants, including wild mango (Irvingia gabonensis), Parinari excelsa, and Tridesmostemon omphalocarpoides. These 14 species are not able to survive without elephants.
African forest elephants provide ecological services that maintain the composition and structure of Central African forests.
Communication
Vocalization in L. cyclotis has been studied for its acoustic structure and its significance in the species' social dynamics. Vocalization patterns can be classified into three main types: single rumble, single broadband, and combinatorial. Rumbles are tonal, low-frequency calls, while broadband calls lack clear harmonic structures, resembling barks and roars. Combinatorial calls combine distinct acoustic elements into multi-element calls, in which individually meaningless elements generate context-specific meaning. L. cyclotis also exhibits a more balanced distribution of combinatorial call types compared to other elephant species. Despite having a simpler social structure, L. cyclotis can display a repertoire of rumble-roar call combinations comparable to that of L. africana. Communication patterns vary across age and sex, with adult males typically producing more combinatorial calls than adult females. Additionally, certain events may provoke behavioral change, as evidenced by lowered levels of vocalization in response to gunfire sounds associated with poaching.
For these mammals, hearing and smell are the most important senses they possess because they do not have good eyesight. They can recognize and hear vibrations through the ground and can detect food sources with their sense of smell. Elephants are also an arrhythmic species, meaning they have the ability to see just as well in dim light as they can in the daylight. They are capable of doing so because the retina in their eyes adjusts nearly as quickly as light does.
The elephant's feet are sensitive and can detect vibrations through the ground, whether thunder or elephant calls, from up to 10 miles away.
Reproduction
Females reach sexual maturity between the ages of 8 and 12 years, depending on the population density and the nutrition available. On average, they begin breeding at the age of 23 and give birth every 5–6 years. As a result, the birth rate is lower than that of the bush species, which starts breeding at age 12 and has a calf every 3–4 years.
Baby elephants weigh around at birth. Almost immediately, they can stand up and move around, allowing the mother to roam and forage, which is also essential to reduce predation. The baby suckles using its mouth while its trunk is held over its head. The tusks do not come in until around 16 months, and calves are not weaned until they are roughly 4 or 5 years old. By this time, their tusks are around long and begin to get in the way of suckling.
Forest elephants have a lifespan of about 60 to 70 years and mature slowly, reaching puberty in their early teens. Bulls generally reach puberty a year or two later than females. Between the ages of 15 and 25, bulls experience "musth", a hormonal state marked by increased aggression. During this time the male secretes fluid from the temporal gland between his ear and eye. Younger bulls experience musth for shorter periods than older bulls. When in musth, bulls walk more erect with their head high and tusks inward; they may rub their heads on trees or bushes to spread the musth scent, and they may flap their ears, accompanied by a musth rumble, so that their smell is blown toward other elephants. Another behavior associated with musth is urination: bulls allow their urine to dribble out slowly and spray the insides of their hind legs. All of these behaviors advertise to receptive females and competing bulls that they are in musth. Bulls only return to the herd to breed or to socialize; they do not provide paternal care to their offspring but rather play a fatherly role to younger bulls to show dominance.
Females are polyestrous, meaning they are capable of conceiving multiple times a year, which is a reason why they do not appear to have a breeding season. However, there does appear to be a peak in conceptions during the two rainy seasons of the year. Generally, the female conceives after two or three matings. Though the female has plenty of room in her uterus for twins, twins are rarely conceived. Gestation lasts 22 months. Based on the maturity, fertility, and gestation rates, African forest elephants have the capacity to increase their population by 5% annually under ideal conditions.
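The 5% annual ceiling quoted above implies a population doubling time of roughly 14 years. A small worked example of that compound-growth arithmetic; the 30,000 starting figure echoes the 2013 population estimate cited earlier in this article, and the time horizons are illustrative choices only:

```python
import math

# Compound-growth arithmetic for the stated 5% annual ceiling.
rate = 0.05
doubling_time = math.log(2) / math.log(1 + rate)
print(f"doubling time: {doubling_time:.1f} years")  # ~14.2 years

# Projected population after n years of ideal conditions.
population = 30_000  # order of magnitude of the 2013 estimate cited above
for years in (10, 20, 30):
    print(years, round(population * (1 + rate) ** years))
```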
Traditional hunting
African forest elephants are hunted by various hunter-gatherer groups in the Congo Basin, including the Mbuti pygmies, among others. It is unknown how long active elephant hunting has been practised in the region; it may only have begun in response to the demand for ivory in the 19th century or earlier.
Elephants are traditionally hunted using spears, typically to stab the lower abdomen (as is done among the Mbuti) or knees, both of which are effective at rendering the animal immobile. Anthropologist Mitsuo Ichikawa observed the hunting of elephants by Mbuti pygmies during fieldwork in the 1970s and 1980s, when the Mbuti used spears tipped with metal points (earlier reports suggest that prior to this they used purely wooden spears, which may have been less effective at breaking the elephant's hide). As observed by Ichikawa, elephant hunting by the Mbuti involved both small and large groups of hunters, each led by at least one experienced hunter called a mtuma. Before the hunt began, ritual acts of singing and dancing were performed by the community to support the success of the hunt. The hunters often went into the forest without food, living off wild honey and vegetables and smearing themselves in mud, elephant dung, and charcoal made from certain plants to disguise their scent from the elephants. Once the traces of an elephant were detected, it was carefully tracked, then approached from downwind and stabbed. It typically took several hours to several days from the first stab to the death of the elephant.
Many hunts failed because the elephants detected the hunters and fled before being stabbed; field research by Ichikawa found that only one of six Mbuti elephant hunts was successful in a six-month period, corresponding to around 60–70 days of total hunting time. Despite the large quantity of meat provided by each individual elephant, hunting them therefore did not provide reliable subsistence, and the Mbuti relied instead on hunting smaller animals. Following the death of the animal, the Mbuti hunters returned to their homes, and the whole community moved to dismember the elephant carcass. Meat was shared equally among the community, with the exception of a few body parts reserved for certain community members, and the feast on the animal's remains lasted for several days. Elephant hunting was a dangerous activity that was known to result in the deaths of hunters.
Threats
Both African elephant species are threatened foremost by habitat loss and habitat fragmentation following conversion of forests for plantations of non-timber crops, livestock farming, and building urban and industrial areas. As a result, human-elephant conflict has increased. Poaching for ivory and bushmeat is a significant threat in Central Africa. Because of a spike in poaching, the African forest elephant was declared Critically Endangered by the IUCN in 2021 after it was found that the population had decreased by more than 80% over 3 generations.
Civil unrest, human encroachment, and habitat fragmentation leave some elephants confined to small patches of forest without sufficient food. In January 2014, the International Fund for Animal Welfare undertook a relocation project at the request of the Ivory Coast government, moving four elephants from Daloa to Assagny National Park.
Poaching
Genetic analysis of confiscated ivory showed that 328 tusks of African forest elephants seized in the Philippines between 1996 and 2005 originated in the eastern Democratic Republic of the Congo; 2,871 tusks seized in Hong Kong between 2006 and 2013 originated in Tridom, the tri-national Dja-Odzala-Minkébé protected area complex and the adjacent Dzanga Sangha Reserve in the Central African Republic. So did partly worked ivory confiscated between 2013 and 2014 at warehouses in Togo comprising of tusks.
The hard ivory of the African forest elephant lends itself to finer carving and fetches a higher price on the black market. This preference is evident in Japan, where hard ivory has nearly monopolized the trade for some time. Premium-quality bachi, a traditional Japanese plucking tool used for string instruments, is made exclusively from African forest elephant tusks. In the impenetrable and often trackless expanses of the rain forests of the Congo Basin, poaching is extremely difficult to detect and track. Levels of off-take are, for the most part, estimated from ivory seizures. The sparsely populated and unprotected forests of Central Africa are most likely becoming increasingly alluring to organized poacher gangs.
Late in the 20th century, conservation workers established a DNA identification system to trace the origin of poached ivory. Due to poaching to meet high demand for ivory, the African forest elephant population approached critical levels in the 1990s and early 2000s. Over several decades, numbers are estimated to have fallen from approximately 700,000 to less than 100,000, with about half of the remaining population in Gabon. In May 2013, Sudanese poachers killed 26 elephants in the Central African Republic's Dzanga Bai World Heritage Site. Communications equipment, video cameras, and additional training of park guards were provided following the massacre to improve protection of the site. From mid-April to mid-June 2014, poachers killed 68 elephants in Garamba National Park, including young ones without tusks.
At the request of President Ali Bongo Ondimba, twelve British soldiers traveled to Gabon in 2015 to assist in training park rangers following the poaching of many elephants in Minkebe National Park.
On 19 August 2020, Guyvanho, a poacher who had killed over 500 African forest elephants in the Nouabalé-Ndoki National Park, was sentenced to 30 years in prison on poaching and other charges. Guyvanho was the first poacher to be tried criminally in the Republic of the Congo, and his is the longest prison sentence given to a poacher there.
Bushmeat trade
It is not ivory alone that drives African forest elephant poaching. Killing for bushmeat in Central Africa has evolved into an international business in recent decades, with markets reaching New York and other major cities of the United States, and the industry is still growing. This illegal market poses the greatest threat not only to forest elephants, whose hunters can target animals of all ages, including calves, but to all of the larger species in the forests. There are actions that can be taken to lower the incentive for supplying the bushmeat market: regional markets and international trade require transporting large amounts of animal meat, which in turn requires vehicles, so checkpoints on major roads and railroads can help disrupt commercial networks. In 2006, it was estimated that 410 African forest elephants were killed yearly in the Cross-Sanaga-Bioko coastal forests.
Conservation
In 1986, the African Elephant Database was initiated with the aim to monitor the status of African elephant populations. This database includes results from aerial surveys, dung counts, interviews with local people, and data on poaching.
Both African elephant species have been listed by the Convention on International Trade in Endangered Species of Wild Fauna and Flora on CITES Appendix I since 1989. This listing banned commercial international trade of wild African elephants and their parts and derivatives by countries that signed the CITES agreement. Populations of Botswana, Namibia and Zimbabwe were listed in CITES Appendix II in 1997 as was the population of South Africa in 2000. Hunting elephants is banned in the Central African Republic, Democratic Republic of Congo, Gabon, Côte d'Ivoire, and Senegal.
African forest elephants are estimated to constitute up to one-third of the continent's elephant population but have been poorly studied because of the difficulty of observing them through the dense vegetation of their habitat. Thermal imaging has facilitated observation of the species, yielding more information on their ecology, numbers, and behavior, including their interactions with other elephants and other species. Scientists have learned more about how the elephants, which have poor night vision, negotiate their environment using only their hearing and olfactory senses. They also appeared to be much more active sexually during the night than during the day, which was unexpected.
| Biology and health sciences | Proboscidea | Animals |
326193 | https://en.wikipedia.org/wiki/Solenodon | Solenodon | Solenodons (from , 'channel' or 'pipe' and , 'tooth') are venomous, nocturnal, burrowing, insectivorous mammals belonging to the family Solenodontidae. The two living solenodon species are the Cuban solenodon (Atopogale cubana) and the Hispaniolan solenodon (Solenodon paradoxus). Threats to both species include habitat destruction and predation by non-native cats, dogs, and mongooses, introduced by humans to the solenodons' home islands to control snakes and rodents.
The Hispaniolan solenodon covers a wide range of habitats on the island of Hispaniola from lowland dry forest to highland pine forest. Two other described species became extinct during the Quaternary period. Oligocene North American genera, such as Apternodus, have been suggested as relatives of Solenodon, but the origins of the animal remain obscure.
Taxonomy
Two genera, Atopogale and Solenodon, are known, each with one extant species. Other genera have been erected, but are now regarded as junior synonyms. Solenodontidae show retention of primitive mammal characteristics. In 2016, solenodons were confirmed by genetic analysis as belonging to an evolutionary branch that split from the lineage leading to hedgehogs, moles, and shrews before the Cretaceous-Paleogene extinction event. They are one of two families of Caribbean soricomorphs. The other family, Nesophontidae, became extinct during the Holocene. Molecular data suggest they diverged from solenodons roughly 57 million years ago. The solenodon is estimated to have diverged from other living mammals about 73 million years ago.
Extant species
In addition, two extinct species, the giant solenodon (S. arredondoi) and Marcano's solenodon (S. marcanoi), are thought to have died out during the last 500 years, both presumably due to predation by introduced rats.
Characteristics
Traditionally, solenodons' closest relatives were considered to be the giant water shrew of Africa and the Tenrecidae of Madagascar, though they are now known to be more closely related to true shrews (Eulipotyphla). Solenodons resemble very large shrews, and are often compared to them, with extremely elongated cartilaginous snouts; long, naked, scaly tails; hairless feet; and small eyes. The Cuban solenodon is generally smaller than its Hispaniolan counterpart. It is also a rusty brown, with black on its throat and back. The Hispaniolan solenodon is a darker brown with a yellowish tint to the face. The snout is flexible and, in the Hispaniolan solenodon, actually has a ball-and-socket joint at the base to increase its mobility. This allows the animal to investigate narrow crevices where potential prey may be hiding.
Solenodons are also noted for the glands in their inguinal and groin areas that secrete what is described as a musky, goat-like odor. Solenodons range from from nose to rump, and weigh between .
Solenodons have a few unusual traits, one of them being the position of the two teats on the female, almost on the buttocks of the animal, and another being the venomous saliva that flows from modified salivary glands in the mandible through grooves on the second lower incisors ("solenodon" derives from the Greek "grooved tooth"). Solenodons are among a handful of venomous mammals. Fossil records show that some other now-extinct mammal groups also had the dental venom delivery system, indicating that the solenodon's most distinct characteristic may have been a more general ancient mammalian characteristic that has been lost in most modern mammals and is only retained in a couple of very ancient lineages. The solenodon has often been called a "living fossil" because it has endured virtually unchanged for the past 76 million years.
It is not known exactly how long solenodons can live in the wild. However, certain individuals of the Cuban species have been recorded to have lived for up to five years in captivity and individuals of the Hispaniolan species for up to eleven years.
West Indian natives have long known about the venomous character of the solenodon bite. Early studies on the nature of the tiny mammal's saliva suggested that it was very similar to the neurotoxic venom of certain snakes. More recently, the venom has been found to be related to that of the northern short-tailed shrew: it is mostly composed of KLK1 kallikreins, serine proteases that prevent blood clotting, cause hypotension, and are ultimately fatal to the prey. The KLK1 in solenodon venom is very similar to serine proteases found in venomous snakes such as vipers, and has evolved in parallel in both lineages from an ancient toxin precursor. Solenodons produce venom in enlarged submaxillary glands and inject it only through their bottom set of teeth. The symptoms of a solenodon bite include general depression, breathing difficulty, paralysis, and convulsions; large enough doses have resulted in death in lab studies on mice.
Their diets consist largely of insects, earthworms, and other invertebrates, but they also eat vertebrate carrion, and perhaps even some living vertebrate prey, such as small reptiles or amphibians. They have also been known to feed on fruits, roots, and vegetables. Based on observation of solenodons in captivity, they have only been known to drink while bathing. Solenodons have a relatively unspecialised and almost complete dentition, with a dental formula of: .
Solenodons find food by sniffing the ground until they come upon their prey. If the prey is small enough, the solenodon will consume it immediately. After coming across the prey, the solenodon will bring the forelimbs up to either side of the prey and then move the head forward, opening the jaw and properly catching its prey. While sniffing for food, the solenodon can get through physical barriers with the help of its sharp claws.
Research suggests that males and females of the two species have different eating habits. The female has a habit of scattering the food to make sure that no morsel is missed as she forages. The male was noted to use its tongue to lap up the food and to use the lower jaw as a scoop. However, these specimens were studied in captivity, so these habits may not be found in the wild.
Reproduction
Solenodons give birth in a nesting burrow to one or two young. The young remain with the mother for several months and initially follow the mother by hanging on to her elongated teats. Once they reach adulthood solenodons are solitary animals and rarely interact except to breed.
The reproductive rate of solenodons is relatively low, producing only two litters per year. Breeding can occur at any time. Males will not aid in the care for the young. The mother will nurse her offspring using her two nipples, which are placed toward the back of the animal. If the litter consists of three offspring, one will become malnourished and die. The nursing period can last for up to seventy-five days.
Behavior
Solenodons make their homes in bushy areas in forests. During the daytime they seek refuge in caves, burrows, or hollow logs. They are easily provoked and can fly into a frenzy of squealing and biting with no warning. They run and climb quite fast, despite only ever touching the ground with their toes. Solenodons are said to give off grunts similar to a pig's, or bird-like calls, when feeling threatened.
Solenodons generate clicking noises similar to those of shrews; the sound waves bounce off objects in their vicinity. This form of echolocation helps a solenodon navigate as well as find food. This well-developed auditory ability, combined with an above-average sense of smell, helps the solenodon survive despite its extremely small eyes and poor vision.
Solenodons have been described as both omnivorous and insectivorous. Their natural diet largely consists of insects including ants and roaches, grubs, vegetation, and fruit. However, they have also been observed to eat small vertebrates like mice and chicks, meat of large animals, as well as animal products such as eggs and milk, when fed these food items in captivity.
Status
Cuba
The Cuban solenodon is considered Endangered due to predation by the small Indian mongoose (Urva auropunctata), which was introduced in colonial times to hunt snakes and rats, as well as by feral cats and dogs. The Cuban solenodon was thought to have been extinct until a live specimen was found in 2003.
Haiti
The Hispaniolan solenodon was also once thought to be extinct, more because of its secretive and elusive behavior than because of low population numbers. Recent studies have shown that the species is widely distributed throughout the island of Hispaniola, but that it does not tolerate habitat degradation. A 1981 study of the Hispaniolan solenodon in Haiti found that the species was "functionally extinct", with the exception of a small population in the area of Massif de la Hotte. A follow-up study in 2007 noted that the solenodon was still thriving in the area, even though the region's human population density has increased in recent years.
Dominican Republic
The Sierra de Bahoruco, a mountain range in the southwest of the Dominican Republic that straddles the border with Haiti, was examined by conservation teams looking for solenodons. The work occurred during the day when the animals were asleep in burrows so that they could be viewed with an infrared camera. When researchers search for solenodons in daylight, they look for the following clues to their presence:
nearby nose-poke holes that the creatures make in the ground with their long noses to probe the earth as they look for insects to hunt and eat. After a relatively long time such holes become covered in leaves, but a fresh hole is covered in moist soil.
nearby scratches in logs that were made with their long claws.
a strong musty, goat-like smell seeping out of a burrow. The pungent odor indicates that the burrow is active, and a solenodon may be sleeping inside.
A solenodon was captured in 2008 during a month-long expedition in the Dominican Republic, giving researchers the rare opportunity to examine it in daylight. The Durrell Wildlife Conservation Trust and the Ornithological Society of Hispaniola were able to take measurements and DNA from the creature before it was released. It was the only capture of the entire month-long expedition. The new information gathered was significant because little was known about the solenodon's ecology, behavior, population status, and genetics, and without that knowledge it is difficult for researchers to design effective conservation measures. In a 2020 assessment, the IUCN found the Hispaniolan solenodon to be much more common on Hispaniola than previously thought, warranting its downlisting from "Endangered" to "Least Concern".
Conservation
After the arrival of Europeans to the Caribbean, the existence of both species of solenodon has been threatened by introduced species, like dogs, cats, rats, and mongooses, as well as more dense human settlement. These factors were possibly the catalyst for the relatively recent extinction of two species, the giant solenodon (S. arredondoi) and the Marcano's solenodon (S. marcanoi). Native snakes and birds of prey are also threats. Solenodons have no known negative effects on human populations; in fact, they serve as both pest control, helping ecosystems by keeping down the population of invertebrates, and a means of spreading fruit seeds. Human activity has also had an adverse effect on the solenodon population. Human development on both Cuba and Hispaniola has resulted in fragmentation and habitat loss, further contributing to the reduction of the solenodon's range and numbers.
| Biology and health sciences | Eulipotyphla | Animals |
326201 | https://en.wikipedia.org/wiki/Psilocybin%20mushroom | Psilocybin mushroom | Psilocybin mushrooms, commonly known as magic mushrooms, shrooms, or broadly as hallucinogenic mushrooms, are a polyphyletic informal group of fungi that contain psilocybin, which turns into psilocin upon ingestion. The most potent species are members of genus Psilocybe, such as P. azurescens, P. semilanceata, and P. cyanescens, but psilocybin has also been isolated from approximately a dozen other genera, including Panaeolus (including Copelandia), Inocybe, Pluteus, Gymnopilus, and Pholiotina.
Amongst other cultural applications, psilocybin mushrooms are used as recreational drugs. They may be depicted in Stone Age rock art in Africa and Europe, but are more certainly represented in sculptures and glyphs seen throughout the Americas.
History
Psilocybin mushrooms have been and continue to be used in Mexican and Central American cultures in religious, divinatory, or spiritual contexts.
Early
Rock art from from Tassili, Algeria, is believed to depict psychedelic mushrooms and the transformation of the user under their influence. Prehistoric rock art near Villar del Humo in Spain suggests that Psilocybe hispanica was used in religious rituals 6,000 years ago. The hallucinogenic species of the Psilocybe genus have a history of use among the native peoples of Mesoamerica for religious communion, divination, and healing, from pre-Columbian times to the present day. Mushroom stones and motifs have been found in Guatemala. A statuette dating from depicting a mushroom strongly resembling Psilocybe mexicana was found in the west Mexican state of Colima in a shaft and chamber tomb. A Psilocybe species known to the Aztecs as teōnanācatl (literally "divine mushroom": the agglutinative form of teōtl (god, sacred) and nanācatl (mushroom) in Nahuatl language) was reportedly served at the coronation of the Aztec ruler Moctezuma II in 1502. Aztecs and Mazatecs referred to psilocybin mushrooms as genius mushrooms, divinatory mushrooms, and wondrous mushrooms when translated into English. Bernardino de Sahagún reported the ritualistic use of teonanácatl by the Aztecs when he traveled to Central America after the expedition of Hernán Cortés.
After the Spanish conquest, Catholic missionaries campaigned against the cultural tradition of the Aztecs, dismissing the Aztecs as idolaters, and the use of hallucinogenic plants and mushrooms, together with other pre-Christian traditions, was quickly suppressed. The Spanish believed the mushroom allowed the Aztecs and others to communicate with demons. Despite this history, the use of teonanácatl has persisted in some remote areas.
Modern
The first mention of hallucinogenic mushrooms in European medicinal literature was in the London Medical and Physical Journal in 1799: A man served Psilocybe semilanceata mushrooms he had picked for breakfast in London's Green Park to his family. The apothecary who treated them later described how the youngest child "was attacked with fits of immoderate laughter, nor could the threats of his father or mother refrain him."
In 1955, Valentina Pavlovna Wasson and R. Gordon Wasson became the first known European Americans to actively participate in an indigenous mushroom ceremony. The Wassons did much to publicize their experience, even publishing an article on their experiences in Life on May 13, 1957. In 1956, Roger Heim identified the psychoactive mushroom the Wassons brought back from Mexico as Psilocybe, and in 1958, Albert Hofmann first identified psilocybin and psilocin as the active compounds in these mushrooms.
Inspired by the Wassons' Life article, Timothy Leary traveled to Mexico to experience psilocybin mushrooms himself. When he returned to Harvard in 1960, he and Richard Alpert started the Harvard Psilocybin Project, promoting psychological and religious studies of psilocybin and other psychedelic drugs. In the 1960s, Alpert and Leary conducted research with psilocybin on prisoners, testing its effects on recidivism. The subjects were reviewed six months later, and the recidivism rate had decreased beyond expectation, to below 40%. This experiment, and another administering psilocybin to graduate divinity students, proved controversial. Shortly after Leary and Alpert were dismissed from their jobs by Harvard in 1963, they turned their attention toward promoting the psychedelic experience to the nascent hippie counterculture.
The popularization of entheogens by the Wassons, Leary, Terence McKenna, Robert Anton Wilson, and many others led to an explosion in the use of psilocybin mushrooms throughout the world. By the early 1970s, many psilocybin mushroom species were described from temperate North America, Europe, and Asia and were widely collected. Books describing methods of cultivating large quantities of Psilocybe cubensis were also published. The availability of psilocybin mushrooms from wild and cultivated sources has made them one of the most widely used psychedelic drugs.
At present, psilocybin mushroom use has been reported among some groups spanning from central Mexico to Oaxaca, including groups of Nahua, Mixtecs, Mixe, Mazatecs, Zapotecs, and others. An important figure of mushroom usage in Mexico was María Sabina, who used native mushrooms, such as Psilocybe mexicana in her practice.
Occurrence
In a 2000 review on the worldwide distribution of psilocybin mushrooms, Gastón Guzmán and colleagues considered these distributed among the following genera: Psilocybe (116 species), Gymnopilus (14), Panaeolus (13), Copelandia (12), Pluteus (6), Inocybe (6), Pholiotina (4), and Galerina (1). Guzmán increased his estimate of the number of psilocybin-containing Psilocybe to 144 species in a 2005 review.
Many of them are found in Mexico (53 species), with the remainder distributed throughout Canada and the US (22), Europe (16), Asia (15), Africa (4), and Australia and associated islands (19). Generally, psilocybin-containing species are dark-spored, gilled mushrooms that grow in meadows and woods in the subtropics and tropics, usually in soils rich in humus and plant debris. Psilocybin mushrooms occur on all continents, but the majority of species are found in subtropical humid forests. P. cubensis is the most common Psilocybe in tropical areas. P. semilanceata, considered the world's most widely distributed psilocybin mushroom, is found in temperate parts of Europe, North America, Asia, South America, Australia and New Zealand, although it is absent from Mexico.
Composition
Magic mushroom composition varies from genus to genus and species to species. The principal component is psilocybin, which is converted into psilocin to produce psychoactive effects. Besides psilocin, norpsilocin, baeocystin, norbaeocystin, and aeruginascin may also be present, and these can modify the effects of magic mushrooms. Panaeolus subbalteatus, one species of magic mushroom, had the highest amount of psilocybin compared to the rest of the fruiting body. Certain mushrooms produce beta-carbolines, which inhibit monoamine oxidase, an enzyme that breaks down tryptamine alkaloids. They occur in several genera, including Psilocybe, Cyclocybe, and Hygrophorus. Harmine, harmane, norharmane, and a range of other l-tryptophan-derived compounds have been discovered in Psilocybe species.
Effects
The effects of psilocybin mushrooms come from psilocybin and psilocin. When psilocybin is ingested, it is broken down by the liver in a process called dephosphorylation. The resulting compound, psilocin, is responsible for the psychedelic effects. Psilocybin and psilocin produce short-term tolerance in users, making them difficult to misuse: the more often they are taken within a short period, the weaker the resulting effects. Psilocybin mushrooms have not been known to cause physical or psychological dependence (addiction). The psychedelic effects appear around 20 minutes after ingestion and can last up to 6 hours. Physical effects may occur, including nausea, vomiting, euphoria, muscle weakness or relaxation, drowsiness, and lack of coordination.
As with many psychedelic substances, the effects of psychedelic mushrooms are subjective and can vary considerably among individual users. The mind-altering effects of psilocybin-containing mushrooms typically last from three to eight hours, depending on dosage, preparation method, and personal metabolism. The first 3–4 hours after ingestion are typically referred to as the 'peak'—in which the user experiences more vivid visuals and distortions in reality. The effects can seem to last much longer for the user because of psilocybin's ability to alter time perception.
Sensory
Sensory effects include visual and auditory hallucinations followed by emotional changes and an altered perception of time and space. Noticeable changes to the auditory, visual, and tactile senses may become apparent around 30 minutes to an hour after ingestion, although effects may take up to two hours to appear. Visually, these shifts in perception include enhancement and contrasting of colors, strange light phenomena (such as auras or "halos" around light sources), increased visual acuity; surfaces that seem to ripple, shimmer, or breathe; complex open- and closed-eye visuals of form constants or images; objects that warp, morph, or change solid colors; a sense of melting into the environment; and trails behind moving objects. Sounds may seem to have increased clarity; music, for example, can take on a profound sense of cadence and depth. Some users experience synesthesia, wherein they perceive, for example, a visualization of color upon hearing a particular sound.
Emotional
As with other psychedelics such as LSD, the experience, or 'trip,' is strongly dependent upon set and setting. Hilarity, lack of concentration, and muscular relaxation (including dilated pupils) are all normal effects, sometimes in the same trip. A negative environment could contribute to a bad trip, whereas a comfortable and familiar environment would set the stage for a pleasant experience. Psychedelics make experiences more intense, so if a person enters a trip in an anxious state of mind, they will likely experience heightened anxiety on their trip. Many users find it preferable to ingest the mushrooms with friends or people familiar with 'tripping.' The psychological consequences of psilocybin use include hallucinations and an inability to discern fantasy from reality. Panic reactions and psychosis also may occur, particularly if a user ingests a large dose.
Dosage
The dosage of mushrooms containing psilocybin depends on the psilocybin and psilocin content, which can vary significantly between and within the same species but is typically around 0.5–2.0% of the dried weight of the mushroom. Usual doses of the common species Psilocybe cubensis range around 1.0 to 2.5 g, while about 2.5 to 5.0 g dried mushroom material is considered a strong dose. Above 5 g is often considered a heavy dose, with 5.0 grams of dried mushroom often being referred to as a "heroic dose".
However, microdosing has become a popular technique among many users: taking less than 1.0 g for an experience that is not as intense or powerful, but which users report as enjoyable and even as alleviating symptoms of depression.
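Because psilocybin content scales linearly with dried weight, the dose figures above reduce to simple proportionality. A minimal sketch, assuming a mid-range 1% content (actual content varies between 0.5% and 2.0% as noted above); the gram value chosen for each label is an illustrative pick from the stated ranges:

```python
# Minimal sketch of the dose arithmetic above. The 1% content figure is
# an assumed mid-range value; real content varies (0.5-2.0% of dried weight).
def psilocybin_mg(dried_grams: float, content_fraction: float = 0.01) -> float:
    """Approximate psilocybin (mg) in a given dried-mushroom dose."""
    return dried_grams * 1000 * content_fraction

for label, grams in [("microdose", 0.2), ("usual", 2.5), ("heavy", 5.0)]:
    print(f"{label}: {grams} g dried -> ~{psilocybin_mg(grams):.0f} mg psilocybin")
```

This is also why doses quoted in grams of dried material can correspond to very different psilocybin intakes across species and even across individual mushrooms.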
The concentration of active psilocybin mushroom compounds varies from species to species but also from mushroom to mushroom within a given species, subspecies or variety. The species Psilocybe azurescens contains the most psilocybin (up to 1.78%).
Toxicology
The species within the most commonly foraged and ingested genus of psilocybin mushrooms, Psilocybe, contain two primary hallucinogenic toxins: psilocybin and psilocin. The median lethal dose, also known as the "LD50", of psilocybin is 280 mg/kg.
From a toxicological standpoint, it would be incredibly difficult to overdose on psilocybin mushrooms. To consume such massive amounts of psilocybin, one would have to ingest more than of dried Psilocybe cubensis, given that 1–2% of the dried mushroom's weight is psilocybin.
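To see why an overdose is so impractical, the arithmetic can be worked through explicitly. A back-of-envelope sketch, assuming a 70 kg adult and naive extrapolation of the rodent-derived 280 mg/kg LD50 to humans; both assumptions are made here for illustration only:

```python
# Back-of-envelope overdose arithmetic; illustrative only.
# Assumptions: 70 kg body mass, direct scaling of the 280 mg/kg LD50,
# and the 1-2% dried-weight psilocybin content quoted above.
ld50_mg_per_kg = 280
body_mass_kg = 70
lethal_psilocybin_mg = ld50_mg_per_kg * body_mass_kg  # 19,600 mg

for content in (0.01, 0.02):
    mg_per_g_dried = content * 1000          # psilocybin per gram of dried material
    dried_kg = lethal_psilocybin_mg / mg_per_g_dried / 1000
    print(f"at {content:.0%} content: ~{dried_kg:.2f} kg dried mushrooms")
```

Under these assumptions the figure lands on the order of a kilogram or more of dried material, far beyond any realistic single ingestion.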
Posing a more realistic threat than a lethal overdose, significantly elevated levels of psilocin can overstimulate the 5-HT2A receptors in the brain, causing acute serotonin syndrome. A 2015 study observed that a dose of 200 mg/kg psilocin induced symptoms of acute serotonin poisoning in mice.
Fatal events induced by neurotoxicity are uncommon in psilocybin mushroom overdose, as most patients admitted to critical care are released after only moderate treatment. However, fatal events related to emotional distress and trip-induced psychosis can occur as a result of over-consumption of psilocybin mushrooms. In 2003, a 27-year-old man was found dead of hypothermia in an irrigation canal. Two cultivation pots of psilocybin mushrooms were found in his bedroom, but no toxicology report was made.
Clinical research
Due partly to restrictions of the Controlled Substances Act, research in the United States was limited until the early 21st century when psilocybin mushrooms were tested for their potential to treat drug dependence, anxiety and mood disorders. In 2018–19, the Food and Drug Administration (FDA) granted Breakthrough Therapy Designation for studies of psilocybin in depressive disorders.
Legality
The legality of the cultivation, possession, and sale of psilocybin mushrooms and psilocybin and psilocin varies from country to country.
After Oregon Measure 109, in 2020, Oregon became the first US state to decriminalize psilocybin and legalize it for therapeutic use. However, selling psilocybin without being licensed may still attract fines or imprisonment. In 2022 Colorado legalized consumption, growing, and sharing for personal use, though sales are prohibited while regulations are being drafted. Other jurisdictions in the United States where psilocybin mushrooms are decriminalized include Ann Arbor and Detroit, Michigan; Oakland and Santa Cruz, California; Easthampton, Somerville, Northampton, and Cambridge, Massachusetts; Seattle, Washington; and Washington, DC.
Furthermore, buying spores of psilocybin-containing mushroom species online in the United States is legal in all states except Georgia, Idaho, and California. This is because only fruiting mushrooms and mycelium contain psilocybin, a federally banned substance. A technical caveat, however, is that the distributed spores must not be intended for cultivation; they are permitted only for microscopy purposes.
United Nations
Internationally, mescaline, DMT, and psilocin are Schedule I drugs under the Convention on Psychotropic Substances. The Commentary on the Convention on Psychotropic Substances notes, however, that the plants containing them are not subject to international control:
| Biology and health sciences | Edible fungi | Plants |
326340 | https://en.wikipedia.org/wiki/Dromedary | Dromedary | The dromedary (Camelus dromedarius), also known as the dromedary camel, Arabian camel, or one-humped camel, is a large camel, of the genus Camelus, with one hump on its back.
It is the tallest of the three species of camel; adult males stand at the shoulder, while females are tall. Males typically weigh between , and females weigh between .
The species' distinctive features include its long, curved neck, narrow chest, a single hump (compared with two on the Bactrian camel and wild Bactrian camel), and long hairs on the throat, shoulders and hump. The coat is generally a shade of brown. The hump, tall or more, is made of fat bound together by fibrous tissue.
Dromedaries are mainly active during daylight hours. They form herds of about 20 individuals, which are led by a dominant male. They feed on foliage and desert vegetation; several adaptations, such as the ability to tolerate losing more than 30% of its total water content, allow it to thrive in its desert habitat. Mating occurs annually and peaks in the rainy season; females bear a single calf after a gestation of 15 months.
The dromedary has not occurred naturally in the wild for nearly 2,000 years. It was probably first domesticated in the Arabian Peninsula about 4,000 years ago, or in Somalia, where paintings at Laas Geel depicting it date from 5,000 to 9,000 years ago. In the wild, the dromedary inhabited arid regions, including the Sahara Desert. The domesticated dromedary is generally found in the semi-arid to arid regions of the Old World, mainly in Africa and the Arabian Peninsula, and a significant feral population occurs in Australia. Products of the dromedary, including its meat and milk, support several North African tribes; it is also commonly used for riding and as a pack animal.
Etymology
The common name "dromedary" comes from the Old French or the Late Latin . These originated from the Greek word , meaning "running" or "runner", used in Greek in the combination (), literally "running camel", to refer to the dromedary. The first recorded use in English of the name "dromedary" occurred in the 14th century. The dromedary possibly originated in Arabia or Somalia and is therefore sometimes referred to as the Arabian or East African camel. The word "camel" generally refers either to the dromedary or the congeneric Bactrian; the word came into English via Old Norman, from the Latin word , from Ancient Greek (), ultimately from a Semitic source akin to Hebrew () and Arabic ().
Taxonomy and classification
The dromedary shares the genus Camelus with the Bactrian camel (C. bactrianus) and the wild Bactrian camel (C. ferus). The dromedary belongs to the family Camelidae. The ancient Greek philosopher Aristotle (4th century BC) was the first to describe the species of Camelus. He named two species in his History of Animals; the one-humped Arabian camel and the two-humped Bactrian camel. The dromedary was given its current binomial name Camelus dromedarius by Swedish zoologist Carl Linnaeus in his 1758 publication Systema Naturae. In 1927, British veterinarian Arnold Leese classified dromedaries by their basic habitats; the hill camels are small, muscular animals and efficient beasts of burden; the larger plains camels could be further divided into the desert type that can bear light burdens and are apt for riding, and the riverine type – slow animals that can bear heavy burdens; and those intermediate between these two types.
In 2007, Peng Cui of the Chinese Academy of Sciences and colleagues carried out a phylogenetic study of the evolutionary relationships between the two tribes of Camelidae; Camelini – consisting of the three Camelus species (the study considered the wild Bactrian camel as a subspecies of the Bactrian camel) – and Lamini, which consists of the alpaca (Vicugna pacos), the guanaco (Lama guanicoe), the llama (L. glama) and the vicuña (V. vicugna). The study showed the two tribes had diverged 25 million years ago (early Miocene), earlier than previously estimated from North American fossils.
The dromedary and the Bactrian camel often interbreed to produce fertile offspring. Where the ranges of the species overlap, such as in northern Punjab, Persia, and Afghanistan, the phenotypic differences between them tend to decrease as a result of extensive crossbreeding. The fertility of their hybrid has given rise to speculation that the dromedary and the Bactrian camel should be merged into a single species with two varieties. However, a 1994 analysis of the mitochondrial cytochrome b gene showed the species display 10.3% divergence in their sequences.
Genetics and hybrids
The dromedary has 74 diploid chromosomes, the same as other camelids. The autosomes consist of five pairs of small to medium-sized metacentrics and submetacentrics. The X chromosome is the largest in the metacentric and submetacentric group. There are 31 pairs of acrocentrics. The dromedary's karyotype is similar to that of the Bactrian camel.
Camel hybridization began in the first millennium BC. For about a thousand years, Bactrian camels and dromedaries have been successfully bred in regions where they are sympatric to form hybrids with either a long, slightly lopsided hump or two humps – one small and one large. These hybrids are larger and stronger than their parents – they can bear greater loads. A cross between a first generation female hybrid and a male Bactrian camel can also produce a hybrid. Hybrids from other combinations tend to be bad-tempered or runts.
Evolution
The extinct Protylopus, which occurred in North America during the upper Eocene, is the oldest and the smallest-known camel. During the transition from Pliocene to Pleistocene, several mammals faced extinction. This period marked the successful radiation of the Camelus species, which migrated over the Bering Strait and dispersed widely into Asia, eastern Europe and Africa. By the Pleistocene, ancestors of the dromedary occurred in the Middle East and northern Africa.
The modern dromedary probably evolved in the hotter, arid regions of western Asia from the Bactrian camel, which in turn was closely related to the earliest Old World camels. It was previously believed that this hypothesis was supported by the dromedary foetus having two humps, but modern studies have shown this to be false. A jawbone of a dromedary that dated from 8,200 BC was found in Saudi Arabia on the southern coast of the Red Sea.
In 1975, Richard Bulliet of Columbia University wrote that the dromedary exists in large numbers in areas from which the Bactrian camel has disappeared; the converse is also true to a great extent. He said this substitution could have taken place because of the heavy dependence on the milk, meat and wool of the dromedary by Syrian and Arabian nomads, while the Asiatic people domesticated the Bactrian camel but did not have to depend upon its products.
Characteristics
The dromedary is the tallest of the three camel species. Adult males range in height between at the shoulder; females range between . Males typically weigh between ; females range between . The distinctive features are its long, curved neck, narrow chest and single hump (the Bactrian camel has two), thick, double-layered eyelashes and bushy eyebrows. They have sharp vision and a good sense of smell. The male has a soft palate ( in Arabic) nearly long, which he inflates to produce a deep pink sac. The palate, which is often mistaken for the tongue, dangles from one side of the mouth and is used to attract females during the mating season.
The coat is generally brown but can range from black to nearly white. Leese reported piebald dromedaries in Kordofan and Darfur in Sudan. Piebald coloration in some camels is thought to be caused by the KITW1 allele of the KIT gene, though there is likely at least one other mutation that also causes white spotting. The hair is long and concentrated on the throat, shoulders and the hump. The large eyes are protected by prominent supraorbital ridges; the ears are small and rounded. The hump is at least high. The dromedary has long, powerful legs with two toes on each foot. The feet resemble flat, leathery pads. Like the giraffe, dromedaries move both legs on one side of the body at the same time.
Compared with the Bactrian camel, the dromedary has a lighter build, longer limbs, shorter hairs, a harder palate and an insignificant or absent ethmoidal fissure. Unlike the camelids of the genus Lama, the dromedary has a hump, and in comparison has a longer tail, smaller ears, squarer feet, and a greater height at the shoulder. The dromedary has four teats instead of the two in the Lama species.
Anatomy
The cranium of the dromedary consists of a postorbital bar, a tympanic bulla filled with spongiosa, a well-defined sagittal crest, a long facial part and an indented nasal bone. Typically, there are eight sternal and four non-sternal pairs of ribs. The spinal cord is nearly long; it terminates in the second and third sacral vertebra. The fibula is reduced to a malleolar bone. The dromedary is a digitigrade animal; it walks on its toes, which are known as digits. It lacks the second and fifth digits. The front feet are wide and long; they are larger than the hind feet, which measure wide and long.
The dromedary has 22 milk teeth, which are eventually replaced by 34 permanent teeth. The dental formula for permanent dentition is , and for milk dentition. In the juvenile, the lower first molars develop by 12 to 15 months and the permanent lower incisors appear at 4.5 to 6.5 years of age. All teeth are in use by 8 years. The lenses of the eyes contain crystallin, which constitutes 8 to 13% of the protein present there.
The skin is black; the epidermis is thick and the dermis is thick. The hump is composed of fat bound together by fibrous tissue. There are no glands on the face; males have glands on either side of the neck midline that appear to be modified apocrine sweat glands and secrete a pungent, coffee-coloured fluid during the rut. The glands generally grow heavier during the rut, and range from . Each cover hair is associated with an arrector pili muscle, a hair follicle, a ring of sebaceous glands and a sweat gland. Females have cone-shaped, four-chambered mammary glands that are long with a base diameter of . These glands can produce milk with up to 90% water content even if the mother is at risk of dehydration.
The heart weighs around ; it has two ventricles with the tip curving to the left. The pulse rate is 50 beats per minute. The dromedary is the only mammal with oval red blood corpuscles, which facilitate blood flow during dehydration. The pH of the blood varies from 7.1 to 7.6 (slightly alkaline). The individual's state of hydration, its sex, and the time of year can influence blood values. The lungs lack lobes. A dehydrated camel has a lower breathing rate. Each kidney has a capacity of , and can produce urine with high chloride concentrations. Like the horse, the dromedary has no gall bladder. The grayish violet, crescent-like spleen weighs less than . The triangular, four-chambered liver weighs ; its dimensions are .
Reproductive system
The ovaries are reddish, circular and flattened. They are enclosed in a conical bursa and have the dimensions during anestrus. The oviducts are long. The uterus is bicornuate. The vagina is long and has well-developed Bartholin's glands. The vulva is deep and has a small clitoris. The placenta is diffuse and epitheliochorial, with a crescent-like chorion.
The penis is covered by a triangular penile sheath that opens backwards; it is about long. The scrotum is located high in the perineum with the testicles in separate sacs. Testicles are long, deep and wide. The right testicle is often smaller than the left. The typical mass of either testicle is less than ; during the rut the mass increases from . The Cowper's glands are white and almond-shaped; the dromedary lacks seminal vesicles. The prostate gland is dark yellow, disc-shaped and divided into two lobes.
The interstitium of the camel epididymis contains several blood vessels with specialized regulatory structures, such as spiral arteries, spiral veins and throttle arterioles.
Health and diseases
The dromedary generally suffers from fewer diseases than other domestic livestock such as goats and cattle. Temperature fluctuations occur throughout the day in a healthy dromedary – the temperature falls to its minimum at dawn, rises until sunset and falls during the night. Nervous camels may vomit if they are carelessly handled; this does not always indicate a disorder. Rutting males may develop nausea.
The dromedary is prone to trypanosomiasis, a disease caused by a parasite transmitted by the tsetse fly. The main symptoms are recurring fever, anaemia and weakness; the disease is typically fatal for the camel. Brucellosis is another prominent malady. In an observational study, the seroprevalence of this disease was generally low (2 to 5%) in nomadic or moderately free dromedaries, but it was higher (8 to 15%) in denser populations. Brucellosis is caused by different biotypes of Brucella abortus and B. melitensis. Other internal parasites include Fasciola gigantica (trematode), two types of cestode (tapeworm) and various nematodes (roundworms). Among external parasites, Sarcoptes species cause sarcoptic mange. In a 2000 study in Jordan, 83% of the 32 camels studied tested positive for sarcoptic mange. In another study, dromedaries were found to have natural antibodies against the rinderpest and ovine rinderpest viruses.
In 2013, a seroepidemiological study (a study investigating the patterns, causes and effects of a disease on a specific population on the basis of serologic tests) in Egypt was the first to show the dromedary might be a host for the Middle East respiratory syndrome coronavirus (MERS-CoV). A 2013–14 study of dromedaries in Saudi Arabia concluded the unusual genetic stability of MERS-CoV coupled with its high seroprevalence in the dromedary makes this camel a highly probable host for the virus. The full genome sequence of MERS-CoV from dromedaries in this study showed a 99.9% match to the genomes of human clade B MERS-CoV. Another study in Saudi Arabia showed the presence of MERS-CoV in 90% of the evaluated dromedaries and suggested that camels could be the animal source of MERS-CoV.
Fleas and ticks are common causes of physical irritation. Hyalomma dromedarii is especially adapted to arid conditions, changing its moulting process to complete more or all of its life cycle on a single host if stressed, and having an unusually wide host range. The larvae are not well understood, but their questing phase is assumed to occur during the winter, which is also when rain arrives. The nymphs infest the host mostly in January, and the adults from May to September. In a study in Egypt, H. dromedarii was dominant in dromedaries, comprising 95.6% of the adult ticks isolated from the camels. In Israel, the number of ticks per camel ranged from 20 to 105. Nine camels in the date palm plantations in the Arava Valley were injected with ivermectin, which is not effective against Hyalomma tick infestations. Larvae of the camel nasal fly Cephalopsis titillator can cause possibly fatal brain compression and nervous disorders. Illnesses that can affect dromedary productivity include pyogenic diseases and wound infections caused by Corynebacterium and Streptococcus, pulmonary disorders caused by Pasteurella (such as haemorrhagic septicaemia) and Rickettsia species, camelpox, anthrax, and cutaneous necrosis caused by Streptothrix or by a deficiency of salt in the diet.
Ecology
The dromedary is diurnal (active mainly during daylight); free-ranging herds feed and roam throughout the day, though they rest during the hottest hours around noon. The night is mainly spent resting. Dromedaries form cohesive groups of about 20 individuals, which consist of several females led by a dominant male. Females may also lead in turns. Some males either form bachelor groups or roam alone. Herds may congregate to form associations of hundreds of camels during migrations at the time of natural disasters. The males of the herd prevent female members from interacting with bachelor males by standing or walking between them and sometimes driving the bachelor males away. In Australia, short-term home ranges of feral dromedaries cover ; annual home ranges can spread over several thousand square kilometres.
Special behavioral features of the dromedary include snapping at others without biting them and showing displeasure by stamping their feet. They are generally non-aggressive, with the exception of rutting males. They appear to remember their homes; females, in particular, remember the places they first gave birth or suckled their offspring. Males become aggressive in the mating season, and sometimes wrestle. A 1980 study showed that androgen levels in males influence their behavior: between January and April, when these levels are high during the rut, males become difficult to manage, blow out the palate from the mouth, vocalize and throw urine over their backs. Camels scratch parts of their bodies with their legs or with their lower incisors. They may also rub against tree bark and roll in the sand.
Free-ranging dromedaries face large predators typical of their regional distribution, which includes wolves, lions and tigers.
Diet
The dromedary's diet consists mostly of foliage, dry grasses and desert vegetation – mostly thorny plants. One study found the typical diet of the dromedary to be dwarf shrubs (47.5%), trees (29.9%), grasses (11.2%), other herbs (0.2%) and vines (11%). The dromedary is primarily a browser; forbs and shrubs comprise 70% of its diet in summer and 90% of its diet in winter. It may also graze on tall, young, succulent grasses.
In the Sahara, 332 plant species have been recorded as food plants of the dromedary. These include Aristida pungens, Acacia tortilis, Panicum turgidum, Launaea arborescens and Balanites aegyptiaca. The dromedary eats Acacia, Atriplex and Salsola when they are available. Feral dromedaries in Australia prefer Trichodesma zeylanicum and Euphorbia tannensis. In India, dromedaries are fed with forage plants such as Vigna aconitifolia, V. mungo, Cyamopsis tetragonolaba, Melilotus parviflora, Eruca sativa, Trifolium species and Brassica campestris. Dromedaries keep their mouths open while chewing thorny food, using their lips to grasp the food and chewing each bite 40 to 50 times. The long eyelashes, eyebrows, lockable nostrils, caudal opening of the prepuce and relatively small vulva help the camel avoid injuries, especially while feeding. Dromedaries graze for 8–12 hours per day and ruminate for an equal amount of time.
Adaptations
The dromedary is specially adapted to its desert habitat; these adaptations are aimed at conserving water and regulating body temperature. The bushy eyebrows and the double row of eyelashes prevent sand and dust from entering the eyes during strong windstorms, and shield them from the sun's glare. The dromedary is able to close its nostrils voluntarily; this assists in water conservation. The dromedary can conserve water by reducing perspiration by fluctuating the body temperature throughout the day from . The kidneys are specialized to minimize water loss through excretion. Groups of camels avoid excess heat from the environment by pressing against each other. The dromedary can tolerate greater than 30% water loss, which is generally impossible for other mammals. In temperatures between , it needs water every 10 to 15 days. In the hottest temperatures, the dromedary takes water every four to seven days. This camel has a quick rate of rehydration and can drink at per minute. The dromedary has a rete mirabile, a complex of arteries and veins lying very close to each other which uses countercurrent blood flow to cool blood flowing to the brain. This effectively controls the temperature of the brain.
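How much water the daily temperature fluctuation saves can be estimated with a simple heat-capacity calculation. The sketch below is illustrative only: the body mass, temperature swing, specific heat and latent heat are assumed round values, not figures from this article.

```python
# Hedged estimate of water saved by letting body temperature rise during
# the day instead of sweating. All numbers are illustrative assumptions.
BODY_MASS = 500.0        # kg, a large adult dromedary (assumed)
SPECIFIC_HEAT = 3480.0   # J/(kg*K), typical body tissue (assumed)
TEMP_SWING = 6.0         # K, assumed daily body-temperature fluctuation
LATENT_HEAT = 2.4e6      # J/kg, evaporation of water near skin temperature

heat_stored = BODY_MASS * SPECIFIC_HEAT * TEMP_SWING  # J absorbed per day
water_saved = heat_stored / LATENT_HEAT               # kg of sweat avoided
print(f"~{water_saved:.1f} kg of water saved per day")  # roughly 4 kg
```

Letting the body warm by a few degrees during the day, then radiating that heat away in the cool of the night, thus spares the animal several litres of sweat daily.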
The hump stores up to of fat, which the camel can break down into energy to meet its needs when resources are scarce; the hump also helps dissipate body heat. Metabolizing this fat releases energy, but because the process requires oxygen, the extra breathing causes water to evaporate from the lungs, so overall there is a net loss of water. If the hump is small, the animal can show signs of starvation. A 2005 study related the mean volume of the lipid-storing adipose tissue in the outer part of the hump to the dromedary's distinctive mechanism of food and water storage. In case of starvation, dromedaries can even eat fish and bones, and drink brackish and salty water. The hair is longer on the throat, hump and shoulders. Though the padded feet effectively support the camel's weight on the ground, they are not suitable for walking on slippery and muddy surfaces.
Reproduction
Camels have a slow growth rate and reach sexual maturity more slowly than sheep or goats. The age of sexual maturity varies geographically and depends on the individual, as does the reproductive period. Both sexes may mature by three to five years of age, though successful breeding can take longer. Camels are described as atypical seasonal breeders: they exhibit spermatogenesis throughout the year, with a reduction during the nonbreeding season compared with the breeding season (Zayed et al., 1995). In Egypt the breeding season falls in the spring months. Mating occurs once a year, and peaks in the rainy season. The mating season lasts three to five months, but may last a year for older animals.
During the reproductive season, males splash their urine on their tails and nether regions. To attract females they extrude their soft palate – a trait unique to the dromedary. As the male gurgles, copious quantities of saliva turn to foam and cover the mouth. Males threaten each other for dominance over the female by trying to stand taller than the other, making low noises and a series of head movements including lowering, lifting and bending their necks backward. Males try to defeat other males by biting the opponent's legs and taking the head between their jaws. Copulation begins with foreplay; the male smells the female's genitalia and often bites her there or around her hump. The male forces the female to sit, then grasps her with his forelegs. Camelmen often help the male insert his penis into the female's vulva. The male dromedary's ability to penetrate the female on his own is disputed, though feral populations in Australia reproduce naturally. Copulation takes from 7 to 35 minutes, averaging 11 to 15 minutes. Normally, three to four ejaculations occur. The semen of a Bikaneri dromedary is white and viscous, with a pH of around 7.8.
A single calf is born after a gestation period of 15 months. Calves move freely by the end of their first day. Nursing and maternal care continue for one to two years. In a study to find whether young could exist on milk substitutes, two month-old male camels were separated from their mothers and fed on milk substitutes prepared commercially for lambs; they grew to normal weights for male calves after 30 days. Lactational yield can vary with species, breed, individual, region, diet, management conditions and lactating stage. The largest quantity of milk is produced during the early period of lactation. The lactation period can vary between nine and eighteen months.
Dromedaries are induced ovulators. Oestrus may be cued by the nutritional status of the camel and the length of day. If mating does not occur, the follicle, which grows during oestrus, usually regresses within a few days. In one study, 35 complete oestrous cycles were observed in five nonpregnant females over 15 months. The cycles were about 28 days long; follicles matured in six days, maintained their size for 13 days, and returned to their original size in eight days. In another study, ovulation could be best induced when the follicle reaches a size of . In another study, pregnancy in females could be recognized as early as 40 to 45 days of gestation by the swelling of the left uterine horn, where 99.5% of pregnancies were located.
Range
The dromedary's range included the hot, arid regions of northern Africa, Ethiopia, the Near East, and western and central Asia. The dromedary typically thrives in areas with a long dry season and a short wet season. It is sensitive to cold and humidity, though some breeds can thrive in humid conditions.
The dromedary was first domesticated in the southern Arabian Peninsula around 4000–3000 BC. In the ninth or tenth century BC, the dromedary became popular in the Near East. The Persian invasion of Egypt under Cambyses in 525 BC introduced domesticated camels to the area. The Persian camels were not well-suited to trading or travel over the Sahara; journeys across the desert were made on chariots pulled by horses. The dromedary was introduced into Egypt from south-western Asia (Arabia and Persia). The popularity of dromedaries increased after the Islamic conquest of North Africa. While the invasion was accomplished largely on horseback, new links to the Middle East allowed camels to be imported en masse. These camels were well-suited to long desert journeys and could carry a great deal of cargo, allowing substantial trans-Saharan trade for the first time. In Libya, dromedaries were used for transport and their milk and meat constituted the local diet.
Dromedaries were also shipped from south-western Asia to Spain, Italy, Turkey, France, Canary Islands, the Americas and Australia. Dromedaries were introduced into Spain in 1020 AD and to Sicily in 1059 AD. Camels were exported to the Canary Islands in 1405 during the European colonisation of the area, and are still extant there, especially in Lanzarote and to the south of Fuerteventura. Attempts to introduce dromedaries into the Caribbean, Colombia, Peru, Bolivia and Brazil were made between the 17th and 19th centuries; some were imported to the western United States in the 1850s and some to Namibia in the early 1900s, but presently they exist in small numbers or are absent in these areas.
In 1840, about six camels were shipped from Tenerife to Adelaide, but only one survived the journey, arriving on 12 October that year. The animal, a male called Harry, was owned by the explorer John Ainsworth Horrocks. Harry was ill-tempered but was included in an expedition the following year because he could carry heavy loads. The next major group of camels was imported into Australia in 1860, and between 1860 and 1907 some 10,000 to 12,000 more arrived. These were used mainly for riding and transport.
Current distribution of captive animals
In the early 21st century, the domesticated dromedary is found in the semi-arid to arid regions of the Old World.
Africa
Africa has more than 80% of the world's total dromedary population; the dromedary occurs in almost every desert zone in the northern part of the continent. The Sahel marks the southern extreme of its range, where the annual rainfall is around . The Horn of Africa has nearly 35% of the world's dromedaries; most of the region's stock is in Somalia, followed by Sudan, Eritrea, and Ethiopia (as of the early 2000s). According to the Yearbook of the Food and Agriculture Organization (FAO) for 1984, eastern Africa had about 10 million dromedaries, the largest population in Africa. Western Africa followed with 2.14 million, while northern Africa had nearly 0.76 million. Populations in Africa increased by 16% from 1994 to 2005.
Asia
In Asia, nearly 70% of the population occurs in India and Pakistan. The combined population of the dromedary and the Bactrian camel decreased by around 21% between 1994 and 2004. The dromedary is sympatric with the Bactrian camel in Afghanistan, Pakistan, and central and southwestern Asia. India has a dromedary population of less than one million, with most (0.67 million) in the state of Rajasthan. Populations in Pakistan decreased from 1.1 million in 1994 to 0.8 million in 2005 – a 29% decline. According to the FAO, the dromedary population in six countries of the Persian Gulf was nearly 0.67 million in 2003. In the Persian Gulf region the dromedary is locally classified into breeds including Al-Majahem, Al-Hamrah, Al-Safrah, Al-Zarkah and Al-Shakha, based on coat colour. The UAE has three prominent breeds: Racing camel, Al-Arabiat and Al-Kazmiat.
Feral population
Feral dromedary populations occur in Australia, where the species was introduced in 1840. The total dromedary population in Australia was 500,000 in 2005. Nearly 99% of the population is feral, with an annual growth rate of 10%. Most of the Australian feral camels are dromedaries, with only a few Bactrian camels. Most of the dromedaries occur in Western Australia, with smaller populations in the Northern Territory, Western Queensland and northern South Australia.
Feral populations notwithstanding, the truly wild dromedary, as opposed to the now domesticated form, has been functionally extinct in the wild for the past 2,000 years.
Relationship with humans
The strength and docility of the dromedary make it popular as a domesticated animal. According to Richard Bulliet, it can be used for a wide variety of purposes: riding, transport, ploughing and trading, and as a source of milk, meat, wool and leather. The main attraction of the dromedary for nomadic desert-dwellers is the wide variety of resources it provides, which are crucial for their survival. It is important for several Bedouin pastoralist tribes of northern Arabia, such as the Ruwallah, the Rashaida, the Bani Sakhr and the Mutayr.
Camel urine and camel milk are used for medicinal purposes.
Riding camels
Although the role of the camel is diminishing with the advent of technology and modern means of transport, it remains an efficient mode of transportation in remote and less-developed areas. The dromedary has been used in warfare since the 6th century BC, and is particularly prized for its ability to outrun horses in the desert. Records of its use during the time of Alexander the Great indicate that the animal could cover up to 50 miles per day for a week and could go for up to a month without water. An account by Aurelian also records that, in her escape to the Euphrates after her defeat at Palmyra, Zenobia used a dromedary to outrun her pursuers.
The dromedary also remains popular for racing, particularly in the Arab world. Riding camels of Arabia, Egypt and the Sahara are locally known as the Dilool, the Hageen, and the Mehara respectively; several local breeds are included within these groups.
The ideal riding camel is strong, slender and long-legged with thin, supple skin. The special adaptations of the dromedary's feet allow it to walk with ease on sandy and rough terrain and on cold surfaces. Common riding breeds include the camels of the Bejas of Sudan, those of the Hedareb, Bilen and Tigre people of Eritrea, and the Anafi camel bred in Sudan.
According to Leese, the dromedary moves with four speeds or gaits: walk, jog, fast run and canter. The first is the typical walking speed, around . The jog is the most common speed, nearly on level ground. He estimated a speed of during a fast run, by observing northern African and Arabian dromedaries. He gave no speed range for the canter, but implied it was a type of gallop that, if induced, could exhaust both camel and rider, and could therefore be used only for short periods, for example in races.
The ideal age to start training dromedaries for riding is three years, although they may be stubborn and unruly. At first the camel's head is controlled, and it is later trained to respond to sitting and standing commands, and to allow mounting. At this stage a camel will often try to escape when a trainer tries to mount it. The next stage involves training it to respond to reins. The animal must be given loads gradually and not forced to carry heavy loads before the age of six. Riding camels should not be struck on the neck; rather, they should be struck behind the right leg of the rider. Leese described two types of saddles generally used in camel riding: the Arabian markloofa, used by single riders, and the Indian pakra, used when two riders mount the same camel.
Baggage and draught camels
The baggage camel should be robust and heavy. Studies have recommended the camel should have either a small or a large head with a narrow aquiline nose, prominent eyes and large lips. The neck should be medium to long so the head is held high. The chest should be deep and the hump should be well-developed with sufficient space behind it to accommodate the saddle. The hindlegs should be heavy, muscular and sturdy. The dromedary can be trained to carry baggage from the age of five years, but must not be given heavy loads before the age of six. The hawia is a typical baggage saddle from Sudan. The methods of training the baggage camels are similar to those for riding camels.
Draught camels are used for several purposes including ploughing, processing in oil mills and pulling carts. There is no clear description for the ideal draught camel, though its strength, its ability to survive without water and the flatness of its feet could be indicators. It may be used for ploughing in pairs or in groups with buffaloes or bullocks. The draught camel can plough at around , and should not be used for more than six hours a day – four hours in the morning and two in the afternoon. The camel is not easily exhausted unless diseased or undernourished, and has remarkable endurance and hardiness.
Dairy products
Camel milk is a staple food of nomadic tribes living in deserts. It consists of 11.7% solids, 3% protein, 3.6% fat, 0.8% ash, 4.4% lactose and 0.13% acidity (pH 6.5). The quantities of sodium, potassium, zinc, iron, copper, manganese, niacin and vitamin C were relatively higher than the amounts in cow milk. However, the levels of thiamin, riboflavin, folacin, vitamin B12, pantothenic acid, vitamin A, lysine, and tryptophan were lower than those in cow milk. The molar percentages of the fatty acids in milk fat were 26.7% for palmitic acid, 25.5% oleic acid, 11.4% myristic acid and 11% palmitoleic acid. Camel milk has higher thermal stability compared with cow milk, but it does not compare favourably with sheep milk.
Daily milk yield generally varies from and from 1.3% to 7.8% of the body weight. Milk yield varies geographically and depends upon the animals' diet and living conditions. At the peak of lactation, a healthy female would typically provide milk per day. Leese estimated a lactating female would yield besides the amount ingested by the calf. The Pakistani dromedary, which is bigger and is considered a better milker, can yield when well-fed. Dromedaries in Somalia may be milked between two and four times a day, while those in Afar, Ethiopia, may be milked up to seven times a day.
The acidity of dromedary milk stored at increases at a slower rate than that of cow milk. Though the preparation of butter from dromedary milk is difficult, it is produced in small amounts by nomads; the process works best with a cream fat content of 22.5%. A 2001 study of the ability of dromedary milk to form curd found that coagulation produced no real curd at a pH of 4.4; the result differed markedly from curd produced from cow milk, having a fragile, heterogeneous composition probably made up of casein flakes. Nevertheless, cheese and other dairy products can be made from camel milk. A study found bovine calf rennet could be used to coagulate dromedary milk. A special factory has been set up in Nouakchott to pasteurise camel milk and make cheese from it. Mystical beliefs surround the use of camel milk in some places; for example, it may be used as an aphrodisiac in Ethiopia.
Meat
The meat of a five-year-old dromedary has a typical composition of 76% water, 22% protein, 1% fat, and 1% ash. The carcass, weighing for a five-year-old dromedary, is composed of nearly 57% muscle, 26% bone and 17% fat. A seven-to-eight-year-old camel can produce a carcass of . The meat is bright red to dark brown or maroon, while the fat is white. It has the taste and texture of beef. A study of the meat of Iranian dromedaries showed a high glycogen content, which makes it taste sweet like horse meat. The carcasses of well-fed camels were found to be covered with a thin layer of good-quality fat. In a study of the fatty acid composition of raw meat taken from the hind legs of seven one-to-three-year-old males, 51.5% of the fatty acids were saturated, 29.9% mono-unsaturated, and 18.6% polyunsaturated. The major fatty acids in the meat were palmitic acid (26.0%), oleic acid (18.9%) and linoleic acid (12.1%). In the hump, palmitic acid was dominant (34.4%), followed by oleic acid (28.2%), myristic acid (10.3%) and stearic acid (10%).
Dromedary slaughter is more difficult than the slaughter of other domestic livestock such as cattle because of the size of the animal and the significant manual work involved. More males than females are slaughtered. Though the dromedary is less affected by mishandling than other livestock, pre-slaughter handling plays a crucial role in determining the quality of the meat obtained; mishandling can often disfigure the hump. The animal is stunned, seated in a crouching position with the head in a caudal position, and slaughtered. The dressing percentage – the percentage of the mass of the animal that forms the carcass – is 55–70%, more than the 45–50% of cattle. Some African camel herders eat camel meat only during severe food scarcity or for rituals. Camel meat is processed into food items such as burgers, patties, sausages and shawarma. Dromedaries can be slaughtered between four and ten years of age. As the animal ages, the meat grows tougher and deteriorates in taste and quality. In Somalian and Djiboutian culture, the dromedary is a staple food and appears in many recipes and dishes.
A 2005 report issued jointly by the Ministry of Health (Saudi Arabia) and the United States Centers for Disease Control and Prevention details five cases of bubonic plague in humans resulting from the ingestion of raw camel liver. Four of the five patients had severe pharyngitis and submandibular lymphadenitis. Yersinia pestis was isolated from the camel's bone marrow, from the jird (Meriones libycus) and from fleas (Xenopsylla cheopis) captured at the camel's corral.
Camel hair, wool and hides
Camels in hot climates generally do not develop long coats. Camel hair is light, has low thermal conductivity and is durable, and is thus suitable for manufacturing warm clothes, blankets, tents and rugs. Hair of the highest quality is typically obtained from juvenile or feral camels. In India, camels are usually clipped in spring and around hair is produced per clipping. In colder regions one clipping can yield as much as . A dromedary can produce wool per year, whereas a Bactrian camel has an annual yield of nearly . Dromedaries under the age of two years have a fine undercoat that tends to fall off and should be cropped by hand. Little information about camel hides has been collected, but they are usually of inferior quality and are less preferred for manufacturing leather.
| Biology and health sciences | Artiodactyla | null |
984081 | https://en.wikipedia.org/wiki/Space%20rendezvous | Space rendezvous | A space rendezvous is a set of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities and position vectors of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous may or may not be followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them.
The same rendezvous technique can be used for spacecraft "landing" on natural objects with a weak gravitational field, e.g. landing on one of the Martian moons would require the same matching of orbital velocities, followed by a "descent" that shares some similarities with docking.
History
In its first human spaceflight program Vostok, the Soviet Union launched pairs of spacecraft from the same launch pad, one or two days apart (Vostok 3 and 4 in 1962, and Vostok 5 and 6 in 1963). In each case, the launch vehicles' guidance systems inserted the two craft into nearly identical orbits; however, this was not nearly precise enough to achieve rendezvous, as the Vostok lacked maneuvering thrusters to adjust its orbit to match that of its twin. The initial separation distances were in the range of , and slowly diverged to thousands of kilometers (over a thousand miles) over the course of the missions.
In early 1964 the Soviet Union was able to guide two uncrewed satellites, designated Polyot 1 and Polyot 2, to within 5 km of each other, and the craft were able to establish radio communication.
In 1963 Buzz Aldrin submitted his doctoral thesis, titled Line-Of-Sight Guidance Techniques For Manned Orbital Rendezvous. As a NASA astronaut, Aldrin worked to "translate complex orbital mechanics into relatively simple flight plans for my colleagues."
First attempt failed
NASA's first attempt at rendezvous was made on June 3, 1965, when US astronaut Jim McDivitt tried to maneuver his Gemini 4 craft to meet its spent Titan II launch vehicle's upper stage. McDivitt was unable to get close enough to achieve station-keeping, owing to depth-perception problems and to venting of the stage's propellant, which kept moving it around.
However, the Gemini 4 attempts at rendezvous were unsuccessful largely because NASA engineers had yet to learn the orbital mechanics involved in the process. Simply pointing the active vehicle's nose at the target and thrusting was unsuccessful. If the target is ahead in the orbit and the tracking vehicle increases speed, its altitude also increases, actually moving it away from the target. The higher altitude then increases orbital period due to Kepler's third law, putting the tracker not only above, but also behind the target. The proper technique requires changing the tracking vehicle's orbit to allow the rendezvous target to either catch up or be caught up with, and then at the correct moment changing to the same orbit as the target with no relative motion between the vehicles (for example, putting the tracker into a lower orbit, which has a shorter orbital period allowing it to catch up, then executing a Hohmann transfer back to the original orbital height).
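The mechanics described above can be made concrete with a short calculation. The following sketch is not from the source, and the altitudes and constants are illustrative assumptions: it uses Kepler's third law to show that a chaser in a slightly lower circular orbit completes each revolution sooner than its target, and the vis-viva equation to size the two burns of a Hohmann transfer back up to the target's altitude.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3     # mean Earth radius, m (assumed round value)

def period(r):
    # Kepler's third law for a circular orbit: T = 2*pi*sqrt(r^3/mu).
    return 2 * math.pi * math.sqrt(r**3 / MU)

def circular_speed(r):
    # Speed of a circular orbit of radius r.
    return math.sqrt(MU / r)

# Target in a 300 km circular orbit; chaser phasing 20 km below (assumed).
r_target = R_EARTH + 300e3
r_chaser = R_EARTH + 280e3

# The lower orbit has the shorter period, so the chaser gains on the target.
gain = period(r_target) - period(r_chaser)
print(f"chaser gains {gain:.1f} s of phase per revolution")  # about 25 s

# Hohmann transfer back up to the target's altitude (vis-viva equation).
a_transfer = (r_chaser + r_target) / 2
dv1 = math.sqrt(MU * (2 / r_chaser - 1 / a_transfer)) - circular_speed(r_chaser)
dv2 = circular_speed(r_target) - math.sqrt(MU * (2 / r_target - 1 / a_transfer))
print(f"Hohmann burns: {dv1:.2f} m/s, then {dv2:.2f} m/s")
```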
First successful rendezvous
Rendezvous was first successfully accomplished by US astronaut Wally Schirra on December 15, 1965. Schirra maneuvered the Gemini 6 spacecraft within of its sister craft Gemini 7. The spacecraft were not equipped to dock with each other, but maintained station-keeping for more than 20 minutes. Schirra later commented:
Schirra used another metaphor to describe the difference between the two nations' achievements:
First docking
The first docking of two spacecraft was achieved on March 16, 1966 when Gemini 8, under the command of Neil Armstrong, rendezvoused and docked with an uncrewed Agena Target Vehicle. Gemini 6 was to have been the first docking mission, but had to be cancelled when that mission's Agena vehicle was destroyed during launch.
The Soviets carried out the first automated, uncrewed docking between Cosmos 186 and Cosmos 188 on October 30, 1967.
The first Soviet cosmonaut to attempt a manual docking was Georgy Beregovoy, who unsuccessfully tried to dock his Soyuz 3 craft with the uncrewed Soyuz 2 in October 1968. Automated systems brought the craft to within , and Beregovoy closed the remaining distance under manual control.
The first successful crewed docking occurred on January 16, 1969 when Soyuz 4 and Soyuz 5 docked, collecting the two crew members of Soyuz 5, which had to perform an extravehicular activity to reach Soyuz 4.
In March 1969 Apollo 9 achieved the first internal transfer of crew members between two docked spacecraft.
The first rendezvous of two spacecraft from different countries took place in 1975, when an Apollo spacecraft docked with a Soyuz spacecraft as part of the Apollo–Soyuz mission.
The first multiple space docking took place when both Soyuz 26 and Soyuz 27 were docked to the Salyut 6 space station during January 1978.
Uses
[Image: damaged solar arrays on Mir's Spektr module following a collision with an uncrewed Progress spacecraft in June 1997 during the Shuttle–Mir program. The Progress spacecraft were used for re-supplying the station; in this space rendezvous gone wrong, the Progress collided with Mir, beginning a depressurization that was halted by closing the hatch to Spektr.]
A rendezvous takes place each time a spacecraft brings crew members or supplies to an orbiting space station. The first spacecraft to do this was Soyuz 11, which successfully docked with the Salyut 1 station on June 7, 1971. Human spaceflight missions have successfully made rendezvous with six Salyut stations, with Skylab, with Mir and with the International Space Station (ISS). Currently Soyuz spacecraft are used at approximately six-month intervals to transport crew members to and from the ISS. With the introduction of NASA's Commercial Crew Program, the US can also use its own vehicle alongside the Soyuz: Crew Dragon, an updated version of SpaceX's Cargo Dragon.
Robotic spacecraft are also used to rendezvous with and resupply space stations. Soyuz and Progress spacecraft have automatically docked with both Mir and the ISS using the Kurs docking system; Europe's Automated Transfer Vehicle also used this system to dock with the Russian segment of the ISS. Several uncrewed spacecraft use NASA's berthing mechanism rather than a docking port. The Japanese H-II Transfer Vehicle (HTV), SpaceX Dragon, and Orbital Sciences' Cygnus spacecraft all maneuver to a close rendezvous and maintain station-keeping, allowing the ISS Canadarm2 to grapple and move the spacecraft to a berthing port on the US segment. However, the updated version of Cargo Dragon no longer needs to berth; instead it docks autonomously and directly to the space station. The Russian segment only uses docking ports, so it is not possible for HTV, Dragon and Cygnus to berth there.
Space rendezvous has been used for a variety of other purposes, including recent service missions to the Hubble Space Telescope. Historically, for the missions of Project Apollo that landed astronauts on the Moon, the ascent stage of the Apollo Lunar Module would rendezvous and dock with the Apollo Command/Service Module in lunar orbit rendezvous maneuvers. Also, the STS-49 crew rendezvoused with and attached a rocket motor to the Intelsat VI F-3 communications satellite to allow it to make an orbital maneuver.
Possible future rendezvous may be made by a yet to be developed automated Hubble Robotic Vehicle (HRV), and by the CX-OLEV, which is being developed for rendezvous with a geosynchronous satellite that has run out of fuel. The CX-OLEV would take over orbital stationkeeping and/or finally bring the satellite to a graveyard orbit, after which the CX-OLEV can possibly be reused for another satellite. Gradual transfer from the geostationary transfer orbit to the geosynchronous orbit will take a number of months, using Hall effect thrusters.
Alternatively, the two spacecraft may already be together, and simply undock and re-dock in a different way:
Soyuz spacecraft from one docking point to another on the ISS or Salyut
In the Apollo program, a maneuver known as transposition, docking, and extraction was performed an hour or so after trans-lunar injection. The stack consisted of the Saturn V third stage, the LM inside the LM adapter, and the CSM (in order from bottom to top at launch, which was also the order from back to front with respect to the direction of motion), with the CSM crewed and the LM at this stage uncrewed:
the CSM separated, while the four upper panels of the LM adapter were disposed of
the CSM turned 180 degrees (from flying engine-backward, pointed toward the LM, to engine-forward)
the CSM docked with the LM while the latter was still attached to the third stage
the CSM/LM combination then separated from the third stage
NASA sometimes refers to "Rendezvous, Proximity-Operations, Docking, and Undocking" (RPODU) for the set of all spaceflight procedures that are typically needed around spacecraft operations where two spacecraft work in proximity to one another with intent to connect to one another.
Phases and methods
The standard technique for rendezvous and docking is to dock an active vehicle, the "chaser", with a passive "target". This technique has been used successfully for the Gemini, Apollo, Apollo/Soyuz, Salyut, Skylab, Mir, ISS, and Tiangong programs.
To properly understand spacecraft rendezvous it is essential to understand the relation between spacecraft velocity and orbit. A spacecraft in a given orbit cannot arbitrarily alter its velocity: each orbit corresponds to a particular orbital velocity. If the spacecraft fires thrusters and increases (or decreases) its velocity, it will move to a different orbit, one with a higher or lower altitude. Among circular orbits, higher orbits have a lower orbital velocity and lower orbits have a higher orbital velocity.
For orbital rendezvous to occur, both spacecraft must be in the same orbital plane, and the phase of the orbit (the position of the spacecraft in the orbit) must be matched. For docking, the speed of the two vehicles must also be matched. The "chaser" is placed in a slightly lower orbit than the target. The lower the orbit, the higher the orbital velocity. The difference in orbital velocities of chaser and target is therefore such that the chaser is faster than the target, and catches up with it.
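A hedged sketch of the phasing arithmetic (the orbit altitudes and the initial phase angle are assumptions, not values from the article): the difference in mean motion between the two circular orbits determines how quickly the chaser closes a given phase angle on the target.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def mean_motion(r):
    # Angular rate (rad/s) of a circular orbit of radius r.
    return math.sqrt(MU / r**3)

r_target = 6371e3 + 400e3  # target at 400 km altitude (illustrative)
r_chaser = 6371e3 + 380e3  # chaser phasing 20 km lower (illustrative)

# The chaser's larger mean motion closes the phase angle to the target.
drift = mean_motion(r_chaser) - mean_motion(r_target)  # rad/s, positive
phase_gap = math.radians(30)                           # chaser trails by 30 deg
t_close = phase_gap / drift
print(f"phase gap closes in {t_close / 3600:.1f} hours")  # roughly a day
```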
Once the two spacecraft are sufficiently close, the chaser's orbit is synchronized with the target's orbit. That is, the chaser will be accelerated. This increase in velocity carries the chaser to a higher orbit. The increase in velocity is chosen such that the chaser approximately assumes the orbit of the target. Stepwise, the chaser closes in on the target, until proximity operations (see below) can be started.
In the very final phase, the closure rate is reduced by use of the active vehicle's reaction control system.
Docking typically occurs at a rate of to .
Rendezvous phases
Space rendezvous of an active, or "chaser", spacecraft with an (assumed) passive spacecraft may be divided into several phases, and typically starts with the two spacecraft in separate orbits, typically separated by more than :
A variety of techniques may be used to effect the translational and rotational maneuvers necessary for proximity operations and docking.
Methods of approach
The two most common methods of approach for proximity operations are in-line with the flight path of the spacecraft (called V-bar, as it is along the velocity vector of the target) and perpendicular to the flight path along the line of the radius of the orbit (called R-bar, as it is along the radial vector, with respect to Earth, of the target).
The chosen method of approach depends on safety, spacecraft / thruster design, mission timeline, and, especially for docking with the ISS, on the location of the assigned docking port.
V-bar approach
The V-bar approach is an approach of the "chaser" horizontally along the passive spacecraft's velocity vector. That is, from behind or from ahead, and in the same direction as the orbital motion of the passive target. The motion is parallel to the target's orbital velocity.
In the V-bar approach from behind, the chaser fires small thrusters to increase its velocity in the direction of the target. This, of course, also drives the chaser to a higher orbit. To keep the chaser on the V-vector, other thrusters are fired in the radial direction. If this is omitted (for example due to a thruster failure), the chaser will be carried to a higher orbit, which is associated with an orbital velocity lower than the target's. Consequently, the target moves faster than the chaser and the distance between them increases. This is called a natural braking effect, and is a natural safeguard in case of a thruster failure.
STS-104 was the third Space Shuttle mission to conduct a V-bar arrival at the International Space Station. The V-bar, or velocity vector, extends along a line directly ahead of the station. Shuttles approach the ISS along the V-bar when docking at the PMA-2 docking port.
R-bar approach
The R-bar approach consists of the chaser moving below or above the target spacecraft, along its radial vector. The motion is orthogonal to the orbital velocity of the passive spacecraft.
When below the target the chaser fires radial thrusters to close in on the target, thereby increasing its altitude. However, the orbital velocity of the chaser remains unchanged (thruster firings in the radial direction have no effect on the orbital velocity). Now in a slightly higher position, but with an orbital velocity that does not correspond to the local circular velocity, the chaser slightly falls behind the target. Small rocket pulses in the orbital velocity direction are necessary to keep the chaser along the radial vector of the target. If these rocket pulses are not executed (for example due to a thruster failure), the chaser will move away from the target. This is a natural braking effect. For the R-bar approach, this effect is stronger than for the V-bar approach, making the R-bar approach the safer of the two.
Generally, the R-bar approach from below is preferable, as the chaser is in a lower (faster) orbit than the target, and thus "catches up" with it. For the R-bar approach from above, the chaser is in a higher (slower) orbit than the target, and thus has to wait for the target to approach it.
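The drift behaviour described for both approaches is captured by the Clohessy–Wiltshire equations, the standard linearized model of relative motion near a circular target orbit; the article does not name them, and the numbers below are illustrative assumptions. The sketch evaluates their closed-form solution for a chaser holding 100 m above the target (an R-bar position from above) with no relative velocity: without any thrust it drifts upward and behind, away from the target.

```python
import math

def cw_state(t, n, x0, y0, vx0, vy0):
    """Closed-form Clohessy-Wiltshire solution for in-plane relative motion.
    x: radial offset (positive away from Earth); y: along-track offset;
    n: the target's mean motion in rad/s."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0
         - (2 / n) * (1 - c) * vx0 + (1 / n) * (4 * s - 3 * n * t) * vy0)
    return x, y

n = 0.00113  # mean motion of a ~400 km circular orbit, rad/s (illustrative)

# Chaser 100 m above the target with zero relative velocity, coasting.
for minutes in (10, 30, 60):
    x, y = cw_state(minutes * 60, n, x0=100.0, y0=0.0, vx0=0.0, vy0=0.0)
    print(f"t={minutes:2d} min: radial {x:7.1f} m, along-track {y:9.1f} m")
# The along-track offset grows steadily negative: the unthrusted chaser
# falls behind the target, the "natural braking effect" described above.
```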
Astrotech proposed meeting ISS cargo needs with a vehicle which would approach the station, "using a traditional nadir R-bar approach." The nadir R-bar approach is also used for flights to the ISS of H-II Transfer Vehicles, and of SpaceX Dragon vehicles.
Z-bar approach
An approach of the active, or "chaser", spacecraft horizontally from the side and orthogonal to the orbital plane of the passive spacecraft—that is, from the side and out-of-plane of the orbit of the passive spacecraft—is called a Z-bar approach.
Surface rendezvous
Apollo 12, the second crewed lunar landing, performed the first rendezvous on the surface of another celestial body by landing close to the uncrewed Surveyor 3 probe and taking parts of it back to Earth.
| Physical sciences | Orbital mechanics | Astronomy |
985483 | https://en.wikipedia.org/wiki/Cenote | Cenote | A cenote is a natural pit, or sinkhole, resulting when a collapse of limestone bedrock exposes groundwater. The term originated on the Yucatán Peninsula of Mexico, where the ancient Maya commonly used cenotes for water supplies, and occasionally for sacrificial offerings. The name derives from a word used by the lowland Yucatec Maya to refer to any location with accessible groundwater.
In Mexico the Yucatán Peninsula alone has an estimated 10,000 cenotes, water-filled sinkholes naturally formed by the collapse of limestone, and located across the peninsula. Some of these cenotes are at risk from the construction of the new tourist Maya Train.
Cenotes are common geological forms in low-altitude regions, particularly on islands (such as Cefalonia, Greece), coastlines, and platforms with young post-Paleozoic limestone with little soil development. The term cenote, originally applying only to the features in Yucatán, has since been applied by researchers to similar karst features in other places such as in Cuba, Australia, Europe, and the United States.
Definition and description
Cenotes are surface connections to subterranean water bodies. While the best-known cenotes are large open-water pools measuring tens of meters in diameter, such as those at Chichen Itza in Mexico, the greatest number of cenotes are smaller sheltered sites and do not necessarily have any surface exposed water. Some cenotes are only found through small <1 m (3 ft) diameter holes created by tree roots, with human access through enlarged holes, such as the cenotes Choo-Ha, Tankach-Ha, and Multum-Ha near Tulum. There are at least 6,000 cenotes in the Yucatán Peninsula of Mexico. Cenote water is often very clear, as the water comes from rain water filtering slowly through the ground, and therefore contains very little suspended particulate matter. The groundwater flow rate within a cenote may be very slow. In many cases, cenotes are areas where sections of the cave roof have collapsed revealing an underlying cave system, and the water flow rates may be much faster: up to per day.
The Yucatan cenotes attract cavern and cave divers who have documented extensive flooded cave systems, some of which have been explored for lengths of or more.
Geology and hydrology
Cenotes are formed by the dissolution of rock, the resulting subsurface void (which may or may not be linked to an active cave system), and the subsequent structural collapse. Rock that falls into the water below is slowly removed by further dissolution, creating space for more collapse blocks. The rate of collapse likely increases during periods when the water table is below the ceiling of the void, since the rock ceiling is then no longer buoyantly supported by the water in the void.
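The buoyant-support argument can be quantified with Archimedes' principle: a submerged rock roof effectively weighs less by the weight of the water it displaces. A minimal sketch, with assumed densities:

```python
# Effective weight of a limestone cave roof, submerged vs. dry.
# The densities below are illustrative assumptions.
RHO_LIMESTONE = 2300.0  # kg/m^3
RHO_WATER = 1000.0      # kg/m^3
G = 9.81                # m/s^2

volume = 1.0  # consider one cubic metre of roof rock
dry_weight = RHO_LIMESTONE * volume * G
submerged_weight = (RHO_LIMESTONE - RHO_WATER) * volume * G
print(f"submerged roof bears {submerged_weight / dry_weight:.0%} of its dry weight")
# -> about 57%, so a falling water table raises the load on the roof by
# roughly three-quarters and makes collapse more likely.
```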
Cenotes may be fully collapsed, creating an open water pool, or partially collapsed, with a portion of rock overhanging the water. The stereotypical cenote resembles a small circular pond, measuring some tens of meters in diameter with sheer rock walls. Most cenotes, however, require some degree of stooping or crawling to access the water.
Penetration and extent
In the north and northwest of the Yucatán Peninsula in Mexico, the cenotes generally overlie vertical voids penetrating below the modern water table. However, very few of these cenotes appear to be connected with horizontally extensive underground river systems; water flow through them is more likely dominated by aquifer matrix and fracture flows.
In contrast, the cenotes along the Caribbean coast of the Yucatán Peninsula (within the state of Quintana Roo) often provide access to extensive underwater cave systems, such as Sistema Ox Bel Ha, Sistema Sac Actun/Sistema Nohoch Nah Chich and Sistema Dos Ojos.
Freshwater/seawater interface
The Yucatán Peninsula contains a vast coastal aquifer system, which is typically density-stratified. The infiltrating meteoric water (i.e., rainwater) floats on top of higher-density saline water intruding from the coastal margins. The whole aquifer is therefore an anchialine system (one that is land-locked but connected to an ocean). Where a cenote, or the flooded cave to which it is an opening, provides deep enough access into the aquifer, the interface between the fresh and saline water may be reached. The density interface between the fresh and saline waters is a halocline, which means a sharp change in salt concentration over a small change in depth. Mixing of the fresh and saline water results in a blurry swirling effect caused by refraction between the different densities of fresh and saline waters.
The depth of the halocline is a function of several factors: climate and specifically how much meteoric water recharges the aquifer, hydraulic conductivity of the host rock, distribution and connectivity of existing cave systems, and how effective these are at draining water to the coast, and the distance from the coast. In general, the halocline is deeper further from the coast, and in the Yucatán Peninsula this depth is below the water table at the coast, and below the water table in the middle of the peninsula, with saline water underlying the whole of the peninsula.
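A standard first approximation for this interface depth, not named in the article, is the Ghyben–Herzberg relation, which balances the hydrostatic pressure of the freshwater lens against the denser underlying saline water. A minimal sketch with assumed densities:

```python
# Ghyben-Herzberg estimate of the fresh/saline interface depth below sea
# level. Densities are typical assumed values, not figures from the text.
RHO_FRESH = 1000.0   # kg/m^3, fresh groundwater
RHO_SALINE = 1025.0  # kg/m^3, intruding seawater

def interface_depth(water_table_height_m):
    """Depth of the halocline below sea level for a given height of the
    fresh water table above sea level (hydrostatic approximation)."""
    return RHO_FRESH / (RHO_SALINE - RHO_FRESH) * water_table_height_m

# A water table only 0.5 m above sea level supports ~20 m of fresh water,
# which is consistent with the halocline deepening away from the coast.
print(interface_depth(0.5))  # -> 20.0
```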
Types
In 1936, a simple morphometry-based classification system for cenotes was presented.
Cenotes-cántaro (Jug or pit cenotes) are those with a surface connection narrower than the diameter of the water body;
Cenotes-cilíndricos (Cylinder cenotes) are those with strictly vertical walls;
Cenotes-aguadas (Basin cenotes) are those with shallow water basins; and
Grutas (Cave cenotes) are those having a horizontal entrance with dry sections.
The classification scheme was based on morphometric observations above the water table, and therefore incompletely reflects the processes by which the cenotes formed and the inherent hydrogeochemical relationship with the underlying flooded cave networks, which were only discovered in the 1980s and later with the initiation of cave diving exploration.
Flora and fauna
Flora and fauna are generally scarcer than in the open ocean; however, marine animals do thrive in the caves. In caverns, one can spot mojarras, mollies, guppies, catfish, small eels and frogs. In the most secluded and darkest cenotes, the fauna has evolved traits typical of cave-dwelling species: many animals lack pigmentation and are often blind, so they are equipped with long feelers to find food and make their way around in the dark.
Chicxulub crater
Although cenotes are found widely throughout much of the Yucatán Peninsula, a higher-density circular alignment of cenotes overlies the measured rim of the Chicxulub crater. This crater structure, identified from the alignment of cenotes, but also subsequently mapped using geophysical methods (including gravity mapping) and also drilled into with core recovery, has been dated to the boundary between the Cretaceous and Paleogene geologic periods, 66 million years ago. This meteorite impact at the Cretaceous–Paleogene boundary is therefore associated with the mass extinction of the non-avian dinosaurs and is also known as the Cretaceous–Paleogene extinction event.
Archaeology and anthropology
In 2001–2002 expeditions led by Arturo H. González and Carmen Rojas Sandoval in the Yucatán discovered three human skeletons; one of them, Eve of Naharon, was carbon-dated to be 13,600 years old. In March 2008, three members of the Proyecto Espeleológico de Tulum and Global Underwater Explorers dive team, Alex Alvarez, Franco Attolini, and Alberto Nava, explored a section of Sistema Aktun Hu (part of Sistema Sac Actun) known as the pit Hoyo Negro. At a depth of the divers located the remains of a mastodon and a human skull (at ) that might be the oldest evidence of human habitation in the region.
The Yucatán Peninsula has almost no rivers and only a few lakes, and those are often marshy. The widely distributed cenotes are the only perennial source of potable water and have long been the principal source of water in much of the region. Major Maya settlements required access to adequate water supplies, and therefore cities, including the famous Chichen Itza, were built around these natural wells. Many cenotes like the Sacred Cenote in Chichen Itza played an important role in Maya rites. The Maya believed that cenotes were portals to Xibalba or the afterlife, and home to the rain god, Chaac. The Maya often deposited human remains as well as ceremonial artifacts in these cenotes.
The discovery of golden sacrificial artifacts in some cenotes led to the archaeological exploration of most cenotes in the first part of the 20th century. Edward Herbert Thompson (1857–1935), an American diplomat who had bought the Chichen Itza site, began dredging the Sacred Cenote there in 1904. He discovered human skeletons and sacrificial objects confirming a local legend, the Cult of the Cenote, involving human sacrifice to the rain god Chaac by the ritual casting of victims and objects into the cenote. However, not all cenotes were sites of human sacrifice. The cenote at Punta Laguna has been extensively studied and none of the approximately 120 individuals show signs of sacrifice.
The remains of this cultural heritage are protected by the UNESCO Convention on the Protection of the Underwater Cultural Heritage.
Scuba diving
Cenotes have attracted cavern and cave divers, and there are organized efforts to explore and map these underwater systems. Cenotes may be publicly or privately owned, and some are designated "National Natural Parks". Great care should be taken to avoid damaging this fragile ecosystem when diving. In Mexico, the Quintana Roo Speleological Survey maintains a list of the longest and deepest water-filled and dry caves within the state boundaries. When cavern diving, one must be able to see natural light the entire time that one is exploring the cavern (e.g., Kukulkan cenote near Tulum, Mexico). During a cave dive, one passes the point where daylight can penetrate and must follow a safety guideline to exit the cave. Conditions change dramatically once a dive moves from cavern into cave.
Contrary to cenote cavern diving, cenote cave diving requires special equipment and training (certification for cave diving). Both cavern and cave diving, however, require detailed briefings, diving experience, and weight adjustment for freshwater buoyancy. The cenotes are usually filled with rather cool fresh water. Cenote divers must also be wary of haloclines, where fresh water meets underlying salt water; crossing one produces blurred vision until the diver reaches a more homogeneous layer.
Notable cenotes
Australia
Ewens Ponds, near Mount Gambier, South Australia
Kilsby sinkhole, near Mount Gambier, South Australia
Little Blue Lake, near Mount Schank, South Australia
Bahamas
Thunderball Grotto, on Staniel Cay
Belize
Great Blue Hole
Canada
Devil's Bath is the largest cenote in Canada, measuring 1,178 ft (359 m) in diameter and 144 ft (44 m) in depth. It is located near the village of Port Alice, British Columbia, on the northwest coastline of Vancouver Island. Devil's Bath is continuously fed by an underground spring and is connected by an underwater tunnel to the Benson River Cave.
Dominican Republic
Hoyo Azul (Punta Cana)
Los Tres Ojos
Ojos Indigenas (Punta Cana)
Greece
Melissani Cave, Kefalonia
Jamaica
Blue Hole (Ocho Rios)
Mexico
Yucatán Peninsula
Dos Ojos, Municipality of Tulum
Dzibilchaltun, Yucatán
Ik Kil, Yucatán
Gran Cenote, Municipality of Tulum
Hubiku, Yucatán
Sacred Cenote, Chichen Itza
Xtacunbilxunan, Bolonchén
Cenote Azul, Playa del Carmen
Jardin Del Eden, Bacalar
Choo-Ha, Coba
Zaci, Valladolid
El Zapote, the site of the Hells Bells bell-like rock formation
United States
Blue Hole, Santa Rosa, New Mexico
Blue Hole, Castalia, Ohio
Bottomless Lakes, near Roswell, New Mexico
Montezuma Well, Verde Valley, Arizona
Hamilton Pool, Austin, Texas
Zimbabwe
Chinhoyi Caves in Zimbabwe
Lambda-CDM model
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components:
a cosmological constant, denoted by lambda (Λ), associated with dark energy;
the postulated cold dark matter, denoted by CDM;
ordinary matter.
It is the current standard model of Big Bang cosmology, as it is the simplest model that provides a reasonably good account of:
the existence and structure of the cosmic microwave background;
the large-scale structure in the distribution of galaxies;
the observed abundances of hydrogen (including deuterium), helium, and lithium;
the accelerating expansion of the universe observed in the light from distant galaxies and supernovae.
The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe.
The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues have led to many alternative models and to challenges to the assumptions of the ΛCDM model.
Overview
The ΛCDM model is based on three postulates on the structure of spacetime:
The cosmological principle, that the universe is the same everywhere and in all directions, and that it is expanding,
A postulate by Hermann Weyl that the lines of spacetime (geodesics) intersect at only one point, where time along each line can be synchronized; the behavior resembles an expanding fluid,
General relativity, which relates the geometry of spacetime to the distribution of matter and energy.
This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor. For example, a model might include baryons, photons, neutrinos, and dark matter. These component densities become parameters that are extracted when the model is constrained to match astrophysical observations.
The most accurate observations that are sensitive to the component densities are consequences of statistical inhomogeneities, called "perturbations", in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model that produces perturbations by postulating an extremely rapid expansion early in the universe, which separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters, also determined by matching observations.
Finally, the light that will become astronomical observations must pass through the universe. The latter part of that journey passes through ionized space, where free electrons can scatter the light, altering the anisotropies. This effect is characterized by one additional parameter, the reionization optical depth.
The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. Also, since it originates from ordinary general relativity, it, like general relativity, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light.
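As a rough sense of scale for where recession speeds reach the speed of light, one can compute the Hubble distance $c/H_0$ directly; a minimal sketch, assuming a round illustrative value for $H_0$ rather than any fitted result:

```python
# Rough illustration: distance at which the recession speed v = H0 * D
# equals the speed of light (the Hubble distance), assuming H0 ~ 70 km/s/Mpc.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed illustrative Hubble constant, km/s/Mpc

hubble_distance_mpc = C_KM_S / H0                      # D such that H0 * D = c
hubble_distance_gly = hubble_distance_mpc * 3.2616e-3  # 1 Mpc ~ 3.2616e-3 Gly

print(f"Hubble distance ~ {hubble_distance_mpc:.0f} Mpc "
      f"(~{hubble_distance_gly:.1f} billion light-years)")
```

Objects beyond roughly this comoving distance recede faster than light today, without any violation of relativity, as described above.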
The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, $p = -\rho c^2$, which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, $\Omega_\Lambda$, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data; i.e., more than 68.3% (2018 estimate) of the mass–energy density of the universe.
Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-keplerian" rotation curves of galaxies; the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter.
The ΛCDM model proposes specifically cold dark matter, hypothesized as:
Non-baryonic: Consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons)
Cold: Its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold)
Dissipationless: Cannot cool by radiating photons
Collisionless: Dark matter particles interact with each other and other particles only through gravity and possibly the weak force
Dark matter constitutes about 26.5% of the mass–energy density of the universe. The remaining 4.9% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe.
The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10¹⁵ K. This was immediately (within 10⁻²⁹ seconds) followed by an exponential expansion of space by a scale multiplier of 10²⁷ or more, known as cosmic inflation. The early universe remained hot (above 10,000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon.
The model uses the Friedmann–Lemaître–Robertson–Walker metric, the Friedmann equations, and the cosmological equations of state to describe the observable universe from approximately 0.1 s to the present.
Cosmic expansion history
The expansion of the universe is parameterized by a dimensionless scale factor $a = a(t)$ (with time $t$ counted from the birth of the universe), defined relative to the present time, so $a_0 \equiv a(t_0) = 1$; the usual convention in cosmology is that subscript 0 denotes present-day values, so $t_0$ denotes the age of the universe. The scale factor is related to the observed redshift $z$ of the light emitted at time $t_\mathrm{em}$ by

$$a(t_\mathrm{em}) = \frac{1}{1 + z}.$$
The expansion rate is described by the time-dependent Hubble parameter, $H(t)$, defined as

$$H(t) \equiv \frac{\dot{a}}{a},$$

where $\dot{a}$ is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density $\rho$, the curvature $k$, and the cosmological constant $\Lambda$:

$$H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},$$

where, as usual, $c$ is the speed of light and $G$ is the gravitational constant.
A critical density $\rho_\mathrm{crit}$ is the present-day density which gives zero curvature $k$, assuming the cosmological constant $\Lambda$ is zero, regardless of its actual value. Substituting these conditions into the Friedmann equation gives

$$\rho_\mathrm{crit} = \frac{3 H_0^2}{8\pi G} \approx 1.88 \times 10^{-26}\, h^2\ \mathrm{kg\ m^{-3}},$$

where $h \equiv H_0 / (100\ \mathrm{km\ s^{-1}\ Mpc^{-1}})$ is the reduced Hubble constant.
If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent.
The present-day density parameter $\Omega_x$ for various species is defined as the dimensionless ratio

$$\Omega_x \equiv \frac{\rho_x(t_0)}{\rho_\mathrm{crit}} = \frac{8 \pi G \rho_x(t_0)}{3 H_0^2},$$

where the subscript $x$ is one of $\mathrm{b}$ for baryons, $\mathrm{c}$ for cold dark matter, $\mathrm{rad}$ for radiation (photons plus relativistic neutrinos), and $\Lambda$ for dark energy.
Since the densities of the various species scale as different powers of $a$, e.g. $a^{-3}$ for matter, $a^{-4}$ for radiation, etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as

$$\frac{H(a)}{H_0} = \sqrt{\Omega_\mathrm{rad}\, a^{-4} + \Omega_m\, a^{-3} + \Omega_k\, a^{-2} + \Omega_\Lambda\, a^{-3(1+w)}},$$

where $w$ is the equation of state parameter of dark energy, $\Omega_m = \Omega_\mathrm{b} + \Omega_\mathrm{c}$ is the total matter density, $\Omega_k$ parameterizes the curvature, and neutrino mass is assumed negligible (significant neutrino mass requires a more complex equation). The various $\Omega$ parameters add up to $1$ by construction. In the general case this is integrated by computer to give the expansion history $a(t)$ and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations.
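As a concrete illustration of that computer integration, the sketch below numerically evaluates the line-of-sight comoving distance $D_C(z) = (c/H_0)\int_0^z dz'/E(z')$ for a flat model with $w = -1$; the parameter values are illustrative assumptions, not fitted results:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative (not fitted) parameters for a flat LCDM model with w = -1.
H0 = 70.0                # Hubble constant, km/s/Mpc (assumed)
Om, Orad = 0.3, 9e-5     # matter and radiation density parameters (assumed)
OL = 1.0 - Om - Orad     # dark energy fixed by flatness: the Omegas sum to 1
C = 299_792.458          # speed of light, km/s

def E(z):
    """Dimensionless expansion rate H(z)/H0 from the Friedmann equation."""
    return np.sqrt(Orad * (1 + z)**4 + Om * (1 + z)**3 + OL)

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc: D_C = (c/H0) * int_0^z dz'/E."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (C / H0) * integral

for z in (0.5, 1.0, 1100.0):
    print(f"z = {z:7.1f}: D_C ~ {comoving_distance(z):8.0f} Mpc")
```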
In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature is zero ($\Omega_k = 0$) and $w = -1$, so this simplifies to

$$\frac{H(a)}{H_0} = \sqrt{\Omega_\mathrm{rad}\, a^{-4} + \Omega_m\, a^{-3} + \Omega_\Lambda}.$$

Observations show that the radiation density is very small today, $\Omega_\mathrm{rad} \sim 10^{-4}$; if this term is neglected, the above has the analytic solution

$$a(t) = \left(\frac{\Omega_m}{\Omega_\Lambda}\right)^{1/3} \sinh^{2/3}\!\left(\frac{t}{t_\Lambda}\right), \qquad t_\Lambda \equiv \frac{2}{3 H_0 \sqrt{\Omega_\Lambda}};$$

this is fairly accurate for $a > 0.01$, i.e. $t > 10$ million years.
Solving $a(t) = 1$ for $t$ gives the present age of the universe in terms of the other parameters:

$$t_0 = t_\Lambda \operatorname{arsinh}\sqrt{\frac{\Omega_\Lambda}{\Omega_m}}.$$
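A quick numerical cross-check of the analytic solution above, comparing it with direct integration of $t_0 = \int_0^1 da / (a\,H(a))$; the parameter values are again illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0                # km/s/Mpc (assumed, illustrative)
Om, OL = 0.3, 0.7        # flat matter + Lambda model, radiation neglected
H0_per_gyr = H0 / 977.8  # unit conversion: 1 km/s/Mpc ~ 1/977.8 Gyr^-1

# Analytic: t0 = t_Lambda * arsinh(sqrt(OL/Om)), t_Lambda = 2/(3 H0 sqrt(OL)).
t_lambda = 2.0 / (3.0 * H0_per_gyr * np.sqrt(OL))
t0_analytic = t_lambda * np.arcsinh(np.sqrt(OL / Om))

# Numeric: t0 = int_0^1 da / (a * H(a)) with H(a) = H0 sqrt(Om a^-3 + OL).
t0_numeric, _ = quad(lambda a: 1.0 / (a * H0_per_gyr *
                                      np.sqrt(Om * a**-3 + OL)), 0.0, 1.0)

print(f"analytic t0 ~ {t0_analytic:.2f} Gyr, numeric t0 ~ {t0_numeric:.2f} Gyr")
# Both give ~13.5 Gyr for these illustrative parameters.
```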
It follows that the transition from decelerating to accelerating expansion (the second derivative $\ddot{a}$ crossing zero) occurred when

$$a = \left(\frac{\Omega_m}{2\,\Omega_\Lambda}\right)^{1/3},$$

which evaluates to $a \approx 0.6$ ($z \approx 0.66$) for the best-fit parameters estimated from the Planck spacecraft.
Parameters
Multiple variants of the ΛCDM model are used with some differences in parameters. One such set is outlined in the table below.
The Planck collaboration version of the ΛCDM model is based on six parameters: the baryon density parameter; the dark matter density parameter; the scalar spectral index; two parameters related to the curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once en route (called the reionization optical depth). Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1.
The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated.
Historical development
The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density.
During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density.
During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted.
These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty.
Successes
Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy. The notable successes include:
Accurate modeling of the high-precision CMB angular distribution measured by the Planck mission and the Atacama Cosmology Telescope.
Accurate description of the linear E-mode polarization of the CMB radiation due to fluctuations on the surface of last scattering events.
Prediction of the observed B-mode polarization of the CMB light due to primordial gravitational waves.
Observations of H2O emission spectra from a galaxy 12.8 billion light years away, consistent with molecules excited by a cosmic background radiation considerably warmer (16–20 K) than the CMB we observe now (about 3 K).
Predictions of the primordial abundance of deuterium as a result of Big Bang nucleosynthesis. The observed abundance matches the one derived from the nucleosynthesis model with the value for the baryon density derived from CMB measurements.
In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed.
Challenges
Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation of a more fundamental model.
Lack of detection
Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions.
Violations of the cosmological principle
The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assumes that the universe looks the same in all directions (isotropy) and from every location (homogeneity) if you look at a large enough scale: "the universe looks the same whoever and wherever you are." This cosmological principle allows a specific metric, the Friedmann–Lemaître–Robertson–Walker metric, to be derived and developed into a theory that can be compared to experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible. These assumptions were carried over into the ΛCDM model. However, some findings have suggested violations of the cosmological principle.
Violations of isotropy
Evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales.
Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored.
Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle.
The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole.
Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance by studies of the cosmic microwave background temperature maps.
Violations of homogeneity
Based on N-body simulations in ΛCDM, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales 260/h Mpc or more. However, many large-scale structures have been discovered, and some authors have reported some of the structures to be in conflict with the predicted scale of homogeneity for ΛCDM, including
The Clowes–Campusano LQG, discovered in 1991, which has a length of 580 Mpc
The Sloan Great Wall, discovered in 2003, which has a length of 423 Mpc
U1.11, a large quasar group discovered in 2011, which has a length of 780 Mpc
The Huge-LQG, discovered in 2012, which is three times longer than and twice as wide as is predicted possible according to ΛCDM
The Hercules–Corona Borealis Great Wall, discovered in November 2013, which has a length of 2000–3000 Mpc (more than seven times that of the SGW)
The Giant Arc, discovered in June 2021, which has a length of 1000 Mpc
The Big Ring, reported in 2024, which has a diameter of 399 Mpc and is shaped like a ring
Other authors claim that the existence of structures larger than the scale of homogeneity in the ΛCDM model does not necessarily violate the cosmological principle in the ΛCDM model.
El Gordo galaxy cluster collision
El Gordo is a massive interacting galaxy cluster in the early Universe. The extreme properties of El Gordo in terms of its redshift, mass, and collision velocity lead to strong tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND, owing to more rapid structure formation.
KBC void
The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at last scattering, or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model.
Hubble tension
Statistically significant differences remain between values of the Hubble constant derived by matching the ΛCDM model to "early universe" data, such as the cosmic background radiation, and values derived from "late universe" stellar distance measurements. While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model.
Dozens of proposals for modifications of ΛCDM, or for completely new models, have been published to explain the Hubble tension. Among them are models that modify the properties of dark energy or dark matter over time, introduce interactions between dark energy and dark matter, unify dark energy and matter, add other forms of dark radiation such as sterile neutrinos, modify the properties of gravity or the effects of inflation, or change the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM.
S8 tension
The $S_8$ tension in cosmology is another major problem for the ΛCDM model. The parameter $S_8$ in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as

$$S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3},$$

where $\sigma_8$ is the amplitude of matter density fluctuations on a scale of $8\, h^{-1}$ Mpc.
Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) provide increasingly precise values of $S_8$. However, these two categories of measurement differ by more standard deviations than their uncertainties can account for. This discrepancy is called the $S_8$ tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction.
Values of $S_8$ have been reported by Planck (2020), KiDS (2021), DES (2022), joint DES+KiDS (2023), HSC-SSP (2023), and eROSITA (2024); values have also been obtained using peculiar velocities (2020), among other methods.
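To make "differing by more standard deviations than their uncertainties" concrete, the naive Gaussian tension between one early-time and one late-time measurement can be computed as below; the $S_8$ numbers in the sketch are placeholders for illustration only, not the published results:

```python
from math import hypot

def tension_sigma(val_a, err_a, val_b, err_b):
    """Naive Gaussian tension between two independent measurements,
    in units of their combined standard deviation."""
    return abs(val_a - val_b) / hypot(err_a, err_b)

# Placeholder values for illustration only (not the published results):
s8_early, s8_early_err = 0.83, 0.02   # hypothetical CMB-derived S8
s8_late,  s8_late_err  = 0.76, 0.02   # hypothetical lensing-derived S8

t = tension_sigma(s8_early, s8_early_err, s8_late, s8_late_err)
print(f"tension ~ {t:.1f} sigma")   # ~2.5 sigma for these placeholders
```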
Axis of evil
Cosmological lithium problem
The observed amount of lithium in the universe is less than the amount calculated from the ΛCDM model by a factor of 3–4. If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed.
Shape of the universe
The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature.
Violations of the strong equivalence principle
The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analyses of other galaxies.
Cold dark matter discrepancies
Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Some of these problems have proposed solutions, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models, supporting instead the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations, as seen in proposals such as modified gravity (MOG) theory or tensor–vector–scalar gravity (TeVeS) theory. Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Cuspy halo problem
The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is observed in galaxies by investigating their rotation curves.
Dwarf galaxy problem
Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way.
Satellite disk problem
Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures, whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, recent research suggests this seemingly anomalous alignment is a quirk that will dissolve over time.
High-velocity galaxy problem
Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy.
Galaxy morphology problem
If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. In contrast, about 80% of observed galaxies show no evidence of such bulges, and giant pure-disc galaxies are commonplace. The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years.
Fast galaxy bar problem
If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast.
Small scale crisis
Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model.
High redshift galaxies
Observations from the James Webb Space Telescope have resulted in various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. Other candidate galaxies, which have not been confirmed by spectroscopy, include CEERS-93316 at an estimated cosmological redshift of 16.4.
The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with the parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, by a yet-unknown force or particle outside the Standard Model through which dark matter interacts, by more efficient baryonic matter accumulation in dark matter halos, by early dark energy models, or by the hypothesized, long-sought Population III stars.
Missing baryon problem
Massimo Persic and Paolo Salucci first estimated the baryonic density today present in ellipticals, spirals, groups and clusters of galaxies.
They performed an integration of the baryonic mass-to-light ratio over luminosity, weighted with the luminosity function over the previously mentioned classes of astrophysical objects. The resulting baryon density is much lower than the prediction of standard cosmic nucleosynthesis, so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons".
The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements.
Unfalsifiability
It has been argued that the ΛCDM model is built upon a foundation of conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper.
Extended models
Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature (the total density parameter may differ from 1), or for quintessence rather than a cosmological constant, where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted $r$), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models.
Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value.
Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio should be between 0 and 0.3, and the latest results are within those limits.
Fermi surface
In condensed matter physics, the Fermi surface is the surface in reciprocal space which separates occupied electron states from unoccupied electron states at zero temperature. The shape of the Fermi surface is derived from the periodicity and symmetry of the crystalline lattice and from the occupation of electronic energy bands. The existence of a Fermi surface is a direct consequence of the Pauli exclusion principle, which allows a maximum of one electron per quantum state. The study of the Fermi surfaces of materials is called fermiology.
Theory
Consider a spin-less ideal Fermi gas of $N$ particles. According to Fermi–Dirac statistics, the mean occupation number of a state with energy $\epsilon_i$ is given by

$$\langle n_i \rangle = \frac{1}{e^{(\epsilon_i - \mu)/k_B T} + 1},$$

where

$\langle n_i \rangle$ is the mean occupation number of the $i$th state,
$\epsilon_i$ is the kinetic energy of the $i$th state,
$\mu$ is the chemical potential (at zero temperature, this is the maximum kinetic energy the particle can have, i.e. the Fermi energy $\epsilon_F$),
$T$ is the absolute temperature,
$k_B$ is the Boltzmann constant.

Suppose we consider the limit $T \to 0$. Then we have

$$\langle n_i \rangle \to \begin{cases} 1 & (\epsilon_i < \mu) \\ 0 & (\epsilon_i > \mu), \end{cases}$$

i.e. all states with energy below the chemical potential are occupied and all states above it are empty.
By the Pauli exclusion principle, no two fermions can be in the same state. Additionally, at zero temperature the enthalpy of the electrons must be minimal, meaning that they cannot change state. If, for a particle in some state, there existed an unoccupied lower state that it could occupy, then the energy difference between those states would give the electron an additional enthalpy; hence, the enthalpy of the electron would not be minimal. Therefore, at zero temperature all the lowest energy states must be saturated. For a large ensemble the Fermi level will be approximately equal to the chemical potential of the system, and hence every state below this energy must be occupied. Thus, particles fill up all energy levels below the Fermi level at absolute zero, which is equivalent to saying that $\epsilon_F$ is the energy level below which there are exactly $N$ states.
In momentum space, these particles fill up a ball of radius $k_F$ (the Fermi wavevector), the surface of which is called the Fermi surface.
The linear response of a metal to an electric, magnetic, or thermal gradient is determined by the shape of the Fermi surface, because currents are due to changes in the occupancy of states near the Fermi energy. In reciprocal space, the Fermi surface of an ideal Fermi gas is a sphere of radius

$$k_F = \left(3 \pi^2 n\right)^{1/3},$$

determined by the valence electron concentration $n$; the corresponding Fermi energy is $\epsilon_F = \hbar^2 k_F^2 / 2m$, where $\hbar$ is the reduced Planck constant. A material whose Fermi level falls in a gap between bands is an insulator or semiconductor depending on the size of the bandgap. When a material's Fermi level falls in a bandgap, there is no Fermi surface.
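A short numerical sketch of the free-electron formulas above; the conduction-electron density used for copper is an approximate textbook figure, flagged as an assumption in the code:

```python
import numpy as np

HBAR = 1.0545718e-34   # reduced Planck constant, J s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.6021766e-19     # joules per electronvolt

n = 8.5e28             # assumed conduction-electron density of copper, m^-3

k_F = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)   # Fermi wavevector, m^-1
E_F = (HBAR * k_F) ** 2 / (2.0 * M_E)       # free-electron Fermi energy, J

print(f"k_F ~ {k_F:.2e} m^-1, E_F ~ {E_F / EV:.1f} eV")
# ~1.4e10 m^-1 and ~7 eV, close to textbook free-electron values for copper.
```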
Materials with complex crystal structures can have quite intricate Fermi surfaces. Figure 2 illustrates the anisotropic Fermi surface of graphite, which has both electron and hole pockets in its Fermi surface due to multiple bands crossing the Fermi energy along certain directions. Often in a metal, the Fermi surface radius $k_F$ is larger than the size of the first Brillouin zone, which results in a portion of the Fermi surface lying in the second (or higher) zones. As with the band structure itself, the Fermi surface can be displayed in an extended-zone scheme, where $k$ is allowed to have arbitrarily large values, or a reduced-zone scheme, where wavevectors are shown modulo $2\pi/a$ (in the 1-dimensional case), $a$ being the lattice constant. In the three-dimensional case the reduced-zone scheme means that from any wavevector $k$ an appropriate number of reciprocal lattice vectors $K$ is subtracted so that the new $k$ is closer to the origin in $k$-space than to any $K$. Solids with a large density of states at the Fermi level become unstable at low temperatures and tend to form ground states where the condensation energy comes from opening a gap at the Fermi surface. Examples of such ground states are superconductors, ferromagnets, Jahn–Teller distortions and spin density waves.
The state occupancy of fermions like electrons is governed by Fermi–Dirac statistics so at finite temperatures the Fermi surface is accordingly broadened. In principle all fermion energy level populations are bound by a Fermi surface although the term is not generally used outside of condensed-matter physics.
Experimental determination
Electronic Fermi surfaces have been measured through observation of the oscillation of transport properties in magnetic fields $B$, for example the de Haas–van Alphen effect (dHvA) and the Shubnikov–de Haas effect (SdH). The former is an oscillation in magnetic susceptibility and the latter in resistivity. The oscillations are periodic versus $1/B$ and occur because of the quantization of energy levels in the plane perpendicular to a magnetic field, a phenomenon first predicted by Lev Landau. The new states are called Landau levels and are separated by an energy $\hbar\omega_c$, where $\omega_c = eB/m^*c$ is called the cyclotron frequency, $e$ is the electronic charge, $m^*$ is the electron effective mass and $c$ is the speed of light. In a famous result, Lars Onsager proved that the period of oscillation $\Delta(1/B)$ is related to the cross-section of the Fermi surface $A_\perp$ (typically given in Å⁻²) perpendicular to the magnetic field direction by the equation

$$\Delta\!\left(\frac{1}{B}\right) = \frac{2\pi e}{\hbar c\, A_\perp}.$$

Thus the determination of the periods of oscillation for various applied field directions allows mapping of the Fermi surface. Observation of the dHvA and SdH oscillations requires magnetic fields large enough that the circumference of the cyclotron orbit is smaller than a mean free path. Therefore, dHvA and SdH experiments are usually performed at high-field facilities like the High Field Magnet Laboratory in the Netherlands, Grenoble High Magnetic Field Laboratory in France, the Tsukuba Magnet Laboratory in Japan or the National High Magnetic Field Laboratory in the United States.
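In SI units the Onsager relation is usually quoted as $\Delta(1/B) = 2\pi e/(\hbar A_\perp)$, so a measured oscillation frequency $F = 1/\Delta(1/B)$ converts directly to an extremal cross-section; a minimal sketch, with the example frequency being an arbitrary assumed value:

```python
import numpy as np

E_CHARGE = 1.6021766e-19   # elementary charge, C
HBAR = 1.0545718e-34       # reduced Planck constant, J s

def cross_section_from_frequency(f_tesla):
    """Extremal Fermi-surface cross-section A_perp (m^-2) from a dHvA/SdH
    oscillation frequency F = 1/Delta(1/B), via the SI-form Onsager relation
    A_perp = 2*pi*e*F/hbar."""
    return 2.0 * np.pi * E_CHARGE * f_tesla / HBAR

F = 500.0                                # assumed example frequency, tesla
A = cross_section_from_frequency(F)      # m^-2
print(f"A_perp ~ {A:.2e} m^-2 = {A * 1e-20:.3f} inverse square angstroms")
```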
The most direct experimental technique to resolve the electronic structure of crystals in the momentum-energy space (see reciprocal lattice), and, consequently, the Fermi surface, is the angle-resolved photoemission spectroscopy (ARPES). An example of the Fermi surface of superconducting cuprates measured by ARPES is shown in Figure 3.
With positron annihilation it is also possible to determine the Fermi surface, as the annihilation process conserves the momentum of the initial particle. Since a positron in a solid will thermalize prior to annihilation, the annihilation radiation carries the information about the electron momentum. The corresponding experimental technique is called angular correlation of electron positron annihilation radiation (ACAR), as it measures the angular deviation from collinearity (180°) of the two annihilation quanta. In this way it is possible to probe the electron momentum density of a solid and determine the Fermi surface. Furthermore, using spin-polarized positrons, the momentum distribution for the two spin states in magnetized materials can be obtained. ACAR has many advantages and disadvantages compared to other experimental techniques: it does not rely on UHV conditions, cryogenic temperatures, high magnetic fields or fully ordered alloys. However, ACAR needs samples with a low vacancy concentration, as vacancies act as effective traps for positrons. In this way, the first determination of a smeared Fermi surface in a 30% alloy was obtained in 1978.
Hair dryer
A hair dryer (the handheld type also referred to as a blow dryer) is an electromechanical device that blows ambient air in hot or warm settings for styling or drying hair. Hair dryers enable better control over the shape and style of hair, by accelerating and controlling the formation of temporary hydrogen bonds within each strand. These bonds are powerful, but are temporary and extremely vulnerable to humidity. They disappear with a single washing of the hair.
Hairstyles using hair dryers usually have volume and discipline, which can be further improved with styling products, hairbrushes, and combs during drying to add tension, hold and lift. Hair dryers were invented in the late 19th century; the first model was created in 1888 by Alexandre Godefroy, and Gabriel Kazanjian patented one in 1911. Handheld, household hair dryers first appeared in 1920. Hair dryers are used in beauty salons by professional stylists, as well as by consumers at home.
History
In 1888 the first hair dryer was invented by French stylist Alexandre Godefroy. His invention was a large, seated version that consisted of a bonnet that attached to the chimney pipe of a gas stove. Godefroy invented it for use in his hair salon in France, and it was not portable or handheld. It could only be used by having the person sit underneath it.
Armenian American inventor Gabriel Kazanjian was the first to patent a hair dryer in the United States, in 1911. Around 1920, hair dryers began to go on the market in handheld form. This was due to innovations by National Stamping and Electric Works under the White Cross brand, and later the U.S. Racine Universal Motor Company and the Hamilton Beach Co., which allowed the dryer to be small enough to be held by hand. Even in the 1920s, the new dryers were often heavy and difficult to use, and there were many instances of overheating and electrocution. Hair dryers were only capable of using 100 watts, which increased the amount of time needed to dry hair (the average dryer today can use up to 2000 watts of heat).
Since the 1920s, development of the hair dryer has mainly focused on improving the wattage and on superficial changes to the exterior and materials. In fact, the mechanism of the dryer has not changed significantly since its inception. One of the more important changes was the switch to plastic housings, which made dryers more lightweight; this caught on in the 1960s with the introduction of better electrical motors and the improvement of plastics. Another important change happened in 1954, when GEC changed the design of the dryer to move the motor inside the casing.
The bonnet dryer was introduced to consumers in 1951. This type worked by having the dryer, usually in a small portable box, connected to a tube that went into a bonnet with holes in it that could be placed on top of a person's head. This worked by giving an even amount of heat to the whole head at once.
The 1950s also saw the introduction of the rigid-hood hair dryer, which is the type most frequently seen in salons. It has a hard plastic helmet that wraps around the person's head. This dryer works similarly to the bonnet dryer of the 1950s but at a much higher wattage.
In the 1970s, the U.S. Consumer Product Safety Commission set up guidelines that hair dryers had to meet to be considered safe to manufacture. Since 1991 the CPSC has mandated that all dryers must use a ground fault circuit interrupter so that a dryer cannot electrocute a person if it gets wet. By 2000, deaths by blow dryers had dropped to fewer than four people a year, a stark contrast to the hundreds of electrocution accidents during the mid-20th century.
Function
Most hair dryers consist of electric heating coils and a fan that blows the air (usually powered by a universal motor). The heating element in most dryers is a bare, coiled nichrome wire that is wrapped around mica insulators. Nichrome is used due to its high resistivity, and low tendency to corrode when heated.
A survey of stores in 2007 showed that most hair dryers had ceramic heating elements (like ceramic heaters) because of their "instant heat" capability. This means that it takes less time for the dryers to heat up and for the hair to dry.
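A back-of-the-envelope sketch of what a 2000 W rating implies for the heating element and the airflow; the mains voltage, airflow, and heat fraction below are assumed round numbers, not manufacturer specifications:

```python
# Back-of-the-envelope numbers for a hypothetical 2000 W dryer on a 120 V supply.
P = 2000.0    # total electrical power, W (rating mentioned above)
V = 120.0     # assumed mains voltage, V

current = P / V          # I = P / V
resistance = V**2 / P    # R = V^2 / P for a purely resistive heating element

# Air temperature rise: dT = P_heat / (mdot * c_p), assuming ~90% of the power
# becomes heat and an airflow of ~12 L/s.
RHO_AIR, CP_AIR = 1.2, 1005.0   # air density (kg/m^3), heat capacity (J/(kg K))
mdot = RHO_AIR * 12e-3          # mass flow for an assumed 12 L/s of air, kg/s
dT = 0.9 * P / (mdot * CP_AIR)

print(f"I ~ {current:.1f} A, R ~ {resistance:.1f} ohm, air warms by ~{dT:.0f} K")
```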
Many of these dryers have "normal mode" buttons that turn off the heater and blow room-temperature air while the button is pressed. This function helps to maintain the hairstyle by setting it. The colder air reduces frizz and can help to promote shine in the hair.
Many feature "ionic" operation, to reduce the build-up of static electricity in the hair, though the efficacy of ionic technology is of some debate. Manufacturers claim this makes the hair "smoother".
Hair dryers are available with attachments, such as diffusers, airflow concentrators, and comb nozzles.
A diffuser is an attachment that is used on hair that is fine, colored, permed or naturally curly. It diffuses the jet of air, so that the hair is not blown around while it dries. The hair dries more slowly, at a cooler temperature, and with less physical disturbance. As a result, the hair is less likely to frizz and retains more volume.
An airflow concentrator does the opposite of a diffuser. It makes the end of the hair dryer narrower and thus helps to concentrate the heat into one spot to make it dry rapidly.
The comb nozzle attachment is the same as the airflow concentrator, but it ends with comb-like teeth so that the user can dry the hair using the dryer without a brush or comb.
Hair dryers have been cited as an effective treatment for head lice.
Types
Today there are two major types of hair dryers: the handheld and the rigid-hood dryer.
A hood dryer has a hard plastic dome that fits over a person's head to dry their hair. Hot air is blown out through tiny openings around the inside of the dome so the hair is dried evenly. Hood dryers are mainly found in hair salons.
Hair dryer brush
A hair dryer brush (also called a "hot air brush", "round brush hair dryer", or "hair styler") has the shape of a brush and is also used as a volumizer.
There are two types of round brush hair dryers: rotating and static. Rotating round brush hair dryers have barrels that rotate automatically, while static ones do not.
Cultural references
The British historical drama television series Downton Abbey made note of the invention of the portable hair dryer when a character purchased one in Series 6 Episode 9, set in the year 1925.
Drug test
A drug test (also often toxicology screen or tox screen) is a technical analysis of a biological specimen, for example urine, hair, blood, breath, sweat, or oral fluid/saliva—to determine the presence or absence of specified parent drugs or their metabolites. Major applications of drug testing include detection of the presence of performance enhancing steroids in sport, employers and parole/probation officers screening for drugs prohibited by law (such as cocaine, methamphetamine, and heroin) and police officers testing for the presence and concentration of alcohol (ethanol) in the blood commonly referred to as BAC (blood alcohol content). BAC tests are typically administered via a breathalyzer while urinalysis is used for the vast majority of drug testing in sports and the workplace. Numerous other methods with varying degrees of accuracy, sensitivity (detection threshold/cutoff), and detection periods exist.
A drug test may also refer to a test that provides quantitative chemical analysis of an illegal drug, typically intended to help with responsible drug use.
Detection periods
The detection windows depend upon multiple factors: drug class, amount and frequency of use, metabolic rate, body mass, age, overall health, and urine pH. For ease of use, the detection times of metabolites have been incorporated into each parent drug. For example, heroin and cocaine can only be detected for a few hours after use, but their metabolites can be detected for several days in urine. The chart depicts the longer detection times of the metabolites. In the case of hair testing, the metabolites are permanently embedded into the hair, and the detection time is determined by the length of the hair sample used in the analysis. The standard length of head hair used in the test is 1.5", which corresponds to about 3 months. Body/pubic hair grows more slowly, and the same 1.5" would result in a longer detection time.
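Since 1.5 inches of head hair corresponds to about 3 months, the implied growth rate is roughly 0.5 inches per month; a small helper based on that approximation (real growth rates vary between individuals):

```python
GROWTH_IN_PER_MONTH = 0.5   # implied by 1.5 inches ~ 3 months (approximate)

def detection_window_months(sample_length_inches):
    """Approximate detection window for a head-hair sample, assuming a
    uniform growth rate; actual windows vary from person to person."""
    return sample_length_inches / GROWTH_IN_PER_MONTH

for length in (1.5, 3.0, 6.0):
    months = detection_window_months(length)
    print(f'{length:.1f} inch sample -> roughly {months:.0f} months')
```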
Oral fluid or saliva testing results for the most part mimic those of blood. The only exceptions are THC (tetrahydrocannabinol) and benzodiazepines. Oral fluid will likely detect THC from ingestion up to a maximum period of 6–12 hours. This continues to cause difficulty in oral fluid detection of THC and benzodiazepines.
Breath air for the most part mimics blood tests as well. Due to the very low levels of substances in breath air, liquid chromatography–mass spectrometry has to be used to analyze the sample, according to a recent publication in which 12 analytes were investigated.
Rapid oral fluid products are not approved for use in workplace drug testing programs and are not FDA cleared. Using rapid oral fluid drug tests in the workplace is prohibited in only:
California
Kansas
Maine
Minnesota
New York
Vermont
The following chart gives approximate detection periods for each substance by test type.
Types
Urine drug screen
Urine analysis is primarily used because of its low cost. Urine drug testing is one of the most common testing methods used. The enzyme multiplied immunoassay technique is the most frequently used urinalysis method. Complaints have been made about the relatively high rates of false positives using this test.
Urine drug tests screen the urine for the presence of a parent drug or its metabolites. The level of drug or its metabolites is not predictive of when the drug was taken or how much the patient used.
Urine drug testing is an immunoassay based on the principle of competitive binding. Drugs which may be present in the urine specimen compete against their respective drug conjugate for binding sites on their specific antibody. During testing, a urine specimen migrates upward by capillary action. A drug, if present in the urine specimen below its cut-off concentration, will not saturate the binding sites of its specific antibody. The antibody will then react with the drug-protein conjugate and a visible colored line will show up in the test line region of the specific drug strip.
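The inverted read-out described above (a visible test line means a negative result) is the detail most often misread; a minimal sketch of that decision logic, using a hypothetical cutoff rather than any assay's real value:

```python
def lateral_flow_result(drug_conc_ng_ml, cutoff_ng_ml):
    """Competitive-binding strip logic: drug below the cutoff leaves antibody
    free to bind the drug-protein conjugate, so a colored test line appears
    (negative). Drug at or above the cutoff saturates the antibody, so no
    line forms (presumptive positive, pending GC-MS confirmation)."""
    if drug_conc_ng_ml < cutoff_ng_ml:
        return "negative (test line visible)"
    return "presumptive positive (no test line)"

CUTOFF = 50.0   # hypothetical cutoff, ng/mL; real cutoffs are assay-specific
for conc in (0.0, 20.0, 80.0):
    print(f"{conc:5.1f} ng/mL -> {lateral_flow_result(conc, CUTOFF)}")
```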
A common misconception is that a drug test that is testing for a class of drugs, for example, opioids, will detect all drugs of that class. However, most opioid tests will not reliably detect oxycodone, oxymorphone, meperidine, or fentanyl. Likewise, most benzodiazepine drug tests will not reliably detect lorazepam. However, urine drug screens that test for a specific drug, rather than an entire class, are often available.
When an employer requests a drug test from an employee, or a physician requests a drug test from a patient, the employee or patient is typically instructed to go to a collection site or their home. The urine sample goes through a specified 'chain of custody' to ensure that it is not tampered with or invalidated through lab or employee error. The patient or employee's urine is collected at a remote location in a specially designed secure cup, sealed with tamper-resistant tape, and sent to a testing laboratory to be screened for drugs (typically the Substance Abuse and Mental Health Services Administration 5 panel). The first step at the testing site is to split the urine into two aliquots. One aliquot is first screened for drugs using an analyzer that performs immunoassay as the initial screen. To ensure the specimen integrity and to detect possible adulterants, additional parameters are tested for. Some test the properties of normal urine, such as urine creatinine, pH, and specific gravity. Others are intended to catch substances added to the urine to alter the test result, such as oxidants (including bleach), nitrites, and glutaraldehyde. If the urine screen is positive, then another aliquot of the sample is used to confirm the findings by gas chromatography–mass spectrometry (GC-MS) or liquid chromatography–mass spectrometry methodology. If requested by the physician or employer, certain drugs are screened for individually; these are generally drugs that are part of a chemical class considered, for one of many reasons, more habit-forming or of concern. For instance, oxycodone and diamorphine, both sedative analgesics, may be tested. If such a test is not requested specifically, the more general test (in the preceding case, the test for opioids) will detect most of the drugs of the class, but the employer or physician will not have the benefit of the identity of the drug.
Employment-related test results are relayed to a medical review office (MRO) where a medical physician reviews the results. If the result of the screen is negative, the MRO informs the employer that the employee has no detectable drug in the urine, typically within 24 hours. However, if the test result of the immunoassay and GC-MS are non-negative and show a concentration level of parent drug or metabolite above the established limit, the MRO contacts the employee to determine if there is any legitimate reason—such as a medical treatment or prescription.
On-site instant drug testing is a more cost-efficient method of effectively detecting substance use amongst employees, as well as in rehabilitation programs to monitor patient progress. These instant tests can be used for both urine and saliva testing. Although the accuracy of such tests varies with the manufacturer, some kits have rates of accuracy correlating closely with laboratory test results.
Breath test
A breath test is a widespread method for quickly determining alcohol intoxication. It measures the alcohol concentration in the body from a deep-lung breath. Different instruments are used for measuring the alcohol content of an individual through their breath. The Breathalyzer, developed in 1954, is a widely known instrument that, unlike other breath-testing instruments, relied on a chemical reaction. More modern instruments are infrared light-absorption devices and fuel cell detectors; these two testers are microprocessor-controlled, meaning the operator only has to press the start button.
To get an accurate reading on a breath-testing device, the individual must blow for approximately 6 seconds and provide roughly 1.1 to 1.5 liters of breath. For a breath test to be accurate, the operator must take steps such as avoiding the measurement of "mouth alcohol", which can result from regurgitation, belching, or recent intake of an alcoholic beverage. To avoid measuring "mouth alcohol", the operator must ensure that the individual taking the test does not consume anything for at least fifteen minutes before the breath test. In the United States, an individual pulled over for a driving violation who refuses to take a breath test can have their driver's license suspended for six to twelve months.
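The operator's checklist above reduces to a few numeric preconditions. A minimal sketch using the figures quoted in this paragraph; the function name and return format are illustrative assumptions.

```python
# Preconditions for a valid breath sample, using the figures quoted
# above (6-second blow, 1.1-1.5 liters, 15-minute deprivation period).

def breath_sample_acceptable(blow_seconds, volume_liters, minutes_since_intake):
    if blow_seconds < 6:
        return False, "blow too short"
    if not 1.1 <= volume_liters <= 1.5:
        return False, "breath volume outside expected range"
    if minutes_since_intake < 15:
        return False, "deprivation period not observed (mouth-alcohol risk)"
    return True, "sample acceptable"

print(breath_sample_acceptable(6.5, 1.3, 20))   # (True, 'sample acceptable')
```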
Hair testing
Hair analysis to detect addictive substances has been used by court systems in the United States, United Kingdom, Canada, and other countries worldwide. In the United States, hair testing has been accepted in court cases as forensic evidence following the Frye Rule, the Federal Rules of Evidence, and the Daubert Rule. As such, hair testing results are legally and scientifically recognized as admissible evidence. Hair testing is commonly used in the USA as a pre-employment drug test. The detection window for this test is roughly 3 months, the time it takes head hair to grow the approximately 1.5 inches collected as a specimen. Longer detection times are possible with longer hair samples.
A 2014 collaborative US study of 359 adults with moderate-risk drug use found that a large number of participants who reported drug use in the previous 3 months had negative hair tests. The tests were done using an immunoassay followed by confirmatory GC-MS.
For marijuana, only about half of self-disclosed users had a positive hair test. Under-identification of drug use by hair testing (or over-reporting of use) was also widespread for cocaine, amphetamines, and opioids. Because such under-identification was more common among participants who self-reported infrequent use, the authors suggested that the immunoassay lacked the sensitivity required to detect such infrequent use. Notably, most earlier studies had reported that hair tests found a roughly 50-fold higher prevalence of illicit drug use than self-reports.
In late 2022 the US Federal Motor Carrier Safety Administration denied a petition to recognize hair samples as an alternative to the currently used urine samples for drug-testing truckers. The agency did not comment on the validity of the test, stating instead that it lacks the statutory authority to adopt new analytical methods.
Although some lower courts may have accepted hair test evidence, there is no controlling judicial ruling in either the federal or any state system declaring any type of hair test as reliable.
Hair testing is now recognized in both the UK and US judicial systems. Guidelines for hair testing have been published by the Society of Hair Testing (a private company in France) that specify the markers to be tested for and the cutoff concentrations to be applied. Addictive substances that can be detected include cannabis, cocaine, amphetamines, and drugs new to the UK such as mephedrone.
Alcohol
In contrast to other drugs consumed, alcohol is not deposited directly in the hair. For this reason the investigation procedure looks for direct products of ethanol metabolism. Most of the alcohol is oxidized in the human body, meaning it is released as water and carbon dioxide. One part of the alcohol reacts with fatty acids to produce esters. The sum of the concentrations of four of these fatty acid ethyl esters (FAEEs: ethyl myristate, ethyl palmitate, ethyl oleate and ethyl stearate) is used as an indicator of alcohol consumption. The amounts found in hair are measured in nanograms (one nanogram equals only one billionth of a gram); with modern technology, however, it is possible to detect such small amounts. Testing for ethyl glucuronide (EtG) can detect amounts in picograms (one picogram equals 0.001 nanograms).
However, there is one major difference between most drugs and alcohol metabolites in the way they enter the hair. On the one hand, like other drugs, FAEEs enter the hair via the keratinocytes, the cells responsible for hair growth; these cells form the hair in the root and then grow through the skin surface, taking any substances with them. On the other hand, the sebaceous glands produce FAEEs in the scalp, and these migrate together with the sebum along the hair shaft (Auwärter et al., 2001; Pragst et al., 2004). These glands thus lubricate not only the part of the hair just emerging at the skin surface (growing at about 0.3 mm per day), but also the more mature hair, providing it with a protective layer of fat.
FAEE concentrations in hair are measured in nanograms (one nanogram = one billionth of a gram), while EtG concentrations are measured in picograms (one picogram = one trillionth of a gram); the relevant FAEE concentrations are almost one order of magnitude higher than the relevant EtG concentrations. It has been technically possible to measure FAEEs since 1993, and the first study reporting the detection of EtG in hair was done by Sachs in 1993.
In practice, most hair sent for analysis has been cosmetically treated in some way (bleached, permed, etc.). It has been shown that FAEEs are not significantly affected by such treatments (Hartwig et al., 2003a). FAEE concentrations in hair from other body sites can be interpreted in a similar fashion to scalp hair (Hartwig et al., 2003b).
Presumptive substance testing
Presumptive substance tests attempt to identify drugs in a suspicious substance, material, or surface where traces are thought to be, rather than testing individuals through biological methods such as urine or hair testing. The test involves mixing the suspicious material with a chemical to trigger a color change indicating whether a drug is present. Most are now available over the counter for consumer use, and do not require a lab to read results.
Benefits of this method include that the person suspected of drug use does not need to be confronted or made aware of testing. Only a very small amount of material is needed to obtain results, and the tests can be used on powders, pills, capsules, crystals, or organic material. They can also detect illicit material mixed with other non-illicit materials. The tests are used for general screening purposes, offering a generic result for the presence of a wide range of drugs, including heroin, cocaine, methamphetamine, amphetamine, ecstasy/MDMA, methadone, ketamine, PCP, PMA, DMT, and MDPV, and may detect rapidly evolving synthetic designer drugs. Separate tests for marijuana/hashish are also available.
There are five primary color-test reagents used for general screening purposes. The Marquis reagent turns a variety of colors in the presence of different substances. The Dille-Koppanyi reagent uses two chemical solutions which turn violet-blue in the presence of barbiturates. The Duquenois-Levine reagent is a series of chemical solutions that turn purple when marijuana vegetation is added. The Van Urk reagent turns blue-purple in the presence of LSD. The Scott test's chemical solution turns faint blue in the presence of cocaine base.
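As a summary of the paragraph above, the reagent-to-indication mapping can be written as a small lookup table; this sketch restates the text rather than adding any new chemistry.

```python
# The five color-test reagents described above and their indications.
PRESUMPTIVE_REAGENTS = {
    "Marquis": "a variety of colors, depending on the substance",
    "Dille-Koppanyi": "violet-blue in the presence of barbiturates",
    "Duquenois-Levine": "purple when marijuana vegetation is added",
    "Van Urk": "blue-purple in the presence of LSD",
    "Scott": "faint blue for cocaine base",
}

for reagent, indication in PRESUMPTIVE_REAGENTS.items():
    print(f"{reagent} reagent: {indication}")
```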
In recent years, the use of presumptive test kits in the criminal justice system has come under great scrutiny due to the lack of forensic studies, questioned reliability, false positives triggered by legal substances, and wrongful arrests.
Saliva drug screen / Oral fluid-based drug screen
Saliva / oral fluid-based drug tests can generally detect use during the previous few days and are better at detecting very recent use of a substance. THC may only be detectable for 2–24 hours in most cases. On-site drug tests are allowed per the Department of Labor.
Detection in saliva tests begins almost immediately upon use of the following substances, and lasts for approximately the following times:
Alcohol: 6–12 hours
Marijuana: 1–24 hours
A disadvantage of saliva-based drug testing is that it is not approved by the FDA or SAMHSA for use with DOT / federally mandated drug testing. Oral fluid is not considered a biohazard unless there is visible blood; however, it should be treated with care.
Sweat drug screen
Sweat patches are attached to the skin to collect sweat over a long period of time (up to 14 days). These are used by child protective services, parole departments, and other government institutions concerned with drug use over long periods, when urine testing is not practical.
There are also surface drug tests that detect the metabolites of parent drug groups in the residue of drugs left in sweat. An example of a rapid, non-invasive, sweat-based drug test is fingerprint drug screening. This 10-minute fingerprint test is in use by a variety of organisations in the UK and beyond, including workplaces, drug treatment and family safeguarding services, airport border control (to detect drug mules), and mortuaries, where it assists in investigations into cause of death.
Blood
Drug-testing a blood sample measures whether or not a drug or a metabolite is present in the body at a particular time. These tests are considered the most accurate way of telling whether a person is intoxicated. Blood drug tests are not used very often because they need specialized equipment and medically trained administrators.
Depending on how much marijuana was consumed, it can usually be detected in blood tests within six hours of consumption. After six hours have passed, the concentration of marijuana in the blood decreases significantly, and it generally disappears completely within 30 days.
Random drug testing
Random drug testing can occur at any time, usually when an investigator has reason to believe, based on behavior, that a substance is being used by the subject, or immediately after an employee-related incident occurs during work hours. Testing protocol typically conforms to the national medical standard; candidates are given up to 120 minutes from the time of commencement to reasonably produce a urine sample (in some instances this time frame may be extended at the examiner's discretion).
Diagnostic screening
In the case of life-threatening symptoms, unconsciousness, or bizarre behavior in an emergency situation, screening for common drugs and toxins may help find the cause; this is called a toxicology test or "tox screen" to denote the broader range of possible substances beyond self-administered drugs. These tests can also be done post-mortem during an autopsy in cases where a death was not expected. The test is usually done within 96 hours (4 days) of the need for it being identified. Both a urine sample and a blood sample may be tested. A blood sample is routinely used to detect ethanol/methanol and ASA/paracetamol intoxication. Various panels are used for screening urine samples for common substances, e.g., the Triage 8 panel, which detects amphetamines, benzodiazepines, cocaine, methadone, opiates, cannabis, barbiturates, and tricyclic antidepressants. Results are available in 10–15 minutes.
Similar screenings may be used to evaluate the possible use of date rape drugs. This is usually done on a urine sample.
Optional harm reduction scheme
Drug checks/tests (also known as pill testing) are provided at some events such as concerts and music festivals. Attendees can voluntarily hand over a sample of any drug or drugs in their possession to be tested to check what the drug is and its purity. The scheme is used as a harm reduction technique so people are more aware of what they are taking and the potential risks.
Occupational harm reduction strategies
Drug and alcohol impairment at work increases the risk of workplace accidents and decreases productivity. Employers such as the commercial driving and airline industries may conduct random drug tests on employees with the goal of deterring use and improving safety. There is some evidence that increased random drug testing in the airline industry reduces the percentage of people who test positive; however, it is unclear whether this decrease is associated with a corresponding decrease in fatal or non-fatal injuries, other accidents, or days absent from work. It is also unclear whether random drug and alcohol testing in the workplace has other unwanted side effects.
Commonly tested substances
Anabolic steroids
Anabolic steroids are used to enhance performance in sports, and as they are prohibited in most high-level competitions, drug testing is used extensively to enforce this prohibition. This is particularly so in individual (rather than team) sports such as athletics and cycling.
Methodologies
Before testing samples, the tamper-evident seal is checked for integrity. If it appears to have been tampered with or damaged, the laboratory rejects the sample and does not test it.
Next, the sample must be made testable. Urine and oral fluid can be used "as is" for some tests, but other tests require the drugs to be extracted from urine. Strands of hair, patches, and blood must be prepared before testing. Hair is washed in order to eliminate second-hand sources of drugs on the surface of the hair, then the keratin is broken down using enzymes. Blood plasma may need to be separated by centrifuge from blood cells prior to testing. Sweat patches are opened and the sweat collection component is removed and soaked in a solvent to dissolve any drugs present.
Laboratory-based drug testing is done in two steps. The first step is the screening test, an immunoassay-based test applied to all samples. The second step, known as the confirmation test, is usually undertaken by a laboratory using highly specific chromatographic techniques and is only applied to samples that test positive during the screening test. Screening tests are usually done by immunoassay (EMIT, ELISA, and RIA are the most common). A "dipstick" drug testing method which could provide screening capabilities to field investigators has been developed at the University of Illinois.
After a suspected positive sample is detected during screening, it is tested using a confirmation test. Samples that are negative on the screening test are discarded and reported as negative. The confirmation test in most laboratories (and all SAMHSA-certified labs) is performed using mass spectrometry, which is precise but expensive. False-positive samples from the screening test will almost always be negative on the confirmation test. Samples testing positive during both screening and confirmation tests are reported as positive to the entity that ordered the test. Most laboratories save positive samples for some period of months or years in the event of a disputed result or lawsuit. For workplace drug testing, a positive result is generally not confirmed without a review by a Medical Review Officer, who will normally interview the subject of the drug test.
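The two-step workflow described above can be sketched as a simple decision flow: screen every sample, confirm only presumptive positives, and report. The function names and sample representation below are illustrative assumptions, not part of any laboratory standard.

```python
# Illustrative two-step laboratory workflow: a cheap immunoassay screen
# on every sample, followed by expensive mass-spectrometry confirmation
# of presumptive positives only. Names and structure are hypothetical.

def screen_immunoassay(sample):
    """Sensitive but less specific initial test; may yield false positives."""
    return sample["immunoassay_reactive"]

def confirm_mass_spec(sample, cutoff=1.0):
    """Highly specific confirmation, e.g. GC-MS."""
    return sample["ms_concentration"] >= cutoff

def report(sample):
    if not screen_immunoassay(sample):
        return "negative (screen)"         # discarded, reported negative
    if not confirm_mass_spec(sample):
        return "negative (confirmation)"   # screening false positive
    return "positive (forwarded for MRO review)"

print(report({"immunoassay_reactive": True, "ms_concentration": 2.5}))
# positive (forwarded for MRO review)
```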
Urine drug testing
Urine drug test kits are available as on-site tests or laboratory analysis. Urinalysis is the most common test type; it is used by federally mandated drug testing programs and is considered the gold standard of drug testing. Urine-based tests have been upheld in most courts for more than 30 years. However, urinalysis conducted by the Department of Defense has been challenged for the reliability of testing the metabolites of cocaine. There are two associated metabolites of cocaine, benzoylecgonine (BZ) and ecgonine methyl ester (EME). The first (BZ) is created by the presence of cocaine in an aqueous solution with a pH greater than 7.0, while the second (EME) results from the actual human metabolic process. The presence of EME confirms actual ingestion of cocaine by a human being, while the presence of BZ is indicative only. BZ without EME is evidence of sample contamination; however, the US Department of Defense has chosen not to test for EME in its urinalysis program.
A number of different analytes (the unknown substances being tested for) are available on urine drug screens.
Spray drug testing
Spray (sweat) drug test kits are non-invasive. The required specimen is simple to collect, no bathroom or laboratory is needed for analysis, and the tests themselves are difficult to manipulate and relatively tamper-resistant. The detection window is long, and the tests can detect drug use as recent as several hours before sampling.
There are also some disadvantages to spray or sweat testing. There is not much variety in these drug tests, only a limited number of drugs can be detected, prices tend to be higher, and variations in donors' sweat production rates can produce inconclusive results. They also have a relatively long specimen collection period and are more vulnerable to contamination than other common forms of testing.
Hair drug testing
Hair drug testing can detect drug use over a much longer period than saliva, sweat, or urine tests, and it is more robust against tampering. For these reasons, hair sampling is preferred by the US military and by many large corporations subject to the Drug-Free Workplace Act of 1988.
Head hair normally grows at a rate of 0.5 inches per month. Thus, the most common hair sample length of 1.5 inches from the scalp would detect drug use within the last 90-100 days. 80-120 strands of hair are sufficient for the test. In the absence of hair on the head, body hair can be used as an acceptable substitute; this includes hair from the face, underarms, arms, and legs, or even pubic hair. Because body hair usually grows more slowly than head hair, drugs can often be detected in body hair for longer periods, e.g., up to 12 months. Currently, most entities that use hair testing have prescribed consequences for individuals who remove hair to avoid a hair drug test.
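The detection window quoted above is straightforward arithmetic on sample length and growth rate. A worked sketch using the figures from this paragraph (0.5 inches of head-hair growth per month, a 1.5-inch sample); the 30-day month is a rough assumption.

```python
# Hair detection window from sample length and growth rate, using the
# figures quoted above. The 30-day month is a rough approximation.
GROWTH_INCHES_PER_MONTH = 0.5

def detection_window_days(sample_length_inches):
    months = sample_length_inches / GROWTH_INCHES_PER_MONTH
    return months * 30

print(detection_window_days(1.5))   # 90.0, matching the ~90-100 day window
```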
Most drugs are analysed in hair samples not as the original psychoactive molecules, but rather as their metabolites. For example, ethanol is determined as ethyl glucuronide, while cocaine use is confirmed using ecgonine. Testing for metabolites reduces the likelihood of false-positive results due to contamination. One disadvantage of hair testing is that it cannot detect recent drug use, because it takes at least a week after intake for the metabolites to show up in growing hair above the skin. Urine tests are better suited for detecting recent (within a week) drug use.
In a practical test, the hair sample is usually washed with a low-polarity solvent (such as dichloromethane) to remove surface contamination. The sample is then pulverized and extracted with a more polar solvent, such as methanol.
Although thousands of different substances can be determined in a single gas chromatography–mass spectrometry or liquid chromatography–mass spectrometry experiment, the low concentration of analytes limits practical measurements (see selected ion monitoring) to a smaller number (10-20) of analytes. Designer drugs are usually missed in such measurements, because the analyst must know in advance what chemicals to look for.
Most hair testing laboratories use the aforementioned chromatography–mass spectrometry methods only for confirmation or for rarely tested drugs. Mass screening (preliminary or final) is usually done with immunoassays because of their lower cost.
Legality, ethics and politics
The results of federally mandated drug testing were similar to the effects of simply extending to the trucking industry the right to perform drug tests, and it has been argued that the latter approach would have been as effective at lower cost.
Psychologist Tony Buon has criticized the use of workplace drug testing on a number of grounds, including:
Flawed technology: The real-world performance of testing is much lower than that claimed by its promoters. Buon suggests that tests are probably adequate for rehabilitation and treatment situations, possibly adequate for pre-employment situations, but not for dismissing employees.
Ethical issues: Because of the fairly simple ways that an employee can invalidate the test, drug testing must be strictly monitored, meaning the specimen must be observed leaving the body. Many legal objections currently being raised in the courts about drug testing point to legal requirements of prior notice, consent, due process, and cause.
Wrong focus: As has been shown with Employee Assistance Programs, the focus of management concern should be on work performance decline. Buon suggests effective management practices are an infinitely better approach to managing workplace alcohol and other drug issues.
Tony Buon has also been reported by the CIPD as stating that "drug testing captures the stupid—experienced drug users know how to beat the tests".
From a penological standpoint, one purpose of drug testing is to help classify the people taking the drug test into risk groups, so that those who pose more of a danger to the public can be incapacitated through incarceration or other restrictions on liberty. Thus, drug testing serves a crime control purpose even if there is no expectation of rehabilitating the drug user through treatment, deterring drug use through sanctions, or sending a message that drug use is a deviant behavior that will not be tolerated.
United Kingdom
A study in 2004 by the Independent Inquiry into Drug Testing at Work found that attempts by employers to force employees to take drug tests could potentially be challenged as a violation of privacy under the Human Rights Act 1998 and Article 8 of the European Convention on Human Rights. However, this does not apply to industries where drug testing is a matter of personal and public safety or security rather than productivity.
United States
In consultation with Dr. Carlton Turner, President Ronald Reagan issued Executive Order 12564. In doing so, he instituted mandatory drug-testing for all safety-sensitive executive-level and civil-service Federal employees. This was challenged in the courts by the National Treasury Employees Union. In 1988, this challenge was considered by the US Supreme Court. A similar challenge resulted in the Court extending the drug-free workplace concept to the private sector. These decisions were then incorporated into the White House Drug Control Strategy directive issued by President George H.W. Bush in 1989. All defendants serving on federal probation or federal supervised release are required to submit to at least three drug tests. Failing a drug test can be construed as possession of a controlled substance, resulting in mandatory revocation and imprisonment.
There have been inconsistent evaluation results as to whether continued pretrial drug testing has beneficial effects.
Testing positive can lead to bail not being granted, or if bail has already been granted, to bail revocation or other sanctions. Arizona also adopted a law in 1987 authorizing mandatory drug testing of felony arrestees for the purpose of informing the pretrial release decision, and the District of Columbia has had a similar law since the 1970s. It has been argued that one of the problems with such testing is that there is often not enough time between the arrest and the bail decision to confirm positive results using GC/MS technology. It has also been argued that such testing potentially implicates the Fifth Amendment privilege against self-incrimination, the right to due process (including the prohibition against gathering evidence in a manner that shocks the conscience or constitutes outrageous government conduct), and the prohibition against unreasonable searches and seizures contained in the Fourth Amendment.
According to Henriksson, the anti-drug appeals of the Reagan administration "created an environment in which many employers felt compelled to implement drug testing programs because failure to do so might be perceived as condoning drug use. This fear was easily exploited by aggressive marketing and sales forces, who often overstated the value of testing and painted a bleak picture of the consequences of failing to use the drug testing product or service being offered." On March 10, 1986, the Commission on Organized Crime asked all U.S. companies to test employees for drug use. By 1987, nearly 25% of the Fortune 500 companies used drug tests.
According to an uncontrolled self-report study done by DATIA and Society for Human Resource Management in 2012 (sample of 6,000 randomly selected human resource professionals), human resource professionals reported the following results after implementing a drug testing program: 19% of companies reported a subjective increase in employee productivity, 16% reported a decrease in employee turnover (8% reported an increase), and unspecified percentages reported decreases in absenteeism and improvement of workers' compensation incidence rates.
According to the US Chamber of Commerce, 70% of all illicit drug users are employed. Some industries have high rates of employee drug use, such as construction (12.8%), repair (11.1%), and hospitality (7.9–16.3%).
Australia
A person conducting a business or undertaking (PCBU, a newer term that includes employers) has duties under work health and safety (WHS) legislation to ensure that a worker affected by alcohol or other drugs does not place themselves or other persons at risk of injury while at work. Workplace policies and prevention programs can help change the norms and culture around substance use.
All organisations, large and small, can benefit from an agreed policy on alcohol and drug misuse that applies to all workers. Such a policy should form part of an organisation's overall health and safety management system. PCBUs are encouraged to establish a policy and procedure, in consultation with workers, to constructively manage alcohol and other drug related hazards in their workplace. A comprehensive workplace alcohol and other drug policy should apply to everyone in the workplace and include prevention, education, counselling and rehabilitation arrangements. In addition, the roles and responsibilities of managers and supervisors should be clearly outlined.
All Australian workplace drug testing must comply with Australian standard AS/NZS4308:2008.
In Victoria, roadside saliva tests detect drugs that contain:
THC (Delta-9 tetrahydrocannabinol), the active component in cannabis.
methamphetamine, also known as "ice", "crystal" and "crank".
MDMA (Methylenedioxymethamphetamine), which is known as ecstasy.
In February 2016 a New South Wales magistrate "acquitted a man who tested positive for cannabis". He had been arrested and charged after testing positive during a roadside drug test, despite not having smoked for nine days. He was relying on advice previously given to him by police.
Refusal
In the United States federal criminal system, refusing to take a drug test triggers an automatic revocation of probation or supervised release.
In Victoria, Australia, the driver of a car has the option to refuse a drug test. Refusing to undergo a drug test, or refusing a secondary drug test after the first, triggers an automatic suspension and disqualification for a period of two years and a fine of AUD$1,000. A second refusal triggers an automatic suspension and disqualification for a period of four years and an even larger fine.
Historical cases
In 2000, the Australian mining company South Blackwater Coal Ltd, with 400 employees, imposed drug-testing procedures, and the trade unions advised their members to refuse to take the tests, partly because a positive result does not necessarily indicate present impairment; the workers were stood down by the company without pay for a week.
In 2003, sixteen members of the Chicago White Sox considered refusing to take a drug test, in hopes of making steroid testing mandatory.
In 2006, Levy County, Florida, volunteer librarians resigned en masse rather than take drug tests.
In 2010, Iranian super heavyweight class weightlifters refused to submit to a drug test authorized by the Iran Weightlifting League.
| Biology and health sciences | Diagnostics | Health |
987039 | https://en.wikipedia.org/wiki/Siberian%20Traps | Siberian Traps | The Siberian Traps are a large region of volcanic rock, known as a large igneous province, in Siberia, Russia. The massive eruptive event that formed the traps is one of the largest known volcanic events in the last years.
The eruptions continued for roughly two million years and spanned the Permian–Triassic boundary, or P–T boundary, which occurred around 251.9 million years ago. The Siberian Traps are believed to be the primary cause of the Permian–Triassic extinction event, the most severe extinction event in the geologic record. Subsequent periods of Siberian Traps activity have been linked to a number of smaller biotic crises, including the Smithian-Spathian, Olenekian-Anisian, Middle-Late Anisian, and Anisian-Ladinian extinction events.
Large volumes of basaltic lava covered a large expanse of Siberia in a flood basalt event. Today, the area is covered by about of basaltic rock, with a volume of around .
Etymology
The term "trap" has been used in geology since 1785–1795 for such rock formations. It is derived from the Swedish word for stairs ("trappa") and refers to the step-like hills forming the landscape of the region.
Formation
The source of the Siberian Traps basaltic rock has been attributed to a mantle plume, which rose until it reached the bottom of the Earth's crust, producing volcanic eruptions through the Siberian Craton. It has been suggested that, as the Earth's lithospheric plates moved over the mantle plume (the Iceland plume), the plume produced the Siberian Traps in the Permian and Triassic periods, after earlier producing the Viluy Traps to the east, and later going on to produce volcanic activity on the floor of the Arctic Ocean in the Jurassic and Cretaceous, and then generating volcanic activity in Iceland. Other plate tectonic causes have also been suggested. Another possible cause may be the impact that formed the Wilkes Land crater in Antarctica, which is estimated to have occurred around the same time and been nearly antipodal to the traps.
The main source of rock in this formation is basalt, but both mafic and felsic rocks are present, so this formation is officially called a flood basalt province. The inclusion of mafic and felsic rocks indicates that multiple other eruptions occurred alongside the one-million-year-long set of eruptions that created the majority of the basaltic layers. The traps are divided into sections based on their chemical, stratigraphic, and petrographic composition.
The Siberian Traps are underlain by the Tungus Syneclise, a large sedimentary basin containing thick sequences of Early to Mid Paleozoic carbonate and evaporite deposits, as well as Carboniferous–Permian coal-bearing clastic rocks. When heated, such as by igneous intrusions, these rocks are capable of emitting large amounts of toxic and greenhouse gases.
Effects on prehistoric life
One of the major questions is whether the Siberian Traps were directly responsible for the Permian–Triassic mass extinction event that occurred 250 million years ago, or if they were themselves caused by some other, larger event, such as an asteroid impact. One hypothesis put forward is that the volcanism triggered the growth of Methanosarcina, a microbe that then emitted large amounts of methane into Earth's atmosphere, ultimately altering the Earth's carbon cycle based on observations such as a significant increase of inorganic carbon reservoirs in marine environments. Recent research has highlighted the impact of vegetative deposition in the preceding Carboniferous period on the severity of the disruption to the carbon cycle.
This extinction event, also colloquially called the Great Dying, affected all life on Earth, and is estimated to have led to the extinction of about 81% of all marine species and 70% of terrestrial vertebrate species living at the time. Some of the disastrous events that affected the Earth continued to repeat themselves five to six million years after the initial extinction occurred. Over time a small portion of the life that survived the extinction was able to repopulate and expand starting with low trophic levels (producers) until the higher trophic levels (consumers) were able to be re-established. Calculations of sea water temperature from δ18O measurements indicate that at the peak of the extinction, the Earth underwent lethally hot global warming, in which equatorial ocean temperatures exceeded . It took roughly eight to nine million years for any diverse ecosystem to be re-established; however, new classes of animals were established after the extinction that did not exist beforehand.
Palaeontological evidence further indicates that the global distribution of tetrapods vanished between latitudes approximating 40° south to 30° north, with very rare exceptions in the region of Pangaea that is today Utah. This tetrapod gap of equatorial Pangaea coincides with an end-Permian to Middle Triassic global "coal gap" that indicates the loss of peat swamps. Peat formation, a product of high plant productivity, was reestablished only in the Anisian stage of the Triassic, and even then only in high southern latitudes, although gymnosperm forests appeared earlier (in the Early Spathian), but again only in northern and southern higher latitudes. In equatorial Pangaea, the establishment of conifer-dominated forests did not occur until the end of the Spathian, and the first coals at these latitudes did not appear until the Carnian, around 15 million years after their end-Permian disappearance. These signals suggest that equatorial temperatures exceeded the thermal tolerance of many marine vertebrates during at least two thermal maxima, whereas terrestrial equatorial temperatures were sufficiently severe to suppress plant and animal abundance during most of the Early Triassic.
Dating
The volcanism that occurred in the Siberian Traps resulted in copious amounts of magma being ejected from the Earth's crust, leaving permanent traces of rock from the same time period as the mass extinction that can be examined today. More specifically, zircon is found in some of the volcanic rocks. To improve the accuracy of the zircon ages, zircons of several different ages were organized into a timeline based on when they crystallized. The CA-TIMS technique, a chemical-abrasion age-dating technique that eliminates variability in accuracy due to lead loss in zircon over time, was then used to accurately determine the age of the zircons found in the Siberian Traps. By eliminating the variability due to lead loss, the CA-TIMS technique allowed the uranium within the zircon to become the central focus in linking the Siberian Traps volcanism, which produced high amounts of magmatic material, with the Permian–Triassic mass extinction.
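Uranium-lead dating, on which CA-TIMS rests, uses the standard radioactive decay relation t = ln(1 + 206Pb/238U) / lambda. The sketch below applies that textbook equation with the accepted 238U decay constant; the isotope ratio shown is a hypothetical input for illustration, not a measurement from the Siberian Traps.

```python
import math

# Standard U-Pb age equation: t = ln(1 + Pb206/U238) / lambda_238.
# The decay constant is the accepted value for 238U; the ratio below
# is a hypothetical input, not a Siberian Traps measurement.
LAMBDA_238 = 1.55125e-10   # decays per year

def u_pb_age_years(pb206_u238_ratio):
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238

print(u_pb_age_years(0.0395) / 1e6)   # ~249.8 million years
```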
To further the connection with the Permian–Triassic extinction event, other disastrous events that occurred around the same time period, such as sea level changes, meteor impacts, and volcanism, were considered. Focusing specifically on volcanism, rock samples from the Siberian Traps and other southern regions were obtained and compared. Basalt and gabbro samples from several southern regions close to and from the Siberian Traps were dated using the argon-40/argon-39 method. Feldspar and biotite were specifically used to establish the samples' ages and the duration of the presence of magma from the volcanic event in the Siberian Traps. The majority of the basalt and gabbro samples dated to 250 million years ago, covered a surface area of five million square kilometres of the Siberian Traps, and were emplaced within a short period of time, with rapid rock solidification and cooling. Studies confirmed that samples of gabbro and basalt from the same time period from the other southern regions also matched the age of samples within the Siberian Traps. This supports the link between the age of volcanic rocks within the Siberian Traps, along with rock samples from other southern regions, and the Permian–Triassic mass extinction event.
Mineral deposits
The giant Norilsk-Talnakh nickel–copper–palladium deposit formed within the magma conduits in the most complete part of the Siberian Traps. It has been linked to the Permian–Triassic extinction event, which occurred approximately 251.4 million years ago, based on large amounts of nickel and other elements found in rock beds that were laid down after the extinction occurred. The method used to correlate the extinction event with the surplus amount of nickel located in the Siberian Traps compares the timeline of the magmatism within the traps and the timeline of the extinction itself. Before the linkage between magmatism and the extinction event was discovered, it was hypothesized that the mass extinction and volcanism occurred at the same time due to the linkages in rock composition.
| Physical sciences | Geologic features | Earth science |
987544 | https://en.wikipedia.org/wiki/NFPA%20704 | NFPA 704 | "NFPA 704: Standard System for the Identification of the Hazards of Materials for Emergency Response" is a standard maintained by the U.S.-based National Fire Protection Association. First "tentatively adopted as a guide" in 1960, and revised several times since then, it defines the "Safety Square" or "Fire Diamond" which is used to quickly and easily identify the risks posed by hazardous materials. This helps determine what, if any, special equipment should be used, procedures followed, or precautions taken during the initial stages of an emergency response. It is an internationally accepted safety standard, and is crucial while transporting chemicals.
Codes
The four divisions are typically color-coded, with red on top indicating flammability, blue on the left indicating level of health hazard, yellow on the right for chemical reactivity, and white containing codes for special hazards. Each of health, flammability, and reactivity is rated on a scale from 0 (no hazard) to 4 (severe hazard). The latest version of NFPA 704 sections 5, 6, 7 and 8 gives the specifications for each classification. The numeric values are designated in the standard by "Degree of Hazard" using Arabic numerals (0, 1, 2, 3, 4), not to be confused with other classification systems, such as that in the NFPA 30 Flammable and Combustible Liquids Code, where flammable and combustible liquid categories are designated by "Class", using Roman numerals (I, II, III).
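The four-quadrant rating scheme maps naturally onto a small data structure. A minimal sketch, assuming only the 0-4 degrees of hazard described above; the class name, field names, and validation are illustrative, not part of the standard.

```python
from dataclasses import dataclass

# Minimal "fire diamond" record; field and class names are illustrative.
# Degrees of hazard run from 0 (no hazard) to 4 (severe hazard).

@dataclass
class FireDiamond:
    health: int         # blue quadrant
    flammability: int   # red quadrant
    reactivity: int     # yellow quadrant
    special: str = ""   # white quadrant, e.g. a water-reactivity code

    def __post_init__(self):
        for degree in (self.health, self.flammability, self.reactivity):
            if not 0 <= degree <= 4:
                raise ValueError("degree of hazard must be between 0 and 4")

# Hypothetical placard for a highly flammable, water-reactive material.
print(FireDiamond(health=3, flammability=4, reactivity=2, special="W"))
```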
History
The development of NFPA 704 is credited to the Charlotte Fire Department after a fire at the Charlotte Chemical Company in 1959 led to severe injuries to many of the firefighters. Upon arrival, the fire crew found a fire burning inside a vat that firefighters assumed to be burning kerosene. The crew tried to suppress the fire, which resulted in the vat exploding due to metallic sodium being stored in the kerosene. Thirteen firefighters were injured, several of whom had critical injuries while one lost both ears and most of his face from the incident.
At the time, such vats were not labelled with the materials they contained, so firefighters did not have the information needed to recognize that hazardous materials requiring a specific response were present. In this case, the metallic sodium reacted with water, releasing hydrogen gas and large amounts of heat, a combination with the potential to explode.
The Charlotte Fire Department developed training to respond to fires involving hazardous materials, ensured that protective clothing was available to those responding, and expanded the fire prevention inspection program. Fire Marshal J. F. Morris developed the diamond shaped placard as a marking system to indicate when a building contained hazardous materials, with their levels of flammability, reactivity and health effects.
| Physical sciences | Basics: General | Chemistry |
987546 | https://en.wikipedia.org/wiki/Harmattan | Harmattan | The Harmattan is a season in West Africa that occurs between the end of November and the middle of March. It is characterized by the dry and dusty northeasterly trade wind, of the same name, which blows from the Sahara over West Africa into the Gulf of Guinea. The name is related to the word in the Twi language. Temperatures are mostly cold at night in some places but can be very hot during the daytime in others; in general, temperature differences also depend on local circumstances.
The Harmattan blows during the dry season, which occurs during the months with the lowest sun. In this season, the subtropical ridge of high pressure stays over the central Sahara and the low-pressure Intertropical Convergence Zone (ITCZ) stays over the Gulf of Guinea. On its passage over the Sahara, the Harmattan picks up fine dust and sand particles (between 0.5 and 10 microns). It is also known as the "doctor wind" because of its invigorating dryness compared with humid tropical air.
Effects
This season differs from winter because it is characterized by cold, dry, dust-laden wind, and also wide fluctuations in the ambient temperatures of the day and night. Temperatures can easily be as low as all day, but sometimes in the afternoon the temperature can also soar to as high as , while the relative humidity drops under 5%. It can also be hot in some regions, like in the Sahara.
The air is particularly dry and desiccating when the Harmattan blows over the region. The Harmattan brings desert-like weather conditions: it lowers the humidity, dissipates cloud cover, prevents rainfall formation and sometimes creates big clouds of dust which can result in dust storms or sandstorms. The wind can increase fire risk and cause severe crop damage. The interaction of the Harmattan with monsoon winds can cause tornadoes.
Harmattan haze
In some countries in West Africa, the heavy amount of dust in the air can severely limit visibility and block the sun for several days, comparable to a heavy fog. This effect is known as the Harmattan haze. It costs airlines millions of dollars in cancelled and diverted flights each year. When the haze is weak, the skies are clear. The extreme dryness of the air may cause branches of trees to die.
Health
A 2024 study found that dust carried by the Harmattan increases infant and child mortality and has persistent adverse health impacts on surviving children.
Humidity can drop lower than 15%, which can result in spontaneous nosebleeds for some people. Other health effects on humans may include conditions of the skin (dryness of the skin), dried or chapped lips, eyes, and respiratory system, including aggravation of asthma.
| Physical sciences | Seasons | Earth science |
158859 | https://en.wikipedia.org/wiki/Plug%20and%20play | Plug and play | In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.
Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard.
Plug and play devices can have resources allocated at boot-time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire).
History of device configuration
Some early microcomputer peripheral devices required the end user to physically cut some wires and solder others together in order to make configuration changes; such changes were intended to be largely permanent for the life of the hardware.
As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with using soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches.
Later on this configuration process was automated: Plug and Play.
MSX
The MSX system, released in 1983, was designed to be plug and play from the ground up, achieving this through a system of slots and subslots, each with its own virtual address space, thus eliminating device addressing conflicts at the source. No jumpers or manual configuration were required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic.
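The slot/subslot scheme can be illustrated with a toy model: because each (slot, subslot) pair selects its own address space, two cards can use identical addresses without conflict. This is a deliberately simplified sketch, not a description of real MSX hardware registers.

```python
# Toy model of MSX-style slot/subslot addressing. Each (slot, subslot)
# pair has an independent address space, so two cards may occupy the
# same address without conflicting. Heavily simplified for illustration.
memory = {}   # (slot, subslot, address) -> byte value

def write(slot, subslot, address, value):
    memory[(slot, subslot, address)] = value

def read(slot, subslot, address):
    return memory.get((slot, subslot, address), 0xFF)   # 0xFF = unmapped

write(1, 0, 0x4000, 0xAA)   # card in slot 1
write(2, 0, 0x4000, 0xBB)   # card in slot 2, same address, no conflict
print(hex(read(1, 0, 0x4000)), hex(read(2, 0, 0x4000)))   # 0xaa 0xbb
```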
On the software side, the drivers and extensions were supplied in the card's own ROM, thus requiring no disks or any kind of user intervention to configure the software. The ROM extensions abstracted any hardware differences and offered standard APIs as specified by ASCII Corporation.
NuBus
In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT) as a platform-agnostic peripheral interface that fully automated device configuration. The specification was sufficiently intelligent that it could work with both big-endian and little-endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device, which was expensive in the 1980s, and apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted.
Amiga Autoconfig and Zorro bus
In 1984, Commodore developed the Autoconfig protocol and the Zorro expansion bus for its Amiga line of expandable computers. The first public appearance was at the CES computer show in Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had absolutely no jumpers or DIP switches. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but was eventually upgraded as Zorro II and Zorro III for later iterations of Amiga computers.
Micro-Channel Architecture
In 1987, IBM released an update to the IBM PC known as the Personal System/2 line of computers using the Micro Channel Architecture. The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used to auto-configure the hardware to work with the computer. The user would install the device, turn on the computer, load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings.
However, the disks posed a problem if they were damaged or lost, as the only options at the time to obtain replacements were via postal mail or IBM's dial-up BBS service. Without the disks, any new hardware would be completely useless and the computer would occasionally not boot at all until the unconfigured device was removed.
Micro Channel did not gain widespread support, because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End-users and clone manufacturers revolted against IBM and developed their own open standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes.
ISA and PCI self-configuration
In time, many Industry Standard Architecture (ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware.
ISA PnP or (legacy) Plug & Play ISA was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. It was superseded by the PCI bus during the mid-1990s.
PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification in the 1990s; the PCI BIOS Specification was superseded by ACPI in the 2000s.
Legacy Plug and Play
In 1995, Microsoft released Windows 95, which tried to automate device detection and configuration as much as possible but could still fall back to manual settings if necessary. During the initial install process, Windows 95 would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the installer constantly wrote to a progress-tracking log file during detection. If device probing failed and the system froze, the end user could reboot the computer and restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze.
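The crash-recovery trick described above (write each device to a log before probing it, then skip past the logged entries after a reboot) is a general pattern. A minimal sketch with a hypothetical device list; real hardware probing is, of course, far more involved.

```python
import os

# Sketch of crash-resilient probing: each device is logged *before* it
# is probed, so after a freeze and reboot the detector skips whatever
# caused the hang. The device list and log name are hypothetical.
LOG_FILE = "detect.log"
DEVICES = ["serial-port", "sound-card", "network-card", "scsi-controller"]

def already_attempted():
    if not os.path.exists(LOG_FILE):
        return set()
    with open(LOG_FILE) as log:
        return {line.strip() for line in log}

def probe(device):
    print("probing", device)          # stand-in for real hardware probing

def detect_all():
    attempted = already_attempted()
    for device in DEVICES:
        if device in attempted:
            continue                  # attempted before a previous crash
        with open(LOG_FILE, "a") as log:
            log.write(device + "\n")  # log first, then probe
        probe(device)                 # if this hangs, the entry survives

detect_all()
```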
At the time, there could be a mix of devices in a system, some capable of automatic configuration, and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices in three different ways:
through Windows 95 Device Manager drivers only
using DOS drivers loaded in the CONFIG.SYS and AUTOEXEC.BAT configuration files
using a combination of DOS drivers and Windows 95 Device Manager drivers
Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process, and could also include driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration.
Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might be limited to interrupts 3, 7, and 10, while a sound card might be limited to interrupts 5, 7, and 12. This leaves few configuration choices if some of those interrupts are already used by other devices.
The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards used multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port.
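Taken together, the two preceding paragraphs describe a small constraint-satisfaction problem: each card accepts only a few interrupt lines, and no line may be shared. The backtracking sketch below uses the example IRQ sets quoted above; the serial cards and the solver itself are illustrative.

```python
# Backtracking assignment of non-shareable IRQ lines. The network and
# sound constraint sets come from the example above; the serial cards
# and the solver are illustrative.
CARDS = {
    "network": {3, 7, 10},
    "sound": {5, 7, 12},
    "serial-a": {3, 4},
    "serial-b": {3, 4},
}

def assign_irqs(cards, used=frozenset()):
    """Return a conflict-free {card: irq} mapping, or None if impossible."""
    if not cards:
        return {}
    name = next(iter(cards))
    rest = {card: irqs for card, irqs in cards.items() if card != name}
    for irq in sorted(cards[name] - used):
        solution = assign_irqs(rest, used | {irq})
        if solution is not None:
            return {name: irq, **solution}
    return None

print(assign_irqs(CARDS))
# {'network': 7, 'sound': 5, 'serial-a': 3, 'serial-b': 4}
```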
Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, resulting in devices which were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play being sometimes referred to as Plug and Pray.
Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so it was still possible that manual ISA device configuration might be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release.
Current plug and play interfaces
Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices. These interfaces include:
IEEE 1394 (FireWire)
PCI, Mini PCI
PCI Express, Mini PCI Express, Thunderbolt
PCMCIA, PC Card, ExpressCard
SATA, Serial Attached SCSI
USB
DVI, HDMI
For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface.
| Technology | Computer hardware | null |
158901 | https://en.wikipedia.org/wiki/Chlorite | Chlorite | The chlorite ion, or chlorine dioxide anion, is the halite with the chemical formula ClO2−. A chlorite (compound) is a compound that contains this group, with chlorine in the oxidation state of +3. Chlorites are also known as salts of chlorous acid.
Compounds
The free acid, chlorous acid (HClO2), is the least stable oxoacid of chlorine and has only been observed as an aqueous solution at low concentrations. Since it cannot be concentrated, it is not a commercial product. The alkali metal and alkaline earth metal compounds are all colorless or pale yellow, with sodium chlorite (NaClO2) being the only commercially important chlorite. Heavy metal chlorites (Ag+, Hg+, Tl+, Pb2+, and also Cu2+ and NH4+) are unstable and decompose explosively with heat or shock.
Sodium chlorite is derived indirectly from sodium chlorate, NaClO3. First, the explosively unstable gas chlorine dioxide, ClO2, is produced by reducing sodium chlorate with a suitable reducing agent such as methanol, hydrogen peroxide, hydrochloric acid, or sulfur dioxide. The chlorine dioxide is then absorbed into an alkaline solution and reduced with hydrogen peroxide to yield sodium chlorite.
Structure and properties
The chlorite ion adopts a bent molecular geometry, due to the effects of the lone pairs on the chlorine atom, with an O–Cl–O bond angle of 111° and Cl–O bond lengths of 156 pm.
Chlorite is the strongest oxidiser of the chlorine oxyanions on the basis of standard half cell potentials.
Uses
The most important chlorite is sodium chlorite (NaClO2), used in the bleaching of textiles, pulp, and paper. However, despite its strongly oxidizing nature, it is often not used directly, being instead used to generate the neutral species chlorine dioxide (ClO2), normally via a reaction with HCl:
5 NaClO2 + 4 HCl → 5 NaCl + 4 ClO2 + 2 H2O
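The balanced equation fixes the proportions: five moles of sodium chlorite yield four moles of chlorine dioxide. A worked stoichiometry sketch (molar masses are standard values; the 100 g input is an arbitrary example):

```python
# Stoichiometry of 5 NaClO2 + 4 HCl -> 5 NaCl + 4 ClO2 + 2 H2O.
M_NACLO2 = 90.44   # g/mol: Na 22.99 + Cl 35.45 + 2 x O 16.00
M_CLO2 = 67.45     # g/mol: Cl 35.45 + 2 x O 16.00

def clo2_yield_grams(naclo2_grams):
    moles_naclo2 = naclo2_grams / M_NACLO2
    moles_clo2 = moles_naclo2 * 4 / 5      # 5 NaClO2 -> 4 ClO2
    return moles_clo2 * M_CLO2

print(round(clo2_yield_grams(100.0), 1))   # 59.7 g of ClO2 from 100 g
```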
Health risks
In 2009, the California Office of Environmental Health Hazard Assessment (OEHHA) released a public health goal of keeping chlorite in drinking water below 50 parts per billion, after scientists in the state reported that exposure to higher levels of chlorite affects sperm and thyroid function, causes stomach ulcers, and damages red blood cells in laboratory animals. Some studies have indicated that at certain levels chlorite may also be carcinogenic.
The federal legal limit in the United States allows chlorite up to levels of 1,000 parts per billion in drinking water, 20 times as much chlorite as California’s public health goal.
Other oxyanions
Several oxyanions of chlorine exist, in which it can assume oxidation states of −1, +1, +3, +5, or +7 within the corresponding anions Cl−, ClO−, , , or , known commonly and respectively as chloride, hypochlorite, chlorite, chlorate, and perchlorate. These are part of a greater family of other chlorine oxides.
| Physical sciences | Halide oxyanions | Chemistry |
158914 | https://en.wikipedia.org/wiki/Arsenopyrite | Arsenopyrite | Arsenopyrite (IMA symbol: Apy) is an iron arsenic sulfide (FeAsS). It is a hard (Mohs 5.5–6) metallic, opaque, steel grey to silver white mineral with a relatively high specific gravity of 6.1.
When dissolved in nitric acid, it releases elemental sulfur. When arsenopyrite is heated, it produces sulfur and arsenic vapor. With 46% arsenic content, arsenopyrite, along with orpiment, is a principal ore of arsenic. When deposits of arsenopyrite become exposed to the atmosphere, the mineral slowly converts into iron arsenates. Arsenopyrite is generally an acid-consuming sulfide mineral, unlike iron pyrite which can lead to acid mine drainage.
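The ~46% arsenic figure can be checked directly from standard atomic weights; a minimal Python sketch (rounded reference values, so the result is approximate):

```python
# Rounded standard atomic weights, g/mol.
ATOMIC_WEIGHT = {"Fe": 55.845, "As": 74.922, "S": 32.06}

def mass_fraction(element: str, composition: dict[str, int]) -> float:
    """Mass fraction of one element in a compound given as {symbol: count}."""
    total = sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())
    return ATOMIC_WEIGHT[element] * composition[element] / total

print(f"As in FeAsS: {mass_fraction('As', {'Fe': 1, 'As': 1, 'S': 1}):.1%}")
# -> ~46.0%, matching the figure quoted above
```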
The crystal habit, hardness, density, and garlic odour when struck are diagnostic. Arsenopyrite in older literature may be referred to as mispickel, a name of German origin. It is also sometimes referred to as mundic, a word derived from Cornish dialect and which also refers to a copper ore, as well as a form of deterioration in aggregate concrete made with mine tailings.
Arsenopyrite can also be associated with significant amounts of gold. Consequently, it serves as an indicator of gold-bearing reefs. Many arsenopyrite gold ores are refractory, i.e., the gold is not easily leached by cyanide from the mineral matrix.
Arsenopyrite is found in high temperature hydrothermal veins, in pegmatites, and in areas of contact metamorphism or metasomatism.
Crystallography
Arsenopyrite crystallizes in the monoclinic crystal system and often shows prismatic crystal or columnar forms with striations and twinning common. Arsenopyrite may be referred to in older references as orthorhombic, but it has been shown to be monoclinic. In terms of its atomic structure, each Fe center is linked to three As atoms and three S atoms. The material can be described as Fe3+ with the diatomic trianion AsS3−. The connectivity of the atoms is more similar to that in marcasite than pyrite. The ion description is imperfect because the material is semiconducting and the Fe-As and Fe-S bonds are highly covalent.
Related minerals
Various transition group metals can substitute for iron in arsenopyrite. The arsenopyrite group includes the following rare minerals:
Clinosafflorite:
Gudmundite:
Glaucodot or alloclasite: or
Iridarsenite:
Osarsite or ruarsite: or
| Physical sciences | Minerals | Earth science |
158916 | https://en.wikipedia.org/wiki/Chalcopyrite | Chalcopyrite | Chalcopyrite is a copper iron sulfide mineral and the most abundant copper ore mineral. It has the chemical formula CuFeS2 and crystallizes in the tetragonal system. It has a brassy to golden yellow color and a hardness of 3.5 to 4 on the Mohs scale. Its streak is diagnostic as green-tinged black.
On exposure to air, chalcopyrite tarnishes to a variety of oxides, hydroxides, and sulfates. Associated copper minerals include the sulfides bornite (Cu5FeS4), chalcocite (Cu2S), covellite (CuS), digenite (Cu9S5); carbonates such as malachite and azurite, and rarely oxides such as cuprite (Cu2O). It is rarely found in association with native copper. Chalcopyrite is a conductor of electricity.
Copper can be extracted from chalcopyrite ore using various methods. The two predominant methods are pyrometallurgy and hydrometallurgy, the former being the most commercially viable.
Etymology
The name chalcopyrite comes from the Greek words chalkos, which means copper, and pyrites, which means striking fire. It was sometimes historically referred to as "yellow copper".
Identification
Chalcopyrite is often confused with pyrite and gold since all three of these minerals have a yellowish color and a metallic luster. Some important mineral characteristics that help distinguish these minerals are hardness and streak. Chalcopyrite is much softer than pyrite and can be scratched with a knife, whereas pyrite cannot be scratched by a knife. However, chalcopyrite is harder than gold, which, if pure, can be scratched by copper. Additionally, gold is malleable, while chalcopyrite is brittle. Chalcopyrite has a distinctive black streak with green flecks in it. Pyrite has a black streak and gold has a yellow streak.
Chemistry
Natural chalcopyrite has no solid solution series with any other sulfide minerals. There is limited substitution of zinc with copper despite chalcopyrite having the same crystal structure as sphalerite.
Minor amounts of elements such as silver, gold, cadmium, cobalt, nickel, lead, tin, and zinc can be measured (at parts per million levels), likely substituting for copper and iron. Selenium, bismuth, tellurium, and arsenic may substitute for sulfur in minor amounts. Chalcopyrite can be oxidized to form malachite, azurite, and cuprite.
Structure
Chalcopyrite is a member of the tetragonal crystal system. Crystallographically, the structure of chalcopyrite is closely related to that of zinc blende, ZnS (sphalerite). The unit cell is twice as large, reflecting an alternation of Cu+ and Fe3+ ions replacing Zn2+ ions in adjacent cells. In contrast to the pyrite structure, chalcopyrite has single S2− sulfide anions rather than disulfide pairs. Another difference is that the iron cation is not diamagnetic low-spin Fe(II) as in pyrite.
In the crystal structure, each metal ion is tetrahedrally coordinated to 4 sulfur anions. Each sulfur anion is bonded to two copper atoms and two iron atoms.
Paragenesis
Chalcopyrite is present with many ore-bearing environments via a variety of ore forming processes.
Chalcopyrite is present in volcanogenic massive sulfide ore deposits and sedimentary exhalative deposits, formed by deposition of copper during hydrothermal circulation. Chalcopyrite is concentrated in this environment via fluid transport. Porphyry copper ore deposits are formed by concentration of copper within a granitic stock during the ascent and crystallisation of a magma. Chalcopyrite in this environment is produced by concentration within a magmatic system.
Chalcopyrite is an accessory mineral in Kambalda type komatiitic nickel ore deposits, formed from an immiscible sulfide liquid in sulfide-saturated ultramafic lavas. In this environment chalcopyrite is formed by a sulfide liquid stripping copper from an immiscible silicate liquid.
Chalcopyrite has been the most important ore of copper since the Bronze Age.
Occurrence
Even though chalcopyrite does not contain the most copper in its structure relative to other minerals, it is the most important copper ore since it can be found in many localities. Chalcopyrite ore occurs in a variety of ore types, from huge masses as at Timmins, Ontario, to irregular veins and disseminations associated with granitic to dioritic intrusives, as in the porphyry copper deposits of Broken Hill, the American Cordillera, and the Andes. The largest deposit of nearly pure chalcopyrite ever discovered in Canada was at the southern end of the Temagami Greenstone Belt, where Copperfields Mine extracted the high-grade copper.
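To make the grade comparison concrete, here is a minimal sketch (rounded standard atomic weights, so the percentages are approximate) comparing the copper mass fraction of chalcopyrite with two of the associated copper sulfides named earlier:

```python
# Rounded standard atomic weights, g/mol.
ATOMIC_WEIGHT = {"Cu": 63.546, "Fe": 55.845, "S": 32.06}

def cu_mass_fraction(composition: dict[str, int]) -> float:
    """Mass fraction of copper in a compound given as {symbol: count}."""
    total = sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())
    return ATOMIC_WEIGHT["Cu"] * composition["Cu"] / total

minerals = {
    "chalcopyrite CuFeS2": {"Cu": 1, "Fe": 1, "S": 2},
    "bornite Cu5FeS4":     {"Cu": 5, "Fe": 1, "S": 4},
    "chalcocite Cu2S":     {"Cu": 2, "S": 1},
}
for name, comp in minerals.items():
    print(f"{name}: {cu_mass_fraction(comp):.1%} Cu")
# chalcopyrite ~34.6%, bornite ~63.3%, chalcocite ~79.9%
```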
Chalcopyrite is present in the supergiant Olympic Dam Cu-Au-U deposit in South Australia.
Chalcopyrite may also be found in coal seams associated with pyrite nodules, and as disseminations in carbonate sedimentary rocks.
Extraction of copper
Copper metal is predominantly extracted from chalcopyrite ore using two methods: pyrometallurgy and hydrometallurgy. The most common and commercially viable method, pyrometallurgy, involves "crushing, grinding, flotation, smelting, refining, and electro-refining" techniques. Crushing, leaching, solvent extraction, and electrowinning are techniques used in hydrometallurgy. Specifically in the case of chalcopyrite, pressure oxidation leaching is practiced.
Pyrometallurgical processes
The most important method for copper extraction from chalcopyrite is pyrometallurgy. Pyrometallurgy is commonly used for large scale, copper rich operations with high-grade ores. This is because Cu-Fe-S ores, such as chalcopyrite, are difficult to dissolve in aqueous solutions. The extraction process using this method undergoes four stages:
Isolating desired elements from the ore using froth flotation to create a concentrate
Creating a high-Cu sulfide matte by smelting the concentrate
Oxidizing/converting the sulfide matte, resulting in an impure molten copper
Refining by fire and electrowinning techniques to increase purity of resultant copper
Chalcopyrite ore is not directly smelted, because the ore is primarily composed of non-economically valuable material (waste rock) with low concentrations of copper; heating and melting the whole ore would require a large amount of hydrocarbon fuel. Instead, copper is first isolated from the ore using a technique called froth flotation. Essentially, reagents are used to make the copper minerals water-repellent, so that the copper concentrates in a flotation cell by floating on air bubbles. In contrast to the 0.5–2% copper in chalcopyrite ore, froth flotation results in a concentrate containing about 30% copper.
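As a rough illustration of the upgrade this achieves (the feed tonnage, grades, and recovery below are hypothetical values chosen to sit inside the ranges quoted above), the standard two-product balance gives the concentrate tonnage:

```python
def concentrate_tonnes(feed_t: float, feed_grade: float,
                       conc_grade: float, recovery: float) -> float:
    """Two-product balance: tonnes of concentrate from a flotation circuit.

    feed_t     -- tonnes of ore fed to flotation
    feed_grade -- Cu mass fraction of the feed (e.g. 0.01 for 1% Cu)
    conc_grade -- Cu mass fraction of the concentrate (e.g. 0.30)
    recovery   -- fraction of the feed copper reporting to concentrate
    """
    return feed_t * feed_grade * recovery / conc_grade

# Hypothetical example: 1,000 t of 1% Cu ore at 90% recovery
# into a 30% Cu concentrate.
print(concentrate_tonnes(1000, 0.01, 0.30, 0.90))  # 30.0 t of concentrate
```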
The concentrate then undergoes a process called matte smelting. Matte smelting oxidizes the sulfur and iron by melting the flotation concentrate in a 1250 °C furnace, creating a new concentrate (matte) with about 45–75% copper. This process is typically done in flash furnaces. To reduce the amount of copper lost to the slag, the slag is kept molten and SiO2 flux is added to promote immiscibility between matte and slag. As for byproducts, matte smelting of copper produces SO2 gas, which is harmful to the environment, so it is captured in the form of sulfuric acid. Example reactions are as follows:
2 CuFeS2(s) + 3.25 O2(g) → Cu2S·0.5FeS(l) + 1.5 FeO(s) + 2.5 SO2(g)
2 FeO(s) + SiO2(s) → Fe2SiO4(l)
Converting involves oxidizing the matte once more to further remove sulfur and iron; the product is molten copper of about 99% purity. Converting occurs in two stages: the slag-forming stage and the copper-forming stage. In the slag-forming stage, iron and sulfur are reduced to concentrations of less than 1% and 0.02%, respectively. The matte from smelting is poured into a converter that is then rotated, supplying the melt with oxygen through tuyeres. The reaction is as follows:
2 FeS(l) + 3 O2(g) + SiO2(s) → Fe2SiO4(l) + 2 SO2(g) + heat
In the copper forming stage, the matte produced from the slag stage undergoes charging (inputting the matte in the converter), blowing (blasting more oxygen), and skimming (retrieving impure molten copper known as blister copper). The reaction is as follows:
Cu2S(l) + O2(g) → 2 Cu(l) + SO2(g) + heat
Finally, the blister copper undergoes refinement through fire, electrorefining or both. In this stage, copper is refined to a high-purity cathode.
Hydrometallurgical processes
Chalcopyrite is an exception among copper-bearing minerals. In contrast to the majority of copper minerals, which can be leached under atmospheric conditions (such as through heap leaching), chalcopyrite is a refractory mineral that requires elevated temperatures as well as oxidizing conditions to release its copper into solution. This is because of the extraction challenges that arise from the 1:1 ratio of iron to copper, resulting in slow leaching kinetics. Elevated temperatures and pressures create an abundance of oxygen in solution, which speeds up the breakdown of chalcopyrite's crystal lattice. A hydrometallurgical process that combines elevated temperature with the oxidizing conditions chalcopyrite requires is known as pressure oxidation leaching. A typical reaction series of chalcopyrite under oxidizing, high-temperature conditions is as follows:
i) 2 CuFeS2 + 4 Fe2(SO4)3 → 2 Cu2+ + 2 SO42− + 10 FeSO4 + 4 S
ii) 4 FeSO4 + O2 + 2 H2SO4 → 2 Fe2(SO4)3 + 2 H2O
iii) 2 S + 3 O2 + 2 H2O → 2 H2SO4
(overall) 4 CuFeS2 + 17 O2 + 4 H2O → 4 Cu2+ + 4 SO42− + 2 Fe2O3 + 4 H2SO4
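Such equations are easy to sanity-check by counting atoms and charge on each side; the sketch below does this for the overall reaction (species are entered as hand-written element-count maps for this one equation, not parsed from formulas):

```python
from collections import Counter

def side_totals(terms: list[tuple[int, dict[str, int]]]) -> Counter:
    """Sum element (and charge) counts over one side of an equation."""
    totals: Counter = Counter()
    for coeff, counts in terms:
        for key, n in counts.items():
            totals[key] += coeff * n
    return totals

CuFeS2 = {"Cu": 1, "Fe": 1, "S": 2}
O2     = {"O": 2}
H2O    = {"H": 2, "O": 1}
Cu_ion = {"Cu": 1, "charge": 2}
SO4    = {"S": 1, "O": 4, "charge": -2}
Fe2O3  = {"Fe": 2, "O": 3}
H2SO4  = {"H": 2, "S": 1, "O": 4}

left  = side_totals([(4, CuFeS2), (17, O2), (4, H2O)])
right = side_totals([(4, Cu_ion), (4, SO4), (2, Fe2O3), (4, H2SO4)])
# Unary + drops zero counts (here the net charge of 0 on the right),
# so a balanced equation compares equal on both sides.
assert +left == +right
print("balanced:", dict(+left))
```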
Pressure oxidation leaching is particularly useful for low-grade chalcopyrite, because it can "process concentrate product from flotation" rather than having to process whole ore. Additionally, it can be used as an alternative method to pyrometallurgy for variable ore. Other advantages hydrometallurgical processes have over pyrometallurgical processes (smelting) with regard to copper extraction include:
The highly variable cost of smelting
Limited smelting availability in some locations
The high cost of installing smelting infrastructure
The ability to treat high-impurity concentrates
Increased recovery, owing to the ability to treat lower-grade deposits on site
Lower transport costs (shipping concentrate is not necessary)
Lower overall cost of copper production
Although hydrometallurgy has its advantages, it continues to face challenges in the commercial setting. In turn, smelting continues to remain the most commercially viable method of copper extraction.
| Physical sciences | Minerals | Earth science |
159034 | https://en.wikipedia.org/wiki/Labradorite | Labradorite | Labradorite ((Ca, Na)(Al, Si)4O8) is a calcium-enriched feldspar mineral first identified in Labrador, Canada, which can display an iridescent effect (schiller).
Labradorite is an intermediate to calcic member of the plagioclase series. It has an anorthite percentage (%An) of between 50 and 70. The specific gravity ranges from 2.68 to 2.72. The streak is white, like most silicates. The refractive index ranges from 1.559 to 1.573 and twinning is common. As with all plagioclase members, the crystal system is triclinic, and three directions of cleavage are present, two of which are nearly at right angles and are more obvious, being of good to perfect quality (while the third direction is poor). It occurs as clear, white to gray, blocky to lath shaped grains in common mafic igneous rocks such as basalt and gabbro, as well as in anorthosites.
Occurrence
The geological type area for labradorite is Paul's Island near the town of Nain in Labrador, Canada. It has also been reported in Poland, Norway, Finland and various other locations worldwide, with notable distribution in Madagascar, China, Australia, Slovakia and the United States.
Labradorite occurs in mafic igneous rocks and is the feldspar variety most common in basalt and gabbro. The uncommon anorthosite bodies are composed almost entirely of labradorite. It also is found in metamorphic amphibolites and as a detrital component of some sediments. Common mineral associates in igneous rocks include olivine, pyroxenes, amphiboles and magnetite.
Labradorescence
Labradorite can display an iridescent optical effect (or schiller) known as labradorescence. The term labradorescence was coined by Ove Balthasar Bøggild, who defined the effect under the name labradorization.
Contributions to the understanding of the origin and cause of the effect were made by Robert Strutt, 4th Baron Rayleigh (1923), and by Bøggild (1924).
The cause of this optical phenomenon is the phase exsolution lamellar structure occurring in the Bøggild miscibility gap. The effect is visible only when the lamellar separation falls within a narrow range; the lamellae are not necessarily parallel, and the lamellar structure is found to lack long-range order.
The lamellar separation only occurs in plagioclases of a certain composition; those of calcic labradorite (50–70% anorthite) and bytownite (anorthite content of ~70 to 90%) particularly exemplify this. Another requirement for the lamellar separation is very slow cooling of the rock containing the plagioclase. Slow cooling is required to allow the Ca, Na, Si, and Al ions to diffuse through the plagioclase and produce the lamellar separation. Therefore, not all labradorites exhibit labradorescence (they might not have the correct composition, might have cooled too quickly, or both), and not all plagioclases that exhibit labradorescence are labradorites (they may be bytownite).
Some gemstone varieties of labradorite exhibiting a high degree of labradorescence are called spectrolite.
Gallery
| Physical sciences | Silicate minerals | Earth science |
159081 | https://en.wikipedia.org/wiki/Shock%20%28mechanics%29 | Shock (mechanics) | In mechanics and physics, shock is a sudden acceleration caused, for example, by impact, drop, kick, earthquake, or explosion. Shock is a transient physical excitation.
Shock describes matter subject to extreme rates of force with respect to time. Shock is a vector that has units of an acceleration (rate of change of velocity). The unit g, representing multiples of the standard acceleration of gravity, is conventionally used.
A shock pulse can be characterised by its peak acceleration, the duration, and the shape of the shock pulse (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating a mechanical shock.
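As a worked example of characterising a pulse (a minimal sketch; the 50 g, 11 ms values are illustrative test-style numbers, not figures from this article), the velocity change of an ideal half-sine pulse a(t) = A sin(πt/T) is the time integral of acceleration, Δv = 2AT/π:

```python
import math

def half_sine_delta_v(peak_g: float, duration_s: float) -> float:
    """Velocity change (m/s) of an ideal half-sine shock pulse.

    peak_g     -- peak acceleration, in multiples of standard gravity
    duration_s -- pulse duration, in seconds
    """
    g = 9.80665  # standard acceleration of gravity, m/s^2
    # Integral of A*sin(pi*t/T) from 0 to T is 2*A*T/pi.
    return 2.0 * (peak_g * g) * duration_s / math.pi

# Illustrative example: a 50 g, 11 ms half-sine pulse.
print(f"delta-v = {half_sine_delta_v(50, 0.011):.2f} m/s")  # ~3.43 m/s
```

The same relation shows why cushioning works: for a fixed velocity change, a longer pulse duration means a proportionally lower peak acceleration.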
Shock measurement
Shock measurement is of interest in several fields such as
Measuring the propagation of heel shock through a runner's body
Measuring the magnitude of a shock needed to cause damage to an item: fragility
Measuring shock attenuation through athletic flooring
Measuring the effectiveness of a shock absorber
Measuring the shock-absorbing ability of package cushioning
Measuring the ability of an athletic helmet to protect people
Measuring the effectiveness of shock mounts
Determining the ability of structures to resist seismic shock: earthquakes, etc.
Determining whether personal protective fabric attenuates or amplifies shocks
Verifying that a naval ship and its equipment can survive explosive shocks
Shocks are usually measured by accelerometers but other transducers and high speed imaging are also used. A wide variety of laboratory instrumentation is available; stand-alone shock data loggers are also used.
Field shocks are highly variable and often have very uneven shapes. Even laboratory-controlled shocks often have uneven shapes and include short-duration spikes; noise can be reduced by appropriate digital or analog filtering.
Governing test methods and specifications provide detail about the conduct of shock tests. Proper placement of measuring instruments is critical. Fragile items and packaged goods respond with variation to uniform laboratory shocks, so replicate testing is often called for. For example, MIL-STD-810G Method 516.6 indicates: "at least three times in both directions along each of three orthogonal axes".
Shock testing
Shock testing typically falls into two categories: classical shock testing, and pyroshock or ballistic shock testing. Classical shock testing consists of the following shock impulses: half sine, haversine, sawtooth wave, and trapezoid. Pyroshock and ballistic shock tests are specialized and are not considered classical shocks. Classical shocks can be performed on electrodynamic (ED) shakers, free-fall drop towers, or pneumatic shock machines. A classical shock impulse is created when the shock machine table changes direction abruptly; this abrupt change in direction causes a rapid velocity change, which creates the shock impulse. Testing of the effects of shock is sometimes conducted on end-use applications: for example, automobile crash tests.
Use of proper test methods and Verification and validation protocols are important for all phases of testing and evaluation.
Effects of shock
Mechanical shock has the potential to damage an item (e.g., an entire light bulb) or an element of the item (e.g., a filament in an incandescent light bulb):
A brittle or fragile item can fracture. For example, two crystal wine glasses may shatter when impacted against each other. A shear pin in an engine is designed to fracture with a specific magnitude of shock. Note that a soft ductile material may sometimes exhibit brittle failure during shock due to time-temperature superposition.
A malleable item can be bent by a shock. For example, a copper pitcher may bend when dropped on the floor.
Some items may appear to be not damaged by a single shock but will experience fatigue failure with numerous repeated low-level shocks.
A shock may result in only minor damage which may not be critical for use. However, cumulative minor damage from several shocks will eventually result in the item being unusable.
A shock may not produce immediate apparent damage but might cause the service life of the product to be shortened: the reliability is reduced.
A shock may cause an item to become out of adjustment. For example, when a precision scientific instrument is subjected to a moderate shock, good metrology practice may be to have it recalibrated before further use.
Some materials such as primary high explosives may detonate with mechanical shock or impact.
When glass bottles of liquid are dropped or subjected to shock, the water hammer effect may cause hydrodynamic glass breakage.
Considerations
When laboratory testing, field experience, or engineering judgement indicates that an item could be damaged by mechanical shock, several courses of action might be considered:
Reduce and control the input shock at the source.
Modify the item to improve its toughness or support it to better handle shocks.
Use shock absorbers, shock mounts, or cushions to control the shock transmitted to the item. Cushioning reduces the peak acceleration by extending the duration of the shock.
Plan for failures: accept certain losses. Have redundant systems available, etc.
| Physical sciences | Basics_4 | Physics |
159151 | https://en.wikipedia.org/wiki/Chemical%20composition | Chemical composition | A chemical composition specifies the identity, arrangement, and ratio of the chemical elements making up a compound by way of chemical and atomic bonds.
Chemical formulas can be used to describe the relative amounts of elements present in a compound. For example, the chemical formula for water is H2O: this means that each molecule of water is constituted by 2 atoms of hydrogen (H) and 1 atom of oxygen (O). The chemical composition of water may be interpreted as a 2:1 ratio of hydrogen atoms to oxygen atoms. Different types of chemical formulas are used to convey composition information, such as an empirical or molecular formula.
Nomenclature can be used to express not only the elements present in a compound but their arrangement within the molecules of the compound. In this way, compounds will have unique names which can describe their elemental composition.
Composite mixture
The chemical composition of a mixture can be defined as the distribution of the individual substances that constitute the mixture, called "components". In other words, it is equivalent to quantifying the concentration of each component. Because there are different ways to define the concentration of a component, there are also different ways to define the composition of a mixture. It may be expressed as mole fraction, volume fraction, mass fraction, molality, molarity, normality, or mixing ratio.
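As a small worked example of moving between these measures (rounded molar masses; the water/ethanol mixture is hypothetical, chosen only for illustration), mass fractions can be converted to mole fractions by dividing each by the component's molar mass and renormalizing:

```python
# Rounded molar masses, g/mol (illustrative two-component mixture).
MOLAR_MASS = {"water": 18.015, "ethanol": 46.069}

def mole_fractions(mass_fractions: dict[str, float]) -> dict[str, float]:
    """Convert mass fractions of a mixture into mole fractions."""
    moles = {c: w / MOLAR_MASS[c] for c, w in mass_fractions.items()}
    total = sum(moles.values())
    return {c: n / total for c, n in moles.items()}

# 60% water / 40% ethanol by mass:
print(mole_fractions({"water": 0.60, "ethanol": 0.40}))
# -> {'water': ~0.793, 'ethanol': ~0.207}
```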
Chemical composition of a mixture can be represented graphically in plots like ternary plot and quaternary plot.
| Physical sciences | Substance | Chemistry |
159191 | https://en.wikipedia.org/wiki/Arecibo%20Observatory | Arecibo Observatory | The Arecibo Observatory, also known as the National Astronomy and Ionosphere Center (NAIC) and formerly known as the Arecibo Ionosphere Observatory, is an observatory in Barrio Esperanza, Arecibo, Puerto Rico owned by the US National Science Foundation (NSF).
The observatory's main instrument was the Arecibo Telescope, a spherical reflector dish built into a natural sinkhole, with a cable-mount steerable receiver and several radar transmitters for emitting signals mounted above the dish. Completed in 1963, it was the world's largest single-aperture telescope for 53 years, surpassed in July 2016 by the Five-hundred-meter Aperture Spherical Telescope (FAST) in China. On August 10 and November 6, 2020, two of the receiver's support cables broke, and the NSF announced that it would decommission the telescope. The telescope collapsed on December 1, 2020. In 2022, the NSF announced that the telescope would not be rebuilt, with an educational facility to be established on the site.
The observatory also includes a smaller radio telescope, a LIDAR facility, and a visitor center, which remained operational after the telescope's collapse. The asteroid 4337 Arecibo is named after the observatory by Steven J. Ostro, in recognition of the observatory's contributions to the characterization of Solar System bodies.
History
In the 1950s, the United States Department of Defense (DoD)'s Advanced Research Projects Agency (ARPA) was seeking a means to detect missiles in Earth's ionosphere. On November 6, 1959, Cornell University entered into a contract with ARPA to carry out development studies for a large-scale ionospheric radar probe. The Arecibo Telescope was consequently built to study the ionosphere as well as to serve as a general-purpose radio telescope. Construction of the telescope was started in September 1960. The telescope and supporting observatory were formally opened as the Arecibo Ionospheric Observatory on November 1, 1963.
DoD transferred the observatory to the National Science Foundation on October 1, 1969. NSF appointed Cornell University to manage the observatory. By September 1971, NSF had renamed the observatory the National Astronomy and Ionosphere Center (NAIC) and had made it a federally funded research and development center (FFRDC). NASA began contributing funds to the observatory alongside NSF for its planetary radar mission.
In the early 2000s, NASA eliminated funding for the Arecibo Observatory. In 2006, NSF indicated that it would reduce funding for the observatory, and decommission it if other funding could not be found. Academics and politicians lobbied to stave off its closure, and NASA recommitted funding in 2011 for study of near-earth objects. In 2011, NSF delisted Arecibo as an FFRDC, which allowed the observatory to seek funding from a wider variety of sources; the agency also replaced Cornell as the site operator with a team led by SRI International.
Damage to the telescope by 2017's Hurricane Maria led NSF again to suggest closing the observatory. A consortium led by the University of Central Florida (UCF) proposed to manage the observatory and cover much of the operations and maintenance costs, and in 2018, NSF made UCF's consortium the new site operators, though no specific actions or funding were announced.
On August 10, 2020, an auxiliary cable broke on the telescope, followed by a main cable on November 6. The NSF announced that it would decommission the telescope through controlled demolition, but that the other facilities at the observatory would remain operational. Before demolition could occur, the remaining support cables from one tower failed rapidly on the morning of December 1, 2020, causing the instrument platform to crash through the dish, shearing off the tops of the support towers, and partially damaging some of the other buildings, though with no injuries. NSF officials said in 2020 that they aimed to have the other observatory facilities operational as soon as possible and were considering rebuilding a new telescope instrument in its place. However, in 2022, the NSF announced that the telescope would not be rebuilt but that an educational facility would be established on the site. The following year, NSF picked a consortium of universities (Cold Spring Harbor Laboratory in New York; the University of Maryland, Baltimore County; the University of Puerto Rico, Río Piedras Campus in San Juan; and the University of the Sacred Heart, also in San Juan) to set up and run an education center called Arecibo C3 (Arecibo Center for Culturally Relevant and Inclusive Science Education, Computational Skills, and Community Engagement).
Facilities
Arecibo Telescope
The observatory's main feature was its large radio telescope, whose main collecting dish was an inverted spherical dome 305 m (1,000 ft) in diameter with a radius of curvature of 265 m, constructed inside a karst sinkhole. The dish's surface was made of 38,778 perforated aluminum panels supported by a mesh of steel cables. The ground beneath supported shade-tolerant vegetation.
Since its completion in November 1963, the telescope had been used for radar astronomy and radio astronomy, and had been part of the search for extraterrestrial intelligence (SETI) program. It was also used by NASA for near-Earth object detection. Since around 2006, NSF funding support for the telescope had waned as the Foundation directed funds to newer instruments, though academics petitioned the NSF and Congress to continue support for the telescope. Numerous hurricanes, including Hurricane Maria, had damaged parts of the telescope, straining the reduced budget.
Two cable breaks, one in August 2020 and a second in November 2020, threatened the structural integrity of the support structure for the suspended platform and damaged the dish. The NSF determined in November 2020 that it was safer to decommission the telescope rather than to try to repair it, but the telescope collapsed before a controlled demolition could be carried out. The remaining support cables from one tower failed around 7:56 a.m. local time on December 1, 2020, causing the receiver platform to fall into the dish and collapsing the telescope.
NASA led an extensive failure investigation and reported the findings, along with a technical bulletin with industry recommendations. The investigation concluded that "a combination of low socket design margin and a high percentage of sustained loading revealed an unexpected vulnerability to zinc creep and environments, resulting in long-term cumulative damage and progressive zinc/wire failure".
Additional telescopes
The Arecibo Observatory also has other facilities beyond the main telescope, including a radio telescope intended for very-long-baseline interferometry (VLBI) with the main telescope; and a LIDAR facility whose research has continued since the main telescope's collapse.
Ángel Ramos Foundation Visitor Center
Opened in 1997, the Ángel Ramos Foundation Visitor Center features interactive exhibits and displays about the operations of the radio telescope, astronomy and atmospheric sciences. The center is named after the financial foundation that honors Ángel Ramos, owner of the El Mundo newspaper and founder of Telemundo. The Foundation provided half of the funds to build the Visitor Center, with the remainder received from private donations and Cornell University.
The center, in collaboration with the Caribbean Astronomical Society, hosts a series of Astronomical Nights throughout the year, which feature diverse discussions regarding exoplanets, astronomical phenomena, and discoveries (such as Comet ISON). The purposes of the center are to increase public interest in astronomy, the observatory's research successes, and space endeavors.
List of directors
Source(s):
1960–1965: William E. Gordon
1965–1966: John W. Findlay
1966–1968: Frank Drake
1968–1971: Gordon Pettengill
1971–1973: Tor Hagfors
1973–1982: Harold D. Craft Jr.
1982–1987: Donald B. Campbell
1987–1988: Riccardo Giovanelli
1988–1992: Michael M. Davis
1992–2003: Daniel R. Altschuler
2003–2006: Sixto A. González
2006–2007: Timothy H. Hankins
2007–2008: Robert B. Kerr
2008–2011: Michael C. Nolan
2011–2015: Robert B. Kerr
2016–2022: Francisco Córdova
2022–2023: Olga Figueroa
2023–present: Wanda Liz Díaz Merced (Arecibo C3, a STEM education center)
| Technology | Ground-based observatories | null |
159209 | https://en.wikipedia.org/wiki/Conduct%20disorder | Conduct disorder | Conduct disorder (CD) is a mental disorder diagnosed in childhood or adolescence that presents itself through a repetitive and persistent pattern of behavior that includes theft, lies, physical violence that may lead to destruction, and reckless breaking of rules, in which the basic rights of others or major age-appropriate norms are violated. These behaviors are often referred to as "antisocial behaviors", and the disorder is often seen as the precursor to antisocial personality disorder; however, the latter, by definition, cannot be diagnosed until the individual is 18 years old. Conduct disorder may result from parental rejection and neglect, and in such cases can be treated with family therapy, as well as behavioral modification and pharmacotherapy. It may also be caused by environmental lead exposure. Conduct disorder is estimated to affect 51.1 million people globally.
Signs and symptoms
One of the symptoms of conduct disorder is a lower level of fear. Research performed on toddlers exposed to fear and distress shows that negative emotionality (fear) predicts toddlers' empathy-related response to distress. The findings support the idea that if a caregiver is able to respond to infant cues, the toddler has a better ability to respond to fear and distress. If a child does not learn how to handle fear or distress, the child will be more likely to lash out at other children. If the caregiver is able to provide therapeutic intervention teaching at-risk children better empathy skills, the child will have a lower incidence of conduct disorder.
The condition is also linked to a rise in violent and antisocial behaviour; examples may range from pushing, hitting, and biting when the child is young, progressing towards beating and inflicted cruelty as the child becomes older. Additionally, self-harm has been observed in children with conduct disorder (CD). A predisposition towards impulsivity and lowered emotional intelligence have been cited as contributing factors to this phenomenon. However, further studies must be conducted in order to determine direct causal links.
Conduct disorder can present with limited prosocial emotions, lack of remorse or guilt, lack of empathy, lack of concern for performance, and shallow or deficient affect. Symptoms vary by individual, but the four main groups of symptoms are described below.
Aggression to people and animals
Often bullies, threatens or intimidates others
Often initiates physical fights
Has used a weapon that can cause serious physical harm to others (e.g., a bat, brick, broken bottle, knife, gun)
Has been physically cruel to people
Has been physically cruel to animals
Has stolen while confronting a victim (e.g., mugging, purse snatching, extortion, armed robbery)
Feels no remorse or empathy towards the harm, fear, or pain they may have inflicted on others
Destruction of property
Has deliberately engaged in fire setting with the intention of causing serious damage
Has deliberately destroyed others' property (other than by fire setting)
Deceitfulness or theft
Has broken into someone else's house, other building, car, or other vehicle, etc.
Often lies to obtain goods or favors or to avoid obligations (i.e., "cons" others)
Has stolen items of nontrivial value without confronting a victim (e.g., shoplifting, but without breaking and entering; forgery)
Serious violations of rules
Often stays out at night despite parental prohibitions, beginning before age 13
Has run away from home overnight at least twice while living in parental or parental surrogate home (or once without returning for a lengthy period)
Is often truant from school, beginning before age 13
The lack of empathy these individuals have, and the aggression that accompanies this disregard for consequences, is dangerous not only for the individual but for those around them.
Developmental course
Currently, two possible developmental courses are thought to lead to conduct disorder. The first is known as the "childhood-onset type" and occurs when conduct disorder symptoms are present before the age of 10 years. This course is often linked to a more persistent life course and more pervasive behaviors. Specifically, children in this group have greater levels of ADHD symptoms, neuropsychological deficits, more academic problems, increased family dysfunction and higher likelihood of aggression and violence.
There is debate among professionals regarding the validity and appropriateness of diagnosing young children with conduct disorder. The characteristics of the diagnosis are commonly seen in young children who are referred to mental health professionals. A premature diagnosis made in young children, and thus labeling and stigmatizing an individual, may be inappropriate. It is also argued that some children may not in fact have conduct disorder, but are engaging in developmentally appropriate disruptive behavior.
The second developmental course is known as the "adolescent-onset type" and occurs when conduct disorder symptoms are present after the age of 10 years. Individuals with adolescent-onset conduct disorder exhibit less impairment than those with the childhood-onset type and are not characterized by similar psychopathology. At times, these individuals will remit in their deviant patterns before adulthood. Research has shown that there is a greater number of children with adolescent-onset conduct disorder than those with childhood-onset, suggesting that adolescent-onset conduct disorder is an exaggeration of developmental behaviors that are typically seen in adolescence, such as rebellion against authority figures and rejection of conventional values. However, this argument is not established and empirical research suggests that these subgroups are not as valid as once thought.
In addition to these two courses that are recognized by the DSM-IV-TR, there appears to be a relationship among oppositional defiant disorder, conduct disorder, and antisocial personality disorder. Specifically, research has demonstrated continuity in the disorders such that conduct disorder is often diagnosed in children who have been previously diagnosed with oppositional defiant disorder, and most adults with antisocial personality disorder were previously diagnosed with conduct disorder. For example, some research has shown that 90% of children diagnosed with conduct disorder had a previous diagnosis of oppositional defiant disorder. Moreover, both disorders share relevant risk factors and disruptive behaviors, suggesting that oppositional defiant disorder is a developmental precursor and milder variant of conduct disorder. However, this is not to say that this trajectory occurs in all individuals. In fact, only about 25% of children with oppositional defiant disorder will receive a later diagnosis of conduct disorder. Correspondingly, there is an established link between conduct disorder and the diagnosis of antisocial personality disorder as an adult. In fact, the current diagnostic criteria for antisocial personality disorder require a conduct disorder diagnosis before the age of 15. However, again, only 25–40% of youths with conduct disorder will develop an antisocial personality disorder. Nonetheless, many of the individuals who do not meet full criteria for antisocial personality disorder still exhibit a pattern of social and personal impairments or antisocial behaviors. These developmental trajectories suggest the existence of antisocial pathways in certain individuals, which have important implications for both research and treatment.
Associated conditions
Children with conduct disorder have a high risk of developing other adjustment problems. Specifically, risk factors associated with conduct disorder and the effects of conduct disorder symptomatology on a child's psychosocial context have been linked to overlapping with other psychological disorders. In this way, there seems to be reciprocal effects of comorbidity with certain disorders, leading to increased overall risk for these youth.
Attention deficit hyperactivity disorder
ADHD is the condition most commonly associated with conduct disorders, with approximately 25–30% of boys and 50–55% of girls with conduct disorder having a comorbid ADHD diagnosis. While it is unlikely that ADHD alone is a risk factor for developing conduct disorder, children who exhibit hyperactivity and impulsivity along with aggression is associated with the early onset of conduct problems. Moreover, children with comorbid conduct disorder and ADHD show more severe aggression.
Substance use disorders
Conduct disorder is also highly associated with both substance use and abuse. Children with conduct disorder have an earlier onset of substance use, as compared to their peers, and also tend to use multiple substances. However, substance use disorders themselves can directly or indirectly cause conduct disorder like traits in about half of adolescents who have a substance use disorder. As mentioned above, it seems that there is a transactional relationship between substance use and conduct problems, such that aggressive behaviors increase substance use, which leads to increased aggressive behavior.
Substance use in conduct disorder can lead to antisocial behavior in adulthood.
Schizophrenia
Conduct disorder is a precursor to schizophrenia in a minority of cases, with about 40% of men and 31% of women with schizophrenia meeting criteria for childhood conduct disorder.
Cause
While the cause of conduct disorder is complicated by an intricate interplay of biological and environmental factors, identifying underlying mechanisms is crucial for obtaining accurate assessment and implementing effective treatment. These mechanisms serve as the fundamental building blocks on which evidence-based treatments are developed. Despite the complexities, several domains have been implicated in the development of conduct disorder including cognitive variables, neurological factors, intraindividual factors, familial and peer influences, and wider contextual factors. These factors may also vary based on the age of onset, with different variables related to early (e.g., neurodevelopmental basis) and adolescent (e.g., social/peer relationships) onset.
Risks
The development of conduct disorder is not immutable or predetermined. A number of interactive risk and protective factors exist that can influence and change outcomes, and in most cases conduct disorder develops due to an interaction and gradual accumulation of risk factors. In addition to the risk factors identified under cause, several other variables place youth at increased risk for developing the disorder, including child physical abuse, in-utero alcohol exposure, and maternal smoking during pregnancy. Protective factors have also been identified, and most notably include high IQ, being female, positive social orientations, good coping skills, and supportive family and community relationships.
However, a correlation between a particular risk factor and a later developmental outcome (such as conduct disorder) cannot be taken as definitive evidence for a causal link. Co-variation between two variables can arise, for instance, if they represent age-specific expressions of similar underlying genetic factors. There have been studies that found that, although smoking during pregnancy does contribute to increased levels of antisocial behaviour, in mother-fetus pairs that were not genetically related (by virtue of in-vitro fertilisation), no link between smoking during pregnancy and later conduct problems was found. Thus, the distinction between causality and correlation is an important consideration.
Learning disabilities
While language impairments are most common, approximately 20–25% of youth with conduct disorder have some type of learning disability. Although the relationship between the disorders is complex, it seems as if learning disabilities result from a combination of ADHD, a history of academic difficulty and failure, and long-standing socialization difficulties with family and peers. However, confounding variables, such as language deficits, SES disadvantage, or neurodevelopmental delay also need to be considered in this relationship, as they could help explain some of the association between conduct disorder and learning problems.
Cognitive factors
In terms of cognitive function, intelligence and cognitive deficits are common among youths with conduct disorder, particularly those with early onset, who tend to have intelligence quotients (IQ) one standard deviation below the mean and severe deficits in verbal reasoning and executive function. Executive function difficulties may manifest in one's ability to shift between tasks, to plan and organize, and to inhibit a prepotent response. These findings hold true even after taking into account other variables such as socioeconomic status (SES) and education. However, IQ and executive function deficits are only one piece of the puzzle, and the magnitude of their influence is increased during transactional processes with environmental factors.
Brain differences
Beyond difficulties in executive function, neurological research on youth with conduct disorder also demonstrate differences in brain anatomy and function that reflect the behaviors and mental anomalies associated in conduct disorder. Compared to normal controls, youths with early and adolescent onset of conduct disorder displayed reduced responses in brain regions associated with social behavior (i.e., amygdala, ventromedial prefrontal cortex, insula, and orbitofrontal cortex). In addition, youths with conduct disorder also demonstrated less responsiveness in the orbitofrontal regions of the brain during a stimulus-reinforcement and reward task. This provides a neural explanation for why youths with conduct disorder may be more likely to repeat poor decision making patterns. Lastly, youths with conduct disorder display a reduction in grey matter volume in the amygdala, which may account for the fear conditioning deficits. This reduction has been linked to difficulty processing social emotional stimuli, regardless of the age of onset. Aside from the differences in neuroanatomy and activation patterns between youth with conduct disorder and controls, neurochemical profiles also vary between groups. Individuals with conduct disorder are characterized as having reduced serotonin and cortisol levels (e.g., reduced hypothalamic-pituitary-adrenal (HPA) axis), as well as reduced autonomic nervous system (ANS) functioning. These reductions are associated with the inability to regulate mood and impulsive behaviors, weakened signals of anxiety and fear, and decreased self-esteem. Taken together, these findings may account for some of the variance in the psychological and behavioral patterns of youth with conduct disorder.
Intra-individual factors
Aside from findings related to neurological and neurochemical profiles of youth with conduct disorder, intraindividual factors such as genetics may also be relevant. Having a sibling or parent with conduct disorder increases the likelihood of having the disorder, with a heritability estimate of 0.53. There also tends to be a stronger genetic link for individuals with childhood-onset compared to adolescent-onset conduct disorder. In addition, youth with conduct disorder also exhibit polymorphism in the monoamine oxidase A gene, low resting heart rates, and increased testosterone.
Family and peer influences
Elements of the family and social environment may also play a role in the development and maintenance of conduct disorder. For instance, antisocial behavior suggestive of conduct disorder is associated with single parent status, parental divorce, large family size, and the young age of mothers. However, these factors are difficult to tease apart from other demographic variables that are known to be linked with conduct disorder, including poverty and low socioeconomic status. Family functioning and parent–child interactions also play a substantial role in childhood aggression and conduct disorder, with low levels of parental involvement, inadequate supervision, and unpredictable discipline practices reinforcing youth's defiant behaviors. Moreover, maternal depression has a significant impact on conduct disordered children and can lead to negative reciprocal feedback between the mother and conduct disordered child. Peer influences have also been related to the development of antisocial behavior in youth, particularly peer rejection in childhood and association with deviant peers. Peer rejection is not only a marker of a number of externalizing disorders, but also a contributing factor for the continuity of the disorders over time. Hinshaw and Lee (2003) also explain that association with deviant peers has been thought to influence the development of conduct disorder in two ways: 1) a "selection" process whereby youth with aggressive characteristics choose deviant friends, and 2) a "facilitation" process whereby deviant peer networks bolster patterns of antisocial behavior. In a separate study by Bonin and colleagues, parenting programs were shown to positively affect child behavior and reduce costs to the public sector.
Wider contextual factors
In addition to the individual and social factors associated with conduct disorder, research has highlighted the importance of environment and context in youth with antisocial behavior. However, it is important to note that these are not static factors, but rather transactional in nature (e.g., individuals are influenced by and also influence their environment). For instance, neighborhood safety and exposure to violence have been studied in conjunction with conduct disorder, but it is not simply the case that youth with aggressive tendencies reside in violent neighborhoods. Transactional models propose that youth may resort to violence more often as a result of exposure to community violence, but their predisposition towards violence also contributes to neighborhood climate.
Diagnosis
Conduct disorder is classified in the fourth edition of Diagnostic and Statistical Manual of Mental Disorders (DSM). It is diagnosed based on a prolonged pattern of antisocial behaviour such as serious violation of laws and social norms and rules in people younger than the age of 18. Similar criteria are used in those over the age of 18 for the diagnosis of antisocial personality disorder. No proposed revisions for the main criteria of conduct disorder exist in the DSM-5; there is a recommendation by the work group to add an additional specifier for callous and unemotional traits. According to DSM-5 criteria for conduct disorder, there are four categories that could be present in the child's behavior: aggression to people and animals, destruction of property, deceitfulness or theft, and serious violation of rules.
Almost all adolescents who have a substance use disorder have conduct disorder-like traits, but after successful treatment of the substance use disorder, about half of these adolescents no longer display conduct disorder-like symptoms. Therefore, it is important to exclude a substance-induced cause and instead address the substance use disorder prior to making a psychiatric diagnosis of conduct disorder.
Treatment
First-line treatment is psychotherapy based on behavior modification and problem-solving skills. This treatment seeks to integrate individual, school, and family settings. Parent-management training can also be helpful. No medications have been FDA approved for conduct disorder, but risperidone (a second-generation antipsychotic) has the most evidence to support its use for aggression in children who have not responded to behavioral and psychosocial interventions. Selective Serotonin Reuptake Inhibitors (SSRIs) are also sometimes used to treat irritability in these patients.
Prognosis
About 25–40% of youths diagnosed with conduct disorder qualify for a diagnosis of antisocial personality disorder when they reach adulthood. For those that do not develop ASPD, most still exhibit social dysfunction in adult life.
Epidemiology
Conduct disorder is estimated to affect 51.1 million people globally as of 2013. The percentage of children affected by conduct disorder is estimated to range from 1–10%. However, among incarcerated youth or youth in juvenile detention facilities, rates of conduct disorder are between 23% and 87%.
Sex differences
The majority of research on conduct disorder suggests that there are a significantly greater number of males than females with the diagnosis, with some reports demonstrating a threefold to fourfold difference in prevalence. However, this difference may be somewhat biased by the diagnostic criteria which focus on more overt behaviors, such as aggression and fighting, which are more often exhibited by males. Females are more likely to be characterized by covert behaviors, such as stealing or running away. Moreover, conduct disorder in females is linked to several negative outcomes, such as antisocial personality disorder and early pregnancy, suggesting that sex differences in disruptive behaviors need to be more fully understood.
Females are more responsive to peer pressure including feelings of guilt than males.
Racial differences
Research on racial or cultural differences on the prevalence or presentation of conduct disorder is limited. However, according to studies on American youth, it appears that black youths are more often diagnosed with conduct disorder, while Asian youths are about one-third as likely to be diagnosed with conduct disorder when compared to white youths. It has been widely theorized for decades that this disparity is due to unconscious bias in those who give the diagnosis.
| Biology and health sciences | Mental disorders | Health |
159225 | https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics | Fermi–Dirac statistics | Fermi–Dirac statistics is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926. Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics.
Fermi–Dirac statistics applies to identical and indistinguishable particles with half-integer spin (1/2, 3/2, etc.), called fermions, in thermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particle energy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied to electrons, a type of fermion with spin 1/2.
A counterpart to Fermi–Dirac statistics is Bose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) called bosons. In classical physics, Maxwell–Boltzmann statistics is used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics.
History
Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current. It was also difficult to understand why the emission currents generated by applying high electric fields to metals at room temperature were almost independent of temperature.
The difficulty encountered by the Drude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant kB.
This problem remained unsolved until the development of Fermi–Dirac statistics.
Fermi–Dirac statistics was first published in 1926 by Enrico Fermi and Paul Dirac. According to Max Born, Pascual Jordan developed in 1925 the same statistics, which he called Pauli statistics, but it was not published in a timely manner. According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions".
Fermi–Dirac statistics was applied in 1926 by Ralph Fowler to describe the collapse of a star to a white dwarf. In 1927 Arnold Sommerfeld applied it to electrons in metals and developed the free electron model, and in 1928 Fowler and Lothar Nordheim applied it to field electron emission from metals. Fermi–Dirac statistics continue to be an important part of physics.
Fermi–Dirac distribution
For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle state $i$ is given by the Fermi–Dirac (F–D) distribution:

$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1},$$

where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $\varepsilon_i$ is the energy of the single-particle state $i$, and $\mu$ is the total chemical potential. The distribution is normalized by the condition

$$\sum_i \bar{n}_i = N,$$

which can be used to express the chemical potential as a function $\mu = \mu(T, N)$, in which $\mu$ can assume either a positive or negative value.
At zero absolute temperature, $\mu$ is equal to the Fermi energy plus the potential energy per fermion, provided it is in a neighbourhood of positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, the point of symmetry $\mu$ is typically called the Fermi level or—for electrons—the electrochemical potential, and will be located in the middle of the gap.
The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect on $\mu$. Since the Fermi–Dirac distribution was derived using the Pauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that $0 < \bar{n}_i < 1$.
The variance of the number of particles in state $i$ can be calculated from the above expression for $\bar{n}_i$:

$$V(n_i) = k_B T \frac{\partial \bar{n}_i}{\partial \mu} = \bar{n}_i \left(1 - \bar{n}_i\right).$$
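As a concrete numerical illustration of these two formulas (a minimal sketch; the temperature, chemical potential, and state energies below are arbitrary example values, not taken from the article):

```python
import numpy as np

def fermi_dirac(eps, mu, T, kB=8.617333262e-5):   # kB in eV/K
    """Average occupancy <n_i> of a single-particle state of energy eps (eV)."""
    return 1.0 / (np.exp((eps - mu) / (kB * T)) + 1.0)

T, mu = 300.0, 5.0                  # example temperature (K) and chemical potential (eV)
eps = np.linspace(4.9, 5.1, 5)      # state energies straddling mu, in eV

n = fermi_dirac(eps, mu, T)
var = n * (1.0 - n)                 # V(n_i) = <n_i>(1 - <n_i>)

for e, occ, v in zip(eps, n, var):
    print(f"eps = {e:.2f} eV   <n> = {occ:.4f}   var = {v:.4f}")
# Occupancy falls from ~1 below mu to ~0 above it over a few kB*T (~26 meV),
# and the variance peaks at eps = mu, where <n> = 1/2.
```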
Distribution of particles over energy
From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy. The average number of fermions with energy $\varepsilon_i$ can be found by multiplying the Fermi–Dirac distribution $\bar{n}_i$ by the degeneracy $g_i$ (i.e. the number of states with energy $\varepsilon_i$):

$$\bar{n}(\varepsilon_i) = g_i\, \bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$

When $g_i \geq 2$, it is possible that $\bar{n}(\varepsilon_i) > 1$, since there is more than one state that can be occupied by fermions with the same energy $\varepsilon_i$.

When a quasi-continuum of energies $\varepsilon$ has an associated density of states $g(\varepsilon)$ (i.e. the number of states per unit energy range per unit volume), the average number of fermions per unit energy range per unit volume is

$$\bar{\mathcal{N}}(\varepsilon) = g(\varepsilon)\, F(\varepsilon),$$

where $F(\varepsilon)$ is called the Fermi function and is the same function that is used for the Fermi–Dirac distribution $\bar{n}_i$:

$$F(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}.$$
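As an illustration (a sketch assuming the standard free-electron density of states $g(\varepsilon) = \frac{1}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}$ and an assumed 5 eV chemical potential; none of these numbers come from the article), integrating $g(\varepsilon)F(\varepsilon)$ recovers the familiar zero-temperature electron density:

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34  # reduced Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
kB = 1.380649e-23       # Boltzmann constant, J/K
eV = 1.602176634e-19    # 1 eV in J

def g(eps):
    """Free-electron density of states per unit volume and energy (SI units)."""
    return (2.0 * m_e / hbar**2) ** 1.5 * np.sqrt(eps) / (2.0 * np.pi**2)

def F(eps, mu, T):
    """Fermi function, with the exponent clipped to avoid overflow."""
    x = np.clip((eps - mu) / (kB * T), -700.0, 700.0)
    return 1.0 / (np.exp(x) + 1.0)

mu, T = 5.0 * eV, 300.0   # assumed chemical potential (metallic scale) and temperature

n_T, _ = quad(lambda e: g(e) * F(e, mu, T), 0.0, 20.0 * eV, points=[mu])
n_0 = (2.0 * m_e * mu / hbar**2) ** 1.5 / (3.0 * np.pi**2)   # exact T = 0 result

print(f"n(300 K)  = {n_T:.3e} m^-3")
print(f"n(T -> 0) = {n_0:.3e} m^-3")   # nearly identical, since kB*T << mu
```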
Quantum and classical regimes
The Fermi–Dirac distribution approaches the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions:
In the limit of low particle density, $\bar{n}_i \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_B T} + 1 \gg 1$, or equivalently $e^{(\varepsilon_i - \mu)/k_B T} \gg 1$. In that case, $\bar{n}_i \approx e^{-(\varepsilon_i - \mu)/k_B T}$, which is the result from Maxwell–Boltzmann statistics.
In the limit of high temperature, the particles are distributed over a large range of energy values, therefore the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_B T$) is again very small, $\bar{n}_i \ll 1$. This again reduces to Maxwell–Boltzmann statistics.
The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. For example, in semiconductor physics, when the density of states of the conduction band is much higher than the doping concentration, the energy gap between the conduction band and the Fermi level can be calculated using Maxwell–Boltzmann statistics; otherwise, if the doping concentration is not negligible compared to the density of states of the conduction band, the Fermi–Dirac distribution should be used instead for an accurate calculation. It can then be shown that the classical situation prevails when the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles:

$$\bar{R} \gg \bar{\lambda} \approx \frac{h}{\sqrt{3 m k_B T}},$$

where $h$ is the Planck constant and $m$ is the mass of a particle.
For the case of conduction electrons in a typical metal at $T = 300\ \mathrm{K}$ (i.e. approximately room temperature), the system is far from the classical regime because $\bar{R} \approx \bar{\lambda}/25$. This is due to the small mass of the electron and the high concentration (i.e. small $\bar{R}$) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.
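A back-of-the-envelope check (a sketch; the electron concentration is an assumed value roughly matching copper) makes the failure of the classical criterion explicit:

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
m_e = 9.1093837015e-31   # electron mass, kg

T = 300.0                # room temperature, K
n = 8.5e28               # assumed conduction-electron concentration (~copper), m^-3

lam = h / np.sqrt(3.0 * m_e * kB * T)   # average de Broglie wavelength
R = n ** (-1.0 / 3.0)                   # average interparticle separation

print(f"lambda = {lam * 1e9:.2f} nm, R = {R * 1e9:.3f} nm, R/lambda = {R / lam:.3f}")
# R comes out roughly 25-30 times smaller than lambda, so the classical
# criterion R >> lambda fails badly and Fermi-Dirac statistics is required.
```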
Another example of a system that is not in the classical regime is the system consisting of the electrons of a star that has collapsed to a white dwarf. Although the temperature of a white dwarf is high (typically $T \approx 10^4\ \mathrm{K}$ on its surface), its high electron concentration and the small mass of each electron preclude using a classical approximation, and again Fermi–Dirac statistics is required.
Derivations
Grand canonical ensemble
The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir.
In other words, each single-particle level is a separate, tiny grand canonical ensemble.
By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy $E = 0$), or one particle (energy $E = \varepsilon$). The resulting partition function for that single-particle level therefore has just two terms:

$$\mathcal{Z} = e^{-\beta \cdot 0} + e^{-\beta(\varepsilon - \mu)} = 1 + e^{-(\varepsilon - \mu)/k_B T},$$

and the average particle number for that single-particle level is given by

$$\bar{n} = k_B T \frac{1}{\mathcal{Z}} \left(\frac{\partial \mathcal{Z}}{\partial \mu}\right)_{V,T} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}.$$
This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system.
The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution):

$$\langle (\Delta N)^2 \rangle = k_B T \frac{\partial \bar{n}}{\partial \mu} = \bar{n}\left(1 - \bar{n}\right).$$
This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas, where the ability of an energy level to contribute to transport phenomena is proportional to $\langle (\Delta N)^2 \rangle$.
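Both results can be verified numerically from the two-term partition function alone, using the thermodynamic identity $\bar{n} = k_B T\, \partial \ln \mathcal{Z} / \partial \mu$ (a minimal sketch with arbitrary example parameters):

```python
import numpy as np

kB, T = 1.0, 1.0            # work in units where kB*T = 1 (arbitrary choice)
beta = 1.0 / (kB * T)
eps, mu = 0.3, 0.1          # example level energy and chemical potential

lnZ = lambda m: np.log(1.0 + np.exp(beta * (m - eps)))  # two-term partition function

# Mean occupancy from <n> = kB*T * d(ln Z)/d(mu), via central finite difference.
dmu = 1e-6
n_mean = kB * T * (lnZ(mu + dmu) - lnZ(mu - dmu)) / (2.0 * dmu)

n_fd = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)   # closed-form Fermi-Dirac
print(n_mean, n_fd)          # agree to numerical precision
print(n_fd * (1.0 - n_fd))   # Bernoulli variance <n>(1 - <n>)
```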
Canonical ensemble
It is also possible to derive Fermi–Dirac statistics in the canonical ensemble. Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium. Since there is negligible interaction between the fermions, the energy $E_R$ of a state $R$ of the many-particle system can be expressed as a sum of single-particle energies:

$$E_R = \sum_r n_r \varepsilon_r,$$

where $n_r$ is called the occupancy number and is the number of particles in the single-particle state $r$ with energy $\varepsilon_r$. The summation is over all possible single-particle states $r$.
The probability that the many-particle system is in the state $R$ is given by the normalized canonical distribution:

$$P_R = \frac{e^{-\beta E_R}}{\displaystyle\sum_{R'} e^{-\beta E_{R'}}},$$

where $\beta = 1/k_B T$, $e^{-\beta E_R}$ is called the Boltzmann factor, and the summation is over all possible states $R'$ of the many-particle system. The average value for an occupancy number $n_i$ is

$$\bar{n}_i = \sum_R n_i\, P_R.$$
Note that the state $R$ of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying $n_1, n_2, \ldots$, so that

$$P_R = P_{n_1, n_2, \ldots} = \frac{e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots)}}{\displaystyle\sum_{n_1', n_2', \ldots} e^{-\beta(n_1'\varepsilon_1 + n_2'\varepsilon_2 + \cdots)}},$$

and the equation for $\bar{n}_i$ becomes

$$\bar{n}_i = \sum_{n_1, n_2, \ldots} n_i\, P_{n_1, n_2, \ldots},$$

where the summation is over all combinations of values of $n_1, n_2, \ldots$ which obey the Pauli exclusion principle, with $n_r = 0$ or $1$ for each $r$. Furthermore, each combination of values of $n_1, n_2, \ldots$ satisfies the constraint that the total number of particles is $N$:

$$\sum_r n_r = N.$$
Rearranging the summations,

$$\bar{n}_i = \frac{\displaystyle\sum_{n_i = 0}^{1} n_i\, e^{-\beta n_i \varepsilon_i} \sum^{(i)} e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots)}}{\displaystyle\sum_{n_i = 0}^{1} e^{-\beta n_i \varepsilon_i} \sum^{(i)} e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots)}},$$

where the upper index $(i)$ on the summation sign indicates that the sum is not over $n_i$ and is subject to the constraint that the total number of particles associated with the summation is $N_i = N - n_i$. Note that $\sum^{(i)}$ still depends on $N$ through the $N_i$ constraint, since in one case $n_i = 0$ and $\sum^{(i)}$ is evaluated with $N_i = N$, while in the other case $n_i = 1$ and $\sum^{(i)}$ is evaluated with $N_i = N - 1$. To simplify the notation and to clearly indicate that $\sum^{(i)}$ still depends on $N$ through $N - n_i$, define

$$Z_i(N - n_i) \equiv \sum^{(i)} e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots)},$$

so that the previous expression for $\bar{n}_i$ can be rewritten and evaluated in terms of the $Z_i$:

$$\bar{n}_i = \frac{\displaystyle\sum_{n_i = 0}^{1} n_i\, e^{-\beta n_i \varepsilon_i}\, Z_i(N - n_i)}{\displaystyle\sum_{n_i = 0}^{1} e^{-\beta n_i \varepsilon_i}\, Z_i(N - n_i)} = \frac{e^{-\beta \varepsilon_i}\, Z_i(N-1)}{Z_i(N) + e^{-\beta \varepsilon_i}\, Z_i(N-1)} = \frac{1}{\left[Z_i(N)/Z_i(N-1)\right] e^{\beta \varepsilon_i} + 1}.$$
The following approximation will be used to find an expression to substitute for $Z_i(N)/Z_i(N-1)$:

$$\ln Z_i(N-1) \simeq \ln Z_i(N) - \frac{\partial \ln Z_i(N)}{\partial N} = \ln Z_i(N) + \beta \mu_i,$$

where

$$\mu_i \equiv -k_B T\, \frac{\partial \ln Z_i(N)}{\partial N}.$$

If the number of particles $N$ is large enough so that the change in the chemical potential $\mu$ is very small when a particle is added to the system, then $\mu_i \simeq \mu$. Applying the exponential function to both sides, substituting for $\mu_i$ and rearranging,

$$\frac{Z_i(N)}{Z_i(N-1)} = e^{-\mu / k_B T}.$$

Substituting the above into the equation for $\bar{n}_i$, and using a previous definition of $\beta$ to substitute $1/k_B T$ for $\beta$, results in the Fermi–Dirac distribution:

$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$
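The large-$N$ step is the only approximation in this derivation, and it can be checked by brute force on a toy system: enumerate every Pauli-allowed occupation pattern at fixed $N$, average with the canonical weights, and compare against the Fermi–Dirac form with $\mu$ fitted to the same $N$ (a sketch with arbitrary example energies; with only a few particles the agreement is approximate, exactly as the derivation predicts):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import brentq

eps = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])  # single-particle energies (arbitrary)
N, beta = 3, 1.0                                 # fixed particle number; beta = 1/(kB*T)

# Every Pauli-allowed state R of the whole system is a choice of N occupied levels.
states = list(combinations(range(len(eps)), N))
weights = np.array([np.exp(-beta * eps[list(s)].sum()) for s in states])
P = weights / weights.sum()                      # normalized canonical distribution P_R

n_avg = np.zeros(len(eps))
for s, p in zip(states, P):
    n_avg[list(s)] += p                          # <n_i> = sum_R n_i P_R

# Fermi-Dirac comparison: fit mu so the occupancies sum to the same N.
fd = lambda m: 1.0 / (np.exp(beta * (eps - m)) + 1.0)
mu = brentq(lambda m: fd(m).sum() - N, -20.0, 20.0)

print("exact  <n_i>:", np.round(n_avg, 3))   # sums to N exactly
print("approx <n_i>:", np.round(fd(mu), 3))  # close, and converges as N grows
```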
Like the Maxwell–Boltzmann distribution and the Bose–Einstein distribution, the Fermi–Dirac distribution can also be derived by the Darwin–Fowler method of mean values.
Microcanonical ensemble
A result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers.
Suppose we have a number of energy levels, labeled by index i, each level having energy εi and containing a total of ni particles. Suppose each level contains gi distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel.
The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation:

$$w(n_i, g_i) = \binom{g_i}{n_i} = \frac{g_i!}{n_i!\,(g_i - n_i)!}.$$
For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011 for a total of three ways which equals 3!/(2!1!).
The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated:

$$W = \prod_i w(n_i, g_i) = \prod_i \frac{g_i!}{n_i!\,(g_i - n_i)!}.$$
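Both counting formulas are easy to verify directly (a short sketch reusing the 2-particles-in-3-sublevels example from the text; the occupation numbers and degeneracies in the second half are arbitrary):

```python
from itertools import product
from math import comb

def ways(n_i, g_i):
    """Count occupation patterns of g_i sublevels holding exactly n_i fermions."""
    return sum(1 for bits in product((0, 1), repeat=g_i) if sum(bits) == n_i)

print(ways(2, 3), comb(3, 2))   # both 3: the patterns 110, 101, 011

# Total multiplicity W for a set of occupation numbers {n_i} and degeneracies {g_i}.
n = [2, 1, 3]   # arbitrary occupation numbers
g = [3, 4, 5]   # arbitrary degeneracies
W = 1
for n_i, g_i in zip(n, g):
    W *= comb(g_i, n_i)
print(W)        # 3 * 4 * 10 = 120
```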
Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers, forming the function:

$$f(n_i) = \ln W + \alpha\Bigl(N - \sum_i n_i\Bigr) + \beta\Bigl(E - \sum_i n_i \varepsilon_i\Bigr).$$
Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers:

$$n_i = \frac{g_i}{e^{\alpha + \beta \varepsilon_i} + 1}.$$
By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that $\beta = \frac{1}{k_B T}$ and $\alpha = -\frac{\mu}{k_B T}$, so that finally, the probability that a state will be occupied is

$$\bar{n}(\varepsilon_i) = \frac{n_i}{g_i} = \frac{1}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$
| Physical sciences | Statistical mechanics | Physics |
159266 | https://en.wikipedia.org/wiki/Gene%20expression | Gene expression | Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product (a protein or a non-coding RNA) that ultimately affects a phenotype. These products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA.
The process of gene expression is used by all known life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and utilized by viruses—to generate the macromolecular machinery for life.
In genetics, gene expression is the most fundamental level at which the genotype gives rise to the phenotype, i.e. observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways.
All steps in the gene expression process may be modulated (regulated), including the transcription, RNA splicing, translation, and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on the cellular structure and function. Regulation of gene expression is the basis for cellular differentiation, development, morphogenesis and the versatility and adaptability of any organism. Gene regulation may therefore serve as a substrate for evolutionary change.
Mechanism
Transcription
The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand, following the complementarity of the nucleotide bases. This RNA is complementary to the template 3′ → 5′ DNA strand, except that thymines (T) are replaced with uracils (U) in the RNA (barring occasional transcription errors).
In bacteria, transcription is carried out by a single type of RNA polymerase, which needs to bind a DNA sequence called a Pribnow box with the help of the sigma factor protein (σ factor) to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which needs a special DNA sequence called the promoter and a set of DNA-binding proteins—transcription factors—to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs (e.g., snRNAs, snoRNAs or long non-coding RNAs). RNA polymerase III transcribes 5S rRNA, transfer RNA (tRNA) genes, and some small non-coding RNAs (e.g., 7SK). Transcription ends when the polymerase encounters a sequence called the terminator.
mRNA processing
While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript of RNA (pre-RNA), which first has to undergo a series of modifications to become a mature RNA. Types and steps involved in the maturation processes vary between coding and non-coding preRNAs; i.e. even though preRNA molecules for both mRNA and tRNA undergo splicing, the steps and machinery involved are different. The processing of non-coding RNA is described below (non-coding RNA maturation).
The processing of pre-mRNA includes 5′ capping, a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of the pre-mRNA and thus protect the RNA from degradation by exonucleases. The m7G cap is then bound by the cap binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping.
Another modification is 3′ cleavage and polyadenylation. These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually between the protein-coding sequence and the terminator. The pre-mRNA is first cleaved, and then a series of ~200 adenines (A) is added to form the poly(A) tail, which protects the RNA from degradation. The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay.
A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. During the process of splicing, an RNA-protein catalytic complex known as the spliceosome catalyzes two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice neighbouring exons together. In certain cases, some introns or exons can be either removed or retained in the mature mRNA. This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can potentially be translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome.
Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur.
Non-coding RNA maturation
In most organisms non-coding genes (ncRNA) are transcribed as precursors that undergo further processing. In the case of ribosomal RNAs (rRNA), they are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNA species, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular, a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus.
In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. In the case of micro RNA (miRNA), miRNAs are first transcribed as primary transcripts or pri-miRNA with a cap and poly-A tail and processed to short, 70-nucleotide stem-loop structures known as pre-miRNA in the cell nucleus by the enzymes Drosha and Pasha. After being exported, it is then processed to mature miRNAs in the cytoplasm by interaction with the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), composed of the Argonaute protein.
Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs.
RNA export
In eukaryotes most mature RNA must be exported to the cytoplasm from the nucleus. While some RNAs function in the nucleus, many RNAs are transported through the nuclear pores and into the cytosol. Export of RNAs requires association with specific proteins known as exportins. Specific exportin molecules are responsible for the export of a given RNA type. mRNA transport also requires the correct association with Exon Junction Complex (EJC), which ensures that correct processing of the mRNA is completed before export. In some cases RNAs are additionally transported to a specific part of the cytoplasm, such as a synapse; they are then towed by motor proteins that bind through linker proteins to specific sequences (called "zipcodes") on the RNA.
Translation
For some non-coding RNA, the mature RNA is the final gene product. In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic.
Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries information for protein synthesis encoded by the genetic code to form triplets. Each triplet of nucleotides of the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. Amino acids are then chained together by the ribosome according to the order of triplets in the coding region. The ribosome helps transfer RNA bind to messenger RNA, takes the amino acid from each transfer RNA, and joins them into a growing, as yet unstructured, polypeptide chain. Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals.
In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still in the process of being created. In eukaryotes translation can occur in a variety of regions of the cell depending on where the protein being written is supposed to be. Major locations are the cytoplasm for soluble cytoplasmic proteins and the membrane of the endoplasmic reticulum for proteins that are for export from the cell or insertion into a cell membrane. Proteins that are supposed to be produced at the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle—a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain.
Folding
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA into a linear chain of amino acids. This polypeptide lacks any developed three-dimensional structure (the left hand side of the neighboring figure). The polypeptide then folds into its characteristic and functional three-dimensional structure from a random coil. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right hand side of the figure) known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma).
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded. Failure to fold into the intended shape usually produces inactive proteins with different properties, including toxic prions. Several neurodegenerative and other diseases are believed to result from the accumulation of misfolded proteins. Many allergies are caused by the incorrect folding of some proteins, because the immune system does not produce antibodies for certain protein structures.
Enzymes called chaperones assist the newly formed protein to attain (fold into) the 3-dimensional structure it needs to function. Similarly, RNA chaperones help RNAs attain their functional shapes. Assisting protein folding is one of the main roles of the endoplasmic reticulum in eukaryotes.
Translocation
Secretory proteins of eukaryotes or prokaryotes must be translocated to enter the secretory pathway. Newly synthesized proteins are directed to the eukaryotic Sec61 or prokaryotic SecYEG translocation channel by signal peptides. The efficiency of protein secretion in eukaryotes is very dependent on the signal peptide which has been used.
Protein transport
Many proteins are destined for other parts of the cell than the cytosol and a wide range of signalling sequences or (signal peptides) are used to direct proteins to where they are supposed to be. In prokaryotes this is normally a simple process due to limited compartmentalisation of the cell. However, in eukaryotes there is a great variety of different targeting processes to ensure the protein arrives at the correct organelle.
Not all proteins remain within the cell and many are exported, for example, digestive enzymes, hormones and extracellular matrix proteins. In eukaryotes the export pathway is well developed and the main mechanism for the export of these proteins is translocation to the endoplasmic reticulum, followed by transport via the Golgi apparatus.
Regulation of gene expression
Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation, morphogenesis and the versatility and adaptability of any organism.
Numerous terms are used to describe types of genes depending on how they are regulated; these include:
A constitutive gene is a gene that is transcribed continually as opposed to a facultative gene, which is only transcribed when needed.
A housekeeping gene is a gene that is required to maintain basic cellular function and so is typically expressed in all cell types of an organism. Examples include actin, GAPDH and ubiquitin. Some housekeeping genes are transcribed at a relatively constant rate and these genes can be used as a reference point in experiments to measure the expression rates of other genes.
A facultative gene is a gene only transcribed when needed as opposed to a constitutive gene.
An inducible gene is a gene whose expression is either responsive to environmental change or dependent on the position in the cell cycle.
Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether it is RNA or protein, also contributes to the expression level of the gene—an unstable product results in a low expression level. In general gene expression is regulated through changes in the number and type of interactions between molecules that collectively influence transcription of DNA and translation of RNA.
Some simple examples of where gene expression is important are:
Control of insulin expression so it gives a signal for blood glucose regulation.
X chromosome inactivation in female mammals to prevent an "overdose" of the genes it contains.
Cyclin expression levels control progression through the eukaryotic cell cycle.
Transcriptional regulation
Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery), and epigenetic (non-sequence changes in DNA structure that influence transcription).
Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. There are many classes of regulatory DNA binding sites known as enhancers, insulators and silencers. The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding.
The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation, acetylation, or glycosylation. These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule.
The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. Environmental stimuli or endocrine signals may cause modification of regulatory proteins eliciting cascades of intracellular signals, which result in regulation of gene expression.
It has become apparent that there is a significant influence of non-DNA-sequence specific effects on transcription. These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence specific DNA binding proteins and chemical modification of DNA. In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription.
In eukaryotes the structure of chromatin, controlled by the histone code, regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas.
Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription
Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Enhancers and their associated transcription factors have a leading role in the regulation of gene expression.
Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression.
The illustration shows an enhancer looping around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1). One member of the dimer is anchored to its binding motif on the enhancer and the other member is anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function-specific transcription factors (among the about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer. A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
DNA methylation and demethylation in transcriptional regulation
DNA methylation is a widespread mechanism for epigenetic influence on gene expression and is seen in bacteria and eukaryotes and has roles in heritable transcription silencing and transcription regulation. Methylation most often occurs on a cytosine. Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. Depending on the type of cell, about 70% of the CpG sites have a methylated cytosine.
Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
Transcriptional regulation in learning and memory
In a rat, contextual fear conditioning (CFC) is a painful learning experience. Just one episode of CFC can result in a life-long fearful memory. After an episode of CFC, cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat. The hippocampus is where new memories are initially stored. After CFC about 500 genes have increased transcription (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes have decreased transcription (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain.
Some specific mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment have been established (see for summary). One mechanism includes guiding the short isoform of the TET1 DNA demethylation enzyme, TET1s, to about 600 locations on the genome. The guidance is performed by association of TET1s with EGR1 protein, a transcription factor important in memory formation. Bringing TET1s to these locations initiates DNA demethylation at those sites, up-regulating associated genes. A second mechanism involves DNMT3A2, a splice-isoform of DNA methyltransferase DNMT3A, which adds methyl groups to cytosines in DNA. This isoform is induced by synaptic activity, and its location of action appears to be determined by histone post-translational modifications (a histone code). The resulting new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they can be translated into proteins affecting the activities of synapses.
In particular, the brain-derived neurotrophic factor gene (BDNF) is known as a "learning gene". After CFC there was upregulation of BDNF gene expression, related to decreased CpG methylation of certain internal promoters of the gene, and this was correlated with learning.
Transcriptional regulation in cancer
The majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-transcribed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Post-transcriptional regulation
In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins.
Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. In a typical cell, an RNA molecule is only stable if specifically protected from degradation. RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail.
Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3′-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Translational regulation
Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. Inhibition of protein translation is a major target for toxins and antibiotics, so they can kill a cell by overriding its normal gene expression control. Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin.
Post-translational modifications
Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible.
PTMs play many important roles in the cell. For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. They can also be seen in the immune system, where glycosylation plays a key role. One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death.
Measurement
Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a lot of valuable information. For example, measuring gene expression can:
Identify viral infection of a cell (viral protein expression).
Determine an individual's susceptibility to cancer (oncogene expression).
Find if a bacterium is resistant to penicillin (beta-lactamase expression).
Gene expression profiling evaluates a panel of genes to help understand the fundamental mechanism of a cell. This is increasingly used in cancer therapy to target specific chemotherapy. (See RNA-Seq and DNA microarray for details.)
Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA and to infer gene-expression levels from these measurements.
mRNA quantification
Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. The radiolabeled RNA is then detected by an autoradiograph. Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. Perceived disadvantages of Northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. On the other hand, the additional mRNA size information from the Northern blot allows the discrimination of alternately spliced transcripts.
Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR. Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA. The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes.
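To illustrate the standard-curve step described above (a minimal sketch; the Ct values, copy numbers, and the quantify helper are hypothetical examples, not part of any established qPCR software):

```python
import numpy as np

# Dilution series of a known template: copies per reaction vs. measured Ct.
# (Hypothetical example values; an ideal assay loses ~3.32 cycles per 10-fold dilution.)
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
ct = np.array([13.1, 16.5, 19.8, 23.2, 26.5])

# Standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0    # ~1.0 corresponds to 100% amplification

def quantify(ct_sample):
    """Invert the standard curve to estimate absolute copies in an unknown sample."""
    return 10.0 ** ((ct_sample - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct 21.0: ~{quantify(21.0):.2e} copies")
```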
For expression profiling, or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. A second approach is the hybridization microarray. A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. Alternatively, "tag based" technologies like Serial analysis of gene expression (SAGE) and RNA-Seq, which can provide a relative measure of the cellular concentration of different mRNAs, can be used. An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms, splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available.
RNA profiles in Wikipedia
Profiles like these are found for almost all proteins listed in Wikipedia. They are generated by organizations such as the Genomics Institute of the Novartis Research Foundation and the European Bioinformatics Institute. Additional information can be found by searching their databases (for an example of the GLUT4 transporter pictured here, see citation). These profiles indicate the level of DNA expression (and hence RNA produced) of a certain protein in a certain tissue, and are color-coded accordingly in the images located in the Protein Box on the right side of each Wikipedia page.
Protein quantification
For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.
One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.
mRNA-protein correlation
While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability. Post-translational factors, such as protein transport in highly polar cells, can influence the measured mRNA-protein correlation as well.
Localization
Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is.
By replacing the gene with a new version fused to a green fluorescent protein marker or similar, expression may be directly quantified in live cells. This is done by imaging using a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. By fusing a target protein to a fluorescent reporter, the protein's behavior, including its cellular localization and expression level, can be significantly changed.
The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved.
Expression system
An expression system is a system specifically designed for the production of a gene product of choice. This is normally a protein, although it may also be RNA, such as tRNA or a ribozyme. An expression system consists of a gene, normally encoded by DNA, and the molecular machinery required to transcribe the DNA into mRNA and translate the mRNA into protein using the reagents provided. In the broadest sense this includes every living cell, but the term is more normally used to refer to expression as a laboratory tool. An expression system is therefore often artificial in some manner, although it exploits a fundamentally natural process; viruses are an excellent example, replicating by using the host cell as an expression system for the viral proteins and genome.
Inducible expression
For example, doxycycline is used in "Tet-on" and "Tet-off" tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures.
In nature
In addition to these biological tools, certain naturally observed configurations of DNA (genes, promoters, enhancers, repressors) and the associated machinery itself are referred to as an expression system. This term is normally used in the case where a gene or set of genes is switched on under well defined conditions, for example, the simple repressor switch expression system in Lambda phage and the lac operator system in bacteria. Several natural expression systems are directly used or modified and used for artificial expression systems such as the Tet-on and Tet-off expression system.
Gene networks
Genes have sometimes been regarded as nodes in a network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The node itself performs a function, and the operation of these functions has been interpreted as performing a kind of information processing within cells that determines cellular behavior.
Gene networks can also be constructed without formulating an explicit causal model. This is often the case when assembling networks from large expression data sets. Covariation and correlation of expression are computed across a large sample of cases and measurements (often transcriptome or proteome data). The source of variation can be either experimental or natural (observational). There are several ways to construct gene expression networks, but one common approach is to compute a matrix of all pair-wise correlations of expression across conditions, time points, or individuals and convert the matrix (after thresholding at some cut-off value) into a graphical representation in which nodes represent genes, transcripts, or proteins and edges connecting these nodes represent the strength of association (see GeneNetwork and GeneNetwork 2).
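A minimal version of that correlation-and-threshold pipeline (a sketch using randomly generated data in place of a real expression matrix; the 0.8 cutoff and matrix dimensions are arbitrary choices) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 30))                # toy matrix: 20 genes x 30 samples
expr[1] = expr[0] + 0.1 * rng.normal(size=30)   # make genes 0 and 1 co-vary strongly

corr = np.corrcoef(expr)           # 20 x 20 matrix of pairwise Pearson correlations
threshold = 0.8                    # arbitrary cut-off
adjacency = (np.abs(corr) >= threshold) & ~np.eye(len(corr), dtype=bool)

# Edges of the co-expression graph: pairs whose |correlation| passes the cut-off.
edges = [(i, j) for i in range(len(corr)) for j in range(i + 1, len(corr))
         if adjacency[i, j]]
print(edges)   # expected: [(0, 1)] -- only the engineered pair survives
```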
Techniques and tools
The following experimental techniques are used to measure gene expression and are listed in roughly chronological order, starting with the older, more established technologies. They are divided into two groups based on their degree of multiplexity.
Low-to-mid-plex techniques:
Reporter gene
Northern blot
Western blot
Fluorescent in situ hybridization
Reverse transcription PCR
Higher-plex techniques:
SAGE
DNA microarray
Tiling array
RNA-Seq
Gene expression databases
Gene expression omnibus (GEO) at NCBI
Expression Atlas at the EBI
Bgee at the SIB Swiss Institute of Bioinformatics
Mouse Gene Expression Database at the Jackson Laboratory
CollecTF: a database of experimentally validated transcription factor-binding sites in Bacteria.
COLOMBOS: collection of bacterial expression compendia.
Many Microbe Microarrays Database: microbial Affymetrix data
| Biology and health sciences | Genetics and taxonomy | null |
159292 | https://en.wikipedia.org/wiki/Potassium%20chloride | Potassium chloride | Potassium chloride (KCl, or potassium salt) is a metal halide salt composed of potassium and chlorine. It is odorless and has a white or colorless vitreous crystal appearance. The solid dissolves readily in water, and its solutions have a salt-like taste. Potassium chloride can be obtained from ancient dried lake deposits. KCl is used as a fertilizer, in medicine, in scientific applications, domestic water softeners (as a substitute for sodium chloride salt), and in food processing, where it may be known as E number additive E508.
It occurs naturally as the mineral sylvite, which is named after the salt's historical designations sal degistivum Sylvii and sal febrifugum Sylvii, and in combination with sodium chloride as sylvinite.
Uses
Fertilizer
The majority of the potassium chloride produced is used for making fertilizer, called potash, since the growth of many plants is limited by potassium availability. The term "potash" refers to various mined and manufactured salts that contain potassium in water-soluble form. Potassium chloride sold as fertilizer is known as "muriate of potash", the common name for potassium chloride (KCl) used in agriculture. The vast majority of potash fertilizer worldwide is sold as muriate of potash. The dominance of muriate of potash in the fertilizer market is due to its high potassium content (approximately 60% K2O equivalent) and relative affordability compared to other potassium sources like sulfate of potash (potassium sulfate). Potassium is one of the three primary macronutrients essential for plant growth, alongside nitrogen and phosphorus. Potassium plays a vital role in various plant physiological processes, including enzyme activation, photosynthesis, protein synthesis, and water regulation. For watering plants, a moderate concentration of potassium chloride (KCl) is used to avoid potential toxicity: 6 mM (millimolar) is generally effective and safe for most plants; that is approximately 0.45 g of KCl per liter of water.
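The figure above is simple molar arithmetic, as the sketch below shows (standard atomic masses assumed for K and Cl):

```python
# Convert the 6 mM KCl watering solution to grams per liter.
M_K, M_Cl = 39.098, 35.453            # standard atomic masses, g/mol
M_KCl = M_K + M_Cl                    # 74.551 g/mol

molarity = 6e-3                       # mol/L (6 mM)
print(f"{molarity * M_KCl:.3f} g/L")  # ~0.447 g of KCl per liter of water
```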
Medical use
Potassium is vital in the human body, and potassium chloride by mouth is the standard means to treat low blood potassium, although it can also be given intravenously. It is on the World Health Organization's List of Essential Medicines. It is also an ingredient in oral rehydration solution (ORS), used in oral rehydration therapy (ORT) to reduce hypokalemia caused by diarrhoea; ORS is likewise on the WHO's List of Essential Medicines.
Potassium chloride contains 52% of elemental potassium by mass.
Overdose causes hyperkalemia which can disrupt cell signaling to the extent that the heart will stop, reversibly in the case of some open heart surgeries.
Culinary use
Potassium chloride can be used as a salt substitute for food, but due to its weak, bitter, unsalty flavor, it is often mixed with ordinary table salt (sodium chloride) to improve the taste, to form low sodium salt. The addition of 1 ppm of thaumatin considerably reduces this bitterness. Complaints of bitterness or a chemical or metallic taste are also reported with potassium chloride used in food.
Execution
In the United States, potassium chloride is used as the final drug in the three-injection sequence of lethal injection as a form of capital punishment. It induces cardiac arrest, ultimately killing the inmate.
Industrial
As a chemical feedstock, the salt is used for the manufacture of potassium hydroxide and potassium metal. It is also used in medicine, lethal injections, scientific applications, food processing, soaps, and as a sodium-free substitute for table salt for people concerned about the health effects of sodium.
It is used as a supplement in animal feed to boost the potassium level in the feed. As an added benefit, it is known to increase milk production.
It is sometimes used in solution as a completion fluid in petroleum and natural gas operations, as well as being an alternative to sodium chloride in household water softener units.
Glass manufacturers use granular potash as a flux, lowering the temperature at which a mixture melts. Because potash imparts excellent clarity to glass, it is commonly used in eyeglasses, glassware, televisions, and computer monitors.
Because natural potassium contains a tiny amount of the isotope potassium-40, potassium chloride is used as a beta radiation source to calibrate radiation monitoring equipment. It also emits a relatively low level of 511 keV gamma rays from positron annihilation, which can be used to calibrate medical scanners.
Potassium chloride is used in some de-icing products designed to be safer for pets and plants, though these are inferior in melting quality to calcium chloride. It is also used in various brands of bottled water.
Potassium chloride was once used as a fire extinguishing agent, and in portable and wheeled fire extinguishers. Known as Super-K dry chemical, it was more effective than sodium bicarbonate-based dry chemicals and was compatible with protein foam. This agent fell out of favor with the introduction of potassium bicarbonate (Purple-K) dry chemical in the late 1960s, which was much less corrosive, as well as more effective. It is rated for B and C fires.
Along with sodium chloride and lithium chloride, potassium chloride is used as a flux for the gas welding of aluminium.
Potassium chloride is also an optical crystal with a wide transmission range from 210 nm to 20 μm. While cheap, KCl crystals are hygroscopic. This limits its application to protected environments or short-term uses such as prototyping. Exposed to free air, KCl optics will "rot". Whereas KCl components were formerly used for infrared optics, they have been entirely replaced by much tougher crystals such as zinc selenide.
Potassium chloride is used as a scotophor with designation P10 in dark-trace CRTs, e.g. in the Skiatron.
Toxicity
The typical amounts of potassium chloride found in the diet appear to be generally safe. In larger quantities, however, potassium chloride is toxic. The median lethal dose (LD50) of orally ingested potassium chloride is approximately 2.5 g/kg of body mass. In comparison, the oral LD50 of sodium chloride (table salt) is 3.75 g/kg.
Intravenously, the LD50 of potassium chloride is far smaller, at about 57.2 mg/kg to 66.7 mg/kg; this is found by dividing the lethal dose of potassium ions (about 30 to 35 mg/kg) by the proportion by mass of potassium ions in potassium chloride (about 0.52445 mg K+ per mg KCl).
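These figures can be checked with a short back-of-the-envelope calculation; the sketch below assumes only the standard molar masses of potassium and chlorine.

```python
# Back-of-the-envelope check of the figures above: the potassium mass
# fraction of KCl (~52%) and the intravenous LD50 range derived from it.
M_K, M_CL = 39.098, 35.453       # standard molar masses, g/mol

k_fraction = M_K / (M_K + M_CL)  # proportion of K+ by mass in KCl
print(f"K+ mass fraction: {k_fraction:.5f}")      # ~0.52445, i.e. ~52%

# Dividing a lethal dose of K+ ions by the K+ mass fraction gives the
# corresponding dose of KCl.
for k_dose_mg_kg in (30, 35):
    print(f"{k_dose_mg_kg} mg/kg K+ -> {k_dose_mg_kg / k_fraction:.1f} mg/kg KCl")
# 30 -> ~57.2 mg/kg and 35 -> ~66.7 mg/kg, matching the range quoted above
```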
Chemical properties
Solubility
KCl is soluble in a variety of polar solvents.
Solutions of KCl are common standards, for example for calibration of the electrical conductivity of (ionic) solutions, since KCl solutions are stable, allowing for reproducible measurements. In aqueous solution, it is essentially fully ionized into solvated K+ and Cl− ions.
Redox and the conversion to potassium metal
Although potassium is more electropositive than sodium, KCl can be reduced to the metal by reaction with metallic sodium at 850 °C because the more volatile potassium can be removed by distillation (see Le Chatelier's principle):
KCl(l) + Na(l) ⇌ NaCl(l) + K(g)
This is the main method for producing metallic potassium. Electrolysis (used for sodium) fails because of the high solubility of potassium in molten KCl.
Other potassium chloride stoichiometries
Potassium chlorides with formulas other than KCl have been predicted to become stable under pressures of 20 GPa or more. Among these, two phases of KCl3 have been synthesized and characterized. At 20–40 GPa, a trigonal structure containing K+ and Cl3− ions is obtained; above 40 GPa this gives way to a phase isostructural with the intermetallic compound Cr3Si.
Physical properties
Under ambient conditions, the crystal structure of potassium chloride is like that of NaCl. It adopts a face-centered cubic structure known as the B1 phase with a lattice constant of roughly 6.3 Å. Crystals cleave easily in three directions. Other polymorphic and hydrated phases are adopted at high pressures.
Some other properties are:
Transmission range: 210 nm to 20 μm
Transmittivity = 92% at 450 nm and rises linearly to 94% at 16 μm
Refractive index = 1.456 at 10 μm
Reflection loss = 6.8% at 10 μm (two surfaces)
dN/dT (refractive index gradient) = −33.2×10−6/°C
dL/dT (thermal expansion coefficient) = 40×10−6/°C
Thermal conductivity = 0.036 W/(cm·K)
Damage threshold (Newman and Novak): 4 GW/cm2 or 2 J/cm2 (0.5 or 1 ns pulse duration); 4.2 J/cm2 (1.7 ns pulse duration; Kovalev and Faizullov)
As with other compounds containing potassium, KCl in powdered form gives a lilac flame.
Production
Potassium chloride is extracted from the minerals sylvite, carnallite, and potash. It is also extracted from salt water and can be manufactured by crystallization from solution, by flotation, or by electrostatic separation from suitable minerals. It is a by-product of the production of nitric acid from potassium nitrate and hydrochloric acid.
Most potassium chloride is produced as agricultural and industrial-grade potash in Saskatchewan, Canada, Russia, and Belarus. Saskatchewan alone accounted for over 25% of the world's potash production in 2017.
Laboratory methods
Potassium chloride is inexpensively available and is rarely prepared intentionally in the laboratory. It can be generated by treating potassium hydroxide (or other potassium bases) with hydrochloric acid:
KOH + HCl → KCl + H2O
This conversion is an acid-base neutralization reaction. The resulting salt can then be purified by recrystallization. Another method would be to allow potassium to burn in the presence of chlorine gas, also a very exothermic reaction:
2 K + Cl2 → 2 KCl
| Physical sciences | Halide salts | Chemistry |
159298 | https://en.wikipedia.org/wiki/Ultralight%20aviation | Ultralight aviation | Ultralight aviation (called microlight aviation in some countries) is the flying of lightweight, 1- or 2-seat fixed-wing aircraft. Some countries differentiate between weight-shift control and conventional three-axis control aircraft with ailerons, elevator and rudder, calling the former "microlight" and the latter "ultralight".
During the late 1970s and early 1980s, mostly stimulated by the hang gliding movement, many people sought affordable powered flight. As a result, many aviation authorities set up definitions of lightweight, slow-flying aeroplanes that could be subject to minimum regulations. The resulting aeroplanes are commonly called "ultralight aircraft" or "microlights", although the weight and speed limits differ from country to country. In Europe, the sporting (FAI) definition limits the maximum stalling speed to 65 km/h and the maximum take-off weight to 450 kg (472.5 kg if a ballistic parachute is installed). These limits mean that the aircraft has a slow landing speed and a short landing roll in the event of an engine failure.
In most affluent countries, microlights or ultralight aircraft now account for a significant share of the civilian-owned aircraft fleet. In Canada, for instance, ultralights made up 20.4% of registered civilian aircraft as of February 2018. In countries that do not register ultralight aircraft, such as the United States, the proportion of the total fleet they represent is unknown. In countries with no specific extra regulation, ultralights are considered regular aircraft and are subject to certification requirements for both aircraft and pilot.
Definitions
Australia
In Australia, ultralight aircraft and their pilots can either be registered with the Hang Gliding Federation of Australia (HGFA) or Recreational Aviation Australia (RA Aus). In all cases, except for privately built single seat ultralight aeroplanes, microlight aircraft or trikes are regulated by the Civil Aviation Regulations.
Canada
United Kingdom
Pilots of powered, fixed-wing aircraft or paramotors do not need a licence, provided the aircraft's weight with a full fuel tank does not exceed the prescribed limit, but they must obey the rules of the air.
For heavier microlights the current UK regulations are similar to the European ones, but helicopters and gyroplanes are not included.
Other than the very earliest aircraft, all two-seat UK microlights (and until 2007 all single-seaters) have been required to meet an airworthiness standard: BCAR Section S.
In 2007, Single Seat DeRegulated (SSDR), a sub-category of single-seat aircraft, was introduced, allowing owners more freedom for modification and experimentation. By 2017 the airworthiness of all single-seat microlights had become solely the responsibility of the user, but pilots must hold a microlight licence, currently the National Private Pilot's Licence (Microlight), or NPPL(M).
New Zealand
Ultralights in New Zealand are subject to NZCAA General Aviation regulations with microlight specific variations as described in Part 103 and AC103-1.
United States
The United States FAA's definition of an ultralight is significantly different from that in most other countries and can lead to some confusion when discussing the topic. The governing regulation in the United States is FAR 103 Ultralight Vehicles. In 2004, the FAA introduced the "Light-sport aircraft" category, which resembles some other countries' microlight categories. Ultralight aviation is represented by the United States Ultralight Association (USUA), which acts as the US aeroclub representative to the Fédération Aéronautique Internationale.
Types
There are several categories of aircraft which qualify as ultralights in some countries:
Fixed-wing aircraft: traditional airplane-style designs.
Weight-shift control trike: a hang glider-style wing, below which is suspended a three-wheeled carriage carrying the engine and aviators. These aircraft are controlled by pushing against a horizontal control bar, in roughly the same way a hang glider pilot flies.
Powered parachute: fuselage-mounted engines with parafoil wings, which are wheeled aircraft.
Powered paraglider: backpack engines with parafoil wings, which are foot-launched.
Powered hang glider: motorized foot-launched hang glider harness.
Autogyro: a rotary wing with a fuselage-mounted engine. A gyrocopter differs from a helicopter in that its rotating wing is not powered: the engine provides forward thrust, and the airflow through the rotor blades causes them to autorotate, or "spin up", thereby creating lift.
Helicopter: there are a number of single-seat and two-place helicopters which fall under the microlight categories in countries such as New Zealand. However, few helicopter designs fall within the more restrictive ultralight category defined in the United States of America.
Hot air balloon: there are numerous ultralight hot air balloons in the US, and several more have been built and flown in France and Australia in recent years. Some ultralight hot air balloons are hopper balloons, while others are regular hot air balloons that carry passengers in a basket.
Electric
Advancements in batteries, motors, and motor controllers have led to practical production electric propulsion systems for some ultralight applications. In many ways, ultralights are a good application for electric power, as some models are capable of flying with low power, which allows longer-duration flights on battery power.
In 2007, the first pioneering company in this field, the Electric Aircraft Corporation, began offering engine kits to convert ultralight weight-shift trikes to electric power. Designer Randall Fishman claims an efficiency of 90% for the 18 hp motor. The battery is a 5.6 kWh lithium-polymer pack, which provides 1.5 hours of flying in the trike application. The company claimed a flight recharge cost of 60 cents in 2007.
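A rough sanity check of these figures, assuming the usual conversion of 1 hp ≈ 0.746 kW, shows why low-power cruise suits battery flight:

```python
# Average power drawn from a 5.6 kWh pack over a 1.5-hour flight,
# compared with the motor's 18 hp rating. Values are those quoted above.
PACK_KWH = 5.6
FLIGHT_HOURS = 1.5
HP_TO_KW = 0.7457                        # assumed unit conversion

avg_draw_kw = PACK_KWH / FLIGHT_HOURS    # average electrical draw in cruise
rated_kw = 18 * HP_TO_KW                 # motor's rated output

print(f"average draw: {avg_draw_kw:.1f} kW")  # ~3.7 kW, about 5 hp
print(f"rated output: {rated_kw:.1f} kW")     # ~13.4 kW
```

The average draw works out to roughly a quarter of the motor's rated power, consistent with the claim that low-power cruise permits flights of useful duration.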
A significant obstacle to the adoption of electric propulsion for ultralights in the U.S. is the weight of the battery, which is considered part of the empty weight of the aircraft despite efforts to have it considered as fuel. As the specific energy of batteries improves, lighter batteries can be used.
| Technology | Types of aircraft | null |
159421 | https://en.wikipedia.org/wiki/Urination | Urination | Urination is the release of urine from the bladder to the outside of the body. Urine is released through the urethra and exits the penis or vulva through the urinary meatus in placental mammals, but is released through the cloaca in other vertebrates. It is the urinary system's form of excretion. It is also known medically as micturition, voiding, uresis, or, rarely, emiction, and known colloquially by various names including peeing, weeing, pissing, and euphemistically number one. The process of urination is under voluntary control in healthy humans and other animals, but may occur as a reflex in infants, some elderly individuals, and those with neurological injury. It is normal for adult humans to urinate up to seven times during the day.
In some animals, in addition to expelling waste material, urination can mark territory or express submissiveness. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. Brain centres that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex.
Anatomy and physiology
Anatomy of the bladder and outlet
The main organs involved in urination are the urinary bladder and the urethra. The smooth muscle of the bladder, known as the detrusor, is innervated by sympathetic nervous system fibers from the lumbar spinal cord and parasympathetic fibers from the sacral spinal cord. Fibers in the pelvic nerves constitute the main afferent limb of the voiding reflex; the parasympathetic fibers to the bladder that constitute the excitatory efferent limb also travel in these nerves. Part of the urethra is surrounded by the male or female external urethral sphincter, which is innervated by the somatic pudendal nerve originating in the sacral spinal cord, in an area termed Onuf's nucleus.
Smooth muscle bundles pass on either side of the urethra, and these fibers are sometimes called the internal urethral sphincter, although they do not encircle the urethra. Further along the urethra is a sphincter of skeletal muscle, the sphincter of the membranous urethra (external urethral sphincter). The bladder's epithelium is termed transitional epithelium which contains a superficial layer of dome-like cells and multiple layers of stratified cuboidal cells underneath when evacuated. When the bladder is fully distended the superficial cells become squamous (flat) and the stratification of the cuboidal cells is reduced in order to provide lateral stretching.
Physiology
The physiology of micturition and the physiologic basis of its disorders are subjects about which there is much confusion, especially at the supraspinal level. Micturition is fundamentally a spinobulbospinal reflex facilitated and inhibited by higher brain centers such as the pontine micturition center and, like defecation, subject to voluntary facilitation and inhibition.
In healthy individuals, the lower urinary tract has two discrete phases of activity: the storage (or guarding) phase, when urine is stored in the bladder; and the voiding phase, when urine is released through the urethra. The state of the reflex system depends on both a conscious signal from the brain and the firing rate of sensory fibers from the bladder and urethra. At low bladder volumes, afferent firing is low, resulting in excitation of the outlet (the sphincter and urethra) and relaxation of the bladder. At high bladder volumes, afferent firing increases, causing a conscious sensation of urinary urge. An individual who is ready to urinate consciously initiates voiding, causing the bladder to contract and the outlet to relax. Voiding continues until the bladder empties completely, at which point the bladder relaxes and the outlet contracts to re-initiate storage. The muscles controlling micturition are controlled by the autonomic and somatic nervous systems. During the storage phase, the internal urethral sphincter remains tense and the detrusor muscle relaxed by sympathetic stimulation. During micturition, parasympathetic stimulation causes the detrusor muscle to contract and the internal urethral sphincter to relax. The external urethral sphincter (sphincter urethrae) is under somatic control and is consciously relaxed during micturition.
In infants, voiding occurs involuntarily (as a reflex). The ability to voluntarily inhibit micturition develops by the age of two to three years, as control at higher levels of the central nervous system develops. In the adult, the volume of urine in the bladder that normally initiates a reflex contraction is about 300–400 mL.
Storage phase
During storage, bladder pressure stays low, because of the bladder's highly compliant nature. A plot of bladder (intravesical) pressure against the volume of fluid in the bladder (called a cystometrogram) will show a very slight rise as the bladder is filled. This phenomenon is a manifestation of the law of Laplace, which states that the pressure in a spherical viscus is equal to twice the wall tension divided by the radius. In the case of the bladder, the tension increases as the organ fills, but so does the radius. Therefore, the pressure increase is slight until the organ is relatively full. The bladder's smooth muscle has some inherent contractile activity; however, when its nerve supply is intact, stretch receptors in the bladder wall initiate a reflex contraction that has a lower threshold than the inherent contractile response of the muscle.
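A minimal numerical sketch of this relation, using arbitrary illustrative units rather than physiological data, shows how pressure can rise only slightly while tension and radius grow together:

```python
# Law of Laplace for a spherical viscus: P = 2T / r.
# Units are arbitrary; the point is the trend, not the magnitudes.
def laplace_pressure(tension: float, radius: float) -> float:
    return 2.0 * tension / radius

# Wall tension and radius both increase as the bladder fills.
for tension, radius in [(1.0, 1.0), (2.0, 1.8), (3.0, 2.5)]:
    print(f"T={tension}, r={radius} -> P={laplace_pressure(tension, radius):.2f}")
# Output: P = 2.00, 2.22, 2.40 - pressure rises only slightly even as
# wall tension triples, mirroring the flat early part of a cystometrogram.
```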
Action potentials carried by sensory neurons from stretch receptors in the urinary bladder wall travel to the sacral segments of the spinal cord through the pelvic nerves. Since bladder wall stretch is low during the storage phase, these afferent neurons fire at low frequencies. Low-frequency afferent signals cause relaxation of the bladder by inhibiting sacral parasympathetic preganglionic neurons and exciting lumbar sympathetic preganglionic neurons. Conversely, afferent input causes contraction of the sphincter through excitation of Onuf's nucleus, and contraction of the bladder neck and urethra through excitation of the sympathetic preganglionic neurons.
Diuresis (production of urine by the kidney) occurs constantly, and as the bladder becomes full, afferent firing increases, yet the micturition reflex can be voluntarily inhibited until it is appropriate to begin voiding.
Voiding phase
Voiding begins when a voluntary signal is sent from the brain to begin urination, and continues until the bladder is empty.
Bladder afferent signals ascend the spinal cord to the periaqueductal gray, where they project both to the pontine micturition center and to the cerebrum. At a certain level of afferent activity, the conscious urge to void, or urinary urgency, becomes difficult to ignore. Once the voluntary signal to begin voiding has been issued, neurons in the pontine micturition center fire maximally, causing excitation of sacral preganglionic neurons. The firing of these neurons causes the wall of the bladder to contract; as a result, a sudden, sharp rise in intravesical pressure occurs. The pontine micturition center also causes inhibition of Onuf's nucleus, resulting in relaxation of the external urinary sphincter. When the external urinary sphincter is relaxed, urine is released from the urinary bladder when the pressure there is great enough to force urine to flow out of the urethra. The micturition reflex normally produces a series of contractions of the urinary bladder.
The flow of urine through the urethra has an overall excitatory role in micturition, which helps sustain voiding until the bladder is empty.
Many men, and some women, may sometimes briefly shiver after or during urination.
After urination, the female urethra empties partially by gravity, with assistance from muscles. Urine remaining in the male urethra is expelled by several contractions of the bulbospongiosus muscle, and, by some men, manual squeezing along the length of the penis to expel the rest of the urine.
For land mammals over 1 kilogram, the duration of urination does not vary with body mass, being dispersed around an average of 21 seconds (standard deviation 13 seconds), despite a four-order-of-magnitude (10,000×) difference in bladder volume. This is because the longer urethras of large animals amplify gravitational force (and hence flow rate), and their wider urethras also increase flow rate. In smaller mammals a different phenomenon occurs: urine is discharged as droplets, and urination in animals such as mice and rats can take less than a second. The posited benefits of faster voiding are a decreased risk of predation (while voiding) and a decreased risk of urinary tract infection.
Voluntary control
The mechanism by which voluntary urination is initiated remains unsettled. One possibility is that the voluntary relaxation of the muscles of the pelvic floor causes a sufficient downward tug on the detrusor muscle to initiate its contraction. Another possibility is the excitation or disinhibition of neurons in the pontine micturition center, which causes concurrent contraction of the bladder and relaxation of the sphincter.
There is an inhibitory area for micturition in the midbrain. After transection of the brain stem just above the pons, the threshold is lowered and less bladder filling is required to trigger it, whereas after transection at the top of the midbrain, the threshold for the reflex is essentially normal. There is another facilitatory area in the posterior hypothalamus. In humans with lesions in the superior frontal gyrus, the desire to urinate is reduced and there is also difficulty in stopping micturition once it has commenced. However, stimulation experiments in animals indicate that other cortical areas also affect the process.
The bladder can be made to contract by voluntary facilitation of the spinal voiding reflex when it contains only a few milliliters of urine. Voluntary contraction of the abdominal muscles aids the expulsion of urine by increasing the pressure applied to the urinary bladder wall, but voiding can be initiated without straining even when the bladder is nearly empty.
Voiding can also be consciously interrupted once it has begun, through a contraction of the perineal muscles. The external sphincter can be contracted voluntarily, which will prevent urine from passing down the urethra.
Experience of urination
The need to urinate is experienced as an uncomfortable, full feeling. It is highly correlated with the fullness of the bladder. In many males the feeling of the need to urinate can be sensed at the base of the penis as well as the bladder, even though the neural activity associated with a full bladder comes from the bladder itself, and can be felt there as well. In females the need to urinate is felt in the lower abdomen region when the bladder is full. When the bladder becomes too full, the sphincter muscles will involuntarily relax, allowing urine to pass from the bladder. Release of urine is experienced as a lessening of the discomfort.
Disorders
Clinical conditions
Many clinical conditions can cause disturbances to normal urination, including:
Urinary incontinence, the inability to hold urine
Stress incontinence, incontinence as a result of external mechanical disturbances
Urge incontinence, incontinence that occurs as a result of the uncontrollable urge to urinate
Mixed incontinence, a combination of the two types of incontinence
Urinary retention, the inability to initiate urination
Overactive bladder, a strong urge to urinate, usually accompanied by detrusor overactivity
Interstitial cystitis, a condition characterized by urinary frequency, urgency, and pain
Prostatitis, an inflammation of the prostate gland that can cause urinary frequency, urgency, and pain
Benign prostatic hyperplasia, an enlargement of the prostate that can cause urinary frequency, urgency, retention, and the dribbling of urine
Urinary tract infection, which can cause urinary frequency and dysuria
Polyuria, abnormally large production of urine, associated with, in particular, diabetes mellitus (types 1 and 2), and diabetes insipidus
Oliguria, low urine output, usually due to a problem with the upper urinary tract
Anuria refers to absent or almost absent urine output.
Micturition syncope, a vasovagal response which may cause fainting.
Paruresis, the inability to urinate in the presence of others, such as in a public toilet.
Bladder sphincter dyssynergia, a discoordination between the bladder and external urethral sphincter as a result of brain or spinal cord injury
A drug that increases urination is called a diuretic, whereas antidiuretics decrease the production of urine by the kidneys.
Experimentally induced disorders
There are three major types of bladder dysfunction due to neural lesions: (1) the type due to interruption of the afferent nerves from the bladder; (2) the type due to interruption of both afferent and efferent nerves; and (3) the type due to interruption of facilitatory and inhibitory pathways descending from the brain. In all three types the bladder contracts, but the contractions are generally not sufficient to empty the viscus completely, and residual urine is left in the bladder. Paruresis, also known as shy bladder syndrome, is an example of an interruption originating in the brain that often persists until the person has left a public area. Affected males may have difficulty urinating in the presence of others and will consequently avoid using urinals without dividers or those directly adjacent to another person. Alternatively, they may opt for the privacy of a stall or simply avoid public toilets altogether.
Deafferentation
When the sacral dorsal roots are cut in experimental animals or interrupted by diseases of the dorsal roots such as tabes dorsalis in humans, all reflex contractions of the bladder are abolished. The bladder becomes distended, thin-walled, and hypotonic, but there are some contractions because of the intrinsic response of the smooth muscle to stretch.
Denervation
When the afferent and efferent nerves are both destroyed, as they may be by tumors of the cauda equina or filum terminale, the bladder is flaccid and distended for a while. Gradually, however, the muscle of the "decentralized bladder" becomes active, with many contraction waves that expel dribbles of urine out of the urethra. The bladder becomes shrunken and the bladder wall hypertrophied. The reason for the difference between the small, hypertrophic bladder seen in this condition and the distended, hypotonic bladder seen when only the afferent nerves are interrupted is not known. The hyperactive state in the former condition suggests the development of denervation hypersensitization even though the neurons interrupted are preganglionic rather than postganglionic.
Spinal cord injury
During spinal shock, the bladder is flaccid and unresponsive. It becomes overfilled, and urine dribbles through the sphincters (overflow incontinence). After spinal shock has passed, a spinally mediated voiding reflex ensues, although there is no voluntary control and no inhibition or facilitation from higher centers. Some paraplegic patients train themselves to initiate voiding by pinching or stroking their thighs, provoking a mild mass reflex. In some instances, the voiding reflex becomes hyperactive. Bladder capacity is reduced and the wall becomes hypertrophied. This type of bladder is sometimes called the spastic neurogenic bladder. The reflex hyperactivity is made worse, and may be caused, by infection in the bladder wall.
Techniques
Young children
A common technique used in many developing nations involves holding the child by the backs of the thighs, above the ground and facing outward, so that the child can urinate.
Fetal urination
The fetus urinates hourly and produces most of the amniotic fluid in the second and third trimester of pregnancy. The amniotic fluid is then recycled by fetal swallowing.
Urination after injury
Occasionally, if a male's penis is damaged or removed, or a female's genitals/urinary tract is damaged, other urination techniques must be used. Most often in such cases, doctors will reposition the urethra to a location where urination can still be accomplished, usually in a position that would promote urination only while seated/squatting, though a permanent urinary catheter may be used in rare cases.
Alternative urination tools
Sometimes urination is done in a container such as a bottle, urinal, bedpan, or chamber pot (also known as a gazunder). A container or wearable urine collection device may be used so that the urine can be examined for medical reasons or for a drug test, for a bedridden patient, when no toilet is available, or there is no other possibility to dispose of the urine immediately.
An alternative solution (for traveling, stakeouts, etc.) is a special disposable bag containing absorbent material that solidifies the urine within seconds, making it convenient and safe to store and dispose of later.
It is possible for both sexes to urinate into bottles in case of emergencies. The technique can help children to urinate discreetly inside cars and in other places without being seen by others. A female urination device can assist women and girls in urinating while standing or into a bottle.
In microgravity, excrement tends to float freely, so astronauts use a specially designed space toilet, which uses suction to collect and recycle urine; the space toilet also has a receptacle for defecation.
Social and cultural aspects
Art
A puer mingens is a figure in a work of art depicted as a prepubescent boy in the act of urinating, either actual or simulated. The puer mingens could represent anything from whimsy and boyish innocence to erotic symbols of virility and masculine bravado.
Toilet training
Babies have little socialized control over urination within traditions or families that do not practice elimination communication and instead use diapers. Toilet training is the process of learning to restrict urination to socially approved times and situations. Consequently, young children sometimes develop nocturnal enuresis.
Facilities
It is socially more accepted and more environmentally hygienic for those who are able, especially when indoors and in outdoor urban or suburban areas, to urinate in a toilet. Public toilets may have urinals, usually for males, although female urinals exist, designed to be used in various ways.
Urination without facilities
Acceptability of outdoor urination in a public place other than at a public urinal varies with the situation and with customs. Potential disadvantages include a dislike of the smell of urine and the exposure of genitals. These can be avoided or mitigated by going to a quiet place, by facing a tree or wall when urinating standing up, or, when squatting, by hiding behind walls, bushes, or a tree.
Portable toilets (port-a-potties) are frequently placed in outdoor situations where no immediate facility is available. These need to be serviced (cleaned out) on a regular basis. Urination in a heavily wooded area is generally harmless, actually saves water, and may be condoned for males (and less commonly, females) in certain situations as long as common sense is used. Examples (depending on circumstances) include activities such as camping, hiking, delivery driving, cross country running, rural fishing, amateur baseball, golf, etc.
The more developed and crowded a place is, the more objectionable public urination tends to be. In the countryside, it is more acceptable than in a town street, where it may be a common transgression. Often this is done after the consumption of alcoholic beverages, which causes production of additional urine as well as a reduction of inhibitions. One proposed way to inhibit public urination due to drunkenness is the Urilift, which is disguised as a normal manhole by day but rises out of the ground at night to provide a public restroom for bar-goers.
In many places, public urination is punishable by fines, though attitudes vary widely by country. In general, females are less likely to urinate in public than males. Women and girls, unlike men and boys, are restricted in where they can urinate conveniently and discreetly.
The 5th-century BC historian Herodotus, writing on the culture of the ancient Persians and highlighting the differences with those of the Greeks, noted that to urinate in the presence of others was prohibited among Persians.
There was a popular belief in the UK that it was legal for a man to urinate in public so long as it occurred on the rear wheel of his vehicle and he had his right hand on the vehicle, but this is not true. Public urination remains more accepted among males in the UK, although British cultural tradition itself seems to find the practice objectionable.
In Islamic toilet etiquette, it is haram to urinate while facing the Qibla, or to turn one's back to it, when urinating or relieving the bowels. Modesty requirements make it difficult for females to relieve themselves without facilities. When toilets are unavailable, it is accepted in an emergency for females to relieve themselves in Laos, Russia and Mongolia, but this remains less accepted for females in India even when circumstances make it a highly desirable option.
Women generally need to urinate more frequently than men but, contrary to a common misconception, this is not because they have smaller bladders. Resisting the urge to urinate because of a lack of facilities can promote urinary tract infections, which can lead to more serious infections and, in rare situations, renal damage in women. Female urination devices are available to help women urinate discreetly, as well as to help them urinate while standing.
Sitting, standing, or squatting
Techniques and body postures while urinating vary across cultures. Anatomical differences between men and women may favour different postures, yet these are largely shaped by cultural norms, types of clothing, and the sanitary facilities available. While sitting toilets are the most common form in Western countries, squat toilets are common in Asia, Africa, and the Arab world. Urinals for men are widespread worldwide, and women's urinals are available in some countries, recently becoming more common in Western countries. With the spread of trousers among women, a standing posture became impractical, but in some regions where women wear traditional skirts or robes, an upright posture is common.
Males
Cultures around the world differ regarding socially accepted voiding positions and preferences: in the Middle East and Asia, the squatting position has been more prevalent, while in the Western world the standing and sitting positions are more common. For practising Muslim men, the genital modesty of squatting is also associated with proper cleanliness requirements, or awrah. In Western culture, the standing position is regarded as the more efficient option among healthy males. In restrooms without urinals, and sometimes at home, men may be urged to use the sitting position so as to diminish the spattering of urine.
Elderly males with prostate gland enlargement may benefit from sitting down to urinate, with the seated voiding position found superior as compared with standing in elderly males with benign prostate hyperplasia.
Females
In Western culture, females usually sit or squat for urination, depending on what type of toilet they use; a squat toilet is used for urination in a squatting position. Women averting contact with a toilet seat may employ a partial squatting position (or "hovering"), similar to using a female urinal. However, this may not completely void the bladder.
Females may also urinate while standing, and while clothed. It is common for women in various regions of Africa to use this position when they urinate, as do women in Laos. Herodotus described a similar custom in ancient Egypt. An alternative method for women voiding while standing is to use a female urination device to assist.
Talking about urination
In many societies and in many social classes, even mentioning the need to urinate is seen as a social transgression, despite it being a universal need. Many adults avoid stating that they need to urinate.
Many expressions exist, some euphemistic and some vulgar. For example, centuries ago the standard English word (both noun and verb, for the product and the activity) was "piss", but subsequently "pee", formerly associated with children, has become more common in general public speech. Since elimination of bodily wastes is, of necessity, a subject talked about with toddlers during toilet training, other expressions considered suitable for use by and with children exist, and some continue to be used by adults, e.g. "weeing", "doing/having a wee-wee", "to tinkle", "go potty", "go pee pee".
Other expressions include "squirting" and "taking a leak", and, predominantly by younger persons for outdoor female urination, "popping a squat", referring to the position many women adopt in such circumstances. National varieties of English show creativity. American English uses "to whiz". Australian English has coined "I am off to take a Chinese singing lesson", derived from the tinkling sound of urination against the China porcelain of a toilet bowl. British English uses "going to see my aunt", "going to see a man about a dog", "to piddle", "to splash (one's) boots", as well as "to have a slash", which originates from the Scottish term for a large splash of liquid. One of the most common, albeit old-fashioned, euphemisms in British English is "to spend a penny", a reference to coin-operated pay toilets, which used (pre-decimalisation) to charge that sum.
Use in language
| Biology and health sciences | Basics | Biology |
159472 | https://en.wikipedia.org/wiki/Flight | Flight | Flight or flying is the motion of an object through an atmosphere, or through the vacuum of outer space, without contacting any planetary surface. This can be achieved by generating aerodynamic lift associated with gliding or propulsive thrust, aerostatically using buoyancy, or by ballistic movement.
Many things can fly, from animal aviators such as birds, bats and insects, to natural gliders/parachuters such as patagial animals, anemochorous seeds and ballistospores, to human inventions like aircraft (airplanes, helicopters, airships, balloons, etc.) and rockets which may propel spacecraft and spaceplanes.
The engineering aspects of flight are the purview of aerospace engineering which is subdivided into aeronautics, the study of vehicles that travel through the atmosphere, and astronautics, the study of vehicles that travel through space, and ballistics, the study of the flight of projectiles.
Types of flight
Buoyant flight
Humans have constructed lighter-than-air vehicles that rise off the ground and fly, owing to their buoyancy in the air.
An aerostat is a system that remains aloft primarily through the use of buoyancy to give an aircraft the same overall density as air. Aerostats include free balloons, airships, and moored balloons. An aerostat's main structural component is its envelope, a lightweight skin that encloses a volume of lifting gas to provide buoyancy, to which other components are attached.
Aerostats are so named because they use "aerostatic" lift, a buoyant force that does not require lateral movement through the surrounding air mass to effect a lifting force. By contrast, aerodynes primarily use aerodynamic lift, which requires the lateral movement of at least some part of the aircraft through the surrounding air mass.
Aerodynamic flight
Unpowered flight versus powered flight
Some things that fly do not generate propulsive thrust through the air; the flying squirrel is one example. This is termed gliding. Other things can exploit rising air to climb, such as raptors (when gliding) and man-made sailplane gliders. This is termed soaring. However, most other birds and all powered aircraft need a source of propulsion to climb. This is termed powered flight.
Animal flight
The only groups of living things that use powered flight are birds, insects, and bats, while many groups have evolved gliding. The extinct pterosaurs, an order of reptiles contemporaneous with the dinosaurs, were also very successful flying animals, and there were apparently some flying dinosaurs (see Flying and gliding animals#Non-avian dinosaurs). Each of these groups' wings evolved independently, with insects the first animal group to evolve flight. The wings of the flying vertebrate groups are all based on the forelimbs, but differ significantly in structure; insect wings are hypothesized to be highly modified versions of structures that form gills in most other groups of arthropods.
Bats are the only mammals capable of sustaining level flight (see bat flight). However, there are several gliding mammals which are able to glide from tree to tree using fleshy membranes between their limbs; some can travel hundreds of meters in this way with very little loss in height. Flying frogs use greatly enlarged webbed feet for a similar purpose, and there are flying lizards which fold out their mobile ribs into a pair of flat gliding surfaces. "Flying" snakes also use mobile ribs to flatten their body into an aerodynamic shape, with a back and forth motion much the same as they use on the ground.
Flying fish can glide using enlarged wing-like fins, and have been observed soaring for hundreds of meters. It is thought that this ability was favored by natural selection because it was an effective means of escape from underwater predators. The longest recorded flight of a flying fish was 45 seconds.
Most birds fly (see bird flight), with some exceptions. The largest birds, the ostrich and the emu, are earthbound flightless birds, as were the now-extinct dodos and the Phorusrhacids, which were the dominant predators of South America in the Cenozoic era. The non-flying penguins have wings adapted for use under water and use the same wing movements for swimming that most other birds use for flight. Most small flightless birds are native to small islands, and lead a lifestyle where flight would offer little advantage.
Among living animals that fly, the wandering albatross has the greatest wingspan, and the great bustard the greatest weight.
Most species of insects can fly as adults. Insect flight makes use of either of two basic aerodynamic models: creating a leading edge vortex, found in most insects, and using clap and fling, found in very small insects such as thrips.
Many species of spiders, spider mites and lepidoptera use a technique called ballooning to ride air currents such as thermals, by exposing gossamer threads that get lifted by wind and atmospheric electric fields.
Mechanical
Mechanical flight is the use of a machine to fly. These machines include aircraft such as airplanes, gliders, helicopters, autogyros, airships, balloons and ornithopters, as well as spacecraft. Gliders are capable of unpowered flight. Another form of mechanical flight is para-sailing, in which a parachute-like object is pulled by a boat. In an airplane, lift is created by the wings; the shape of an airplane's wings is designed specially for the type of flight desired. There are different types of wings: tapered, semi-tapered, sweptback, rectangular, and elliptical. An aircraft wing is sometimes called an airfoil: a device that creates lift when air flows across it.
Supersonic
Supersonic flight is flight faster than the speed of sound. Supersonic flight is associated with the formation of shock waves that form a sonic boom that can be heard from the ground, and is frequently startling. The creation of this shockwave requires a significant amount of energy; because of this, supersonic flight is generally less efficient than subsonic flight, which is typically flown at about 85% of the speed of sound.
Hypersonic
Hypersonic flight is very high speed flight where the heat generated by the compression of the air due to the motion through the air causes chemical changes to the air. Hypersonic flight is achieved primarily by reentering spacecraft such as the Space Shuttle and Soyuz.
Ballistic
Atmospheric
Some things generate little or no lift and move only or mostly under the action of momentum, gravity, air drag and in some cases thrust. This is termed ballistic flight. Examples include balls, arrows, bullets, fireworks etc.
Spaceflight
Essentially an extreme form of ballistic flight, spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space. Examples include ballistic missiles, orbital spaceflight, etc.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
Solid-state propulsion
In 2018, researchers at Massachusetts Institute of Technology (MIT) managed to fly an aeroplane with no moving parts, powered by an "ionic wind" also known as electroaerodynamic thrust.
History
Many human cultures have built devices that fly, from the earliest projectiles such as stones and spears, to the boomerang in Australia, the hot air Kongming lantern, and kites.
Aviation
George Cayley studied flight scientifically in the first half of the 19th century, and in the second half of the 19th century Otto Lilienthal made over 200 gliding flights and was also one of the first to understand flight scientifically. His work was replicated and extended by the Wright brothers who made gliding flights and finally the first controlled and extended, manned powered flights.
Spaceflight
Spaceflight, particularly human spaceflight became a reality in the 20th century following theoretical and practical breakthroughs by Konstantin Tsiolkovsky and Robert H. Goddard. The first orbital spaceflight was in 1957, and Yuri Gagarin was carried aboard the first crewed orbital spaceflight in 1961.
Physics
There are different approaches to flight. If an object has a lower density than air, then it is buoyant and is able to float in the air without expending energy. Heavier-than-air craft, known as aerodynes, include flighted animals and insects, fixed-wing aircraft and rotorcraft. Because such craft are heavier than air, they must generate lift to overcome their weight. The wind resistance caused by the craft moving through the air is called drag and is overcome by propulsive thrust, except in the case of gliding.
Some vehicles also use thrust in the place of lift; for example rockets and Harrier jump jets.
Forces
Forces relevant to flight are:
Propulsive thrust (except in gliders)
Lift, created by the reaction to an airflow
Drag, created by aerodynamic friction
Weight, created by gravity
Buoyancy, for lighter than air flight
These forces must be balanced for stable flight to occur.
Thrust
A fixed-wing aircraft generates forward thrust when air is pushed in the direction opposite to flight. This can be done in several ways, including by the spinning blades of a propeller, by a rotating fan pushing air out from the back of a jet engine, or by ejecting hot gases from a rocket engine. The forward thrust is proportional to the mass of the airstream multiplied by the difference in velocity of the airstream. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or by using a thrust reverser on a jet engine. Rotary-wing aircraft and thrust-vectoring V/STOL aircraft use engine thrust to support the weight of the aircraft, and vector this thrust fore and aft to control forward speed.
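A minimal sketch of this thrust relation, with purely illustrative numbers (not data for any real engine), shows how a large, slow airstream and a small, fast one can produce the same thrust:

```python
# Thrust as mass flow rate times the velocity change imparted to the airstream.
def thrust_newtons(mass_flow_kg_s: float, delta_v_m_s: float) -> float:
    """F = m_dot * (v_exit - v_inlet)."""
    return mass_flow_kg_s * delta_v_m_s

# A big fan moving a lot of air slowly vs. a small jet moving a little air fast:
print(thrust_newtons(500.0, 30.0))   # 15000.0 N
print(thrust_newtons(50.0, 300.0))   # 15000.0 N
```

Moving more air more slowly is generally the more energy-efficient way to obtain a given thrust, which is one reason high-bypass turbofans dominate subsonic transport.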
Lift
In the context of an air flow relative to a flying body, the lift force is the component of the aerodynamic force that is perpendicular to the flow direction. Aerodynamic lift results when the wing causes the surrounding air to be deflected - the air then causes a force on the wing in the opposite direction, in accordance with Newton's third law of motion.
Lift is commonly associated with the wing of an aircraft, although lift is also generated by rotors on rotorcraft (which are effectively rotating wings, performing the same function without requiring that the aircraft move forward through the air). While common meanings of the word "lift" suggest that lift opposes gravity, aerodynamic lift can be in any direction. When an aircraft is cruising for example, lift does oppose gravity, but lift occurs at an angle when climbing, descending or banking. On high-speed cars, the lift force is directed downwards (called "down-force") to keep the car stable on the road.
Drag
For a solid object moving through a fluid, the drag is the component of the net aerodynamic or hydrodynamic force acting opposite to the direction of the movement. Therefore, drag opposes the motion of the object, and in a powered vehicle it must be overcome by thrust. The process which creates lift also causes some drag.
Lift-to-drag ratio
Aerodynamic lift is created by the motion of an aerodynamic object (wing) through the air, which due to its shape and angle deflects the air. For sustained straight and level flight, lift must be equal and opposite to weight. In general, long narrow wings are able to deflect a large amount of air at a slow speed, whereas smaller wings need a higher forward speed to deflect an equivalent amount of air and thus generate an equivalent amount of lift. Large cargo aircraft tend to use longer wings with higher angles of attack, whereas supersonic aircraft tend to have short wings and rely heavily on high forward speed to generate lift.
However, this lift (deflection) process inevitably causes a retarding force called drag. Because lift and drag are both aerodynamic forces, the ratio of lift to drag is an indication of the aerodynamic efficiency of the airplane. This ratio is written L/D and pronounced "L over D". An airplane has a high L/D ratio if it produces a large amount of lift or a small amount of drag. The lift/drag ratio is determined by dividing the lift coefficient by the drag coefficient, CL/CD.
The lift coefficient CL is equal to the lift L divided by the product of the dynamic pressure and the wing area: CL = L / (½ ρ V² A), where ρ is the air density, V the velocity, and A the wing area. The lift coefficient is also affected by the compressibility of the air, which is much greater at higher speeds, so lift does not vary linearly with velocity. Compressibility is also affected by the shape of the aircraft surfaces.
The drag coefficient CD is equal to the drag D divided by the same quantity: CD = D / (½ ρ V² A), where A is here the reference area.
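A short worked example of these definitions, using hypothetical values for a light aircraft in cruise (all numbers illustrative):

```python
# Lift and drag coefficients from the definitions above.
RHO = 1.225  # sea-level air density, kg/m^3

def lift_coefficient(lift_n, area_m2, v_m_s, rho=RHO):
    return lift_n / (0.5 * rho * v_m_s**2 * area_m2)

def drag_coefficient(drag_n, area_m2, v_m_s, rho=RHO):
    return drag_n / (0.5 * rho * v_m_s**2 * area_m2)

# Hypothetical light aircraft: 5000 N lift, 400 N drag, 15 m^2 wing, 50 m/s.
cl = lift_coefficient(5000, 15, 50)
cd = drag_coefficient(400, 15, 50)
print(f"CL = {cl:.3f}, CD = {cd:.4f}, L/D = {cl / cd:.1f}")
# CL = 0.218, CD = 0.0174, L/D = 12.5 (the common half-rho-V^2-A factor
# cancels, so L/D is simply lift divided by drag)
```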
Lift-to-drag ratios for practical aircraft vary from about 4:1 for vehicles and birds with relatively short wings, up to 60:1 or more for vehicles with very long wings, such as gliders. A greater angle of attack relative to the forward movement also increases the extent of deflection, and thus generates extra lift. However a greater angle of attack also generates extra drag.
Lift/drag ratio also determines the glide ratio and gliding range. Since the glide ratio is based only on the relationship of the aerodynamics forces acting on the aircraft, aircraft weight will not affect it. The only effect weight has is to vary the time that the aircraft will glide for – a heavier aircraft gliding at a higher airspeed will arrive at the same touchdown point in a shorter time.
Buoyancy
The air pressure acting up against the underside of an object in air is greater than the pressure above pushing down on it. The buoyancy, in both cases, is equal to the weight of fluid displaced; Archimedes' principle holds for air just as it does for water.
A cubic meter of air at ordinary atmospheric pressure and room temperature has a mass of about 1.2 kilograms, so its weight is about 12 newtons. Therefore, any 1-cubic-meter object in air is buoyed up with a force of 12 newtons. If the mass of the 1-cubic-meter object is greater than 1.2 kilograms (so that its weight is greater than 12 newtons), it falls to the ground when released. If an object of this size has a mass less than 1.2 kilograms, it rises in the air. Any object that has a mass that is less than the mass of an equal volume of air will rise in air - in other words, any object less dense than air will rise.
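The same arithmetic, made explicit with the round figures used above:

```python
# Buoyant force on 1 m^3 of air: the weight of the displaced air.
AIR_DENSITY = 1.2   # kg/m^3, at ordinary pressure and room temperature
G = 9.8             # m/s^2, approximate gravitational acceleration

volume_m3 = 1.0
buoyant_force_n = AIR_DENSITY * volume_m3 * G
print(f"{buoyant_force_n:.0f} N")   # ~12 N, as stated above

# An object of this volume rises if its own weight is less than ~12 N,
# i.e. if its mass is less than 1.2 kg - if it is less dense than air.
```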
Thrust to weight ratio
Thrust-to-weight ratio is, as its name suggests, the ratio of instantaneous thrust to weight (where weight means weight at the Earth's standard acceleration, 9.80665 m/s²). It is a dimensionless parameter characteristic of rockets and other jet engines and of vehicles propelled by such engines (typically space launch vehicles and jet aircraft).
If the thrust-to-weight ratio is greater than the local gravity strength (expressed in gs), then flight can occur without any forward motion or any aerodynamic lift being required.
If the thrust-to-weight ratio times the lift-to-drag ratio is greater than local gravity then takeoff using aerodynamic lift is possible.
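A minimal sketch of these two conditions, with illustrative numbers (the thrust-to-weight ratio of a typical airliner is assumed here to be around 0.3, and its lift-to-drag ratio around 17):

```python
# The two takeoff conditions stated above, expressed as predicates.
def can_hover(thrust_to_weight: float) -> bool:
    """Flight with no forward motion or lift requires T/W > 1 (in g units)."""
    return thrust_to_weight > 1.0

def can_take_off(thrust_to_weight: float, lift_to_drag: float) -> bool:
    """Takeoff using aerodynamic lift requires (T/W) * (L/D) > 1."""
    return thrust_to_weight * lift_to_drag > 1.0

print(can_hover(0.3))            # False: the airliner cannot hover
print(can_take_off(0.3, 17.0))   # True: 0.3 * 17 = 5.1 > 1
```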
Flight dynamics
Flight dynamics is the science of air and space vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of mass, known as pitch, roll and yaw (See Tait-Bryan rotations for an explanation).
The control of these dimensions can involve a horizontal stabilizer (i.e. "a tail"), ailerons and other movable aerodynamic devices which control angular stability, i.e. flight attitude (which in turn affects altitude and heading). Wings are often angled slightly upwards; this "positive dihedral angle" gives inherent roll stabilization.
Energy efficiency
Creating thrust to gain height, and pushing through the air against the drag associated with lift, both take energy. Different objects and creatures capable of flight vary in the efficiency of their muscles or motors, and in how well this translates into forward thrust.
Propulsive efficiency determines how much useful propulsive work a vehicle obtains from a unit of fuel.
Range
The range that powered flight articles can achieve is ultimately limited by their drag, as well as how much energy they can store on board and how efficiently they can turn that energy into propulsion.
For powered aircraft, the useful energy is determined by the fuel fraction (what percentage of the takeoff weight is fuel) and by the specific energy of the fuel used.
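One standard way these factors are combined is the Breguet range equation; the sketch below applies its jet-aircraft form with purely illustrative values for speed, fuel consumption, lift-to-drag ratio and fuel fraction:

```python
# Breguet range equation for a jet: R = (V / c) * (L/D) * ln(W_start / W_end),
# where c is the thrust-specific fuel consumption. Values are illustrative.
import math

def breguet_range_m(v_m_s: float, tsfc_per_s: float,
                    l_over_d: float, fuel_fraction: float) -> float:
    w_ratio = 1.0 / (1.0 - fuel_fraction)    # W_start / W_end
    return (v_m_s / tsfc_per_s) * l_over_d * math.log(w_ratio)

# Illustrative long-haul jet: 230 m/s cruise, TSFC ~1.67e-4 1/s
# (~0.6 lb/lbf/h), L/D of 17, 40% of takeoff weight carried as fuel.
r = breguet_range_m(230.0, 1.67e-4, 17.0, 0.40)
print(f"{r / 1000:.0f} km")   # ~12,000 km, a plausible long-haul range
```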
Power-to-weight ratio
All animals and devices capable of sustained flight need relatively high power-to-weight ratios to be able to generate enough lift and/or thrust to achieve take off.
Takeoff and landing
Vehicles that can fly can have different ways to takeoff and land. Conventional aircraft accelerate along the ground until sufficient lift is generated for takeoff, and reverse the process for landing. Some aircraft can take off at low speed; this is called a short takeoff. Some aircraft such as helicopters and Harrier jump jets can take off and land vertically. Rockets also usually take off and land vertically, but some designs can land horizontally.
Guidance, navigation and control
Navigation
Navigation covers the systems necessary to calculate current position (e.g. compass, GPS, LORAN, star tracker, inertial measurement unit, and altimeter).
In aircraft, successful air navigation involves piloting an aircraft from place to place without getting lost, breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground.
The techniques used for navigation in the air will depend on whether the aircraft is flying under the visual flight rules (VFR) or the instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using dead reckoning combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids.
Guidance
A guidance system is a device or group of devices used in the navigation of a ship, aircraft, missile, rocket, satellite, or other moving object. Typically, guidance is responsible for the calculation of the vector (i.e., direction, velocity) toward an objective.
Control
A conventional fixed-wing aircraft flight control system consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered flight controls, as they change speed.
Traffic
In the case of aircraft, air traffic is controlled by air traffic control systems.
Collision avoidance is the process of controlling spacecraft to try to prevent collisions.
Flight safety
Air safety is a term encompassing the theory, investigation and categorization of flight failures, and the prevention of such failures through regulation, education and training. It can also be applied in the context of campaigns that inform the public as to the safety of air travel.
| Physical sciences | Fluid mechanics | null |
159506 | https://en.wikipedia.org/wiki/RGB%20color%20spaces | RGB color spaces | RGB color spaces are a category of additive colorimetric color spaces, each specifying part of its absolute color space definition using the RGB color model.
RGB color spaces are commonly found describing the mapping of the RGB color model to human-perceivable color, but some RGB color spaces use imaginary (non-real-world) primaries and thus cannot be displayed directly.
Like any color space, the specifications in this category use the RGB color model to describe their space, but it is not mandatory to use that model to signal pixel color values. Broadcast TV color spaces such as NTSC, PAL, Rec. 709 and Rec. 2020 additionally describe a translation from RGB to YCbCr, and that is how they are usually signalled for transmission, but an image can be stored as either RGB or YCbCr. This shows that the singular term "RGB color space" can be misleading, since a chosen color space, or a signalled color, can be described by any appropriate color model. The singular is nonetheless seen in specifications where storage signalled as RGB is the intended use.
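As a concrete illustration, the sketch below implements the RGB-to-YCbCr translation using the Rec. 709 luma coefficients, in its full-range analog form (inputs are assumed to be gamma-encoded R'G'B' values in [0, 1]):

```python
# RGB -> YCbCr using the Rec. 709 luma coefficients (full-range form).
KR, KB = 0.2126, 0.0722    # Rec. 709 weights for R and B
KG = 1.0 - KR - KB         # 0.7152 for G

def rgb_to_ycbcr(r: float, g: float, b: float):
    y = KR * r + KG * g + KB * b          # luma: weighted sum of R'G'B'
    cb = (b - y) / (2.0 * (1.0 - KB))     # blue-difference chroma, in [-0.5, 0.5]
    cr = (r - y) / (2.0 * (1.0 - KR))     # red-difference chroma, in [-0.5, 0.5]
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.0, 0.0))  # pure red: Y' = 0.2126, Cr at its maximum 0.5
```

Digital signalling adds quantization and range offsets on top of this analog form, but the translation itself is reversible in principle, which is why an image can be stored either way.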
Definition
The normal human eye contains three types of color-sensitive cone cells. Each cell is responsive to light of either long, medium, or short wavelengths, which we generally categorize as red, green, and blue. Taken together, the responses of these cone cells are called the tristimulus values, and the combination of their responses is processed into the psychological effect of color vision.
RGB color space definitions employ primaries (and often a white point) based on the RGB color model to map to real-world color. Applying Grassmann's law of light additivity, the range of colors that can be produced is the region enclosed within the triangle on the chromaticity diagram defined using the primaries as vertices.
The primary colors are usually mapped to xyY chromaticity coordinates, though the uʹ,vʹ coordinates from the UCS chromaticity diagram may be used. Both xyY and uʹ,vʹ are derived from the CIE 1931 color space, a device-independent space also known as XYZ, which covers the full gamut of human-perceptible colors visible to the CIE 2° standard observer.
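For illustration, the mapping from CIE 1931 (x, y) chromaticity to the CIE 1976 UCS (u′, v′) coordinates mentioned above is a simple rational transform; a minimal sketch (the D65 example coordinates are approximate):

```python
def xy_to_uv_prime(x, y):
    """Map CIE 1931 (x, y) chromaticity to CIE 1976 UCS (u', v')."""
    d = -2.0 * x + 12.0 * y + 3.0   # common denominator of the transform
    return 4.0 * x / d, 9.0 * y / d

# D65 white point in xy coordinates (approximately).
print(xy_to_uv_prime(0.3127, 0.3290))   # ~ (0.1978, 0.4683)
```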
Applications
RGB color spaces are well-suited to describing the electronic display of color, such as computer monitors and color television. These devices often reproduce colours using an array of red, green, and blue phosphors excited by the electron beam of a cathode-ray tube (CRT), or an array of red, green, and blue LCD subpixels lit by a backlight, and are therefore naturally described by an additive color model with RGB primaries.
Early examples of RGB color spaces came with the adoption of the NTSC color television standard in 1953 across North America, followed by PAL and SECAM covering the rest of the world. These early RGB spaces were defined in part by the phosphors used in CRTs at the time, and by the gamma of the electron beam. While these color spaces reproduced the intended colors using additive red, green, and blue primaries, the broadcast signal itself was encoded from RGB components into a composite signal such as YIQ, and decoded back by the receiver into RGB signals for display.
HDTV uses the BT.709 color space, later repurposed for computer monitors as sRGB. Both use the same color primaries and white point, but different transfer functions, as HDTV is intended for a dark living room while sRGB is intended for a brighter office environment. The gamut of these spaces is limited, covering only 35.9% of the CIE 1931 gamut. While this allows the use of a limited bit depth without causing color banding, and therefore reduces transmission bandwidth, it also prevents the encoding of deeply saturated colors that might be available in an alternate color space. Some RGB color spaces, such as Adobe RGB and ProPhoto, intended for the creation rather than the transmission of images, are designed with expanded gamuts to address this issue; however, this does not mean the larger space has "more colors". The numerical quantity of colors is related to bit depth, not the size or shape of the gamut. A large space with a low bit depth can be detrimental to the gamut density and result in high errors.
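To illustrate the shared-primaries, different-curves point, here is a sketch of the two encoding (opto-electronic) transfer functions. The piecewise constants are the published values for each standard; the helper names are ours, and real pipelines add clipping and quantization.

```python
def srgb_encode(c_linear):
    """sRGB transfer function for one linear component in [0, 1]."""
    if c_linear <= 0.0031308:
        return 12.92 * c_linear                       # linear segment near black
    return 1.055 * c_linear ** (1.0 / 2.4) - 0.055    # power-law segment

def bt709_encode(c_linear):
    """BT.709 OETF; same primaries as sRGB but a different curve."""
    if c_linear < 0.018:
        return 4.5 * c_linear
    return 1.099 * c_linear ** 0.45 - 0.099

# Mid-grey (18% linear reflectance) encodes differently under each curve.
print(srgb_encode(0.18), bt709_encode(0.18))   # ~0.461 vs ~0.409
```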
More recent color spaces such as Rec. 2020 for UHD TVs define an extremely large gamut covering 63.3% of the CIE 1931 space. This gamut is not fully realisable with current LCD technology, and alternative architectures such as quantum-dot or OLED-based devices are in development.
Color space specifications employing the RGB color model
The CIE 1931 color space standard defines both the CIE RGB space, which is a color space with monochromatic primaries, and the CIE XYZ color space, which is functionally similar to a linear RGB color space; however, its primaries are not physically realizable and thus are not described as red, green, and blue.
M.A.C. is not to be confused with MacOS. Here, M.A.C. refers to Multiplexed Analogue Components.
| Physical sciences | Basics | Physics |
159512 | https://en.wikipedia.org/wiki/Barnacle | Barnacle | Barnacles are arthropods of the subclass Cirripedia in the subphylum Crustacea. They are related to crabs and lobsters, with similar nauplius larvae. Barnacles are exclusively marine invertebrates; many species live in shallow and tidal waters. Some 2,100 species have been described.
Barnacle adults are sessile; most are suspension feeders with hard calcareous shells, but the Rhizocephala are specialized parasites of other crustaceans, with reduced bodies. Barnacles have existed since at least the mid-Carboniferous, some 325 million years ago.
In folklore, barnacle geese were once held to emerge fully formed from goose barnacles. Both goose barnacles and the Chilean giant barnacle are fished and eaten. Barnacles are economically significant as biofouling on ships, where they cause hydrodynamic drag, reducing efficiency.
Etymology
The word "barnacle" is attested in the early 13th century as Middle English "bernekke" or "bernake", close to Old French "bernaque" and medieval Latin bernacae or berneka, denoting the barnacle goose. Because the full life cycles of both barnacles and geese were unknown at the time, (geese spend their breeding seasons in the Arctic) a folktale emerged that geese hatched from barnacles. It was not applied strictly to the arthropod until the 1580s. The ultimate meaning of the word is unknown.
The scientific name Cirripedia comes from the Latin words cirritus "curly", from cirrus "curl", and pedis, from pes "foot". The two words together mean "curly-footed", alluding to the curved legs used in filter-feeding.
Description
Most barnacles are encrusters, attaching themselves to a hard substrate such as a rock, the shell of a mollusc, or a ship; or to an animal such as a whale (whale barnacles). The most common form, acorn barnacles, are sessile, growing their shells directly onto the substrate, whereas goose barnacles attach themselves by means of a stalk.
Anatomy and physiology
Barnacles have a carapace made of six hard calcareous plates, with a lid or operculum made of four more plates. Inside the carapace, the animal lies on its stomach, projecting its limbs downwards. Segmentation is usually indistinct; the body is more or less evenly divided between the head and thorax, with little or no abdomen. Adult barnacles have few appendages on their heads, with only a single, vestigial pair of antennae attached to the cement gland. The six pairs of thoracic limbs are called cirri; these are feathery and very long. The cirri extend to filter food, such as plankton, from the water and move it towards the mouth.
Acorn barnacles are attached to the substratum by cement glands that form the base of the first pair of antennae; in effect, the animal is fixed upside down by means of its forehead. In some barnacles, the cement glands are fixed to a long, muscular stalk, but in most they are part of a flat membrane or calcified plate. These glands secrete a type of natural quick cement made of complex protein bonds (polyproteins) and other trace components like calcium. This natural cement withstands very large pulling and shearing forces.
Barnacles have no true heart, although a sinus close to the esophagus performs a similar function, with blood being pumped through it by a series of muscles. The blood vascular system is minimal. Similarly, they have no gills, absorbing oxygen from the water through the cirri and the surface of the body. The excretory organs of barnacles are maxillary glands.
The main sense of barnacles appears to be touch, with the hairs on the limbs being especially sensitive. The adult has three photoreceptors (ocelli), one median and two lateral. These record the stimulus for the barnacle shadow reflex, where a sudden decrease in light causes cessation of the fishing rhythm and closing of the opercular plates. The photoreceptors are likely only capable of sensing the difference between light and dark. This eye is derived from the primary naupliar eye.
Life cycle
Barnacles pass through two distinct larval stages, the nauplius and the cyprid, before developing into a mature adult.
Nauplius larva
A fertilised egg hatches into a nauplius: a one-eyed larva comprising a head and a telson with three pairs of limbs, lacking a thorax or abdomen. This undergoes six moults, passing through five instars, before transforming into the cyprid stage. Nauplii are typically brooded by the parent and released after the first moult as larvae that swim freely using setae. All but the first instars are filter feeders.
Cypris larva
The cypris larva is the second and final larval stage before adulthood. In Rhizocephala and Thoracica an abdomen is absent in this stage, but the y-cyprids (a post-naupliar instar) have three distinct abdominal segments. It is not a feeding stage; its role is to find a suitable place to settle, since the adults are sessile. The cyprid stage lasts from days to weeks. It explores potential surfaces with modified antennules; once it has found a suitable spot, it attaches head-first using its antennules and a secreted glycoproteinous cement. Larvae assess surfaces based upon their surface texture, chemistry, relative wettability, color, and the presence or absence and composition of a surface biofilm; swarming species are more likely to attach near other barnacles. As the larva exhausts its energy reserves, it becomes less selective about where it settles. It cements itself permanently to the substrate with another proteinaceous compound, and then undergoes metamorphosis into a juvenile barnacle.
Adult
Typical acorn barnacles develop six hard calcareous plates to surround and protect their bodies. For the rest of their lives, they are cemented to the substrate, using their feathery legs (cirri) to capture plankton. Once metamorphosis is over and they have reached their adult form, barnacles continue to grow by adding new material to their heavily calcified plates. These plates are not moulted; however, like all ecdysozoans, the barnacle moults its cuticle.
Sexual reproduction
Most barnacles are hermaphroditic, producing both eggs and sperm. A few species have separate sexes, or have both males and hermaphrodites. The ovaries are located in the base or stalk, and may extend into the mantle, while the testes are towards the back of the head, often extending into the thorax. Typically, recently moulted hermaphroditic individuals are receptive as females. Self-fertilization, although theoretically possible, has been experimentally shown to be rare in barnacles.
The sessile lifestyle of acorn barnacles makes sexual reproduction difficult, as they cannot leave their shells to mate. To facilitate genetic transfer between isolated individuals, barnacles have developed extraordinarily long penises. Barnacles are believed to have the largest penis-to-body size ratio of any known animal, up to eight times their body length, though on exposed coasts the penis is shorter and thicker. The mating of acorn barnacles is described as pseudocopulation.
The goose barnacle Pollicipes polymerus can alternatively reproduce by spermcasting, in which the male barnacle releases his sperm into the water, to be taken up by females. Isolated individuals always made use of spermcasting and sperm capture, as did a quarter of individuals with a close neighbour. This 2013 discovery overturned the long-held belief that barnacles were limited to pseudocopulation or hermaphroditism.
Rhizocephalan barnacles had been considered hermaphroditic, but their males inject themselves into females' bodies, degrading to little more than sperm-producing cells.
Ecology
Filter feeding
Most barnacles are filter feeders. From within their shell, they repeatedly reach into the water column with their cirri. These feathery appendages beat rhythmically to draw plankton and detritus into the shell for consumption.
Species-specific zones
Although they have been found at considerable water depths, most barnacles inhabit shallow waters, with 75% of species living in shallow subtidal water and 25% inhabiting the intertidal zone. Within the intertidal zone, different species of barnacles live in very tightly constrained locations, allowing the exact height of an assemblage above or below sea level to be precisely determined.
Since the intertidal zone periodically desiccates, barnacles are well adapted against water loss. Their calcite shells are impermeable, and they can close their apertures with movable plates when not feeding. Their hard shells are assumed by zoologists to have evolved as an anti-predator adaptation.
One group of stalked barnacles has adapted to a rafting lifestyle, drifting around close to the water's surface. They colonize every floating object, such as driftwood, and like some non-stalked barnacles attach themselves to marine animals. The species most specialized for this lifestyle is Dosima fascicularis, which secretes a gas-filled cement that makes it float at the surface.
Parasitism
Other members of the class have an entirely different mode of life. Barnacles of the superorder Rhizocephala, including the genus Sacculina, are parasitic castrators of other arthropods, including crabs. The anatomy of these parasitic barnacles is greatly reduced compared to their free-living relatives. They have no carapace or limbs, having only unsegmented sac-like bodies. They feed by extending thread-like rhizomes of living cells into their hosts' bodies from their points of attachment.
Goose barnacles of the genus Anelasma (in the order Pollicipedomorpha) are specialized parasites of certain shark species. Their cirri are no longer used to filter-feed. Instead, these barnacles get their nutrients directly from the host through a root-like body part embedded in the shark's flesh.
Competitors
Barnacles are displaced by limpets and mussels, which compete for space. They employ two strategies to overwhelm their competitors: "swamping" and fast growth. In the swamping strategy, vast numbers of barnacles settle in the same place at once, covering a large patch of substrate, allowing at least some to survive in the balance of probabilities. Fast growth allows the suspension feeders to access higher levels of the water column than their competitors, and to be large enough to resist displacement; species employing this response, such as the aptly named Megabalanus, can grow to considerable length.
Competitors may include other barnacles. Balanoids gained their advantage over the chthamaloids in the Oligocene, when they evolved tubular skeletons, which provide better anchorage to the substrate and allow them to grow faster, undercutting, crushing, and smothering the chthamaloids.
Predators and parasites
Among the most common predators of barnacles are whelks. They are able to grind through the calcareous exoskeleton and eat the animal inside. Barnacle larvae are consumed by filter-feeding benthic predators including the mussel Mytilus edulis and the ascidian Styela gibbsi. Another predator is the starfish species Pisaster ochraceus. A stalked barnacle in the Iblomorpha, Chaetolepas calcitergum, lacks a heavily mineralised shell, but contains a high concentration of toxic bromine; this may serve to deter predators. The turbellarian flatworm Stylochus, a serious predator of oyster spat, has been found in barnacles. Parasites of barnacles include many species of Gregarinasina (alveolate protozoa), a few fungi, a few species of trematodes, and a parasitic castrator isopod, Hemioniscus balani.
History of taxonomy
Barnacles were classified by Linnaeus and Cuvier as Mollusca, but in 1830 John Vaughan Thompson published observations showing the metamorphosis of the nauplius and cypris larvae into adult barnacles, and noted that these larvae were similar to those of crustaceans. In 1834, Hermann Burmeister reinterpreted these findings, moving barnacles from the Mollusca to Articulata (in modern terms, annelids + arthropods), showing naturalists that detailed study was needed to reevaluate their taxonomy.
Charles Darwin took up this challenge in 1846, and developed his initial interest into a major study published as a series of monographs in 1851 and 1854. He undertook this study at the suggestion of his friend the botanist Joseph Dalton Hooker, namely to thoroughly understand at least one species before making the generalisations needed for his theory of evolution by natural selection.
The Royal Society notes that barnacles occupied Darwin, who worked from home, so intensely "that his son assumed all fathers behaved the same way: when visiting a friend he asked, 'Where does your father do his barnacles?'" Upon the conclusion of his research, Darwin declared "I hate a barnacle as no man ever did before."
Evolution
Fossil record
The oldest definitive fossil barnacle is Praelepas from the mid-Carboniferous, around 330–320 million years ago. Older claimed barnacles, such as Priscansermarinus from the Middle Cambrian, do not show clear barnacle morphological traits, though Rhamphoverritor from the Silurian Coalbrookdale Formation of England may represent a stem-group barnacle. Barnacles first radiated and became diverse during the Late Cretaceous. Barnacles underwent a second, much larger radiation beginning during the Neogene and still continuing.
Phylogeny
The following cladogram, not fully resolved, shows the phylogenetic relationships of the Cirripedia within Thecostraca as of 2021.
Taxonomy
Over 2,100 species of Cirripedia have been described. Some authorities regard the Cirripedia as a full class or subclass. In 2001, Martin and Davis placed Cirripedia as an infraclass of Thecostraca, and divided it into six orders:
Infraclass Cirripedia Burmeister, 1834
Superorder Acrothoracica Gruvel, 1905
Order Pygophora Berndt, 1907
Order Apygophora Berndt, 1907
Superorder Rhizocephala Müller, 1862
Order Kentrogonida Delage, 1884
Order Akentrogonida Häfele, 1911
Superorder Thoracica Darwin, 1854
Order Pedunculata Lamarck, 1818
Order Sessilia Lamarck, 1818
In 2021, Chan et al. elevated Cirripedia to a subclass of the Thecostraca, and the superorders Acrothoracica, Rhizocephala, and Thoracica to infraclass. The updated classification with 11 orders has been accepted in the World Register of Marine Species.
Subclass Cirripedia Burmeister, 1834
Infraclass Acrothoracica Gruvel, 1905
Order Cryptophialida Kolbasov, Newman & Hoeg, 2009
Order Lithoglyptida Kolbasov, Newman & Hoeg, 2009
Infraclass Rhizocephala Müller, 1862
Infraclass Thoracica Darwin, 1854
Superorder Phosphatothoracica Gale, 2019
Order Iblomorpha Buckeridge & Newman, 2006
Order † Eolepadomorpha Chan et al., 2021
Superorder Thoracicalcarea Gale, 2015
Order Calanticomorpha Chan et al., 2021
Order Pollicipedomorpha Chan et al., 2021
Order Scalpellomorpha Buckeridge & Newman, 2006
Order † Archaeolepadomorpha Chan et al., 2021
Order † Brachylepadomorpha Withers, 1923
(Unranked) Sessilia
Order Balanomorpha Pilsbry, 1916
Order Verrucomorpha Pilsbry, 1916
Relationship with humans
Biofouling
Barnacles are of economic consequence, as they often attach themselves to man-made structures. Particularly in the case of ships, they are classified as fouling organisms. The number and size of barnacles that cover ships can impair their efficiency by causing hydrodynamic drag.
As food
The flesh of some barnacles is routinely consumed by humans, including Japanese goose barnacles (e.g. Capitulum mitella) and goose barnacles (e.g. Pollicipes pollicipes), a delicacy in Spain and Portugal. The Chilean giant barnacle Austromegabalanus psittacus is fished, or overfished, in commercial quantities on the Chilean coast, where it is known locally as the picoroco.
Technological applications
MIT researchers have developed an adhesive inspired by the protein-based bioglue produced by barnacles to firmly attach to rocks. The adhesive can form a tight seal to halt bleeding within about 15 seconds of application.
The stable isotope signals in the layers of barnacle shells can potentially be used as a forensic tracking method for whales, loggerhead turtles and for marine debris, such as shipwrecks or aircraft wreckage.
In culture
One version of the barnacle goose myth is that the birds emerge fully formed from goose barnacles. The myth, with variants such as that the goose barnacles grow on trees, owes its longstanding popularity to ignorance of bird migration. The myth survived to modern times through bestiaries.
More recently, Barnacle Bill became a "comic folktype" of a seaman, with a drinking song and several films (a 1930 animated short with Betty Boop, a 1935 British drama, a 1941 feature with Wallace Beery, and a 1957 Ealing comedy) named after him.
The political reformer John W. Gardner likened middle managers who settle into a comfortable position and "have stopped learning or growing" to the barnacle, who "is confronted with an existential decision about where it's going to live. Once it decides... it spends the rest of its life with its head cemented to a rock".
| Biology and health sciences | Crustaceans | Animals |
159748 | https://en.wikipedia.org/wiki/Muscular%20system | Muscular system | The muscular system is an organ system consisting of skeletal, smooth, and cardiac muscle. It permits movement of the body, maintains posture, and circulates blood throughout the body. The muscular systems in vertebrates are controlled through the nervous system although some muscles (such as the cardiac muscle) can be completely autonomous. Together with the skeletal system in the human, it forms the musculoskeletal system, which is responsible for the movement of the body.
Types
There are three distinct types of muscle: skeletal muscle, cardiac or heart muscle, and smooth (non-striated) muscle. Muscles provide strength, balance, posture, movement, and heat for the body to keep warm.
There are more than 600 muscles in an adult male human body. A kind of elastic tissue makes up each muscle, which consists of thousands, or tens of thousands, of small muscle fibers. Each fiber comprises many tiny strands called fibrils; impulses from nerve cells control the contraction of each muscle fiber.
Skeletal
Skeletal muscle is a type of striated muscle, composed of muscle cells called muscle fibers, which are in turn composed of myofibrils. Myofibrils are composed of sarcomeres, the basic building blocks of striated muscle tissue. Upon stimulation by an action potential, skeletal muscles perform a coordinated contraction by shortening each sarcomere. The best proposed model for understanding contraction is the sliding filament model of muscle contraction. Within the sarcomere, actin and myosin fibers overlap in a contractile motion towards each other. Myosin filaments have club-shaped myosin heads that project toward the actin filaments, and provide attachment points on binding sites for the actin filaments. The myosin heads move in a coordinated style; they swivel toward the center of the sarcomere, detach, and then reattach to the nearest active site of the actin filament. This is called a ratchet-type drive system.
This process consumes large amounts of adenosine triphosphate (ATP), the energy source of the cell. ATP binds to the cross-bridges between myosin heads and actin filaments. The release of energy powers the swiveling of the myosin head. When ATP is used, it becomes adenosine diphosphate (ADP), and since muscles store little ATP, they must continuously replace the discharged ADP with ATP. Muscle tissue also contains a stored supply of a fast-acting recharge chemical, creatine phosphate, which when necessary can assist with the rapid regeneration of ADP into ATP.
Calcium ions are required for each cycle of the sarcomere. Calcium is released from the sarcoplasmic reticulum into the sarcomere when a muscle is stimulated to contract. This calcium uncovers the actin-binding sites. When the muscle no longer needs to contract, the calcium ions are pumped from the sarcomere and back into storage in the sarcoplasmic reticulum.
There are approximately 639 skeletal muscles in the human body.
Cardiac
Heart muscle is striated muscle but is distinct from skeletal muscle because the muscle fibers are laterally connected. Furthermore, just as with smooth muscles, their movement is involuntary. Heart muscle is controlled by the sinus node influenced by the autonomic nervous system.
Smooth
Smooth muscle contraction is regulated by the autonomic nervous system, hormones, and local chemical signals, allowing for gradual and sustained contractions. This type of muscle tissue is also capable of adapting to different levels of stretch and tension, which is important for maintaining proper blood flow and the movement of materials through the digestive system.
Physiology
Contraction
Neuromuscular junctions are the focal point where a motor neuron attaches to a muscle. Acetylcholine (a neurotransmitter used in skeletal muscle contraction) is released from the axon terminal of the nerve cell when an action potential reaches the microscopic junction called a synapse. Chemical messengers cross the synapse and stimulate electrical changes, which are produced in the muscle cell when the acetylcholine binds to receptors on its surface. Calcium is released from its storage area in the cell's sarcoplasmic reticulum. An impulse from a nerve cell causes calcium release and brings about a single, short muscle contraction called a muscle twitch. If there is a problem at the neuromuscular junction, a very prolonged contraction may occur, such as the muscle contractions that result from tetanus. Also, a loss of function at the junction can produce paralysis.
Skeletal muscles are organized into hundreds of motor units, each of which involves a motor neuron, attached by a series of thin finger-like structures called axon terminals. These attach to and control discrete bundles of muscle fibers. A coordinated and fine-tuned response to a specific circumstance will involve controlling the precise number of motor units used. While the fibers of a motor unit contract together, the entire muscle can contract on a predetermined basis due to the structure of the motor unit. Motor unit coordination, balance, and control frequently come under the direction of the cerebellum of the brain. This allows for complex muscular coordination with little conscious effort, such as when one drives a car without thinking about the process.
Tendon
A tendon is a piece of connective tissue that connects a muscle to a bone. When a muscle contracts, it pulls against the skeleton to create movement. A tendon connects this muscle to a bone, making this function possible.
Aerobic and anaerobic muscle activity
At rest, the body produces the majority of its ATP aerobically in the mitochondria without producing lactic acid or other fatiguing byproducts. During exercise, the method of ATP production varies depending on the fitness of the individual as well as the duration and intensity of exercise. At lower activity levels, when exercise continues for a long duration (several minutes or longer), energy is produced aerobically by combining oxygen with carbohydrates and fats stored in the body.
During activity that is higher in intensity, with possible duration decreasing as intensity increases, ATP production can switch to anaerobic pathways, such as the use of the creatine phosphate and the phosphagen system or anaerobic glycolysis. Aerobic ATP production is biochemically much slower and can only be used for long-duration, low-intensity exercise, but it produces no fatiguing waste products that linger in the sarcomere and the body, and it results in a much greater number of ATP molecules per fat or carbohydrate molecule. Aerobic training allows the oxygen delivery system to be more efficient, allowing aerobic metabolism to begin quicker. Anaerobic ATP production produces ATP much faster and allows near-maximal intensity exercise, but also produces significant amounts of lactic acid which render high-intensity exercise unsustainable for more than several minutes. The phosphagen system is also anaerobic. It allows for the highest levels of exercise intensity, but intramuscular stores of phosphocreatine are very limited and can only provide energy for exercises lasting up to ten seconds. Recovery is very quick, with full creatine stores regenerated within five minutes.
Clinical significance
Multiple diseases can affect the muscular system.
Muscular Dystrophy
Muscular dystrophy is a group of disorders associated with progressive muscle weakness and loss of muscle mass. These disorders are caused by mutations in a person's genes. The disease has a global incidence of between 19.8 and 25.1 cases per 100,000 person-years.
There are more than 30 types of muscular dystrophy. Depending on the type, muscular dystrophy can affect the patient's heart and lungs, and/or their ability to move, walk, and perform daily activities. The most common types include:
Duchenne muscular dystrophy (DMD) and Becker muscular dystrophy (BMD)
Myotonic dystrophy
Limb-Girdle (LGMD)
Facioscapulohumeral dystrophy (FSHD)
Congenital dystrophy (CMD)
Distal (DD)
Oculopharyngeal dystrophy (OPMD)
Emery-Dreifuss (EDMD)
| Biology and health sciences | Muscular system | null |
159750 | https://en.wikipedia.org/wiki/Pectin | Pectin | Pectin (from Ancient Greek pēktikós: "congealed" and "curdled") is a heteropolysaccharide, a structural polymer contained in the primary lamella, in the middle lamella, and in the cell walls of terrestrial plants. The principal chemical component of pectin is galacturonic acid (a sugar acid derived from galactose) which was isolated and described by Henri Braconnot in 1825. Commercially produced pectin is a white-to-light-brown powder, produced from citrus fruits for use as an edible gelling agent, especially in jams and jellies, dessert fillings, medications, and sweets; as a food stabiliser in fruit juices and milk drinks, and as a source of dietary fiber.
Biology
Pectin is composed of complex polysaccharides that are present in the primary cell walls of a plant, and are abundant in the green parts of terrestrial plants.
Pectin is the principal component of the middle lamella, where it binds cells. Pectin is deposited by exocytosis into the cell wall via vesicles produced in the Golgi apparatus. The amount, structure and chemical composition of pectin is different among plants, within a plant over time, and in various parts of a plant. Pectin is an important cell wall polysaccharide that allows primary cell wall extension and plant growth. During fruit ripening, pectin is broken down by the enzymes pectinase and pectinesterase, in which process the fruit becomes softer as the middle lamellae break down and cells become separated from each other. A similar process of cell separation caused by the breakdown of pectin occurs in the abscission zone of the petioles of deciduous plants at leaf fall.
Pectin is a natural part of the human diet, but does not contribute significantly to nutrition. The daily intake of pectin from fruits and vegetables can be estimated to be around 5 g if approximately 500 g of fruits and vegetables are consumed per day.
In human digestion, pectin binds to cholesterol in the gastrointestinal tract and slows glucose absorption by trapping carbohydrates. Pectin is thus a soluble dietary fiber. In non-obese diabetic (NOD) mice pectin has been shown to increase the incidence of autoimmune type 1 diabetes.
A study found that after consumption of fruit the concentration of methanol in the human body increased by as much as an order of magnitude due to the degradation of natural pectin (which is esterified with methanol) in the colon.
Pectin has been observed to have some function in repairing the DNA of some types of plant seeds, usually desert plants. Pectinaceous surface pellicles, which are rich in pectin, create a mucilage layer that holds in dew that helps the cell repair its DNA.
Consumption of pectin has been shown to slightly (3–7%) reduce blood LDL cholesterol levels. The effect depends upon the source of pectin; apple and citrus pectins were more effective than orange pulp fibre pectin. The mechanism appears to be an increase of viscosity in the intestinal tract, leading to a reduced absorption of cholesterol from bile or food. In the large intestine and colon, microorganisms degrade pectin and liberate short-chain fatty acids that have a positive prebiotic effect.
Chemistry
Pectins, also known as pectic polysaccharides, are rich in galacturonic acid. Several distinct polysaccharides have been identified and characterised within the pectic group. Homogalacturonans are linear chains of α-(1–4)-linked D-galacturonic acid. Substituted galacturonans are characterised by the presence of saccharide appendant residues (such as D-xylose or D-apiose in the respective cases of xylogalacturonan and apiogalacturonan) branching from a backbone of D-galacturonic acid residues. Rhamnogalacturonan I pectins (RG-I) contain a backbone of the repeating disaccharide: 4)-α-D-galacturonic acid-(1,2)-α-L-rhamnose-(1. From many of the rhamnose residues, sidechains of various neutral sugars branch off. The neutral sugars are mainly D-galactose, L-arabinose and D-xylose, with the types and proportions of neutral sugars varying with the origin of pectin.
Another structural type of pectin is rhamnogalacturonan II (RG-II), which is a less frequent, complex, highly branched polysaccharide. Rhamnogalacturonan II is classified by some authors within the group of substituted galacturonans since the rhamnogalacturonan II backbone is made exclusively of D-galacturonic acid units.
The molecular weight of isolated pectin varies greatly with the source and the method of isolation. Values have been reported from as low as 28 kDa for apple pomace up to 753 kDa for sweet potato peels.
In nature, around 80 percent of carboxyl groups of galacturonic acid are esterified with methanol. This proportion is decreased to a varying degree during pectin extraction. Pectins are classified as high- versus low-methoxy pectins (HM-pectins versus LM-pectins for short), with more or less than half of all the galacturonic acid esterified. The ratio of esterified to non-esterified galacturonic acid determines the behaviour of pectin in food applications – HM-pectins can form a gel under acidic conditions in the presence of high sugar concentrations, while LM-pectins form gels by interaction with divalent cations, particularly Ca2+, according to the idealized 'egg box' model, in which ionic bridges are formed between calcium ions and the ionised carboxyl groups of the galacturonic acid.
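A toy sketch of the HM/LM gelling rules just described, using the indicative thresholds quoted in this and the following paragraphs; the function is illustrative shorthand, not a food-science model.

```python
def pectin_gels(degree_of_esterification, soluble_solids_pct, ph, calcium_present):
    """Rough check of gelling conditions for a pectin sample."""
    if degree_of_esterification >= 50:   # high-methoxy (HM) pectin
        # Needs high sugar content and acidic conditions.
        return soluble_solids_pct > 60 and 2.8 <= ph <= 3.6
    # Low-methoxy (LM) pectin: needs divalent cations (Ca2+), wider tolerances.
    return calcium_present and 10 <= soluble_solids_pct <= 70 and 2.6 <= ph <= 7.0

print(pectin_gels(65, 65, 3.0, calcium_present=False))  # classic jam: True
print(pectin_gels(35, 30, 4.5, calcium_present=True))   # low-sugar, calcium-set: True
```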
In high-methoxy pectins at soluble solids content above 60% and a pH value between 2.8 and 3.6, hydrogen bonds and hydrophobic interactions bind the individual pectin chains together. These bonds form as water is bound by sugar and forces pectin strands to stick together. These form a three-dimensional molecular net that creates the macromolecular gel. The gelling-mechanism is called a low-water-activity gel or sugar-acid-pectin gel.
While low-methoxy pectins need calcium to form a gel, they can do so at lower soluble solids and higher pH than high-methoxy pectins. Normally low-methoxy pectins form gels with a range of pH from 2.6 to 7.0 and with a soluble solids content between 10 and 70%.
The non-esterified galacturonic acid units can be either free acids (carboxyl groups) or salts with sodium, potassium, or calcium. The salts of partially esterified pectins are called pectinates, if the degree of esterification is below 5 percent the salts are called pectates, the insoluble acid form, pectic acid.
Some plants, such as sugar beet, potatoes and pears, contain pectins with acetylated galacturonic acid in addition to methyl esters. Acetylation prevents gel-formation but increases the stabilising and emulsifying effects of pectin.
Amidated pectin is a modified form of pectin. Here, some of the galacturonic acid is converted with ammonia to carboxylic acid amide. These pectins are more tolerant of varying calcium concentrations that occur in use.
Thiolated pectin exhibits substantially improved gelling properties since this thiomer is able to crosslink via disulfide bond formation. These high gelling properties are advantageous for various pharmaceutical applications and applications in food industry.
To prepare a pectin-gel, the ingredients are heated, dissolving the pectin. Upon cooling below gelling temperature, a gel starts to form. If gel formation is too strong, syneresis or a granular texture are the result, while weak gelling leads to excessively soft gels.
Amidated pectins behave like low-ester pectins but need less calcium and are more tolerant of excess calcium. Also, gels from amidated pectin are thermoreversible; they can be heated and after cooling solidify again, whereas conventional pectin-gels will afterwards remain liquid.
High-ester pectins set at higher temperatures than low-ester pectins. However, gelling reactions with calcium increase as the degree of esterification falls. Similarly, lower pH-values or higher soluble solids (normally sugars) increase gelling speeds. Suitable pectins can therefore be selected for jams and jellies, or for higher-sugar confectionery jellies.
Sources and production
Pears, apples, guavas, quince, plums, gooseberries, and oranges and other citrus fruits contain large amounts of pectin, while soft fruits, like cherries, grapes, and strawberries, contain small amounts of pectin.
Typical levels of pectin in fresh fruits and vegetables are:
Apples, 1–1.5%
Apricots, 1%
Cherries, 0.4%
Oranges, 0.5–3.5%
Carrots 1.4%
Citrus peels, 30%
Rose hips, 15%
The main raw materials for pectin production are dried citrus peels or apple pomace, both by-products of juice production. Pomace from sugar beets is also used to a small extent.
From these materials, pectin is extracted by adding hot dilute acid at pH values from 1.5 to 3.5. During several hours of extraction, the protopectin loses some of its branching and chain length and goes into solution. After filtering, the extract is concentrated in a vacuum and the pectin is then precipitated by adding ethanol or isopropanol. An old technique of precipitating pectin with aluminium salts is no longer used (apart from alcohols and polyvalent cations, pectin also precipitates with proteins and detergents).
Alcohol-precipitated pectin is then separated, washed, and dried. Treating the initial pectin with dilute acid leads to low-esterified pectins. When this process includes ammonium hydroxide (NH3(aq)), amidated pectins are obtained. After drying and milling, pectin is usually standardised with sugar, and sometimes calcium salts or organic acids, to optimise performance in a particular application.
Uses
The main use for pectin is as a gelling agent, thickening agent and stabiliser in food.
In some countries, pectin is also available as a solution or an extract, or as a blended powder, for home jam making.
The classical application is giving the jelly-like consistency to jams or marmalades, which would otherwise be sweet juices. Pectin also reduces syneresis in jams and marmalades and increases the gel strength of low-calorie jams. For household use, pectin is an ingredient in gelling sugar (also known as "jam sugar") where it is diluted to the right concentration with sugar and some citric acid to adjust pH.
For various food applications, different kinds of pectins can be distinguished by their properties, such as acidity, degree of esterification, relative number of methoxyl groups in the molecules, etc. For instance, the term "high methoxyl" refers to pectins that have a large proportion of the carboxyl groups in the pectin molecule that are esterified with methanol, compared to low methoxyl pectins:
high methoxyl pectins are defined as those with a degree of esterification equal to or above 50, are typically used in traditional jam and jelly making; such pectins require high sugar concentrations and acidic conditions to form gels, and provide a smooth texture and suitable to be used in bakery fillings and confectionery applications;
low methoxyl pectins have a degree of esterification of less than 50, can be either amidated or non-amidated: the percentage level of substitution of the amide group, defined as the degree of amidation, defines the efficacy of a pectin; low methoxyl pectins can provide a range of textures and rheological properties, depending on the calcium concentration and the calcium reactivity of the pectin chosen—amidated low methoxyl pectins are generally thermoreversible, meaning they can form gels that can melt and reform, whereas non-amidated low methoxyl pectins can form thermostable gels that withstand high temperatures; these properties make low methoxyl pectins suitable for low sugar and sugar-free applications, dairy products, and stabilizing acidic protein drinks.
For conventional jams and marmalades that contain above 60% sugar and soluble fruit solids, high-ester (high methoxyl) pectins are used. With low-ester (low methoxyl) pectins and amidated pectins, less sugar is needed, so that diet products can be made. Water extract of aiyu seeds is traditionally used in Taiwan to make aiyu jelly, where the extract gels without heating due to low-ester pectins from the seeds and the bivalent cations from the water.
Pectin is used in confectionery jellies to give a good gel structure, a clean bite and to confer a good flavour release. Pectin can also be used to stabilise acidic protein drinks, such as drinking yogurt, to improve the mouth-feel and the pulp stability in juice based drinks and as a fat substitute in baked goods.
Typical levels of pectin used as a food additive are between 0.5 and 1.0% – this is about the same amount of pectin as in fresh fruit.
In medicine, pectin increases viscosity and volume of stool so that it is used against constipation and diarrhea. Until 2002, it was one of the main ingredients used in Kaopectate – a medication to combat diarrhea – along with kaolinite. It has been used in gentle heavy metal removal from biological systems. Pectin is also used in throat lozenges as a demulcent.
In cosmetic products, pectin acts as a stabiliser. Pectin is also used in wound healing preparations and speciality medical adhesives, such as colostomy devices.
Sriamornsak revealed that pectin could be used in various oral drug delivery platforms, e.g., controlled-release systems, gastro-retentive systems, colon-specific delivery systems and mucoadhesive delivery systems, owing to its non-toxicity and low cost. It was found that pectin from different sources provides different gelling abilities, due to variations in molecular size and chemical composition. Like other natural polymers, a major problem with pectin is inconsistency in reproducibility between samples, which may result in poor reproducibility in drug delivery characteristics.
In ruminant nutrition, depending on the extent of lignification of the cell wall, pectin is up to 90% digestible by bacterial enzymes. Ruminant nutritionists recommend that the digestibility and energy concentration in forages be improved by increasing pectin concentration in the forage.
In cigars, pectin is considered an excellent substitute for vegetable glue and many cigar smokers and collectors use pectin for repairing damaged tobacco leaves on their cigars.
Yablokov et al., writing in Chernobyl: Consequences of the Catastrophe for People and the Environment, quote research conducted by the Ukrainian Center of Radiation Medicine and the Belarusian Institute of Radiation Medicine and Endocrinology, which concluded, regarding pectin's radioprotective effects, that "adding pectin preparations to the food of inhabitants of the Chernobyl-contaminated regions promotes an effective excretion of incorporated radionuclides" such as cesium-137. The authors reported on the positive results of using pectin food additive preparations in a number of clinical studies conducted on children in severely polluted areas, with up to 50% improvement over control groups.
During the Second World War, Allied pilots were provided with maps printed on silk, for navigation in escape and evasion efforts. The printing process at first proved nearly impossible because the several layers of ink immediately ran, blurring outlines and rendering place names illegible, until the inventor of the maps, Clayton Hutton, mixed a little pectin with the ink. At once the pectin coagulated the ink and prevented it from running, allowing small topographic features to be clearly visible.
Legal status
At the Joint FAO/WHO Expert Committee Report on Food Additives and in the European Union, no numerical acceptable daily intake (ADI) has been set, as pectin is considered safe.
The European Union (EU) has not set a daily intake limit for two types of pectin, known as E440(i) and Amidated Pectin E440(ii). The EU has established purity standards for these additives in the EU Commission Regulation (EU)/231/2012. Pectin can be used as needed in most food categories, a concept referred to as "quantum satis". The European Food Safety Authority (EFSA) conducted a re-evaluation of Pectin E440(i) and Amidated Pectin E440(ii) in 2017. The EFSA concluded that the use of these food additives poses no safety concern for the general population. Furthermore, the agency stated that it is not necessary to establish a numerical value for the Acceptable Daily Intake (ADI).
In the United States, pectin is generally recognised as safe for human consumption.
In the International Numbering System (INS), pectin has the number 440. In Europe, pectins are differentiated into the E numbers E440(i) for non-amidated pectins and E440(ii) for amidated pectins. There are specifications in all national and international legislation defining its quality and regulating its use.
History
Pectin was first isolated and described in 1825 by Henri Braconnot, though the action of pectin to make jams and marmalades was known long before. To obtain well-set jams from fruits that had little or only poor quality pectin, pectin-rich fruits or their extracts were mixed into the recipe.
During the Industrial Revolution, the makers of fruit preserves turned to producers of apple juice to obtain dried apple pomace that was cooked to extract pectin. Later, in the 1920s and 1930s, factories were built that commercially extracted pectin from dried apple pomace, and later citrus peel, in regions that produced apple juice in both the US and Europe.
Pectin was first sold as a liquid extract, but is now most often used as dried powder, which is easier than a liquid to store and handle.
| Biology and health sciences | Carbohydrates | Biology |
159974 | https://en.wikipedia.org/wiki/Lagrange%20multiplier | Lagrange multiplier | In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange.
Summary and rationale
The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function or Lagrangian. In the general case, the Lagrangian is defined as

$$\mathcal{L}(x, \lambda) \equiv f(x) + \langle \lambda, g(x) \rangle$$

for functions $f, g$; the notation $\langle \cdot, \cdot \rangle$ denotes an inner product. The value $\lambda$ is called the Lagrange multiplier.
In simple cases, where the inner product is defined as the dot product, the Lagrangian is

$$\mathcal{L}(x, \lambda) \equiv f(x) + \lambda \cdot g(x).$$
The method can be summarized as follows: in order to find the maximum or minimum of a function $f$ subject to the equality constraint $g(x) = 0$, find the stationary points of $\mathcal{L}$ considered as a function of $x$ and the Lagrange multiplier $\lambda$. This means that all partial derivatives should be zero, including the partial derivative with respect to $\lambda$:

$$\frac{\partial \mathcal{L}}{\partial x} = 0 \qquad \text{and} \qquad \frac{\partial \mathcal{L}}{\partial \lambda} = 0,$$

or equivalently

$$\frac{\partial f(x)}{\partial x} = -\lambda \frac{\partial g(x)}{\partial x} \qquad \text{and} \qquad g(x) = 0.$$
The solution corresponding to the original constrained optimization is always a saddle point of the Lagrangian function, which can be identified among the stationary points from the definiteness of the bordered Hessian matrix.
The great advantage of this method is that it allows the optimization to be solved without explicit parameterization in terms of the constraints. As a result, the method of Lagrange multipliers is widely used to solve challenging constrained optimization problems. Further, the method of Lagrange multipliers is generalized by the Karush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the form $h(\mathbf{x}) \le c$ for a given constant $c$.
Statement
The following is known as the Lagrange multiplier theorem.
Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be the objective function and let $g \colon \mathbb{R}^n \to \mathbb{R}^c$ be the constraints function, both belonging to $C^1$ (that is, having continuous first derivatives). Let $x_\star$ be an optimal solution to the following optimization problem such that, for the matrix of partial derivatives $\mathrm{D} g(x_\star) = \bigl[ \partial g_j / \partial x_k \bigr]$, $\operatorname{rank}(\mathrm{D} g(x_\star)) = c < n$:

$$\text{maximize } f(x) \quad \text{subject to } g(x) = 0.$$

Then there exists a unique Lagrange multiplier $\lambda_\star \in \mathbb{R}^c$ such that

$$\mathrm{D} f(x_\star) = \lambda_\star^{\mathsf T} \mathrm{D} g(x_\star).$$

(Note that this is a somewhat conventional statement in which $\lambda_\star$ is clearly treated as a column vector to ensure that the dimensions match. But we might as well make it just a row vector without taking the transpose.)
The Lagrange multiplier theorem states that at any local maximum (or minimum) of the function evaluated under the equality constraints, if constraint qualification applies (explained below), then the gradient of the function (at that point) can be expressed as a linear combination of the gradients of the constraints (at that point), with the Lagrange multipliers acting as coefficients. This is equivalent to saying that any direction perpendicular to all gradients of the constraints is also perpendicular to the gradient of the function. Or, put another way, the directional derivative of the function is $0$ in every feasible direction.
Single constraint
For the case of only one constraint and only two choice variables (as exemplified in Figure 1), consider the optimization problem

$$\text{maximize } f(x, y) \quad \text{subject to } g(x, y) = 0.$$

(Sometimes an additive constant is shown separately rather than being included in $g$, in which case the constraint is written as $g(x, y) = c$, as in Figure 1.) We assume that both $f$ and $g$ have continuous first partial derivatives. We introduce a new variable ($\lambda$) called a Lagrange multiplier (or Lagrange undetermined multiplier) and study the Lagrange function (or Lagrangian or Lagrangian expression) defined by

$$\mathcal{L}(x, y, \lambda) = f(x, y) + \lambda \cdot g(x, y),$$

where the $\lambda$ term may be either added or subtracted. If $f(x_0, y_0)$ is a maximum of $f(x, y)$ for the original constrained problem and $\nabla g(x_0, y_0) \ne 0$, then there exists $\lambda_0$ such that ($x_0, y_0, \lambda_0$) is a stationary point for the Lagrange function (stationary points are those points where the first partial derivatives of $\mathcal{L}$ are zero). The assumption $\nabla g \ne 0$ is called constraint qualification. However, not all stationary points yield a solution of the original problem, as the method of Lagrange multipliers yields only a necessary condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist, but if a particular candidate solution satisfies the sufficient conditions, it is only guaranteed that that solution is the best one locally – that is, it is better than any permissible nearby points. The global optimum can be found by comparing the values of the original objective function at the points satisfying the necessary and locally sufficient conditions.
The method of Lagrange multipliers relies on the intuition that at a maximum, $f$ cannot be increasing in the direction of any such neighboring point that also has $g = 0$. If it were, we could walk along $g = 0$ to get higher, meaning that the starting point wasn't actually the maximum. Viewed in this way, it is an exact analogue to testing if the derivative of an unconstrained function is $0$, that is, we are verifying that the directional derivative is $0$ in any relevant (viable) direction.
We can visualize contours of $f$ given by $f(x, y) = d$ for various values of $d$, and the contour of $g$ given by $g(x, y) = 0$.
Suppose we walk along the contour line with $g = 0$. We are interested in finding points where $f$ almost does not change as we walk, since these points might be maxima.
There are two ways this could happen:
We could touch a contour line of $f$, since by definition $f$ does not change as we walk along its contour lines. This would mean that the tangents to the contour lines of $f$ and $g$ are parallel here.
We have reached a "level" part of $f$, meaning that $f$ does not change in any direction.
To check the first possibility (we touch a contour line of $f$), notice that since the gradient of a function is perpendicular to the contour lines, the tangents to the contour lines of $f$ and $g$ are parallel if and only if the gradients of $f$ and $g$ are parallel. Thus we want points $(x, y)$ where $g(x, y) = 0$ and

$$\nabla_{x,y} f = \lambda \, \nabla_{x,y} g$$

for some $\lambda$, where

$$\nabla_{x,y} f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right), \qquad \nabla_{x,y} g = \left( \frac{\partial g}{\partial x}, \frac{\partial g}{\partial y} \right)$$

are the respective gradients. The constant $\lambda$ is required because although the two gradient vectors are parallel, the magnitudes of the gradient vectors are generally not equal. This constant is called the Lagrange multiplier. (In some conventions $\lambda$ is preceded by a minus sign.)
Notice that this method also solves the second possibility, that $f$ is level: if $f$ is level, then its gradient is zero, and setting $\lambda = 0$ is a solution regardless of $\nabla_{x,y} g$.
To incorporate these conditions into one equation, we introduce an auxiliary function

$$\mathcal{L}(x, y, \lambda) \equiv f(x, y) + \lambda \cdot g(x, y),$$

and solve

$$\nabla_{x,y,\lambda} \mathcal{L}(x, y, \lambda) = 0.$$
Note that this amounts to solving three equations in three unknowns. This is the method of Lagrange multipliers.
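As a small worked instance of these three equations in three unknowns, the following sketch (assuming SymPy is available) maximizes $f(x, y) = x + y$ on the unit circle $g(x, y) = x^2 + y^2 - 1 = 0$:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y                      # objective
g = x**2 + y**2 - 1            # constraint g(x, y) = 0

L = f + lam * g                # auxiliary (Lagrangian) function
# Three equations in three unknowns: dL/dx = dL/dy = dL/dlambda = 0.
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
for s in stationary:
    print(s, '  f =', f.subs(s))
```

The two stationary points are $(\pm\sqrt{2}/2, \pm\sqrt{2}/2)$; comparing $f$ at each identifies the maximum, illustrating why candidates must still be compared afterwards.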
Note that $\nabla_{\lambda} \mathcal{L}(x, y, \lambda) = 0$ implies $g(x, y) = 0$, as the partial derivative of $\mathcal{L}$ with respect to $\lambda$ is $g(x, y)$.
To summarize,

$$\nabla_{x,y,\lambda} \mathcal{L}(x, y, \lambda) = 0 \iff \begin{cases} \nabla_{x,y} f(x, y) = -\lambda \, \nabla_{x,y} g(x, y) \\ g(x, y) = 0 \end{cases}$$
The method generalizes readily to functions of $n$ variables:

$$\nabla_{x_1, \dots, x_n, \lambda} \mathcal{L}(x_1, \dots, x_n, \lambda) = 0,$$

which amounts to solving $n + 1$ equations in $n + 1$ unknowns.
The constrained extrema of $f$ are critical points of the Lagrangian $\mathcal{L}$, but they are not necessarily local extrema of $\mathcal{L}$ (see below).
One may reformulate the Lagrangian as a Hamiltonian, in which case the solutions are local minima for the Hamiltonian. This is done in optimal control theory, in the form of Pontryagin's minimum principle.
The fact that solutions of the method of Lagrange multipliers are not necessarily extrema of the Lagrangian, also poses difficulties for numerical optimization. This can be addressed by minimizing the magnitude of the gradient of the Lagrangian, as these minima are the same as the zeros of the magnitude, as illustrated in Example 5: Numerical optimization.
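A minimal sketch of that idea, assuming SciPy is available: rather than seeking extrema of the Lagrangian directly, we minimize the squared magnitude of its gradient, whose minima coincide with its zeros, i.e. the stationary points (toy problem and starting point chosen by hand).

```python
import numpy as np
from scipy.optimize import minimize

def grad_L_sq(v):
    """Squared gradient magnitude of L = f + lam*g for f = x + y, g = x^2 + y^2 - 1."""
    x, y, lam = v
    dLdx = 1 + 2 * lam * x
    dLdy = 1 + 2 * lam * y
    dLdlam = x**2 + y**2 - 1
    return dLdx**2 + dLdy**2 + dLdlam**2

result = minimize(grad_L_sq, x0=np.array([1.0, 0.0, -1.0]))
print(result.x)   # converges near (0.707, 0.707, -0.707)
```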
Multiple constraints
The method of Lagrange multipliers can be extended to solve problems with multiple constraints using a similar argument. Consider a paraboloid subject to two line constraints that intersect at a single point. As the only feasible solution, this point is obviously a constrained extremum. However, the level set of $f$ is clearly not parallel to either constraint at the intersection point (see Figure 3); instead, the gradient of $f$ is a linear combination of the two constraints' gradients. In the case of multiple constraints, that will be what we seek in general: the method of Lagrange seeks points not at which the gradient of $f$ is necessarily a multiple of any single constraint's gradient, but at which it is a linear combination of all the constraints' gradients.
Concretely, suppose we have $M$ constraints and are walking along the set of points satisfying $g_i(\mathbf{x}) = 0$, $i = 1, \dots, M$. Every point $\mathbf{x}$ on the contour of a given constraint function $g_i$ has a space of allowable directions: the space of vectors perpendicular to $\nabla g_i(\mathbf{x})$. The set of directions that are allowed by all constraints is thus the space of directions perpendicular to all of the constraints' gradients. Denote this space of allowable moves by $A$ and denote the span of the constraints' gradients by $S$. Then $A = S^{\perp}$, the space of vectors perpendicular to every element of $S$.
We are still interested in finding points where $f$ does not change as we walk, since these points might be (constrained) extrema. We therefore seek $\mathbf{x}$ such that any allowable direction of movement away from $\mathbf{x}$ is perpendicular to $\nabla f(\mathbf{x})$ (otherwise we could increase $f$ by moving along that allowable direction). In other words, $\nabla f(\mathbf{x}) \in A^{\perp} = S$. Thus there are scalars $\lambda_1, \lambda_2, \dots, \lambda_M$ such that

$$\nabla f(\mathbf{x}) = \sum_{k=1}^{M} \lambda_k \, \nabla g_k(\mathbf{x}).$$
These scalars are the Lagrange multipliers. We now have $M$ of them, one for every constraint.
As before, we introduce an auxiliary function

$$\mathcal{L}(x_1, \dots, x_n, \lambda_1, \dots, \lambda_M) = f(x_1, \dots, x_n) - \sum_{k=1}^{M} \lambda_k \, g_k(x_1, \dots, x_n)$$

and solve

$$\nabla_{x_1, \dots, x_n, \lambda_1, \dots, \lambda_M} \mathcal{L}(x_1, \dots, x_n, \lambda_1, \dots, \lambda_M) = 0,$$

which amounts to solving $n + M$ equations in $n + M$ unknowns.
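A small worked instance with $n = 2$ variables and $M = 2$ constraints, assuming SymPy: minimizing a paraboloid subject to two line constraints that intersect at a single point, as in the Figure 3 discussion above.

```python
import sympy as sp

x, y, l1, l2 = sp.symbols('x y lambda1 lambda2', real=True)
f = x**2 + y**2                       # paraboloid
g1 = x + y - 1                        # first line constraint
g2 = x - y                            # second line constraint
L = f - l1 * g1 - l2 * g2             # auxiliary function with M = 2 multipliers

eqs = [sp.diff(L, v) for v in (x, y, l1, l2)]   # n + M = 4 equations
print(sp.solve(eqs, [x, y, l1, l2], dict=True))
# [{x: 1/2, y: 1/2, lambda1: 1, lambda2: 0}]
```

At the feasible point $(1/2, 1/2)$ the gradient of $f$ is reproduced by the combination $1 \cdot \nabla g_1 + 0 \cdot \nabla g_2$, matching the linear-combination picture above.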
The constraint qualification assumption when there are multiple constraints is that the constraint gradients at the relevant point are linearly independent.
Modern formulation via differentiable manifolds
The problem of finding the local maxima and minima subject to constraints can be generalized to finding local maxima and minima on a differentiable manifold $M$. In what follows, it is not necessary that $M$ be a Euclidean space, or even a Riemannian manifold. All appearances of the gradient $\nabla$ (which depends on a choice of Riemannian metric) can be replaced with the exterior derivative $\mathrm{d}$.
Single constraint
Let $M$ be a smooth manifold of dimension $m$. Suppose that we wish to find the stationary points $x$ of a smooth function $f \colon M \to \mathbb{R}$ when restricted to the submanifold $N$ defined by $g(x) = 0$, where $g \colon M \to \mathbb{R}$ is a smooth function for which $0$ is a regular value.

Let $\mathrm{d}f$ and $\mathrm{d}g$ be the exterior derivatives of $f$ and $g$. Stationarity for the restriction $f|_N$ at $x \in N$ means $\mathrm{d}(f|_N)_x = 0$. Equivalently, the kernel $\ker(\mathrm{d}f_x)$ contains $T_x N = \ker(\mathrm{d}g_x)$. In other words, $\mathrm{d}f_x$ and $\mathrm{d}g_x$ are proportional 1-forms. For this it is necessary and sufficient that the following system of $\tfrac{1}{2} m(m-1)$ equations holds:

$$\mathrm{d}f_x \wedge \mathrm{d}g_x = 0 \in \Lambda^2(T_x^{*} M),$$

where $\wedge$ denotes the exterior product. The stationary points $x$ are the solutions of the above system of equations plus the constraint $g(x) = 0$. Note that the $\tfrac{1}{2} m(m-1)$ equations are not independent, since the left-hand side of the equation belongs to the subvariety of $\Lambda^2(T_x^{*} M)$ consisting of decomposable elements.

In this formulation, it is not necessary to explicitly find the Lagrange multiplier, a number $\lambda$ such that $\mathrm{d}f_x = \lambda \cdot \mathrm{d}g_x$.
Multiple constraints
Let $M$ and $f$ be as in the above section regarding the case of a single constraint. Rather than the function $g$ described there, now consider a smooth function $G : M \to \mathbb{R}^p$ (with $p > 1$), with component functions $g_i : M \to \mathbb{R}$, for which $0 \in \mathbb{R}^p$ is a regular value. Let $N$ be the submanifold of $M$ defined by $G(x) = 0$.
$x$ is a stationary point of $f|_N$ if and only if $\ker(\mathrm{d}f_x)$ contains $\ker(\mathrm{d}G_x)$. For convenience let $L_x = \mathrm{d}f_x$ and $K_x = \mathrm{d}G_x$, where $\mathrm{d}G$ denotes the tangent map or Jacobian $TM \to T\mathbb{R}^p$ ($T_x \mathbb{R}^p$ can be canonically identified with $\mathbb{R}^p$). The subspace $\ker(K_x)$ has dimension smaller than that of $\ker(L_x)$, namely $\dim(\ker(L_x)) = m - 1$ and $\dim(\ker(K_x)) = m - p$, and $\ker(K_x)$ belongs to $\ker(L_x)$ if and only if $L_x \in T_x^* M$ belongs to the image of $K_x^* : \mathbb{R}^{p*} \to T_x^* M$. Computationally speaking, the condition is that $L_x$ belongs to the row space of the matrix of $K_x$, or equivalently the column space of the matrix of $K_x^*$ (the transpose). If $\omega_x \in \Lambda^p(T_x^* M)$ denotes the exterior product of the columns of the matrix of $K_x^*$, the stationary condition for $f|_N$ at $x$ becomes
$$L_x \wedge \omega_x = 0 \in \Lambda^{p+1}(T_x^* M).$$
Once again, in this formulation it is not necessary to explicitly find the Lagrange multipliers, the numbers $\lambda_1, \dots, \lambda_p$ such that $\mathrm{d}f_x = \sum_{k=1}^{p} \lambda_k \, \mathrm{d}(g_k)_x$.
Interpretation of the Lagrange multipliers
In this section, we modify the constraint equations from the form $g_i(\mathbf{x}) = 0$ to the form $g_i(\mathbf{x}) = c_i$, where the $c_i$ are real constants that are considered to be additional arguments of the Lagrangian expression $\mathcal{L}$.
Often the Lagrange multipliers have an interpretation as some quantity of interest. For example, by parametrising the constraint's contour line, that is, if the Lagrangian expression is
$$\mathcal{L}(x_1, x_2, \dots; \lambda_1, \lambda_2, \dots; c_1, c_2, \dots) = f(x_1, x_2, \dots) + \lambda_1(c_1 - g_1(x_1, x_2, \dots)) + \lambda_2(c_2 - g_2(x_1, x_2, \dots)) + \cdots$$
then
$$\frac{\partial \mathcal{L}}{\partial c_k} = \lambda_k.$$
So, $\lambda_k$ is the rate of change of the quantity being optimized as a function of the constraint parameter.
As examples, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy. Thus, the force on a particle due to a scalar potential, $F = -\nabla V$, can be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory.
In control theory this is formulated instead as costate equations.
Moreover, by the envelope theorem the optimal value of a Lagrange multiplier has an interpretation as the marginal effect of the corresponding constraint constant upon the optimal attainable value of the original objective function: if we denote values at the optimum with a star ($\star$), then it can be shown that
$$\frac{\mathrm{d} f\left(x_1^\star(c_1, c_2, \dots), x_2^\star(c_1, c_2, \dots), \dots\right)}{\mathrm{d} c_k} = \lambda_k^\star.$$
For example, in economics the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the change in the optimal value of the objective function (profit) due to the relaxation of a given constraint (e.g. through a change in income); in such a context $\lambda_k^\star$ is the marginal cost of the constraint, and is referred to as the shadow price.
Sufficient conditions
Sufficient conditions for a constrained local maximum or minimum can be stated in terms of a sequence of principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian matrix of second derivatives of the Lagrangian expression.
Examples
Example 1
Suppose we wish to maximize $f(x, y) = x + y$ subject to the constraint $x^2 + y^2 = 1$. The feasible set is the unit circle, and the level sets of $f$ are diagonal lines (with slope −1), so we can see graphically that the maximum occurs at $\left(\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}\right)$ and that the minimum occurs at $\left(-\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2}\right)$.
For the method of Lagrange multipliers, the constraint is
$$g(x, y) = x^2 + y^2 - 1 = 0,$$
hence the Lagrangian function,
$$\mathcal{L}(x, y, \lambda) = f(x, y) + \lambda \cdot g(x, y) = x + y + \lambda(x^2 + y^2 - 1),$$
is a function that is equivalent to $f(x, y)$ when $g(x, y)$ is set to $0$.
Now we can calculate the gradient:
$$\nabla_{x, y, \lambda} \mathcal{L}(x, y, \lambda) = \left(1 + 2\lambda x, \; 1 + 2\lambda y, \; x^2 + y^2 - 1\right)$$
and therefore:
$$\nabla_{x, y, \lambda} \mathcal{L}(x, y, \lambda) = 0 \iff \begin{cases} 1 + 2\lambda x = 0 \\ 1 + 2\lambda y = 0 \\ x^2 + y^2 - 1 = 0 \end{cases}$$
Notice that the last equation is the original constraint.
The first two equations yield $x = y = -\frac{1}{2\lambda}$, with $\lambda \neq 0$.
By substituting into the last equation we have $\frac{1}{4\lambda^2} + \frac{1}{4\lambda^2} - 1 = 0$,
so $\lambda = \pm\frac{1}{\sqrt{2}}$,
which implies that the stationary points of $\mathcal{L}$ are $\left(\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}, -\tfrac{1}{\sqrt{2}}\right)$ and $\left(-\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2}, \tfrac{1}{\sqrt{2}}\right)$.
Evaluating the objective function at these points yields $f\left(\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}\right) = \sqrt{2}$ and $f\left(-\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2}\right) = -\sqrt{2}$.
Thus the constrained maximum is $\sqrt{2}$ and the constrained minimum is $-\sqrt{2}$.
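The computation above can also be checked mechanically. The following SymPy snippet is a sketch of my own, not part of the original example; it solves the same three equations in three unknowns:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x + y + lam*(x**2 + y**2 - 1)   # the Lagrangian from Example 1

# Solve grad L = 0: three equations in three unknowns.
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)],
                      (x, y, lam), dict=True)
for s in stationary:
    print(s, ' f =', sp.simplify((x + y).subs(s)))
# Expected: f = sqrt(2) at (sqrt(2)/2, sqrt(2)/2, -1/sqrt(2)) and
# f = -sqrt(2) at the antipodal point, matching the text.
```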
Example 2
Now we modify the objective function of Example 1 so that we minimize $f(x, y) = (x + y)^2$ instead of $f(x, y) = x + y$, again along the circle $g(x, y) = x^2 + y^2 - 1 = 0$. Now the level sets of $f$ are still lines of slope −1, and the points on the circle tangent to these level sets are again $\left(\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}\right)$ and $\left(-\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2}\right)$. These tangency points are maxima of $f$.
On the other hand, the minima occur on the level set for $f = 0$ (since by its construction $f$ cannot take negative values), at $\left(\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2}\right)$ and $\left(-\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2}\right)$, where the level curves of $f$ are not tangent to the constraint. The condition that $\nabla_{x, y} f = \lambda \, \nabla_{x, y} g$ correctly identifies all four points as extrema; the minima are characterized by $\lambda = 0$ and the maxima by $\lambda = 2$.
Example 3
This example deals with more strenuous calculations, but it is still a single constraint problem.
Suppose one wants to find the maximum values of
$$f(x, y) = x^2 y$$
with the condition that the $x$- and $y$-coordinates lie on the circle around the origin with radius $\sqrt{3}$. That is, subject to the constraint
$$g(x, y) = x^2 + y^2 - 3 = 0.$$
As there is just a single constraint, there is a single multiplier, say $\lambda$.
The constraint $g(x, y)$ is identically zero on the circle of radius $\sqrt{3}$. Any multiple of $g(x, y)$ may be added to $f(x, y)$ leaving $f(x, y)$ unchanged in the region of interest (on the circle where our original constraint is satisfied).
Applying the ordinary Lagrange multiplier method yields
$$\mathcal{L}(x, y, \lambda) = x^2 y + \lambda(x^2 + y^2 - 3),$$
from which the gradient can be calculated:
$$\nabla_{x, y, \lambda} \mathcal{L}(x, y, \lambda) = \left(2xy + 2\lambda x, \; x^2 + 2\lambda y, \; x^2 + y^2 - 3\right).$$
And therefore:
$$\nabla_{x, y, \lambda} \mathcal{L}(x, y, \lambda) = 0 \iff \begin{cases} 2xy + 2\lambda x = 0 & \text{(i)} \\ x^2 + 2\lambda y = 0 & \text{(ii)} \\ x^2 + y^2 - 3 = 0 & \text{(iii)} \end{cases}$$
(iii) is just the original constraint. (i) implies $x = 0$ or $\lambda = -y$. If $x = 0$ then $y = \pm\sqrt{3}$ by (iii), and consequently $\lambda = 0$ from (ii). If $\lambda = -y$, substituting this into (ii) yields $x^2 = 2y^2$. Substituting this into (iii) and solving for $y$ gives $y = \pm 1$. Thus there are six critical points of $\mathcal{L}$:
$$(\sqrt{2}, 1), \; (-\sqrt{2}, 1), \; (\sqrt{2}, -1), \; (-\sqrt{2}, -1), \; (0, \sqrt{3}), \; (0, -\sqrt{3}).$$
Evaluating the objective at these points, one finds that
$$f(\pm\sqrt{2}, 1) = 2, \qquad f(\pm\sqrt{2}, -1) = -2, \qquad f(0, \pm\sqrt{3}) = 0.$$
Therefore, the objective function attains the global maximum (subject to the constraints) at $(\pm\sqrt{2}, 1)$ and the global minimum at $(\pm\sqrt{2}, -1)$. The point $(0, \sqrt{3})$ is a local minimum of $f$ and $(0, -\sqrt{3})$ is a local maximum of $f$, as may be determined by consideration of the Hessian matrix of $\mathcal{L}(x, y, 0)$.
Note that while $(\sqrt{2}, 1, -1)$ is a critical point of $\mathcal{L}$, it is not a local extremum of $\mathcal{L}$. We have
$$\mathcal{L}(\sqrt{2} + \varepsilon, 1, -1 + \delta) = 2 + \delta\left(\varepsilon^2 + 2\sqrt{2}\,\varepsilon\right).$$
Given any neighbourhood of $(\sqrt{2}, 1, -1)$, one can choose a small positive $\varepsilon$ and a small $\delta$ of either sign to get $\mathcal{L}$ values both greater and less than $2$. This can also be seen from the Hessian matrix of $\mathcal{L}$ evaluated at this point (or indeed at any of the critical points), which is an indefinite matrix. Each of the critical points of $\mathcal{L}$ is a saddle point of $\mathcal{L}$.
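The claimed indefiniteness can be verified numerically. The following NumPy sketch is my own construction, with the Hessian entries of $\mathcal{L}(x, y, \lambda) = x^2 y + \lambda(x^2 + y^2 - 3)$ computed by hand; it checks that the Hessian at $(\sqrt{2}, 1, -1)$ has eigenvalues of both signs:

```python
import numpy as np

x, y, lam = np.sqrt(2.0), 1.0, -1.0

# Hessian of L with respect to (x, y, lambda), computed by hand:
#   L_xx = 2*y + 2*lam, L_xy = 2*x, L_x_lam = 2*x,
#   L_yy = 2*lam,       L_y_lam = 2*y, L_lam_lam = 0.
H = np.array([
    [2*y + 2*lam, 2*x,   2*x],
    [2*x,         2*lam, 2*y],
    [2*x,         2*y,   0.0],
])

eigvals = np.linalg.eigvalsh(H)
print(eigvals)  # mixed signs => indefinite => saddle point of L
assert eigvals.min() < 0 < eigvals.max()
```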
Example 4 – Entropy
Suppose we wish to find the discrete probability distribution on the points $\{x_1, x_2, \dots, x_n\}$ with maximal information entropy. This is the same as saying that we wish to find the least structured probability distribution on the points $\{x_1, x_2, \dots, x_n\}$. In other words, we wish to maximize the Shannon entropy equation:
$$f(p_1, p_2, \dots, p_n) = -\sum_{j=1}^{n} p_j \log_2 p_j.$$
For this to be a probability distribution the sum of the probabilities $p_i$ at each point $x_i$ must equal 1, so our constraint is:
$$g(p_1, p_2, \dots, p_n) = \sum_{j=1}^{n} p_j = 1.$$
We use Lagrange multipliers to find the point of maximum entropy, $\vec{p}^{\,*}$, across all discrete probability distributions $\vec{p}$ on $\{x_1, x_2, \dots, x_n\}$. We require that:
$$\left.\frac{\partial}{\partial \vec{p}}\left(f + \lambda(g - 1)\right)\right|_{\vec{p} = \vec{p}^{\,*}} = 0,$$
which gives a system of $n$ equations, for $k = 1, \dots, n$, such that:
$$\left.\frac{\partial}{\partial p_k}\left\{-\left(\sum_{j=1}^{n} p_j \log_2 p_j\right) + \lambda\left(\sum_{j=1}^{n} p_j - 1\right)\right\}\right|_{p_k = p_k^*} = 0.$$
Carrying out the differentiation of these $n$ equations, we get
$$-\left(\frac{1}{\ln 2} + \log_2 p_k^*\right) + \lambda = 0.$$
This shows that all $p_k^*$ are equal (because they depend on $\lambda$ only). By using the constraint
$$\sum_j p_j = 1,$$
we find
$$p_k^* = \frac{1}{n}.$$
Hence, the uniform distribution is the distribution with the greatest entropy, among distributions on $n$ points.
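A quick numerical cross-check is possible. The sketch below is my own setup, using SciPy's SLSQP optimizer with an explicit equality constraint rather than a multiplier computation; for $n = 5$ it should recover the uniform distribution:

```python
import numpy as np
from scipy.optimize import minimize

n = 5

def neg_entropy(p):
    # Minimizing the negative entropy maximizes the entropy.
    return np.sum(p * np.log2(p))

constraint = {'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0}
bounds = [(1e-9, 1.0)] * n             # keep probabilities positive
p0 = np.random.dirichlet(np.ones(n))   # arbitrary starting distribution

result = minimize(neg_entropy, p0, bounds=bounds,
                  constraints=[constraint], method='SLSQP')
print(result.x)  # approximately [0.2, 0.2, 0.2, 0.2, 0.2]
```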
Example 5 – Numerical optimization
The critical points of Lagrangians occur at saddle points, rather than at local maxima (or minima). Unfortunately, many numerical optimization techniques, such as hill climbing, gradient descent, some of the quasi-Newton methods, among others, are designed to find local maxima (or minima) and not saddle points. For this reason, one must either modify the formulation to ensure that it's a minimization problem (for example, by extremizing the square of the gradient of the Lagrangian as below), or else use an optimization technique that finds stationary points (such as Newton's method without an extremum seeking line search) and not necessarily extrema.
As a simple example, consider the problem of finding the value of $x$ that minimizes $f(x) = x^2$, constrained such that $x^2 = 1$. (This problem is somewhat untypical because there are only two values that satisfy this constraint, but it is useful for illustration purposes because the corresponding unconstrained function can be visualized in three dimensions.)
Using Lagrange multipliers, this problem can be converted into an unconstrained optimization problem:
$$\mathcal{L}(x, \lambda) = x^2 + \lambda(x^2 - 1).$$
The two critical points occur at saddle points where $x = 1$ and $x = -1$.
In order to solve this problem with a numerical optimization technique, we must first transform this problem such that the critical points occur at local minima. This is done by computing the magnitude of the gradient of the unconstrained optimization problem.
First, we compute the partial derivative of the unconstrained problem with respect to each variable:
$$\frac{\partial \mathcal{L}}{\partial x} = 2x + 2\lambda x, \qquad \frac{\partial \mathcal{L}}{\partial \lambda} = x^2 - 1.$$
If the target function is not easily differentiable, the differential with respect to each variable can be approximated as
$$\frac{\partial \mathcal{L}}{\partial x} \approx \frac{\mathcal{L}(x + \varepsilon, \lambda) - \mathcal{L}(x, \lambda)}{\varepsilon}, \qquad \frac{\partial \mathcal{L}}{\partial \lambda} \approx \frac{\mathcal{L}(x, \lambda + \varepsilon) - \mathcal{L}(x, \lambda)}{\varepsilon},$$
where $\varepsilon$ is a small value.
Next, we compute the magnitude of the gradient, which is the square root of the sum of the squares of the partial derivatives:
$$h(x, \lambda) = \sqrt{(2x + 2\lambda x)^2 + (x^2 - 1)^2}.$$
(Since magnitude is always non-negative, optimizing over the squared magnitude is equivalent to optimizing over the magnitude. Thus, the "square root" may be omitted from these equations with no expected difference in the results of optimization.)
The critical points of $h$ occur at $x = 1$ and $x = -1$, just as in $\mathcal{L}$. Unlike the critical points in $\mathcal{L}$, however, the critical points in $h$ occur at local minima, so numerical optimization techniques can be used to find them.
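A minimal sketch of this procedure follows; the function name and starting point are my own choices, not from the text. It feeds the squared gradient magnitude to SciPy's BFGS minimizer. Note that $h$ also has spurious stationary points (for example, anywhere along $x = 0$), so the starting point matters:

```python
from scipy.optimize import minimize

def grad_magnitude_sq(v):
    x, lam = v
    dL_dx = 2*x + 2*lam*x   # partial derivative of L with respect to x
    dL_dlam = x**2 - 1      # partial derivative of L with respect to lambda
    return dL_dx**2 + dL_dlam**2

# Start near the x = 1 basin so the minimizer converges to a true
# critical point of L rather than a spurious minimum of h along x = 0.
result = minimize(grad_magnitude_sq, x0=[1.5, -0.5], method='BFGS')
print(result.x)  # approximately [1, -1]
```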
Applications
Control theory
In optimal control theory, the Lagrange multipliers are interpreted as costate variables, and the problem is reformulated as the minimization of the Hamiltonian, in accordance with Pontryagin's minimum principle.
Nonlinear programming
The Lagrange multiplier method has several generalizations. In nonlinear programming there are several multiplier rules, e.g. the Carathéodory–John Multiplier Rule and the Convex Multiplier Rule, for inequality constraints.
Economics
In many models in mathematical economics, such as general equilibrium models, consumer behavior is implemented as utility maximization and firm behavior as profit maximization, both entities being subject to constraints such as budget constraints and production constraints. An optimal solution is usually determined by maximizing some objective function, with the constraints enforced using Lagrange multipliers.
Power systems
Methods based on Lagrange multipliers have applications in power systems, e.g. in distributed-energy-resources (DER) placement and load shedding.
Safe reinforcement learning
The method of Lagrange multipliers applies to constrained Markov decision processes.
It naturally produces gradient-based primal-dual algorithms in safe reinforcement learning.
Normalized solutions
In PDE problems with constraints, i.e., the study of the properties of normalized solutions, Lagrange multipliers play an important role.
| Mathematics | Multivariable and vector calculus | null |
160067 | https://en.wikipedia.org/wiki/Tsetse%20fly | Tsetse fly | Tsetse ( , or ) (sometimes spelled tzetze; also known as tik-tik flies) are large, biting flies that inhabit much of tropical Africa. Tsetse flies include all the species in the genus Glossina, which are placed in their own family, Glossinidae. The tsetse is an obligate parasite, which lives by feeding on the blood of vertebrate animals. Tsetse have been extensively studied because of their role in transmitting disease. They have pronounced economic and public health impacts in sub-Saharan Africa as the biological vectors of trypanosomes, causing human and animal trypanosomiasis.
Tsetse can be distinguished from other large flies by two easily observed features: first, tsetse fold their wings over their abdomens completely when they are resting (so that one wing rests directly on top of the other); second, tsetse have a long proboscis, extending directly forward, which is attached by a distinct bulb to the bottom of their heads.
Fossilized tsetse have been recovered from Paleogene-aged rocks in the United States and Germany. Twenty-three extant species of tsetse flies are known from the African continent as well as the Arabian Peninsula.
Terminology
Tsetse without the "fly" has become more common in English, particularly in the scientific and development communities.
The word is pronounced tseh-tseh in the Sotho languages and is easily rendered in other African languages. During World War II, a British de Havilland antisubmarine aircraft was known as the Tsetse Mosquito.
Biology
The biology of tsetse is relatively well understood by entomologists. They have been extensively studied because of their medical, veterinary, and economic importance, because the flies can be raised in a laboratory, and because they are relatively large, facilitating their analysis.
Morphology
Tsetse flies can be seen as independent individuals in three forms: as third-instar larvae, pupae, and adults.
Tsetse first become separate from their mothers during the third larval instar, during which they have the typical appearance of maggots. However, this life stage is short, lasting at most a few hours, and is almost never observed outside of the laboratory.
Tsetse next develop a hard external case, the puparium, and become pupae: small, hard-shelled oblongs with two distinctively small, dark lobes at the tail (breathing) end. Tsetse pupae are under long. Within the puparial shell, tsetse complete the last two larval instars and the pupal stage.
At the end of the pupal stage, tsetse emerge as adult flies. The adults are relatively large flies, with lengths of , and have a recognizable shape, or bauplan, which makes them easy to distinguish from other flies. Tsetse have large heads, distinctly separated eyes, and unusual antennae. The thorax is quite large, while the abdomen is wider, rather than elongated, and shorter than the wings.
Four characteristics collectively separate adult tsetse from other kinds of flies:
Anatomy
Like all other insects, tsetse flies have an adult body comprising three visibly distinct parts: the head, the thorax, and the abdomen.
The head has large eyes, distinctly separated on each side, and a distinct, forward-pointing proboscis attached underneath by a large bulb. The thorax is large, made of three fused segments. Three pairs of legs are attached to the thorax, as are two wings and two halteres. The abdomen is short but wide and changes dramatically in volume during feeding.
The internal anatomy of the tsetse is fairly typical of the insects; the crop is large enough to accommodate a huge increase in size during feeding, as tsetse can take a blood meal equal in weight to themselves. The dipteran crop is heavily understudied, with Glossina being one of the few genera having relatively reliable information available: Moloo and Kutuza 1970 for G. brevipalpis (including its innervation) and Langley 1965 for G. morsitans. The reproductive tract of adult females includes a uterus, which can become large enough to hold the third-instar larva at the end of each pregnancy.
Most tsetse flies are, physically, very tough. Houseflies, and even horseflies, are easily killed with a flyswatter, for example; a great deal of effort is needed to crush a tsetse fly.
Life cycle
Tsetse have an unusual life cycle, which may be due to the richness of their blood food source. A female fertilizes only one egg at a time; she retains each egg within her uterus, the offspring developing internally during the first three larval stages, an adaptation called adenotrophic viviparity. During this time, the female feeds the developing offspring with a milky substance secreted by a modified gland in the uterus.
In the third larval stage, the tsetse larvae leave the uterus and begin an independent life. The newly-birthed larvae crawl into the ground and develop a hard outer shell (called the puparial case), within which they complete their morphological transformations into adult flies.
The larval life stage has a variable duration, generally 20 to 30 days, and the larvae must rely on stored resources during this time. The importance of the richness and quality of blood to this stage can be seen; all tsetse development (prior to emerging from the puparial case as a full adult) occurs without feeding, with only the nutrition provided by the mother fly. She must get enough energy for her own survival (in addition to the needs of her developing offspring), as well as for the stored resources that her offspring will require until they emerge as adults.
Technically, these insects undergo the standard development process of insects, beginning with oocyte formation, ovulation, fertilization, and development of the egg; following egg development and birth is the three larval stages, a pupal stage, and the emergence and maturation of the adult.
Hosts
Overall, Suidae are the most important hosts. Waterbuck (Kobus ellipsiprymnus) are unmolested by Glossina because they produce volatiles which act as repellents. Waterbuck odor volatiles are under testing and development as repellents to protect livestock. By species, bloodmeals are derived from:
Genetics
The genome of Glossina morsitans was sequenced in 2014.
Symbionts
Tsetse flies have at least three bacterial symbionts. The primary symbiont is Wigglesworthia (Wigglesworthia glossinidia), which lives within the fly's bacteriocytes. The second symbiont is Sodalis (Sodalis glossinidius), which lives intercellularly or intracellularly, and the third is some kind of Wolbachia.
Diseases
The salivary gland hypertrophy virus causes abnormal bleeding in the lobes of the crop of G. m. centralis and G. m. morsitans.
Systematics
Tsetse flies are members of the order Diptera, the true flies. They belong to the superfamily Hippoboscoidea, in which the tsetse's family, the Glossinidae, is one of four families of blood-feeding obligate parasites.
Up to 34 species and subspecies of tsetse flies are recognized, depending on the particular classification used.
Current classifications place all species of tsetse fly in a single genus named Glossina, with most considering the genus as the sole member of the family Glossinidae.
Species
The tsetse genus is generally split into three groups of species based on a combination of distributional, ecological, behavioral, molecular and morphological characteristics. The genus includes savannah flies, forest flies, and riverine and lacustrine flies.
Savannah flies
The 'savannah' flies: (Morsitans group, subgenus Glossina s.s.):
Glossina austeni (Newstead, 1912) patr. of Austen
Glossina longipalpis (Wiedemann, 1830)
Glossina morsitans (Westwood, 1851)
Glossina morsitans morsitans (Westwood, 1850)
Glossina morsitans submorsitans
Glossina morsitans centralis (Machado, 1970)
Glossina pallidipes (Austen, 1903)
Glossina swynnertoni (Austen, 1923) patr. of Swynnerton
Forest flies
The 'forest' flies: (Fusca group, subgenus Austenina):
Glossina brevipalpis (Newstead, 1910)
Glossina fusca (Walker, 1849)
Glossina fusca fusca (Walker, 1849)
Glossina fusca congolensis (Newstead and Evans, 1921)
Glossina fuscipleuris (Austen, 1911)
Glossina frezili (Gouteux, 1987)
Glossina haningtoni (Newstead and Evans, 1922)
Glossina longipennis (Corti, 1895)
Glossina medicorum (Austen, 1911)
Glossina nashi (Potts, 1955)
Glossina nigrofusca (Newstead, 1911)
Glossina nigrofusca nigrofusca (Newstead, 1911)
Glossina nigrofusca hopkinsi (van Emden, 1944)
Glossina severini (Newstead, 1913)
Glossina schwetzi (Newstead and Evans, 1921)
Glossina tabaniformis (Westwood, 1850)
Glossina vanhoofi (Henrard, 1952)
Riverine and lacustrine flies
The 'riverine' and 'lacustrine' flies: (Palpalis group, subgenus Nemorhina):
Glossina caliginea (Austen, 1911)
Glossina fuscipes (Newstead, 1911)
Glossina fuscipes fuscipes (Newstead, 1911)
Glossina fuscipes martinii (Zumpt, 1935)
Glossina fuscipes quanzensis (Pires, 1948)
Glossina pallicera (Bigot, 1891)
Glossina pallicera pallicera (Bigot, 1891)
Glossina pallicera newsteadi (Austen, 1929) patr. of Newstead
Glossina palpalis (Robineau-Desvoidy, 1830)
Glossina palpalis palpalis (Robineau-Desvoidy, 1830)
Glossina palpalis gambiensis (Vanderplank, 1911)
Glossina tachinoides (Westwood, 1850)
Evolutionary history
Fossil glossinids are known from the Florissant Formation in North America and the Enspel Lagerstätte of Germany, dating to the late Eocene and late Oligocene respectively.
Range
Glossina is almost entirely restricted to wooded grasslands and forested areas of the Afrotropics. As of 1990, tsetse flies were reported from a maximum latitude of approximately 15° north in Senegal (Niayes Region), to a minimum of 28.5° south in South Africa (KwaZulu-Natal Province).
Only two subspecies - G. f. fuscipes and G. m. submorsitans - are present in the very southwest of Saudi Arabia. Although Carter found G. tachinoides nearby, near Aden in southern Yemen, in 1903, there have been no confirmations since.
Trypanosomiasis
Tsetse are biological vectors of trypanosomes, meaning that in the process of feeding, they acquire and then transmit small, single-celled trypanosomes from infected vertebrate hosts to uninfected animals. Some tsetse-transmitted trypanosome species cause trypanosomiasis, an infectious disease. In humans, tsetse-transmitted trypanosomiasis is called sleeping sickness. In animals, tsetse-vectored trypanosomiases include nagana, souma (a French term which may not be a distinct condition), and surra, according to the animal infected and the trypanosome species involved. The usage is not strict, and while nagana generally refers to the disease in cattle and horses, it is commonly used for any of the animal trypanosomiases.
Trypanosomes are animal parasites, specifically protozoans of the genus Trypanosoma. These organisms are about the size of red blood cells. Different species of trypanosomes infect different hosts. They range widely in their effects on the vertebrate hosts. Some species, such as T. theileri, do not seem to cause any health problems except perhaps in animals that are already sick.
Some strains are much more virulent. Infected flies have an altered salivary composition which lowers feeding efficiency and consequently increases the feeding time, promoting trypanosome transmission to the vertebrate host. These trypanosomes are highly evolved and have developed a life cycle that requires periods in both the vertebrate and tsetse hosts.
Tsetse transmit trypanosomes in two ways, mechanical and biological transmission.
Mechanical transmission involves the direct transmission of the same individual trypanosomes taken from an infected host into an uninfected host. The name 'mechanical' reflects the similarity of this mode of transmission to mechanical injection with a syringe. Mechanical transmission requires the tsetse to feed on an infected host and acquire trypanosomes in the blood meal, and then, within a relatively short period, to feed on an uninfected host and regurgitate some of the infected blood from the first blood meal into the tissue of the uninfected animal. This type of transmission occurs most frequently when tsetse are interrupted during a blood meal and attempt to satiate themselves with another meal. Other flies, such as horse-flies, can also cause mechanical transmission of trypanosomes.
Biological transmission requires a period of incubation of the trypanosomes within the tsetse host. The term 'biological' is used because trypanosomes must reproduce through several generations inside the tsetse host during the period of incubation (development within the fly is known as the extrinsic incubation period), which requires extreme adaptation of the trypanosomes to their tsetse host. In this mode of transmission, trypanosomes reproduce through several generations, changing in morphology at certain periods. This mode of transmission also includes the sexual phase of the trypanosomes. Tsetse are believed to be more likely to become infected by trypanosomes during their first few blood meals. Tsetse infected by trypanosomes are thought to remain infected for the remainder of their lives. Because of the adaptations required for biological transmission, trypanosomes that can be transmitted biologically by tsetse cannot be transmitted in this manner by other insects.
The relative importance of these two modes of transmission for the propagation of tsetse-vectored trypanosomiases is not yet well understood. However, since the sexual phase of the trypanosome life cycle occurs within the tsetse host, biological transmission is a required step in the life cycle of the tsetse-vectored trypanosomes.
The cycle of biological transmission of trypanosomiasis involves two phases, one inside the tsetse host and the other inside the vertebrate host. Trypanosomes are not passed between a pregnant tsetse and her offspring, so all newly emerged tsetse adults are free of infection. An uninfected fly that feeds on an infected vertebrate animal may acquire trypanosomes in its proboscis or gut. These trypanosomes, depending on the species, may remain in place, move to a different part of the digestive tract, or migrate through the tsetse body into the salivary glands. When an infected tsetse bites a susceptible host, the fly may regurgitate part of a previous blood meal that contains trypanosomes, or may inject trypanosomes in its saliva. Inoculation must contain a minimum of 300 to 450 individual trypanosomes to be successful, and may contain up to 40,000 cells.
In the case of T. b. brucei infecting G. p. gambiensis, during this time the parasite changes the proteome contents of the fly's head. This may be the reason/a reason for the behavioral changes seen, especially the unnecessarily increased feeding frequency, which increases transmission opportunities. This may be due in part to the altered glucose metabolism observed, causing a perceived need for more calories. (The metabolic change, in turn, being due to complete absence of glucose-6-phosphate 1-dehydrogenase in infected flies.) Monoamine neurotransmitter synthesis is also altered: Production of aromatic L-amino acid decarboxylase - involved in dopamine and serotonin synthesis - and α-methyldopa hypersensitive protein was induced. This is very similar to the alterations in other dipteran vectors' head proteomes under infection by other eukaryotic parasites of mammals, found in another study by the same team in the same year.
The trypanosomes are injected into vertebrate muscle tissue, but make their way, first into the lymphatic system, then into the bloodstream, and eventually into the brain. The disease causes the swelling of the lymph glands, emaciation of the body, and eventually leads to death. Uninfected tsetse may bite the infected animal prior to its death and acquire the disease, thereby closing the transmission cycle.
Disease hosts and vectors
The tsetse-vectored trypanosomiases affect various vertebrate species including humans, antelopes, bovine cattle, camels, horses, sheep, goats, and pigs. These diseases are caused by several different trypanosome species that may also survive in wild animals such as crocodiles and monitor lizards. The diseases have different distributions across the African continent, so are transmitted by different species. This table summarizes this information:
In humans
Human African trypanosomiasis, also called sleeping sickness, is caused by trypanosomes of the species Trypanosoma brucei. This disease is invariably fatal if left untreated, but can almost always be cured with current medicines if the disease is diagnosed early enough.
Sleeping sickness begins with a tsetse bite leading to an inoculation in the subcutaneous tissue. The infection moves into the lymphatic system, leading to a characteristic swelling of the lymph glands called Winterbottom's sign. The infection progresses into the blood stream and eventually crosses into the central nervous system and invades the brain leading to extreme lethargy and eventually to death.
The species Trypanosoma brucei, which causes the disease, has often been subdivided into three subspecies that were identified based either on the vertebrate hosts which the strain could infect or on the virulence of the disease in humans. The trypanosomes infectious to animals and not to humans were named Trypanosoma brucei brucei. Strains that infected humans were divided into two subspecies based on their different virulences: Trypanosoma brucei gambiense was thought to have a slower onset and Trypanosoma brucei rhodesiense refers to strains with a more rapid, virulent onset. This characterization has always been problematic but was the best that could be done given the knowledge of the time and the tools available for identification. A recent molecular study using restriction fragment length polymorphism analysis suggests that the three subspecies are polyphyletic, so the elucidation of the strains of T. brucei infective to humans requires a more complex explanation. Procyclins are proteins developed in the surface coating of trypanosomes whilst in their tsetse fly vector.
Other forms of human trypanosomiasis also exist but are not transmitted by tsetse. The most notable is American trypanosomiasis, known as Chagas disease, which occurs in South America, caused by Trypanosoma cruzi, and transmitted by certain insects of the Reduviidae, members of the Hemiptera.
In domestic animals
Animal trypanosomiasis, also called nagana when it occurs in bovine cattle or horses or surra when it occurs in domestic pigs, is caused by several trypanosome species. These diseases reduce the growth rate, milk productivity, and strength of farm animals, generally leading to the eventual death of the infected animals. Certain species of cattle are called trypanotolerant because they can survive and grow even when infected with trypanosomes, although they also have lower productivity rates when infected.
The course of the disease in animals is similar to the course of sleeping sickness in humans.
Trypanosoma congolense and Trypanosoma vivax are the two most important species infecting bovine cattle in sub-Saharan Africa. Trypanosoma simiae causes a virulent disease in swine.
Other forms of animal trypanosomiasis are also known from other areas of the globe, caused by different species of trypanosomes and transmitted without the intervention of the tsetse fly.
The tsetse fly vector ranges mostly in the central part of Africa.
Trypanosomiasis poses a considerable constraint on livestock agricultural development in tsetse fly-infested areas of sub-Saharan Africa, especially in West and Central Africa. International research conducted by ILRI in Nigeria, the Democratic Republic of the Congo and Kenya has shown that the N'Dama is the most resistant breed.
Control
The conquest of sleeping sickness and nagana would be of immense benefit to rural development and contribute to poverty alleviation and improved food security in sub-Saharan Africa. Human African trypanosomosis (HAT) and animal African trypanosomosis (AAT) are sufficiently important to make virtually any intervention against these diseases beneficial.
The disease can be managed by controlling the vector and thus reducing the incidence of the disease by disrupting the transmission cycle. Another tactic to manage the disease is to target the disease directly using surveillance and curative or prophylactic treatments to reduce the number of hosts that carry the disease.
Economic analysis indicates that the cost of managing trypanosomosis through the elimination of important populations of major tsetse vectors will be covered several times by the benefits of tsetse-free status. Area-wide interventions against the tsetse and trypanosomosis problem appear more efficient and profitable if sufficiently large areas, with high numbers of cattle, can be covered.
Vector control strategies can aim at either continuous suppression or eradication of target populations. Tsetse fly eradication programmes are complex and logistically demanding activities and usually involve the integration of different control tactics, such as trypanocidal drugs, impregnated treated targets (ITT), insecticide-treated cattle (ITC), aerial spraying (Sequential Aerosol Technique - SAT) and in some situations the release of sterile males (sterile insect technique – SIT). To ensure sustainability of the results, it is critical to apply the control tactics on an area-wide basis, i.e. targeting an entire tsetse population that is preferably genetically isolated.
Control techniques
Many techniques have reduced tsetse populations, with earlier, crude methods recently replaced by methods that are cheaper, more directed, and ecologically better.
Slaughter of wild animals
One early technique involved slaughtering all the wild animals tsetse fed on. For example, the island of Principe off the west coast of Africa was entirely cleared of feral pigs in the 1930s, which led to the extirpation of the fly. While the fly eventually re-invaded in the 1950s, the new population of tsetse was free from the disease.
Land clearing
Another early technique involved complete removal of brush and woody vegetation from an area. Tsetse tend to rest on the trunks of trees, so removing woody vegetation made the area inhospitable to the flies. Until about 1959 this was done by hand and so was quite time consuming. Glover et al. (1959) describe a technique they call "chain clearing", in which a chain dragged forward between two heavy vehicles does the same job much more quickly - but still at some expense. Preventing regrowth of woody vegetation requires continuous clearing efforts, which is even more expensive and only practical where large human populations are present. Moreover, the clearing of woody vegetation has come to be seen as an environmental problem more than a benefit. In the end, the technique was not widely used and has been abandoned.
Pesticide campaigns
Pesticides have been used to control tsetse starting initially during the early part of the twentieth century in localized efforts using the inorganic metal-based pesticides, expanding after the Second World War into massive aerial- and ground-based campaigns with organochlorine pesticides such as DDT applied as aerosol sprays at Ultra-Low Volume rates. Later, more targeted techniques used pour-on formulations in which advanced organic pesticides were applied directly to the backs of cattle.
Trapping
Tsetse populations can be monitored and effectively controlled using simple, inexpensive traps. These often use blue cloth, either in sheet or biconical form, since this color attracts the flies. The traps work by channeling the flies into a collection chamber, or by exposing the flies to insecticide sprayed on the cloth. Early traps mimicked the form of cattle, as tsetse are also attracted to large dark colors like the hides of cows and buffaloes. Some scientists put forward the idea that zebra have stripes, not as a camouflage in long grass, but because the black and white bands tend to confuse tsetse and prevent attack.
The use of chemicals as attractants to lure tsetse to the traps has been studied extensively in the late 20th century, but this has mostly been of interest to scientists rather than as an economically reasonable solution. Attractants studied have been those tsetse might use to find food, like carbon dioxide, octenol, and acetone—which are given off in animals' breath and distributed downwind in an odor plume. Synthetic versions of these chemicals can create artificial odor plumes. A cheaper approach is to place cattle urine in a half gourd near the trap. For large trapping efforts, additional traps are generally cheaper than expensive artificial attractants.
A special trapping method is applied in Ethiopia, where the BioFarm Consortium (ICIPE, BioVision Foundation, BEA, Helvetas, DLCO-EA, Praxis Ethiopia) applies the traps in a sustainable agriculture and rural development context (SARD). The traps are just the entry point, followed by improved farming, human health and marketing inputs. This method is in the final stage of testing (as of 2006).
Sterile insect technique
The sterile insect technique (SIT) is a form of pest control that uses ionizing radiation (gamma ray or X-ray) to sterilize male flies that are mass-produced in special rearing facilities. The sterile males are released systematically from the ground or by air in tsetse-infested areas, where they mate with wild females, which do not produce offspring. As a result, this technique can eventually eradicate populations of wild flies. SIT is among the most environmentally friendly control tactics available, and is usually applied as the final component of an integrated campaign. It has been used to subdue the populations of many other fly species including the medfly, Ceratitis capitata.
The sustainable removal of the tsetse fly is in many cases the most cost-effective way of dealing with the T&T problem resulting in major economic benefits for subsistence farmers in rural areas. Insecticide-based methods are normally very ineffective in removing the last remnants of tsetse populations, while, on the contrary, sterile males are very effective in finding and mating the last remaining females. Therefore, the integration of the SIT as the last component of an area-wide integrated approach is essential in many situations to achieve complete eradication of the different tsetse populations, particularly in areas of more dense vegetation.
A project that was implemented from 1994 to 1997 on the Island of Unguja, Zanzibar (United Republic of Tanzania), demonstrated that, after suppression of the tsetse population with insecticides, SIT completely removed the Glossina austeni Newstead population from the island. This was carried out without any understanding of the population genetics of G. austeni, but future SIT efforts can benefit from such preparation: population genetics would help to select the Glossina population to be deployed for similarity to the target population. The eradication of the tsetse fly from Unguja Island in 1997 was followed by the disappearance of AAT, which enabled farmers to integrate livestock keeping with cropping in areas where this had been impossible before. The increased livestock and crop productivity and the possibility of using animals for transport and traction significantly contributed to an increase in the quality of people's lives. Surveys in 1999, 2002, 2014, and 2015 have confirmed this success: continued absence of tsetse and nagana on the island.
In the Niayes region of Senegal, a coastal area close to Dakar, livestock keeping was difficult due to the presence of a population of Glossina palpalis gambiensis. Feasibility studies indicated that the fly population was confined to very fragmented habitats and a population genetics study indicated that the population was genetically isolated from the main tsetse belt in the south eastern part of Senegal. After completion of the feasibility studies (2006–2010), an area-wide integrated eradication campaign that included an SIT component was started in 2011, and by 2015, the Niayes region had become almost tsetse fly free. This has allowed a change of cattle breeds from lower producing trypanotolerant breeds to higher-producing foreign breeds.
The entire target area (Blocks 1, 2 and 3) has a total surface of , and the first block (northern part) can be considered free of tsetse, as intensive monitoring has failed to detect a single wild tsetse fly since 2012. The prevalence of AAT has decreased from 40–50% before the project started to less than 10% to date in blocks 1 and 2. Although insecticides are being used for fly suppression, they are applied for short periods on traps, nets and livestock, and are not spread into the environment. After the suppression activities are completed, no more insecticide is applied in the area. The removal of trypanosomosis will eliminate the need for constant prophylactic treatments of the cattle with trypanocidal drugs, therefore reducing residues of these drugs in the dung, meat and milk.
The main beneficiaries of the project are the many small holder farmers, the larger commercial farms and the consumers of meat and milk. According to a socio-economic survey and benefit cost analysis, after eradication of the tsetse farmers will be able to replace their local breeds with improved breeds and increase their annual income by €2.8 million. In addition, it is expected that the number of cattle will be reduced by 45%, which will result in reduced environmental impacts.
Societal impact
In the literature of environmental determinism, the tsetse has been linked to difficulties during early state formation for areas where the fly is prevalent. A 2012 study used population growth models, physiological data, and ethnographic data to examine pre-colonial agricultural practices and isolate the effects of the fly. A "tsetse suitability index" was developed from insect population growth, climate and geospatial data to simulate the fly's population steady state. An increase in the tsetse suitability index was associated with a statistically significant weakening of the agriculture, levels of urbanization, institutions and subsistence strategies. Results suggest that the tsetse decimated livestock populations, forcing early states to rely on slave labor to clear land for farming, and preventing farmers from taking advantage of natural animal fertilizers to increase crop production. These long-term effects may have kept population density low and discouraged cooperation between small-scale communities, thus preventing stronger nations from forming.
The authors also suggest that under a lower burden of tsetse, Africa would have developed differently. Agriculture (measured by the usage of large domesticated animals, intensive agriculture, plow use and female participation rate in agriculture) as well as institutions (measured by the appearance of indigenous slavery and levels of centralization) would have been more like those found in Eurasia. Qualitative support for this claim comes from archaeological findings; e.g., Great Zimbabwe is located in the African highlands where the fly does not occur, and represented the largest and technically most advanced precolonial structure in Southern sub-Sahara Africa.
Other authors are more skeptical that the tsetse fly had such an immense influence on African development. One conventional argument is that the tsetse fly made it difficult to use draught animals. Hence, wheeled forms of transportations were not used as well. While this is certainly true for areas with high densities of the fly, similar cases outside tsetse-suitable areas exist. While the fly definitely had a relevant influence on the adoption of new technologies in Africa, it has been contended that it does not represent the single root cause.
History
According to an article in the New Scientist, the depopulated and apparently primevally wild Africa seen in wildlife documentary films was formed in the 19th century by disease, a combination of rinderpest and the tsetse fly. Rinderpest is believed to have originated in Asia, later spreading through the transport of cattle. In 1887, the rinderpest virus was accidentally imported in livestock brought by an Italian expeditionary force to Eritrea. It spread rapidly, reaching Ethiopia by 1888, the Atlantic coast by 1892 and South Africa by 1897. Rinderpest, a cattle plague from central Asia, killed over 90% of the cattle of the pastoral peoples such as the Masai of east Africa. In South Africa, with no native immunity, most of the population – some 5.5 million domestic cattle – died. Pastoralists and farmers were left with no animals – their source of income – and farmers were deprived of their working animals for ploughing and irrigation. The pandemic coincided with a period of drought, causing widespread famine. The starving human populations died of smallpox, cholera, and typhoid, as well as African Sleeping Sickness and other endemic diseases. It is estimated that two-thirds of the Masai died in 1891.
The land was left emptied of its cattle and its people, enabling the colonial powers Germany and Britain to take over Tanzania and Kenya with little effort. With greatly reduced grazing, grassland turned rapidly to bush. The closely cropped grass sward was replaced in a few years by woody grassland and thornbush, ideal habitat for tsetse flies. Wild mammal populations increased rapidly, accompanied by the tsetse fly. Highland regions of east Africa which had been free of tsetse fly were colonised by the pest, accompanied by sleeping sickness, until then unknown in the area. Millions of people died of the disease in the early 20th century.
The areas occupied by the tsetse fly were largely barred to animal husbandry. Sleeping sickness was dubbed "the best game warden in Africa" by conservationists, who assumed that the land, empty of people and full of game animals, had always been like that. Julian Huxley of the World Wildlife Fund called the plains of east Africa "a surviving sector of the rich natural world as it was before the rise of modern man". They created numerous large reserves for hunting safaris. In 1909 the newly retired president Theodore Roosevelt went on a safari that brought over 10,000 animal carcasses to America. Later, much of the land was turned over to nature reserves and national parks such as the Serengeti, Masai Mara, Kruger and Okavango Delta. The result, across eastern and southern Africa, is a modern landscape of manmade ecosystems: farmland and pastoral land largely free of bush and tsetse fly; and bush controlled by the tsetse fly.
Although the colonial powers saw the disease as a threat to their interests, and acted accordingly to bring transmission almost to a halt in the 1960s, this improved situation led to a laxity of surveillance and management by the newly independent governments covering the same areas - and a resurgence that became a crisis again in the 1990s.
Current situation
Tsetse flies are regarded as a major cause of rural poverty in sub-Saharan Africa because they prevent mixed farming. The land infested with tsetse flies is often cultivated by people using hoes rather than more efficient draught animals because nagana, the disease transmitted by tsetse, weakens and often kills these animals. Cattle that do survive produce little milk, pregnant cows often abort their calves, and manure is not available to fertilize the worn-out soils.
The disease nagana, or African animal trypanosomiasis (AAT), causes gradual health decline in infected livestock, reduces milk and meat production, and increases abortion rates. Animals eventually succumb to the disease; annual cattle deaths caused by trypanosomiasis are estimated at 3 million, reducing annual cattle production value by US$600 million to US$1.2 billion. This has an enormous impact on the livelihood of farmers who live in tsetse-infested areas, as infected animals cannot be used to plough the land, and keeping cattle is only feasible when the animals are kept under constant prophylactic treatment with trypanocidal drugs, often with associated problems of drug resistance, counterfeited drugs, and suboptimal dosage. The overall annual direct lost potential in livestock and crop production was estimated at US$4.5–4.75 billion.
The tsetse fly inhabits a vast area of sub-Saharan Africa (mostly wet tropical forest), and many parts of this large area are fertile land that is left uncultivated, a so-called green desert not used by humans and cattle. Most of the 38 countries infested with tsetse are poor, debt-ridden and underdeveloped. Of the 38 tsetse-infested countries, 32 are low-income, food-deficit countries, 29 are least developed countries, and 30 or 34 are among the 40 most heavily indebted poor countries. Eradicating the tsetse and trypanosomiasis (T&T) problem would allow rural Africans to use these areas for animal husbandry or the cultivation of crops and hence increase food production. Only 45 million cattle, of the 172 million present in sub-Saharan Africa, are kept in tsetse-infested areas, but they are often forced into fragile ecosystems like highlands or the semiarid Sahel zone, which increases overgrazing and overuse of land for food production.
In addition to this direct impact, the presence of tsetse and trypanosomiasis discourages the use of more productive exotic and cross-bred cattle, depresses the growth and affects the distribution of livestock populations, reduces the potential opportunities for livestock and crop production (mixed farming) through less draught power to cultivate land and less manure to fertilize (in an environment-friendly way) soils for better crop production, and affects human settlements (people tend to avoid areas with tsetse flies).
Tsetse flies transmit a similar disease to humans, called African trypanosomiasis, human African trypanosomiasis (HAT) or sleeping sickness. An estimated 60-70 million people in 20 countries are at different levels of risk and only 3-4 million people are covered by active surveillance. The DALY index (disability-adjusted life years), an indicator to quantify the burden of disease, includes the impact of both the duration of life lost due to premature death and the duration of life lived with a disability. The annual burden of sleeping sickness is estimated at 2 million DALYs. Since the disease tends to affect economically active adults, the total cost to a family with a patient is about 25% of a year's income.
History of study
In East Africa, C. F. M. Swynnerton played a large role in the first half of the 20th century, conducting much of the earliest tsetse ecology research. For this, E. E. Austen named a patronymic taxon for him, G. swynnertoni, in 1922.
Resistance to trypanosomes
Tsetse flies have an arsenal of immune defenses to resist each stage of the trypanosome infectious cycle, and thus are relatively refractory to trypanosome infection. Among the host flies' defenses is the production of hydrogen peroxide, a reactive oxygen species that damages DNA. These defenses limit the population of infected flies.
| Biology and health sciences | Flies (Diptera) | null |
160223 | https://en.wikipedia.org/wiki/Media%20player%20software | Media player software | Media player software is a type of application software for playing multimedia computer files like audio and video files. Media players commonly display standard media control icons known from physical devices such as tape recorders and CD players, including play ( ), pause ( ), fast-forward (⏩️), rewind (⏪), and stop ( ) buttons. In addition, they generally have progress bars (or "playback bars"), which are sliders to locate the current position in the duration of the media file.
Mainstream operating systems have at least one default media player. For example, Windows comes with Windows Media Player, Microsoft Movies & TV and Groove Music, while macOS comes with QuickTime Player and Music. Linux distributions come with different media players, such as SMPlayer, Amarok, Audacious, Banshee, MPlayer, mpv, Rhythmbox, Totem, VLC media player, and xine. Android comes with YouTube Music for audio and Google Photos for video, and smartphone vendors such as Samsung may bundle custom software.
Functionality focus
The basic feature set of media players includes a seek bar, a timer with the current and total playback time, playback controls (play, pause, previous, next, stop), playlists, a "repeat" mode, and a "shuffle" (or "random") mode, which adds variety and helps in exploring long collections of files.
Different media players have different goals and feature sets. Video players are a group of media players that have their features geared more towards playing digital video. For example, Windows DVD Player exclusively plays DVD-Video discs and nothing else. Media Player Classic can play individual audio and video files, but many of its features, such as color correction, picture sharpening, zooming, its set of hotkeys, DVB support and subtitle support, are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio. For example, AIMP exclusively plays audio formats. MediaMonkey can play both audio and video formats, but many of its features, including media library, lyric discovery, music visualization, online radio, audiobook indexing, and tag editing, are geared toward consumption of audio material, and watching video files with it can be trying. General-purpose media players also exist. For example, Windows Media Player has exclusive features for both audio and video material, although it cannot match the feature set of Media Player Classic and MediaMonkey combined.
By default, videos are played with the full field of view visible while filling at least either the width or the height of the viewport, so as to appear as large as possible. Options to change the video's scaling and aspect ratio may include filling the viewport through either stretching or cropping, and a "100% view" where each pixel of the video covers exactly one pixel on the screen.
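As an illustrative sketch of this default fit behaviour (the helper name and sample sizes below are hypothetical, not taken from any particular player), the displayed size can be computed by scaling the frame with the smaller of the two width/height ratios:

```python
# A minimal sketch: scale a video to fill the viewport's width or height
# while keeping the whole frame visible (letterboxing the remainder).
def fit_dimensions(video_w: int, video_h: int,
                   viewport_w: int, viewport_h: int) -> tuple[int, int]:
    scale = min(viewport_w / video_w, viewport_h / video_h)
    return round(video_w * scale), round(video_h * scale)

# A 1920x1080 video in a 1280x1024 viewport fills the full width and is
# letterboxed vertically:
print(fit_dimensions(1920, 1080, 1280, 1024))  # (1280, 720)
```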
Zooming into the field of view during playback may be implemented through a slider on any screen or with pinch zoom on touch screens, and moving the field of view may be implemented through scrolling by dragging inside the view port or by moving a rectangle inside a miniature view of the entire field of view that denotes the magnified area.
Media player software may have the ability to adjust appearance and acoustics during playback using effects such as mirroring, rotating, cropping, cloning, adjusting colours, deinterlacing, and equalizing and visualizing audio. Easter eggs may be featured, such as a puzzle game on VLC Media Player.
Still snapshots may be extracted directly from a video frame or captured through a screenshot, the former of which is preferred since it preserves videos' original dimensions (height and width). Video players may show a tooltip bubble previewing footage at the position hovered over with the mouse cursor.
A preview tooltip for the seek bar has been implemented on a few smartphones through a stylus or a self-capacitive touch screen able to detect a hovering finger. Such phones include the Samsung Galaxy S4 and S5 (finger), the Note 2 and Note 4 (stylus), and the Note 3 (both).
Streaming media players may indicate buffered segments of the media in the seek bar.
3D video players
3D video players are used to play 2D video in 3D format. A high-quality three-dimensional video presentation requires that each frame of a motion picture be embedded with information on the depth of objects present in the scene. This process involves shooting the video with special equipment from two distinct perspectives, or modeling and rendering each frame as a collection of objects composed of 3D vertices and textures, much like in any modern video game, to achieve special effects. Tedious and costly, this method is only used in a small fraction of movies produced worldwide, while most movies remain in the form of traditional 2D images. It is, however, possible to give an otherwise two-dimensional picture the appearance of depth. Using a technique known as anaglyph processing, a "flat" picture can be transformed so as to give an illusion of depth when viewed through anaglyph glasses (usually red-cyan). An image viewed through anaglyph glasses appears to have both protruding and deeply embedded objects in it, at the expense of somewhat distorted colors. The method itself is old, dating back to the mid-19th century, but it is only with recent advances in computer technology that it has become possible to apply this kind of transformation to a series of frames in a motion picture reasonably fast, or even in real time, i.e. as the video is being played back. Several implementations exist in the form of 3D video players that render conventional 2D video in anaglyph 3D, as well as in the form of 3D video converters that transform video into stereoscopic anaglyph and transcode it for playback with regular software or hardware video players.
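A minimal NumPy sketch of red-cyan anaglyph composition follows. The function names are my own, and the pixel-shift trick is a deliberately naive stand-in for 2D-to-3D conversion (real converters estimate per-pixel depth):

```python
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph from HxWx3 uint8 RGB stereo frames."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # red channel from the left-eye frame
    return out                  # green/blue stay from the right-eye frame

def pseudo_stereo(frame: np.ndarray, shift: int = 8) -> np.ndarray:
    # Fake a stereo pair from flat 2D footage by shifting the frame a few
    # pixels horizontally: one of the simplest possible 2D-to-3D tricks.
    left = np.roll(frame, -shift, axis=1)
    return anaglyph(left, frame)
```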
Examples
Well known examples of media player software include Windows Media Player, VLC media player, iTunes, Winamp, Media Player Classic, MediaMonkey, foobar2000, AIMP, MusicBee and JRiver Media Center. Most of these also include music library managers.
Although media players are often multi-media, they can be primarily designed for a specific media. For example, Media Player Classic and VLC media player are video-focused while Winamp and iTunes are music-focused, despite all of them supporting both types of media.
Home theater PC
A home theater PC or media center computer is a convergence device that combines some or all of the capabilities of a personal computer with a software application that supports video, photo, and audio playback, and sometimes video recording functionality. Although computers with some of these capabilities were available from the late 1980s, the "Home Theater PC" term first appeared in the mainstream press in 1996. Since 2007, other types of consumer electronics, including gaming systems and dedicated media devices, have crossed over to manage video and music content. The term "media center" also refers to specialized computer programs designed to run on standard personal computers.
| Technology | Computer software | null |
160277 | https://en.wikipedia.org/wiki/Carbon%20fibers | Carbon fibers | Carbon fibers or carbon fibres (alternatively CF, graphite fiber or graphite fibre) are fibers about 5–10 micrometers in diameter and composed mostly of carbon atoms. Carbon fibers have several advantages: high stiffness, high tensile strength, high strength to weight ratio, high chemical resistance, high-temperature tolerance, and low thermal expansion. These properties have made carbon fiber very popular in aerospace, civil engineering, military, motorsports, and other competition sports. However, they are relatively expensive compared to similar fibers, such as glass fiber, basalt fibers, or plastic fibers.
To produce a carbon fiber, the carbon atoms are bonded together in crystals that are more or less aligned parallel to the fiber's long axis as the crystal alignment gives the fiber a high strength-to-volume ratio (in other words, it is strong for its size). Several thousand carbon fibers are bundled together to form a tow, which may be used by itself or woven into a fabric.
Carbon fibers are usually combined with other materials to form a composite. For example, when permeated with a plastic resin and baked, it forms carbon-fiber-reinforced polymer (often referred to as carbon fiber), which has a very high strength-to-weight ratio and is extremely rigid although somewhat brittle. Carbon fibers are also composited with other materials, such as graphite, to form reinforced carbon-carbon composites, which have a very high heat tolerance.
Carbon fiber-reinforced materials are used to make aircraft and spacecraft parts, racing car bodies, golf club shafts, bicycle frames, fishing rods, automobile springs, sailboat masts, and many other components where light weight and high strength are needed.
History
In 1860, Joseph Swan produced carbon fibers for the first time, for use in light bulbs. In 1879, Thomas Edison baked cotton threads or bamboo slivers at high temperatures, carbonizing them into an all-carbon fiber filament used in one of the first incandescent light bulbs to be heated by electricity. In 1880, Lewis Latimer developed a reliable carbon wire filament for the incandescent light bulb, heated by electricity.
In 1958, Roger Bacon created high-performance carbon fibers at the Union Carbide Parma Technical Center located outside of Cleveland, Ohio. Those fibers were manufactured by heating strands of rayon until they carbonized. This process proved to be inefficient, as the resulting fibers contained only about 20% carbon. In the early 1960s, a process was developed by Dr. Akio Shindo at the Agency of Industrial Science and Technology of Japan, using polyacrylonitrile (PAN) as a raw material. This produced a carbon fiber that contained about 55% carbon. In 1960 Richard Millington of H.I. Thompson Fiberglas Co. developed a process (US Patent No. 3,294,489) for producing a high carbon content (99%) fiber using rayon as a precursor. These carbon fibers had sufficient strength (modulus of elasticity and tensile strength) to be used as a reinforcement for composites having high strength-to-weight properties and for high-temperature-resistant applications.
The high potential strength of carbon fiber was realized in 1963 in a process developed by W. Watt, L. N. Phillips, and W. Johnson at the Royal Aircraft Establishment at Farnborough, Hampshire. The process was patented by the UK Ministry of Defence, then licensed by the British National Research Development Corporation to three companies: Rolls-Royce, who were already making carbon fiber; Morganite; and Courtaulds. Within a few years, after successful use in 1968 of a Hyfil carbon-fiber fan assembly in the Rolls-Royce Conway jet engines of the Vickers VC10, Rolls-Royce took advantage of the new material's properties to break into the American market with its RB-211 aero-engine with carbon-fiber compressor blades. Unfortunately, the blades proved vulnerable to damage from bird impact. This problem and others caused Rolls-Royce such setbacks that the company was nationalized in 1971. The carbon-fiber production plant was sold off to form Bristol Composite Materials Engineering Ltd (often referred to as Bristol Composites).
In the late 1960s, the Japanese took the lead in manufacturing PAN-based carbon fibers. A 1970 joint technology agreement allowed Union Carbide to manufacture Japan's Toray Industries product. Morganite decided that carbon-fiber production was peripheral to its core business, leaving Courtaulds as the only big UK manufacturer. Courtaulds's water-based inorganic process made the product susceptible to impurities that did not affect the organic process used by other carbon-fiber manufacturers, leading to Courtaulds ceasing carbon-fiber production in 1991.
During the 1960s, experimental work to find alternative raw materials led to the introduction of carbon fibers made from a petroleum pitch derived from oil processing. These fibers contained about 85% carbon and had excellent flexural strength. Also during this period, the Japanese Government heavily supported carbon fiber development at home, and several Japanese companies such as Toray, Nippon Carbon, Toho Rayon and Mitsubishi started their own development and production. Since the late 1970s, further types of carbon fiber yarn have entered the global market, offering higher tensile strength and higher elastic modulus. For example, T400 from Toray has a tensile strength of 4,000 MPa, and M40 a modulus of 400 GPa. Intermediate carbon fibers, such as IM 600 from Toho Rayon with a tensile strength of up to 6,000 MPa, were developed. Carbon fibers from Toray, Celanese and Akzo found their way to aerospace applications, from secondary to primary parts, first in military and later in civil aircraft, as in McDonnell Douglas, Boeing, Airbus, and United Aircraft Corporation planes. In 1988, Dr. Jacob Lahijani invented a balanced ultra-high-Young's-modulus (greater than 100 Mpsi), high-tensile-strength (greater than 500 kpsi) pitch carbon fiber, used extensively in automotive and aerospace applications. In March 2006, the patent was assigned to the University of Tennessee Research Foundation.
Structure and properties
Carbon fiber is frequently supplied in the form of a continuous tow wound onto a reel. The tow is a bundle of thousands of continuous individual carbon filaments held together and protected by an organic coating, or size, such as polyethylene oxide (PEO) or polyvinyl alcohol (PVA). The tow can be conveniently unwound from the reel for use. Each carbon filament in the tow is a continuous cylinder with a diameter of 5–10 micrometers and consists almost exclusively of carbon. The earliest generation (e.g. T300, HTA and AS4) had diameters of 16–22 micrometers. Later fibers (e.g. IM6 or IM600) have diameters that are approximately 5 micrometers.
The atomic structure of carbon fiber is similar to that of graphite, consisting of sheets of carbon atoms arranged in a regular hexagonal pattern (graphene sheets), the difference being in the way these sheets interlock. Graphite is a crystalline material in which the sheets are stacked parallel to one another in regular fashion. The intermolecular forces between the sheets are relatively weak van der Waals forces, giving graphite its soft and brittle characteristics.
Depending upon the precursor to make the fiber, carbon fiber may be turbostratic or graphitic, or have a hybrid structure with both graphitic and turbostratic parts present. In turbostratic carbon fiber the sheets of carbon atoms are haphazardly folded, or crumpled, together. Carbon fibers derived from polyacrylonitrile (PAN) are turbostratic, whereas carbon fibers derived from mesophase pitch are graphitic after heat treatment at temperatures exceeding 2200 °C. Turbostratic carbon fibers tend to have high ultimate tensile strength, whereas heat-treated mesophase-pitch-derived carbon fibers have high Young's modulus (i.e., high stiffness or resistance to extension under load) and high thermal conductivity.
Applications
Carbon fiber can cost more than other materials, which has been one of the limiting factors of adoption. In a comparison between steel and carbon fiber materials for automotive use, carbon fiber may be 10–12 times more expensive. However, this cost premium has come down over the past decade from estimates of 35 times more expensive than steel in the early 2000s.
Composite materials
Carbon fiber is most notably used to reinforce composite materials, particularly the class of materials known as carbon fiber or graphite reinforced polymers. Non-polymer materials can also be used as the matrix for carbon fibers. Due to the formation of metal carbides and corrosion considerations, carbon has seen limited success in metal matrix composite applications. Reinforced carbon-carbon (RCC) consists of carbon fiber-reinforced graphite, and is used structurally in high-temperature applications. The fiber also finds use in filtration of high-temperature gases, as an electrode with high surface area and excellent corrosion resistance, and as an anti-static component. Molding a thin layer of carbon fibers significantly improves the fire resistance of polymers or thermoset composites because a dense, compact layer of carbon fibers efficiently reflects heat.
The increasing use of carbon fiber composites is displacing aluminum from aerospace applications in favor of other metals because of galvanic corrosion issues. Note, however, that carbon fiber does not eliminate the risk of galvanic corrosion. In contact with metal, it forms "a perfect galvanic corrosion cell ..., and the metal will be subjected to galvanic corrosion attack" unless a sealant is applied between the metal and the carbon fiber.
Carbon fiber can be used as an additive to asphalt to make electrically conductive asphalt concrete. Using this composite material in the transportation infrastructure, especially for airport pavement, decreases some winter maintenance problems that lead to flight cancellation or delay due to the presence of ice and snow. Passing current through the composite material 3D network of carbon fibers dissipates thermal energy that increases the surface temperature of the asphalt, which is able to melt ice and snow above it.
Textiles
Precursors for carbon fibers are polyacrylonitrile (PAN), rayon and pitch. Carbon fiber filament yarns are used in several processing techniques: the direct uses are for prepregging, filament winding, pultrusion, weaving, braiding, etc. Carbon fiber yarn is rated by the linear density (weight per unit length; i.e., 1 g/1000 m = 1 tex) or by the number of filaments per yarn count, in thousands. For example, a 200 tex yarn of 3,000 carbon filaments is three times as strong as a 1,000-filament yarn, but is also three times as heavy. This thread can then be used to weave a carbon fiber filament fabric or cloth. The appearance of this fabric generally depends on the linear density of the yarn and the weave chosen. Some commonly used types of weave are twill, satin and plain. Carbon filament yarns can also be knitted or braided.
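As a back-of-the-envelope check of the tex arithmetic, the short Python sketch below computes linear density from a hypothetical measurement (the 2 g per 10 m figure is invented so as to reproduce the 200 tex value quoted above):

    def tex(mass_grams: float, length_metres: float) -> float:
        """Linear density in tex: grams per 1,000 metres of yarn."""
        return mass_grams / length_metres * 1000.0

    # A 3,000-filament (3K) yarn weighing 2 g per 10 m:
    print(tex(2.0, 10.0))  # 200.0 tex
    # Tripling the filament count roughly triples both the linear
    # density (to about 600 tex) and the breaking load of the yarn.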
Microelectrodes
Carbon fibers are used for fabrication of carbon-fiber microelectrodes. In this application typically a single carbon fiber with diameter of 5–7 μm is sealed in a glass capillary. At the tip the capillary is either sealed with epoxy and polished to make a carbon-fiber disk microelectrode, or the fiber is cut to a length of 75–150 μm to make a carbon-fiber cylinder electrode. Carbon-fiber microelectrodes are used either in amperometry or fast-scan cyclic voltammetry for detection of biochemical signaling.
Flexible heating
Despite being known for their electrical conductivity, carbon fibers can carry only very low currents on their own. When woven into larger fabrics, they can be used to reliably provide (infrared) heating in applications requiring flexible electrical heating elements and can easily sustain temperatures past 100 °C. Many examples of this type of application can be seen in DIY heated articles of clothing and blankets. Due to its chemical inertness, it can be used relatively safely amongst most fabrics and materials; however, shorts caused by the material folding back on itself will lead to increased heat production and can lead to a fire.
Synthesis
Each carbon filament is produced from a polymer such as polyacrylonitrile (PAN), rayon, or petroleum pitch. All these polymers are known as a precursor. For synthetic polymers such as PAN or rayon, the precursor is first spun into filament yarns, using chemical and mechanical processes to initially align the polymer molecules in a way to enhance the final physical properties of the completed carbon fiber. Precursor compositions and mechanical processes used during spinning filament yarns may vary among manufacturers. After drawing or spinning, the polymer filament yarns are then heated to drive off non-carbon atoms (carbonization), producing the final carbon fiber. The carbon fibers filament yarns may be further treated to improve handling qualities, then wound on to bobbins.
A common method of manufacture involves heating the spun PAN filaments to approximately 300 °C in air, which breaks many of the hydrogen bonds and oxidizes the material. During this process, fibers tend to shrink. The resulting chemical composition and mechanical properties of the fiber are dependent on the time and temperature of the process, as well as on the tension applied to the fiber during oxidation. The oxidized PAN is then placed into a furnace having an inert atmosphere of a gas such as argon, and heated to approximately 2000 °C, which induces graphitization of the material, changing the molecular bond structure. When heated in the correct conditions, these chains bond side-to-side (ladder polymers), forming narrow graphene sheets which eventually merge to form a single, columnar filament. The result is usually 93–95% carbon. Lower-quality fiber can be manufactured using pitch or rayon as the precursor instead of PAN. The carbon can become further enhanced, as high modulus, or high strength carbon, by heat treatment processes. Carbon heated in the range of 1500–2000 °C (carbonization) exhibits the highest tensile strength (5,650 MPa, or 820,000 psi), while carbon fiber heated from 2500 to 3000 °C (graphitizing) exhibits a higher modulus of elasticity (531 GPa, or 77,000,000 psi).
| Technology | Fabrics and fibers | null |
160332 | https://en.wikipedia.org/wiki/Ephemeris | Ephemeris | In astronomy and celestial navigation, an ephemeris (plural: ephemerides) is a book with tables that gives the trajectory of naturally occurring astronomical objects and artificial satellites in the sky, i.e., the position (and possibly velocity) over time. Historically, positions were given as printed tables of values, given at regular intervals of date and time. The calculation of these tables was one of the first applications of mechanical computers. Modern ephemerides are often provided in electronic form. However, printed ephemerides are still produced, as they are useful when computational devices are not available.
The astronomical position calculated from an ephemeris is often given in the spherical polar coordinate system of right ascension and declination, together with the distance from the origin if applicable. Some of the astronomical phenomena of interest to astronomers are eclipses, apparent retrograde motion/planetary stations, planetary ingresses, sidereal time, positions for the mean and true nodes of the moon, the phases of the Moon, and the positions of minor celestial bodies such as Chiron.
Ephemerides are used in celestial navigation and astronomy. They are also used by astrologers. GPS signals include ephemeris data used to calculate the position of satellites in orbit.
History
1st millennium BC – Ephemerides in Babylonian astronomy.
2nd century AD – the Almagest and the Handy Tables of Ptolemy
8th century AD – the Zīj of Ibrāhīm al-Fazārī
9th century AD – the Zīj al-Sindhind of Muḥammad ibn Mūsā al-Khwārizmī
11th century AD – the Zīj of Ibn Yunus
12th century AD – the Tables of Toledo – based largely on Arabic sources of Islamic astronomy – were edited by Gerard of Cremona to form the standard European ephemeris until the Alfonsine Tables.
13th century AD – the Zīj-i Īlkhānī (Ilkhanic Tables) were compiled at the Maragheh observatory in Persia.
13th century AD – the Alfonsine Tables were compiled in Spain to correct anomalies in the Tables of Toledo, remaining the standard European ephemeris until the Prutenic Tables almost 300 years later.
13th century AD – the Dresden Codex, an extant Mayan ephemeris
1408 – Chinese ephemeris table (copy in Pepysian Library, Cambridge, UK (refer book '1434'); Chinese tables believed known to Regiomontanus).
1474 – Regiomontanus publishes his day-to-day Ephemerides in Nürnberg, Germany.
1496 – the Almanach Perpetuum of Abraão ben Samuel Zacuto (one of the first books published with movable type and a printing press in Portugal)
1504 – While shipwrecked on the island of Jamaica, Christopher Columbus successfully predicted a lunar eclipse for the natives, using the ephemeris of the German astronomer Regiomontanus.
1531 – Work of Johannes Stöffler is published posthumously at Tübingen, extending the ephemeris of Regiomontanus through 1551.
1551 – the Prutenic Tables of Erasmus Reinhold were published, based on Copernicus's theories.
1554 – Johannes Stadius published Ephemerides novae et auctae, the first major ephemeris computed according to Copernicus' heliocentric model, using parameters derived from the Prutenic Tables. Although the Copernican model provided an elegant solution to the problem of computing apparent planetary positions (it avoided the need for the equant and better explained the apparent retrograde motion of planets), it still relied on the use of epicycles, leading to some inaccuracies – for example, periodic errors in the position of Mercury of up to ten degrees. One of the users of Stadius's tables was Tycho Brahe.
1627 – the Rudolphine Tables of Johannes Kepler based on elliptical planetary motion became the new standard.
1679 – La Connaissance des Temps ou calendrier et éphémérides du lever & coucher du Soleil, de la Lune & des autres planètes, first published yearly by Jean Picard and still extant.
1975 – Owen Gingerich, using modern planetary theory and digital computers, calculates the actual positions of the planets in the 16th century and graphs the errors in the planetary positions predicted by the ephemerides of Stöffler, Stadius and others. According to Gingerich, the error patterns "are as distinctive as fingerprints and reflect the characteristics of the underlying tables. That is, the error patterns for Stöffler are different from those of Stadius, but the error patterns of Stadius closely resemble those of Maestlin, Magini, Origanus, and others who followed the Copernican parameters."
Modern ephemeris
For scientific uses, a modern planetary ephemeris comprises software that generates positions of planets and often of their satellites, asteroids, or comets, at virtually any time desired by the user.
After the introduction of electronic computers in the 1950s, it became feasible to use numerical integration to compute ephemerides. The Jet Propulsion Laboratory Development Ephemeris is a prime example. Conventional so-called analytical ephemerides that utilize series expansions for the coordinates have also been developed, of much greater size and accuracy than in the past, by making use of computers to manage the tens of thousands of terms. Ephemeride Lunaire Parisienne and VSOP are examples.
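To make the numerical-integration idea concrete, the following toy Python sketch tabulates daily positions for a single body in a pure two-body, Newtonian field. Real ephemerides such as the JPL Development Ephemeris integrate many mutually perturbing bodies with relativistic corrections and are fitted to observations, so everything here, including the roughly circular starting orbit, is illustrative only.

    import numpy as np

    GM_SUN = 1.32712440018e20  # gravitational parameter of the Sun, m^3/s^2

    def integrate_orbit(r0, v0, dt, n_steps):
        """Tabulate heliocentric positions by integrating two-body
        gravity with the leapfrog (velocity Verlet) scheme."""
        r = np.asarray(r0, dtype=float)
        v = np.asarray(v0, dtype=float)
        positions = [r.copy()]
        a = -GM_SUN * r / np.linalg.norm(r) ** 3
        for _ in range(n_steps):
            v_half = v + 0.5 * dt * a          # half kick
            r = r + dt * v_half                # drift
            a = -GM_SUN * r / np.linalg.norm(r) ** 3
            v = v_half + 0.5 * dt * a          # half kick
            positions.append(r.copy())
        return np.array(positions)

    au = 1.495978707e11                        # astronomical unit, m
    # A roughly circular 1 au orbit, tabulated daily for one year:
    table = integrate_orbit([au, 0, 0], [0, 29_780, 0], dt=86_400, n_steps=365)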
Typically, such ephemerides cover several centuries, past and future; the future ones can be covered because the field of celestial mechanics has developed several accurate theories. Nevertheless, there are secular phenomena which cannot adequately be considered by ephemerides. The greatest uncertainties in the positions of planets are caused by the perturbations of numerous asteroids, most of whose masses and orbits are poorly known, rendering their effect uncertain. Reflecting the continuing influx of new data and observations, NASA's Jet Propulsion Laboratory (JPL) has revised its published ephemerides nearly every year since 1981.
Solar System ephemerides are essential for the navigation of spacecraft and for all kinds of space observations of the planets, their natural satellites, stars, and galaxies.
Scientific ephemerides for sky observers mostly contain the positions of celestial bodies in right ascension and declination, because these coordinates are the most frequently used on star maps and telescopes. The equinox of the coordinate system must be given. It is, in nearly all cases, either the actual equinox (the equinox valid for that moment, often referred to as "of date" or "current"), or that of one of the "standard" equinoxes, typically J2000.0, B1950.0, or J1900. Star maps almost always use one of the standard equinoxes.
Scientific ephemerides often contain further useful data about the moon, planet, asteroid, or comet beyond the pure coordinates in the sky, such as elongation to the Sun, brightness, distance, velocity, apparent diameter in the sky, phase angle, times of rise, transit, and set, etc.
Ephemerides of the planet Saturn also sometimes contain the apparent inclination of its ring.
Celestial navigation serves as a backup to satellite navigation. Software is widely available to assist with this form of navigation; some of this software has a self-contained ephemeris. When software is used that does not contain an ephemeris, or if no software is used, position data for celestial objects may be obtained from the modern Nautical Almanac or Air Almanac.
An ephemeris is usually only correct for a particular location on the Earth. In many cases, the differences are too small to matter. However, for nearby asteroids or the Moon, they can be quite important.
Other modern ephemerides recently created are the EPM (Ephemerides of Planets and the Moon), from the Russian Institute for Applied Astronomy of the Russian Academy of Sciences, and the INPOP by the French IMCCE.
| Technology | Astronomical technology | null |
160361 | https://en.wikipedia.org/wiki/Sampling%20%28statistics%29 | Sampling (statistics) | In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. The subset is meant to reflect the whole population and statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population, and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Each observation measures one or more properties (such as weight, location, colour or mass) of independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population. Acceptance sampling is used to determine if a production lot of material meets the governing specifications.
History
Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786, Pierre Simon Laplace estimated the population of France by using a sample, along with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability and assumed that his sample was random. Alexander Ivanovich Chuprov introduced sample surveys to Imperial Russia in the 1870s.
In the US, the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry, due to severe bias. More than two million people responded to the study, with their names obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans, and the resulting sample, though very large, was deeply flawed.
Elections in Singapore have adopted this practice since the 2015 election, in the form of what are known as sample counts. According to the Elections Department (ELD), the country's election commission, sample counts help reduce speculation and misinformation, while helping election officials to check against the official result for that electoral division. The reported sample counts yield a fairly accurate indicative result with a 95% confidence interval at a margin of error within 4–5%; the ELD reminded the public that sample counts are separate from official results, and that only the returning officer will declare the official results once vote counting is complete.
Population definition
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn. A population can be defined as including all people or items with the characteristics one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population.
Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer or should be scrapped or reworked due to poor quality. In this case, the batch is the population.
Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions.
In other cases, the examined 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of properties of materials such as the electrical conductivity of copper.
This situation often arises when seeking knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group that does not yet exist since the program is not yet available to all.
The population from which the sample is drawn may not be the same as the population from which information is desired. Often there is a large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009.
Time spent in making the sampled population and population of concern precise is often well spent because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
Sampling frame
In the most straightforward case, such as the sampling of a batch of material from production (acceptance sampling by lots), it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not usually possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will vote at a forthcoming election (in advance of the election). These imprecise populations are not amenable to sampling in any of the ways below and to which we could apply statistical theory.
As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory.
A probability sample is a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income.
People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected.)
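A minimal Python sketch of this weighting scheme follows; the street layout and incomes are made up, and the household-size weight is simply the reciprocal of each adult's selection probability, as described above.

    import random

    def estimate_street_income(households):
        """Estimate total adult income from one randomly selected
        adult per household, weighting each observed income by
        household size (the inverse of the selection probability)."""
        total = 0.0
        for incomes in households:
            selected = random.choice(incomes)   # one adult per household
            total += selected * len(incomes)    # inverse-probability weight
        return total

    # Hypothetical street: one single-adult and two multi-adult households.
    street = [[30_000], [25_000, 40_000], [20_000, 35_000, 50_000]]
    print(estimate_street_income(street))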
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
Probability sampling includes: simple random sampling, systematic sampling, stratified sampling, probability-proportional-to-size sampling, and cluster or multistage sampling. These various ways of probability sampling have two things in common:
Every element has a known nonzero probability of being sampled and
involves random selection at some point.
Nonprobability sampling
Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Nonprobability sampling methods include convenience sampling, quota sampling, and purposive sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
Sampling methods
Within any of the types of frames identified above, a variety of sampling methods can be employed individually or in combination. Factors commonly influencing the choice between these designs include:
Nature and quality of the frame
Availability of auxiliary information about units on the frame
Accuracy requirements, and the need to measure accuracy
Whether detailed analysis of the sample is expected
Cost/operational concerns
Simple random sampling
In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimizes bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results.
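In Python, a simple random sample can be drawn directly with the standard library; the sketch below assumes a frame of 1,000 numbered units.

    import random

    frame = list(range(1, 1001))        # sampling frame of 1,000 units
    sample = random.sample(frame, 10)   # SRS without replacement:
    # every subset of 10 units is equally likely to be drawn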
Simple random sampling can be vulnerable to sampling error because the randomness of the selection may result in a sample that does not reflect the makeup of the population. For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to over represent one sex and underrepresent the other. Systematic and stratified techniques attempt to overcome this problem by "using information about the population" to choose a more "representative" sample.
Also, simple random sampling can be cumbersome and tedious when sampling from a large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. Simple random sampling cannot accommodate the needs of researchers in this situation, because it does not provide subsamples of the population, and other sampling strategies, such as stratified sampling, can be used instead.
Systematic sampling
Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k=(population size/sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
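A minimal Python sketch of the procedure (assuming, for simplicity, that the population size is an exact multiple of the sample size, so that the interval k is a whole number):

    import random

    def systematic_sample(frame, sample_size):
        """Select every k-th element after a random start within
        the first interval, per the description above."""
        k = len(frame) // sample_size       # sampling interval
        start = random.randrange(k)         # random start, not always element 1
        return frame[start::k][:sample_size]

    names = [f"name_{i}" for i in range(1, 1001)]
    every_tenth = systematic_sample(names, 100)   # an 'every 10th' sample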
As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases.
For example, suppose we wish to sample people from a long street that starts in a poor area (house No. 1) and ends in an expensive district (house No. 1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts. (If we always start at house #1 and end at #991, the sample is slightly biased towards the low end; by randomly selecting the start between #1 and #10, this bias is eliminated.)
However, systematic sampling is especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple or factor of the interval used, the sample is especially likely to be unrepresentative of the overall population, making the scheme less accurate than simple random sampling.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Under the sampling scheme given above, it is impossible to get a representative sample; either the houses sampled will all be from the odd-numbered, expensive side, or they will all be from the even-numbered, cheap side, unless the researcher has previous knowledge of this bias and avoids it by using a skip which ensures jumping between the two sides (any odd-numbered skip).
Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult to quantify that accuracy. (In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses – but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)
As described above, systematic sampling is an EPS method, because all elements have the same probability of selection (in the example given, one in ten). It is not 'simple random sampling' because different subsets of the same size have different selection probabilities – e.g. the set {4,14,24,...,994} has a one-in-ten probability of selection, but the set {4,13,24,34,...} has zero probability of selection.
Systematic sampling can also be adapted to a non-EPS approach; for an example, see discussion of PPS samples below.
Stratified sampling
When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected. The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction. There are several potential benefits to stratified sampling.
First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample.
Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population.
Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata).
Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
There are, however, some potential drawbacks to using stratified sampling. First, identifying strata and implementing such an approach can increase the cost and complexity of sample selection, as well as leading to increased complexity of population estimates. Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design, and potentially reducing the utility of the strata. Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods (although in most cases, the required sample size would be no larger than would be required for simple random sampling).
A stratified sampling approach is most effective when three conditions are met (a minimal sketch of proportional allocation follows this list):
Variability within strata is minimized
Variability between strata is maximized
The variables upon which the population is stratified are strongly correlated with the desired dependent variable.
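As a minimal illustration of one common flavour, proportional allocation, the Python sketch below draws an SRS within each stratum in proportion to the stratum's share of the population; note that the rounding step can make the realized total differ slightly from the target.

    import random

    def stratified_sample(strata, total_n):
        """Proportional-allocation stratified sample.
        strata: dict mapping stratum name -> list of units."""
        population = sum(len(units) for units in strata.values())
        sample = []
        for name, units in strata.items():
            n_h = round(total_n * len(units) / population)  # allocation
            sample.extend(random.sample(units, n_h))        # SRS in stratum
        return sample

    strata = {"urban": [f"u{i}" for i in range(800)],
              "rural": [f"r{i}" for i in range(200)]}
    s = stratified_sample(strata, 100)   # about 80 urban and 20 rural units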
Advantages over other sampling methods
Focuses on important subpopulations and ignores irrelevant ones.
Allows use of different sampling techniques for different subpopulations.
Improves the accuracy/efficiency of estimation.
Permits greater balancing of statistical power of tests of differences between strata by sampling equal numbers from strata varying widely in size.
Disadvantages
Requires selection of relevant stratification variables which can be difficult.
Is not useful when there are no homogeneous subgroups.
Can be expensive to implement.
Poststratification
Stratification is sometimes introduced after the sampling phase in a process called "poststratification". This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.
Oversampling
Choice-based sampling or oversampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that rarer target classes will be more represented in the sample. The model is then built on this biased sample. The effects of the input variables on the target are often estimated with more precision with the choice-based sample even when a smaller overall sample size is taken, compared to a random sample. The results usually must be adjusted to correct for the oversampling.
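One standard adjustment, sketched below in Python, rescales a fitted probability by the ratio of population odds to sample odds; this prior-correction step assumes that only the class proportions, not the within-class distributions, differ between the choice-based sample and the population.

    def correct_oversampled_probability(p_sample, rate_sample, rate_pop):
        """Map a probability estimated on an oversampled training set
        back to the population class rates via an odds rescaling."""
        odds = p_sample / (1.0 - p_sample)
        pop_odds = rate_pop / (1.0 - rate_pop)
        sample_odds = rate_sample / (1.0 - rate_sample)
        adjusted = odds * pop_odds / sample_odds
        return adjusted / (1.0 + adjusted)

    # A 0.5 prediction from a 50/50 training sample, when the rare
    # class is only 2% of the population, corrects back to about 0.02:
    print(correct_oversampled_probability(0.5, 0.5, 0.02))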
Probability-proportional-to-size sampling
In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above.
Another option is probability proportional to size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis for Poisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections.
Systematic sampling theory can be used to create a probability proportionate to size sample. This is done by treating each count within the size variable as a single sampling unit. Samples are then identified by selecting at even intervals among these counts within the size variable. This method is sometimes called PPS-sequential or monetary unit sampling in the case of audits or forensic sampling.
Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools.
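The school example can be reproduced with a short Python sketch (which assumes no single unit's size exceeds the skip k, so that no unit can be selected twice):

    import random

    def pps_systematic(sizes, n):
        """Systematic PPS sample: walk the cumulative size counts and
        select the unit containing each target count."""
        k = sum(sizes) // n                    # skip (500 in the example)
        start = random.randint(1, k)           # random start between 1 and k
        targets = [start + i * k for i in range(n)]
        chosen, cumulative, t = [], 0, iter(targets)
        next_target = next(t)
        for index, size in enumerate(sizes):
            cumulative += size
            while next_target is not None and next_target <= cumulative:
                chosen.append(index)
                next_target = next(t, None)
        return chosen

    schools = [150, 180, 200, 220, 260, 490]
    # With a start of 137 this yields [0, 3, 5]: the first, fourth,
    # and sixth schools, matching the worked example above.
    print(pps_systematic(schools, 3))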
The PPS approach can improve accuracy for a given sample size by concentrating sample on large elements that have the greatest impact on population estimates. PPS sampling is commonly used for surveys of businesses, where element size varies greatly and auxiliary information is often available – for instance, a survey attempting to measure the number of guest-nights spent in hotels might use each hotel's number of rooms as an auxiliary variable. In some cases, an older measurement of the variable of interest can be used as an auxiliary variable when attempting to produce more current estimates.
Cluster sampling
Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time – although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks.
Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
It also means that one does not need a sampling frame listing all elements in the target population. Instead, clusters can be chosen from a cluster-level frame, with an element-level frame created only for the selected clusters. In the example above, the sample only requires a block-level city map for initial selections, and then a household-level map of the 100 selected blocks, rather than a household-level map of the whole city.
Cluster sampling (also known as clustered sampling) generally increases the variability of sample estimates above that of simple random sampling, depending on how the clusters differ between one another as compared to the within-cluster variation. For this reason, cluster sampling requires a larger sample than SRS to achieve the same level of accuracy – but cost savings from clustering might still make this a cheaper option.
Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from. In the second stage, a sample of primary units is randomly selected from each cluster (rather than using all units contained in all selected clusters). In following stages, in each of those selected clusters, additional samples of units are selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. This technique, thus, is essentially the process of taking random subsamples of preceding random samples.
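A two-stage version can be sketched in Python as follows, with hypothetical city blocks and households; a real design would also record the selection probabilities at each stage so that units can be weighted in the analysis.

    import random

    def two_stage_sample(clusters, n_clusters, n_per_cluster):
        """Stage 1: SRS of clusters; stage 2: SRS of units within
        each selected cluster.
        clusters: dict mapping cluster id -> list of units."""
        selected = random.sample(list(clusters), n_clusters)
        return {cid: random.sample(clusters[cid], n_per_cluster)
                for cid in selected}

    # 400 hypothetical blocks of 50 households each:
    city = {f"block_{b}": [f"hh_{b}_{h}" for h in range(50)]
            for b in range(400)}
    sample = two_stage_sample(city, n_clusters=100, n_per_cluster=5)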
Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling. However, each sample may not be a full representative of the whole population.
Quota sampling
In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60.
It is this second step which makes the technique one of non-probability sampling. In quota sampling, the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is the method's greatest weakness, and quota versus probability sampling has been a matter of controversy for several years.
Minimax sampling
In imbalanced datasets, where the sampling ratio does not follow the population statistics, one can resample the dataset in a conservative manner called minimax sampling. Minimax sampling has its origin in Anderson's minimax ratio, whose value is proved to be 0.5: in a binary classification, the class-sample sizes should be chosen equally. This ratio can be proved to be the minimax ratio only under the assumption of an LDA classifier with Gaussian distributions. The notion of minimax sampling has recently been developed for a general class of classification rules, called class-wise smart classifiers. In this case, the sampling ratio of classes is selected so that the worst-case classifier error over all possible population statistics for class prior probabilities is minimized.
Accidental sampling
Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient. Respondents may be included simply because one happens to meet them, or may be found through technological means such as the internet or by phone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if the interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people they could interview would be limited to those present there at that time, who would not represent the views of other members of society in such an area had the survey been conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing. Several important considerations for researchers using convenience samples include:
Are there controls within the research design or experiment which can serve to lessen the impact of a non-random convenience sample, thereby ensuring the results will be more representative of the population?
Is there good reason to believe that a particular convenience sample would or should respond or behave differently than a random sample from the same population?
Is the question being asked by the research one that can adequately be answered using a convenience sample?
In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample. Some variants of snowball sampling, such as respondent driven sampling, allow calculation of selection probabilities and are probability sampling methods under certain conditions.
Voluntary sampling
The voluntary sampling method is a type of non-probability sampling. Volunteers choose to complete a survey.
Volunteers may be invited through advertisements in social media. The target population for advertisements can be selected by characteristics like location, age, sex, income, occupation, education, or interests using tools provided by the social medium. The advertisement may include a message about the research and link to a survey. After following the link and completing the survey, the volunteer submits the data to be included in the sample population. This method can reach a global population but is limited by the campaign budget. Volunteers outside the invited population may also be included in the sample.
It is difficult to make generalizations from this sample because it may not represent the total population. Often, volunteers have a strong interest in the main topic of the survey.
Line-intercept sampling
Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element.
Panel sampling
Panel sampling is the method of first selecting a group of participants through a random sampling method and then asking that group for (potentially the same) information several times over a period of time. Therefore, each participant is interviewed at two or more time points; each period of data collection is called a "wave". The method was developed by sociologist Paul Lazarsfeld in 1938 as a means of studying political campaigns. This longitudinal sampling method allows estimates of changes in the population, for example with regard to chronic illness, job stress, or weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction. There have been several proposed methods of analyzing panel data, including MANOVA, growth curves, and structural equation modeling with lagged effects.
Snowball sampling
Snowball sampling involves finding a small group of initial respondents and using them to recruit more respondents. It is particularly useful in cases where the population is hidden or difficult to enumerate.
Theoretical sampling
Theoretical sampling occurs when samples are selected on the basis of the results of the data collected so far, with the goal of developing a deeper understanding of the area or developing theories. An initial, general sample is first collected to investigate general trends; further sampling may then target extreme or very specific cases, selected in order to maximize the likelihood that a phenomenon will actually be observable.
Active sampling
In active sampling, the samples which are used for training a machine learning algorithm are actively selected, also compare active learning (machine learning).
Judgmental selection
Judgement sampling is a type of non-random sampling in which samples are selected based on the opinion of an expert, who can select participants according to how valuable the information they provide is.
Haphazard sampling
Haphazard sampling refers to the idea of using human judgement to simulate randomness. Despite samples being hand-picked, the goal is to ensure that no conscious bias exists within the choice of samples, but this often fails due to selection bias. Haphazard sampling is generally opted for because of its convenience, when the tools or capacity to perform other sampling methods may not exist.
Replacement of selected units
Sampling schemes may be without replacement ('WOR' – no element can be selected more than once in the same sample) or with replacement ('WR' – an element may appear multiple times in the one sample). For example, if we catch fish, measure them, and immediately return them to the water before continuing with the sample, this is a WR design, because we might end up catching and measuring the same fish more than once. However, if we do not return the fish to the water or tag and release each fish after catching it, this becomes a WOR design.
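The distinction maps directly onto the Python standard library, as this small sketch shows:

    import random

    pond = [f"fish_{i}" for i in range(1, 51)]

    wor = random.sample(pond, 10)    # WOR: no fish can appear twice
    wr = random.choices(pond, k=10)  # WR: the same fish may be caught
                                     # and measured more than once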
Sample size determination
Formulas, tables, and power function charts are well-known approaches to determining sample size; a worked sketch in code follows the steps below.
Steps for using sample size tables:
Postulate the effect size of interest, α, and β.
Check sample size table
Select the table corresponding to the selected α
Locate the row corresponding to the desired power
Locate the column corresponding to the estimated effect size.
The intersection of the column and row is the minimum sample size required.
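As an alternative to a table lookup, the same quantities can be computed directly from the normal approximation. The following is a minimal sketch, not a substitute for a proper power analysis; the two-sample formula and the default values (α = 0.05, power = 0.80) are standard, but the function name is ours.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(effect_size: float, alpha: float = 0.05,
                           power: float = 0.80) -> int:
    """Approximate per-group n for a two-sided, two-sample comparison of
    means, using the normal approximation (replaces a table lookup).
    effect_size is the standardized difference (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = z(power)            # e.g. 0.84 for power = 0.80
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

# Medium effect (d = 0.5), alpha = 0.05, power = 0.80 -> about 63 per group.
print(sample_size_two_groups(0.5))
```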
Sampling and data collection
Good data collection involves:
Following the defined sampling process
Keeping the data in time order
Noting comments and other contextual events
Recording non-responses
Applications of sampling
Sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. For example, there are about 600 million tweets produced every day. It is not necessary to look at all of them to determine the topics that are discussed during the day, nor is it necessary to look at all the tweets to determine the sentiment on each of the topics. A theoretical formulation for sampling Twitter data has been developed.
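A back-of-the-envelope calculation shows why a modest sample suffices: for a simple random sample, the margin of error of an estimated proportion depends on the sample size, not on the population size. A minimal sketch, assuming simple random sampling from a very large population:

```python
from math import sqrt

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion estimated from a simple
    random sample of size n; independent of population size when the
    population is large."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Even with ~600 million tweets per day, a random sample of 10,000 pins a
# topic's share down to about +/- 1 percentage point.
print(round(margin_of_error(0.5, 10_000), 4))  # ~0.0098
```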
In manufacturing, different types of sensory data, such as acoustics, vibration, pressure, current, voltage, and controller data, are available at short time intervals. To predict down-time, it may not be necessary to look at all the data; a sample may be sufficient.
Errors in sample surveys
Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term "error" here includes systematic biases as well as random errors.
Sampling errors and biases
Sampling errors and biases are induced by the sample design. They include:
Selection bias: When the true selection probabilities differ from those assumed in calculating the results.
Random sampling error: Random variation in the results due to the elements in the sample being selected at random.
Non-sampling error
Non-sampling errors are other errors which can impact final survey estimates, caused by problems in data collection, processing, or sample design. Such errors may include:
Over-coverage: inclusion of data from outside of the population
Under-coverage: sampling frame does not include elements in the population.
Measurement error: e.g. when respondents misunderstand a question, or find it difficult to answer
Processing error: mistakes in data coding
Non-response or Participation bias: failure to obtain complete data from all selected individuals
After sampling, a review is held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis.
A particular problem involves non-response. Two major types of non-response exist:
unit nonresponse (lack of completion of any part of the survey)
item non-response (submission or participation in survey but failing to complete one or more components/questions of the survey)
In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate, not have the time to participate (opportunity cost), or survey administrators may not have been able to contact them. In this case, there is a risk of differences between respondents and nonrespondents, leading to biased estimates of population parameters. This is often addressed by improving survey design, offering incentives, and conducting follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame. The effects can also be mitigated by weighting the data (when population benchmarks are available) or by imputing data based on answers to other questions. Nonresponse is particularly a problem in internet sampling. Reasons for this problem may include improperly designed surveys, over-surveying (or survey fatigue), and the fact that potential participants may have multiple e-mail addresses, which they do not use anymore or do not check regularly.
Survey weights
In many situations, the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. For example, a simple random sample of individuals in the United Kingdom might not include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. Rural respondents could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
Weights can also serve other purposes, such as helping to correct for non-response.
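The household example above translates directly into inverse-probability weighting: each respondent is weighted by the reciprocal of their selection probability. A minimal sketch with made-up respondents, where one adult is interviewed per sampled household, so a person's selection probability is inversely proportional to the number of adults in the household:

```python
# Minimal design-weight sketch: each respondent's weight is the inverse of
# their selection probability. With one adult interviewed per sampled
# household, that probability is proportional to 1 / adults_in_household,
# so the weight is proportional to adults_in_household.

respondents = [
    {"id": 1, "adults_in_household": 1, "answer": 4.0},
    {"id": 2, "adults_in_household": 3, "answer": 2.0},
    {"id": 3, "adults_in_household": 2, "answer": 3.0},
]

for r in respondents:
    r["weight"] = r["adults_in_household"]  # inverse-probability weight

weighted_mean = (sum(r["weight"] * r["answer"] for r in respondents)
                 / sum(r["weight"] for r in respondents))
print(round(weighted_mean, 3))  # 2.667, versus an unweighted mean of 3.0
```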
Methods of producing random samples
Random number table
Mathematical algorithms for pseudo-random number generators
Physical randomization devices such as coins, playing cards or sophisticated devices such as ERNIE
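In software, the second of these methods is the usual choice. A seeded pseudo-random number generator behaves like a reusable random number table: the same seed reproduces the same draws, which makes a sample auditable. A minimal sketch with an illustrative frame and seed:

```python
import random

# A seeded pseudo-random number generator is the software analogue of a
# random number table: the same seed reproduces the same sequence of draws.
rng = random.Random(20240101)  # seed value is arbitrary/illustrative

frame = list(range(1, 101))        # sampling frame: units numbered 1..100
sample = rng.sample(frame, k=10)   # simple random sample without replacement
print(sorted(sample))
```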
| Mathematics | Statistics and probability | null |
160556 | https://en.wikipedia.org/wiki/Ball%20%28mathematics%29 | Ball (mathematics) | In mathematics, a ball is the solid figure bounded by a sphere; it is also called a solid sphere. It may be a closed ball (including the boundary points that constitute the sphere) or an open ball (excluding them).
These concepts are defined not only in three-dimensional Euclidean space but also for lower and higher dimensions, and for metric spaces in general. A ball in n dimensions is called a hyperball or n-ball and is bounded by a hypersphere or (n − 1)-sphere. Thus, for example, a ball in the Euclidean plane is the same thing as a disk, the area bounded by a circle. In Euclidean 3-space, a ball is taken to be the volume bounded by a 2-dimensional sphere. In a one-dimensional space, a ball is a line segment.
In other contexts, such as in Euclidean geometry and informal use, sphere is sometimes used to mean ball. In the field of topology the closed n-dimensional ball is often denoted as B^n or D^n, while the open n-dimensional ball is int B^n or int D^n.
In Euclidean space
In Euclidean n-space, an (open) n-ball of radius r and center x is the set of all points of distance less than r from x. A closed n-ball of radius r is the set of all points of distance less than or equal to r away from x.
In Euclidean n-space, every ball is bounded by a hypersphere. The ball is a bounded interval when n = 1, is a disk bounded by a circle when n = 2, and is bounded by a sphere when n = 3.
Volume
The n-dimensional volume of a Euclidean ball of radius R in n-dimensional Euclidean space is:

$$V_n(R) = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)} R^n,$$

where Γ is Leonhard Euler's gamma function (which can be thought of as an extension of the factorial function to fractional arguments). Using explicit formulas for particular values of the gamma function at the integers and half integers gives formulas for the volume of a Euclidean ball that do not require an evaluation of the gamma function. These are:

$$V_{2k}(R) = \frac{\pi^k}{k!} R^{2k}, \qquad V_{2k+1}(R) = \frac{2(2\pi)^k}{(2k+1)!!} R^{2k+1}.$$

In the formula for odd-dimensional volumes, the double factorial (2k + 1)!! is defined for odd integers 2k + 1 as (2k + 1)!! = 1 · 3 · 5 ⋯ (2k − 1) · (2k + 1).
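The gamma-function formula is easy to evaluate numerically. A minimal sketch using Python's standard library, with the familiar low-dimensional cases as sanity checks:

```python
from math import gamma, pi

def ball_volume(n: int, radius: float = 1.0) -> float:
    """Volume of a Euclidean n-ball via the gamma-function formula
    V_n(R) = pi**(n/2) / gamma(n/2 + 1) * R**n."""
    return pi ** (n / 2) / gamma(n / 2 + 1) * radius ** n

# Sanity checks against the familiar low-dimensional formulas:
print(ball_volume(1, 2.0))  # 4.0        (length of the interval [-2, 2])
print(ball_volume(2, 1.0))  # 3.14159... (area pi * r**2)
print(ball_volume(3, 1.0))  # 4.18879... (volume 4/3 * pi * r**3)
```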
In general metric spaces
Let (M, d) be a metric space, namely a set M with a metric (distance function) d, and let r be a positive real number. The open (metric) ball of radius r centered at a point p in M, usually denoted by B_r(p) or B(p; r), is defined the same way as a Euclidean ball, as the set of points in M of distance less than r away from p,
B_r(p) = {x ∈ M : d(x, p) < r}.
The closed (metric) ball, sometimes denoted B_r[p] or B[p; r], is likewise defined as the set of points of distance less than or equal to r away from p,
B_r[p] = {x ∈ M : d(x, p) ≤ r}.
In particular, a ball (open or closed) always includes p itself, since the definition requires r > 0. A unit ball (open or closed) is a ball of radius 1.
A ball in a general metric space need not be round. For example, a ball in real coordinate space under the Chebyshev distance is a hypercube, and a ball under the taxicab distance is a cross-polytope. A closed ball also need not be compact. For example, a closed ball in any infinite-dimensional normed vector space is never compact. However, a ball in a vector space will always be convex as a consequence of the triangle inequality.
A subset of a metric space is bounded if it is contained in some ball. A set is totally bounded if, given any positive radius, it is covered by finitely many balls of that radius.
The open balls of a metric space can serve as a base, giving this space a topology, the open sets of which are all possible unions of open balls. This topology on a metric space is called the topology induced by the metric .
Let cl B_r(p) denote the closure of the open ball B_r(p) in this topology. While it is always the case that B_r(p) ⊆ cl B_r(p) ⊆ B_r[p], it is not always the case that cl B_r(p) = B_r[p]. For example, in a metric space X with the discrete metric, the closure of the open ball B_1(p) is {p}, but the closed ball B_1[p] is all of X, for any p ∈ X.
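This discrete-metric counterexample can be checked mechanically. A minimal sketch on a three-point set (the set and point names are arbitrary):

```python
# Discrete metric on a small finite set: d(x, y) = 0 if x == y, else 1.
X = {"a", "b", "c"}

def d(x, y):
    return 0 if x == y else 1

def open_ball(p, r):
    return {x for x in X if d(x, p) < r}

def closed_ball(p, r):
    return {x for x in X if d(x, p) <= r}

# With r = 1, the open ball is the singleton {p}; in the discrete topology
# singletons are already closed, so its closure is still {p}. The closed
# ball of radius 1, by contrast, is all of X.
print(open_ball("a", 1))    # {'a'}
print(closed_ball("a", 1))  # {'a', 'b', 'c'}
```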
In normed vector spaces
Any normed vector space V with norm ‖·‖ is also a metric space with the metric d(x, y) = ‖x − y‖. In such spaces, an arbitrary ball B_r(y) of points x around a point y with a distance of less than r may be viewed as a scaled (by r) and translated (by y) copy of a unit ball B_1(0). Such "centered" balls with y = 0 are denoted B(r) or B_r.
The Euclidean balls discussed earlier are an example of balls in a normed vector space.
-norm
In a Cartesian space R^n with the p-norm L_p, that is, one chooses some p ≥ 1 and defines
‖x‖_p = (|x_1|^p + |x_2|^p + ⋯ + |x_n|^p)^(1/p).
Then an open ball around the origin with radius r is given by the set
B(r) = {x ∈ R^n : ‖x‖_p < r}.
For n = 2, in the 2-dimensional plane R^2, "balls" according to the L_1-norm (often called the taxicab or Manhattan metric) are bounded by squares with their diagonals parallel to the coordinate axes; those according to the L_∞-norm, also called the Chebyshev metric, have squares with their sides parallel to the coordinate axes as their boundaries. The L_2-norm, known as the Euclidean metric, generates the well-known disks within circles, and for other values of p, the corresponding balls are areas bounded by Lamé curves (hypoellipses or hyperellipses).
For n = 3, the L_1-balls are within octahedra with axes-aligned body diagonals, the L_∞-balls are within cubes with axes-aligned edges, and the boundaries of balls for L_p with p > 2 are superellipsoids. p = 2 generates the inner of usual spheres.
One can also consider the case p = ∞, in which case we define
‖x‖_∞ = max{|x_1|, |x_2|, …, |x_n|}.
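These norms and their balls are straightforward to compute. A minimal sketch of a membership test for the open p-norm unit ball, illustrating how the ball grows as p increases (the function names are ours):

```python
from math import inf

def p_norm(x, p):
    """The l_p norm of a point x in R^n; p = math.inf gives the max norm."""
    if p == inf:
        return max(abs(c) for c in x)
    return sum(abs(c) ** p for c in x) ** (1 / p)

def in_open_ball(x, p, r=1.0):
    """Membership test for the open p-norm ball of radius r about the origin."""
    return p_norm(x, p) < r

pt = (0.7, 0.7)
for p in (1, 2, inf):
    print(p, in_open_ball(pt, p))
# 1 False  (L1 norm is 1.4: outside the diamond)
# 2 True   (L2 norm is ~0.99: just inside the disk)
# inf True (max norm is 0.7: well inside the square)
```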
General convex norm
More generally, given any centrally symmetric, bounded, open, and convex subset X of R^n, one can define a norm on R^n where the balls are all translated and uniformly scaled copies of X. Note this theorem does not hold if the "open" subset is replaced by a "closed" subset, because the origin point {0} qualifies but does not define a norm on R^n.
In topological spaces
One may talk about balls in any topological space X, not necessarily induced by a metric. An (open or closed) n-dimensional topological ball of X is any subset of X which is homeomorphic to an (open or closed) Euclidean n-ball. Topological n-balls are important in combinatorial topology, as the building blocks of cell complexes.
Any open topological n-ball is homeomorphic to the Cartesian space R^n and to the open unit n-cube (hypercube) (0, 1)^n. Any closed topological n-ball is homeomorphic to the closed n-cube [0, 1]^n.
An n-ball is homeomorphic to an m-ball if and only if n = m. The homeomorphisms between an open n-ball B and R^n can be classified in two classes, which can be identified with the two possible topological orientations of B.
A topological n-ball need not be smooth; if it is smooth, it need not be diffeomorphic to a Euclidean n-ball.
Regions
A number of special regions can be defined for a ball:
cap, bounded by one plane
sector, bounded by a conical boundary with apex at the center of the sphere
segment, bounded by a pair of parallel planes
shell, bounded by two concentric spheres of differing radii
wedge, bounded by two planes passing through a sphere center and the surface of the sphere
| Mathematics | Measurement | null |
160573 | https://en.wikipedia.org/wiki/Axle | Axle | An axle or axletree is a central shaft for a rotating wheel or gear. On wheeled vehicles, the axle may be fixed to the wheels, rotating with them, or fixed to the vehicle, with the wheels rotating around the axle. In the former case, bearings or bushings are provided at the mounting points where the axle is supported. In the latter case, a bearing or bushing sits inside a central hole in the wheel to allow the wheel or gear to rotate around the axle. Sometimes, especially on bicycles, the latter type of axle is referred to as a spindle.
Terminology
On cars and trucks, several senses of the word axle occur in casual usage, referring to the shaft itself, its housing, or simply any transverse pair of wheels. Strictly speaking, a shaft that rotates with the wheel, being either bolted or splined in fixed relation to it, is called an axle or axle shaft. However, in looser usage, an entire assembly including the surrounding axle housing (typically a casting) is also called an axle.
An even broader (somewhat figurative) sense of the word refers to every pair of parallel wheels on opposite sides of a vehicle, regardless of their mechanical connection to each other and to the vehicle frame or body. Thus, transverse pairs of wheels in an independent suspension may be called an axle in some contexts. This very loose definition of "axle" is often used in assessing toll roads or vehicle taxes, and is taken as a rough proxy for the overall weight-bearing capacity of a vehicle, and its potential for causing wear or damage to roadway surfaces.
Vehicle axles
Axles are an integral component of most practical wheeled vehicles. In a solid, "live-axle" suspension system, the rotating inner axle cores (or half-shafts) serve to transmit driving torque to the wheels at each end, while the rigid outer tube maintains the position of the wheels at fixed angles relative to the axle, and controls the angle of the axle and wheels assembly to the vehicle body. The solid axles (housings) in this system must also bear the weight of the vehicle plus any cargo. A non-driving axle, such as the front beam axle in heavy-duty trucks and some two-wheel drive light trucks and vans, will have no shaft, and serves only as a suspension and steering component. Conversely, many front-wheel drive cars have a one-piece rear beam axle.
In other types of suspension systems, the axles serve only to transmit driving torque to the wheels: the position and angle of the wheel hubs is made independent from the axles by the function of the suspension system. This is typical of the independent suspensions found on most newer cars, and even SUVs, and on the front of many light trucks. An exception to this rule is the independent (rear) swing axle suspension, wherein the half-axles are also load-bearing suspension arms.
Independent drive-trains still need differentials (or diffs), but without fixed axle-housing tubes attached. The diff may be attached to the vehicle frame or body, and/or be integrated with the transmission (or gearbox) in a combined transaxle unit. The axle (half-)shafts then transmit driving torque to the wheels, usually via constant-velocity joints. Like a full floating axle system, the drive shafts in a front-wheel-drive independent suspension system do not support any vehicle weight.
Structural features and design
A straight axle is a single rigid shaft connecting a wheel on the left side of the vehicle to a wheel on the right side. The axis of rotation fixed by the axle is common to both wheels. Such a design can keep the wheel positions steady under heavy stress, and can therefore support heavy loads. Straight axles are used on trains (that is, locomotives and railway wagons), for the rear axles of commercial trucks, and on heavy-duty off-road vehicles. The axle can optionally be protected and further reinforced by enclosing the length of the axle in a housing.
In split-axle designs, the wheel on each side is attached to a separate shaft. Modern passenger cars have split-drive axles. In some designs, this allows independent suspension of the left and right wheels, and therefore a smoother ride. Even when the suspension is not independent, split axles permit the use of a differential, allowing the left and right drive wheels to be driven at different speeds as the automobile turns, improving traction and extending tire life.
A tandem axle is a group of two or more axles situated close together. Truck designs use such a configuration to provide a greater weight capacity than a single axle. Semi-trailers usually have a tandem axle at the rear.
Axles are typically made from SAE grade 41xx steel or SAE grade 10xx steel. SAE grade 41xx steel is commonly known as "chrome-molybdenum steel" (or "chrome-moly") while SAE grade 10xx steel is known as "carbon steel". The primary differences between the two are that chrome-moly steel is significantly more resistant to bending or breaking, and is very difficult to weld with tools normally found outside a professional welding shop.
Drive axle
An axle that is driven by the engine or prime mover is called a drive axle.
Modern front-wheel drive cars typically combine the transmission (gearbox and differential) and front axle into a single unit called a transaxle. The drive axle is a split axle with a differential and universal joints between the two half axles. Each half axle connects to the wheel by use of a constant velocity (CV) joint which allows the wheel assembly to move freely vertically as well as to pivot when making turns.
In rear-wheel drive cars and trucks, the engine turns a driveshaft (also called a propeller shaft or tailshaft) which transmits the rotational force to a drive axle at the rear of the vehicle. The drive axle may be a live axle, but modern rear-wheel drive automobiles generally use a split axle with a differential. In this case, one half-axle or half-shaft connects the differential with the left rear wheel, a second half-shaft does the same with the right rear wheel; thus the two half-axles and the differential constitute the rear axle. The front drive axle provides the force to drive the truck; in fact, with an open differential, only one wheel of that axle may actually be moving the truck and trailer down the road.
Some simple vehicle designs, such as leisure go-karts, may have a single driven wheel where the drive axle is a split axle with only one of the two shafts driven by the engine, or else have both wheels connected to one shaft without a differential (kart racing). However, other go-karts have two rear drive wheels too.
Lift axle
Some dump trucks and trailers may be configured with a lift axle (also known as an airlift axle or drop axle), which may be mechanically raised or lowered. The axle is lowered to increase the weight capacity, or to distribute the weight of the cargo over more wheels, for example, to cross a weight-restricted bridge. When not needed, the axle is lifted off the ground to save wear on the tires and axle, and to increase traction in the remaining wheels, and to decrease fuel consumption. Lifting an axle also alleviates lateral scrubbing of the additional axle in very tight turns, allowing the vehicle to turn more readily. In some situations, the removal of pressure from the additional axle is necessary for the vehicle to complete a turn at all.
Several manufacturers offer computer-controlled airlifts so that the dead axles are automatically lowered when the main axle reaches its weight limit. The dead axles can still be lifted by the press of a button if needed, for better maneuverability.
Lift axles were in use in the early 1940s. Initially, the axle was lifted by a mechanical device. Soon hydraulics replaced the mechanical lift system. One of the early manufacturers was Zetterbergs, located in Östervåla, Sweden. Their brand was Zeta-lyften.
The liftable tandem drive axle was invented in 1957 by the Finnish truck manufacturer Vanajan Autotehdas, a company sharing history with Sisu Auto.
Full-floating vs semi-floating
A full-floating axle carries the vehicle's weight on the axle casing, not the half-shafts; they serve only to transmit torque from the differential to the wheels. They "float" inside an assembly that carries the vehicle's weight. Thus the only stress it must endure is torque (not lateral bending force). Full-floating axle shafts are retained by a flange bolted to the hub, while the hub and bearings are retained on the spindle by a large nut.
In contrast, a semi-floating design carries the weight of the vehicle on the axle shaft itself; there is a single bearing at the end of the axle housing that carries the load from the axle and through which the axle rotates. To be "semi-floating", the axle shafts must be able to "float" in the housing, bearings, and seals, and not be subject to axial "thrust" or bearing preload. Needle bearings and separate lip seals are used in semi-floating axles, with the axle shafts retained in the housing at their inner ends, typically with C-clips: ¾-round hardened washers that slide into grooves machined at the inner end of the shafts and are retained in recesses in the differential carrier side gears, which are themselves retained by the differential pinion gear (or "spider gear") shaft. A true semi-floating axle assembly places no side loads on the axle housing tubes or axle shafts.
Axles that are pressed into ball or tapered roller bearings, which are in turn retained in the axle housings with flanges, bolts, and nuts do not "float" and place axial loads on the bearings, housings, and only a short section of the shaft itself, that also carries all radial loads.
The full-floating design is typically used in most ¾- and 1-ton light trucks, medium-duty trucks, and heavy-duty trucks. The overall assembly can carry more weight than a semi-floating or non-floating axle assembly because the hubs have two bearings riding on a fixed spindle. A full-floating axle can be identified by a protruding hub to which the axle shaft flange is bolted.
The semi-floating axle setup is commonly used on half-ton and lighter 4×4 trucks in the rear. This setup allows the axle shaft to be the means of propulsion, and also support the weight of the vehicle. The main difference between the full- and semi-floating axle setups is the number of bearings. The semi-floating axle features only one bearing, while the full-floating assembly has bearings on both the inside and outside of the wheel hub. The other difference is axle removal. To remove the semi-floating axle, the wheel must be removed first; if such an axle breaks, the wheel is most likely to come off the vehicle. The semi-floating design is found under most ½-ton and lighter trucks, as well as in SUVs and rear-wheel-drive passenger cars, usually being smaller or less expensive models.
A benefit of a full-floating axle is that even if an axle shaft (used to transmit torque or power) breaks, the wheel will not come off, preventing serious accidents.
| Technology | Components_2 | null |
160581 | https://en.wikipedia.org/wiki/Tachycardia | Tachycardia | Tachycardia, also called tachyarrhythmia, is a heart rate that exceeds the normal resting rate. In general, a resting heart rate over 100 beats per minute is accepted as tachycardia in adults. Heart rates above the resting rate may be normal (such as with exercise) or abnormal (such as with electrical problems within the heart).
Complications
Tachycardia can lead to fainting.
When the rate of blood flow becomes too rapid, or fast blood flow passes on damaged endothelium, it increases the friction within vessels resulting in turbulence and other disturbances. According to the Virchow's triad, this is one of the three conditions (along with hypercoagulability and endothelial injury/dysfunction) that can lead to thrombosis (i.e., blood clots within vessels).
Causes
Some causes of tachycardia include:
Adrenergic storm
Anaemia
Anxiety
Atrial fibrillation
Atrial flutter
Atrial tachycardia
Atrioventricular reentrant tachycardia
AV nodal reentrant tachycardia
Brugada syndrome
Circulatory shock and its various causes (obstructive shock, cardiogenic shock, hypovolemic shock, distributive shock)
Dehydration
Dysautonomia
Exercise
Fear
Hypoglycemia
Hypovolemia
Hyperthyroidism
Hyperventilation
Inappropriate sinus tachycardia
Junctional tachycardia
Metabolic myopathy
Multifocal atrial tachycardia
Pacemaker mediated
Pain
Panic attack
Pheochromocytoma
Sinus tachycardia
Sleep deprivation
Supraventricular tachycardia
Ventricular tachycardia
Wolff–Parkinson–White syndrome
Drug related:
Alcohol (Ethanol) intoxication
Stimulants
Cannabis
Drug withdrawal
Tricyclic antidepressants
Nefopam
Opioids (rare)
Diagnosis
The upper threshold of a normal human resting heart rate is based on age. Cutoff values for tachycardia in different age groups are fairly well standardized; typical cutoffs are listed below, and a code sketch encoding them follows the list:
1–2 days: Tachycardia >159 beats per minute (bpm)
3–6 days: Tachycardia >166 bpm
1–3 weeks: Tachycardia >182 bpm
1–2 months: Tachycardia >179 bpm
3–5 months: Tachycardia >186 bpm
6–11 months: Tachycardia >169 bpm
1–2 years: Tachycardia >151 bpm
3–4 years: Tachycardia >137 bpm
5–7 years: Tachycardia >133 bpm
8–11 years: Tachycardia >130 bpm
12–15 years: Tachycardia >119 bpm
>15 years – adult: Tachycardia >100 bpm
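For illustration only (not clinical guidance), the cutoffs above can be encoded as a small lookup. A minimal sketch in Python; the bracket arithmetic (30-day months, 365-day years) and the function name are our simplifications:

```python
# Cutoffs from the table above, keyed by the upper end of each age bracket
# expressed in days (months approximated as 30 days, years as 365 days).
CUTOFFS_BPM = [
    (2, 159), (6, 166), (21, 182), (60, 179), (5 * 30, 186),
    (11 * 30, 169), (2 * 365, 151), (4 * 365, 137), (7 * 365, 133),
    (11 * 365, 130), (15 * 365, 119),
]
ADULT_CUTOFF_BPM = 100

def is_tachycardic(age_days: int, heart_rate_bpm: float) -> bool:
    """Return True if the resting heart rate exceeds the cutoff for age.
    A simplified sketch: bracket boundaries are approximate, and gaps
    between brackets fall through to the next cutoff."""
    for max_age, cutoff in CUTOFFS_BPM:
        if age_days <= max_age:
            return heart_rate_bpm > cutoff
    return heart_rate_bpm > ADULT_CUTOFF_BPM

print(is_tachycardic(10, 180))        # 10-day-old at 180 bpm -> False (cutoff 182)
print(is_tachycardic(40 * 365, 110))  # adult at 110 bpm -> True (cutoff 100)
```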
Heart rate is considered in the context of the prevailing clinical picture. When the heart beats excessively or rapidly, the heart pumps less efficiently and provides less blood flow to the rest of the body, including the heart itself. The increased heart rate also leads to increased work and oxygen demand by the heart, which can lead to rate related ischemia.
Differential diagnosis
An electrocardiogram (ECG) is used to classify the type of tachycardia. Tachycardias may be classified as narrow complex or wide complex based on the QRS complex, with a QRS duration of 0.1 seconds or less counting as narrow complex. Presented in order of most to least common, they are:
Narrow complex
Sinus tachycardia, which originates from the sino-atrial (SA) node, near the base of the superior vena cava
Atrial fibrillation
Atrial flutter
AV nodal reentrant tachycardia
Accessory pathway mediated tachycardia
Atrial tachycardia
Multifocal atrial tachycardia
Cardiac Tamponade
Junctional tachycardia (rare in adults)
Wide complex
Ventricular tachycardia, any tachycardia that originates in the ventricles
Any narrow complex tachycardia combined with a problem with the conduction system of the heart, often termed "supraventricular tachycardia with aberrancy"
A narrow complex tachycardia with an accessory conduction pathway, often termed "supraventricular tachycardia with pre-excitation" (e.g. Wolff–Parkinson–White syndrome)
Pacemaker-tracked or pacemaker-mediated tachycardia
Tachycardias may be classified as either narrow complex tachycardias (supraventricular tachycardias) or wide complex tachycardias. Narrow and wide refer to the width of the QRS complex on the ECG. Narrow complex tachycardias tend to originate in the atria, while wide complex tachycardias tend to originate in the ventricles. Tachycardias can be further classified as either regular or irregular.
Sinus
The body has several feedback mechanisms to maintain adequate blood flow and blood pressure. If blood pressure decreases, the heart beats faster in an attempt to raise it. This is called reflex tachycardia. This can happen in response to a decrease in blood volume (through dehydration or bleeding), or an unexpected change in blood flow. The most common cause of the latter is orthostatic hypotension (also called postural hypotension). Fever, hyperventilation, diarrhea and severe infections can also cause tachycardia, primarily due to increase in metabolic demands.
Upon exertion, sinus tachycardia can also be seen in some inborn errors of metabolism that result in metabolic myopathies, such as McArdle's disease (GSD-V). Metabolic myopathies interfere with the muscle's ability to create energy. This energy shortage in muscle cells causes an inappropriate rapid heart rate in response to exercise. The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and other blood borne fuels to the muscle cells.
"In McArdle's, our heart rate tends to increase in what is called an 'inappropriate' response. That is, after the start of exercise it increases much more quickly than would be expected in someone unaffected by McArdle's." As skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity, individuals with GSD-V experience during exercise: sinus tachycardia, tachypnea, muscle fatigue and pain, during the aforementioned activities and time frames. Those with GSD-V also experience "second wind", after approximately 6–10 minutes of light-moderate aerobic activity, such as walking without an incline, where the heart rate drops and symptoms of exercise intolerance improve.
An increase in sympathetic nervous system stimulation causes the heart rate to increase, both by the direct action of sympathetic nerve fibers on the heart and by causing the endocrine system to release hormones such as epinephrine (adrenaline), which have a similar effect. Increased sympathetic stimulation is usually due to physical or psychological stress. This is the basis for the so-called fight-or-flight response, but such stimulation can also be induced by stimulants such as ephedrine, amphetamines or cocaine. Certain endocrine disorders such as pheochromocytoma can also cause epinephrine release and can result in tachycardia independent of nervous system stimulation. Hyperthyroidism can also cause tachycardia. The upper limit of normal rate for sinus tachycardia is thought to be 220 bpm minus age.
Inappropriate sinus tachycardia
Inappropriate sinus tachycardia (IST) is a diagnosis of exclusion, a rare but benign type of cardiac arrhythmia that may be caused by a structural abnormality in the sinus node. It can occur in seemingly healthy individuals with no history of cardiovascular disease. Other causes may include autonomic nervous system deficits, autoimmune response, or drug interactions. Although symptoms might be distressing, treatment is not generally needed.
Ventricular
Ventricular tachycardia (VT or V-tach) is a potentially life-threatening cardiac arrhythmia that originates in the ventricles. It is usually a regular, wide complex tachycardia with a rate between 120 and 250 beats per minute. A medically significant subvariant of ventricular tachycardia is called torsades de pointes (literally meaning "twisting of the points", due to its appearance on an EKG), which tends to result from a long QT interval.
Both of these rhythms normally last for only a few seconds to minutes (paroxysmal tachycardia), but if VT persists it is extremely dangerous, often leading to ventricular fibrillation.
Supraventricular
This is a type of tachycardia that originates from above the ventricles, such as the atria. It is sometimes known as paroxysmal atrial tachycardia (PAT). Several types of supraventricular tachycardia are known to exist.
Atrial fibrillation
Atrial fibrillation is one of the most common cardiac arrhythmias. In general, it is an irregular, narrow complex rhythm. However, it may show wide QRS complexes on the ECG if a bundle branch block is present. At high rates, the QRS complex may also become wide due to the Ashman phenomenon. It may be difficult to determine the rhythm's regularity when the rate exceeds 150 beats per minute. Depending on the patient's health and other variables such as medications taken for rate control, atrial fibrillation may cause heart rates that span from 50 to 250 beats per minute (or even higher if an accessory pathway is present). However, new-onset atrial fibrillation tends to present with rates between 100 and 150 beats per minute.
AV nodal reentrant tachycardia
AV nodal reentrant tachycardia (AVNRT) is the most common reentrant tachycardia. It is a regular narrow complex tachycardia that usually responds well to the Valsalva maneuver or the drug adenosine. However, unstable patients sometimes require synchronized cardioversion. Definitive care may include catheter ablation.
AV reentrant tachycardia
AV reentrant tachycardia (AVRT) requires an accessory pathway for its maintenance. AVRT may involve orthodromic conduction (in which the impulse travels down the AV node to the ventricles and back up to the atria through the accessory pathway) or antidromic conduction (in which the impulse travels down the accessory pathway and back up to the atria through the AV node). Orthodromic conduction usually results in a narrow complex tachycardia, and antidromic conduction usually results in a wide complex tachycardia that often mimics ventricular tachycardia. Most antiarrhythmics are contraindicated in the emergency treatment of AVRT, because they may paradoxically increase conduction across the accessory pathway.
Junctional tachycardia
Junctional tachycardia is an automatic tachycardia originating in the AV junction. It tends to be a regular, narrow complex tachycardia and may be a sign of digitalis toxicity.
Management
The management of tachycardia depends on its type (wide complex versus narrow complex), whether the person is stable or unstable, and whether the instability is due to the tachycardia. Unstable means that either important organ functions are affected or cardiac arrest is about to occur. Stable means that the tachycardia does not appear to pose an immediate threat to the patient's health, but is only a symptom of an underlying condition, or a reaction that is not very dangerous at that moment.
Unstable
In those that are unstable with a narrow complex tachycardia, intravenous adenosine may be attempted. In all others, immediate cardioversion is recommended.
Stable
If the problem is a simple acceleration of the heart rate that worries the patient, but the heart and the patient's general health remain stable, it may be possible to correct it by slowing the heart rate with physical manoeuvres called vagal maneuvers. However, if the cause of the tachycardia is chronic (permanent), the tachycardia will return after some time unless that cause is corrected.
The patient should also avoid external triggers that cause or worsen the tachycardia.
The same measures as in unstable tachycardia can also be taken, with medications and the type of cardioversion that is appropriate for the patient's tachycardia.
Terminology
The word tachycardia came to English from Neo-Latin as a neoclassical compound built from the combining forms tachy- + -cardia, which are from the Greek ταχύς tachys, "quick, rapid" and καρδία, kardia, "heart". As a matter both of usage choices in the medical literature and of idiom in natural language, the words tachycardia and tachyarrhythmia are usually used interchangeably, or loosely enough that precise differentiation is not explicit. Some careful writers have tried to maintain a logical differentiation between them, which is reflected in major medical dictionaries and major general dictionaries. The distinction is that tachycardia be reserved for the rapid heart rate itself, regardless of cause, physiologic or pathologic (that is, from healthy response to exercise or from cardiac arrhythmia), and that tachyarrhythmia be reserved for the pathologic form (that is, an arrhythmia of the rapid rate type). This is why five of the previously referenced dictionaries do not enter cross-references indicating synonymy between their entries for the two words (as they do elsewhere whenever synonymy is meant), and it is why one of them explicitly specifies that the two words not be confused. But the prescription will probably never be successfully imposed on general usage, not only because much of the existing medical literature ignores it even when the words stand alone but also because the terms for specific types of arrhythmia (standard collocations of adjectives and noun) are deeply established idiomatically with the tachycardia version as the more commonly used version. Thus SVT is called supraventricular tachycardia more than twice as often as it is called supraventricular tachyarrhythmia; moreover, those two terms are always completely synonymous—in natural language there is no such term as "healthy/physiologic supraventricular tachycardia". The same themes are also true of AVRT and AVNRT. Thus this pair is an example of when a particular prescription (which may have been tenable 50 or 100 years earlier) can no longer be invariably enforced without violating idiom. But the power to differentiate in an idiomatic way is not lost, regardless, because when the specification of physiologic tachycardia is needed, that phrase aptly conveys it.
| Biology and health sciences | Symptoms and signs | Health |
160710 | https://en.wikipedia.org/wiki/Plover | Plover | Plovers ( , ) are members of a widely distributed group of wading birds of subfamily Charadriinae. The term "plover" applies to all the members of the subfamily, though only about half of them include it in their name.
Species list in taxonomic sequence
The taxonomy of family Charadriidae is unsettled. At various times the plovers, dotterels, and lapwings of family Charadriidae have been distributed among several subfamilies, with Charadriinae including most of the species. The International Ornithological Congress (IOC) and the Clements taxonomy do not assign species to subfamilies. The South American Classification Committee of the American Ornithological Society (AOS) includes all of the species in Charadriinae. The North American Classification Committee of the AOS and BirdLife International's Handbook of the Birds of the World separate the four members of genus Pluvialis as subfamily Pluvialinae.
The IOC recognizes these 69 species of plovers, dotterels, and lapwings in family Charadriidae. They are distributed among 11 genera, some of which have only one species. This list is presented according to the IOC taxonomic sequence and can also be sorted alphabetically by common name and binomial.
Description
Plovers are found throughout the world, with the exception of the Sahara and the polar regions, and are characterised by relatively short bills. They hunt by sight, rather than by feel as longer-billed waders like snipes do. They feed mainly on insects, worms or other invertebrates, depending on the habitat, which are obtained by a run-and-pause technique, rather than the steady probing of some other wader groups. Plovers engage in false brooding, a type of distraction display. Examples include pretending to change position or to sit on an imaginary nest site.
In folklore
The European golden plover spends summers in Iceland, and in Icelandic folklore, the appearance of the first plover in the country means that spring has arrived. The Icelandic media always covers the first plover sighting.
| Biology and health sciences | Charadriiformes | Animals |
160832 | https://en.wikipedia.org/wiki/Tunnel | Tunnel | A tunnel is an underground or undersea passageway. It is dug through surrounding soil, earth or rock, or laid under water, and is usually completely enclosed except for the two portals common at each end, though there may be access and ventilation openings at various points along the length. A pipeline differs significantly from a tunnel, though some recent tunnels have used immersed tube construction techniques rather than traditional tunnel boring methods.
A tunnel may be for foot or vehicular road traffic, for rail traffic, or for a canal. The central portions of a rapid transit network are usually in the tunnel. Some tunnels are used as sewers or aqueducts to supply water for consumption or for hydroelectric stations. Utility tunnels are used for routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment.
Secret tunnels are built for military purposes, or by civilians for smuggling of weapons, contraband, or people. Special tunnels, such as wildlife crossings, are built to allow wildlife to cross human-made barriers safely. Tunnels can be connected together in tunnel networks.
A tunnel is relatively long and narrow; the length is often much greater than twice the diameter, although similar shorter excavations can be constructed, such as cross passages between tunnels. The definition of what constitutes a tunnel can vary widely from source to source. For example, in the United Kingdom, a road tunnel is defined as "a subsurface highway structure enclosed for a length of 150 metres (490 ft) or more." In the United States, the NFPA definition of a tunnel is "An underground structure with a design length greater than 23 m (75 ft) and a diameter greater than 1,800 millimetres (5.9 ft)."
Etymology
The word "tunnel" comes from the Middle English tonnelle, meaning "a net", derived from Old French tonnel, a diminutive of tonne ("cask"). The modern meaning, referring to an underground passageway, evolved in the 16th century as a metaphor for a narrow, confined space like the inside of a cask.
History
The first artificial tunnel is believed to have been constructed in Babylon, about 2200 B.C., to join the temple of Belos with the palace; it was built with the aid of the cut-and-cover technique.
In the Mahabharata, the Pandavas built a secret tunnel within their new home, called "Lakshagriha" (House of Lac), which had been constructed by Purochana under the orders of Duryodhana with the intention of burning them alive inside. The tunnel allowed them to escape when the palace was set on fire; this act of foresight by the Pandavas saved their lives.
Some of the earliest tunnels used by humans were paleoburrows excavated by prehistoric mammals.
Much of the early technology of tunnelling evolved from mining and military engineering. The etymology of the terms "mining" (for mineral extraction or for siege attacks), "military engineering", and "civil engineering" reveals these deep historic connections.
Antiquity and early middle ages
Predecessors of modern tunnels were adits that transported water for irrigation, drinking, or sewerage. The first qanats are known from before 2000 BC.
The earliest tunnel known to have been excavated from both ends is the Siloam Tunnel, built in Jerusalem by the kings of Judah around the 8th century BC. Another tunnel excavated from both ends, maybe the second known, is the Tunnel of Eupalinos, a tunnel aqueduct roughly one kilometre long running through Mount Kastro in Samos, Greece. It was built in the 6th century BC to serve as an aqueduct.
In Pakistan, a Mughal-era tunnel has been restored in Lahore.
In Ethiopia, the Siqurto foot tunnel, hand-hewn in the Middle Ages, crosses a mountain ridge.
In the Gaza Strip, a network of rock-cut tunnel shelters was used by Jewish strategists, in one of the first links to Judean resistance against Roman rule, during the Bar Kokhba revolt in the 2nd century AD.
Geotechnical investigation and design
A major tunnel project must start with a comprehensive investigation of ground conditions by collecting samples from boreholes and by other geophysical techniques. An informed choice can then be made of machinery and methods for excavation and ground support, which will reduce the risk of encountering unforeseen ground conditions. In planning the route, the horizontal and vertical alignments can be selected to make use of the best ground and water conditions. It is common practice to locate a tunnel deeper than otherwise would be required, in order to excavate through solid rock or other material that is easier to support during construction.
Conventional desk and preliminary site studies may yield insufficient information to assess such factors as the blocky nature of rocks, the exact location of fault zones, or the stand-up times of softer ground. This may be a particular concern in large-diameter tunnels. To give more information, a pilot tunnel (or "drift tunnel") may be driven ahead of the main excavation. This smaller tunnel is less likely to collapse catastrophically should unexpected conditions be met, and it can be incorporated into the final tunnel or used as a backup or emergency escape passage. Alternatively, horizontal boreholes may sometimes be drilled ahead of the advancing tunnel face.
Other key geotechnical factors:
Stand-up time is the amount of time a newly excavated cavity can support itself without any added structures. Knowing this parameter allows the engineers to determine how far an excavation can proceed before support is needed, which in turn affects the speed, efficiency, and cost of construction. Generally, certain configurations of rock and clay will have the greatest stand-up time, while sand and fine soils will have a much lower stand-up time.
Groundwater control is very important in tunnel construction. Water leaking into a tunnel or vertical shaft will greatly decrease stand-up time, causing the excavation to become unstable and risking collapse. The most common way to control groundwater is to install dewatering pipes into the ground and to simply pump the water out. A very effective but expensive technology is ground freezing, using pipes which are inserted into the ground surrounding the excavation, which are then cooled with special refrigerant fluids. This freezes the ground around each pipe until the whole space is surrounded with frozen soil, keeping water out until a permanent structure can be built.
Tunnel cross-sectional shape is also very important in determining stand-up time. If a tunnel excavation is wider than it is high, it will have a harder time supporting itself, decreasing its stand-up time. A square or rectangular excavation is more difficult to make self-supporting, because of a concentration of stress at the corners.
Choice of tunnels versus bridges
For water crossings, a tunnel is generally more costly to construct than a bridge. However, both navigational and traffic considerations may limit the use of high bridges or drawbridges intersecting with shipping channels, necessitating a tunnel.
Bridges usually require a larger footprint on each shore than tunnels. In areas with expensive real estate, such as Manhattan and urban Hong Kong, this is a strong factor in favor of a tunnel. Boston's Big Dig project replaced elevated roadways with a tunnel system to increase traffic capacity, hide traffic, reclaim land, redecorate, and reunite the city with the waterfront.
The 1934 Queensway Tunnel under the River Mersey at Liverpool was chosen over a massively high bridge partly for defence reasons; it was feared that aircraft could destroy a bridge in times of war, not merely impairing road traffic but blocking the river to navigation. Maintenance costs of a massive bridge to allow the world's largest ships to navigate under were considered higher than for a tunnel. Similar conclusions were reached for the 1971 Kingsway Tunnel under the Mersey. In Hampton Roads, Virginia, tunnels were chosen over bridges for strategic considerations; in the event of damage, bridges might prevent US Navy vessels from leaving Naval Station Norfolk.
Water-crossing tunnels built instead of bridges include the Seikan Tunnel in Japan; the Holland Tunnel and Lincoln Tunnel between New Jersey and Manhattan in New York City; the Queens-Midtown Tunnel between Manhattan and the borough of Queens on Long Island; the Detroit-Windsor Tunnel between Michigan and Ontario; and the Elizabeth River tunnels between Norfolk and Portsmouth, Virginia; the 1934 River Mersey road Queensway Tunnel; the Western Scheldt Tunnel, Zeeland, Netherlands; and the North Shore Connector tunnel in Pittsburgh, Pennsylvania. The Sydney Harbour Tunnel was constructed to provide a second harbour crossing and to alleviate traffic congestion on the Sydney Harbour Bridge, without spoiling the iconic view.
Other reasons for choosing a tunnel instead of a bridge include avoiding difficulties with tides, weather, and shipping during construction (as in the Channel Tunnel), aesthetic reasons (preserving the above-ground view, landscape, and scenery), and also for weight capacity reasons (it may be more feasible to build a tunnel than a sufficiently strong bridge).
Some water crossings are a mixture of bridges and tunnels, such as the Denmark to Sweden link and the Chesapeake Bay Bridge-Tunnel in Virginia.
There are particular hazards with tunnels, especially from vehicle fires when combustion gases can asphyxiate users, as happened at the Gotthard Road Tunnel in Switzerland in 2001. One of the worst railway disasters ever, the Balvano train disaster, was caused by a train stalling in the Armi tunnel in Italy in 1944, killing 426 passengers. Designers try to reduce these risks by installing emergency ventilation systems or isolated emergency escape tunnels parallel to the main passage.
Project planning and cost estimates
Government funds are often required for the creation of tunnels. When a tunnel is being planned or constructed, economics and politics play a large factor in the decision making process. Civil engineers usually use project management techniques for developing a major structure. Understanding the amount of time the project requires, and the amount of labor and materials needed is a crucial part of project planning. The project duration must be identified using a work breakdown structure and critical path method. Also, the land needed for excavation and construction staging, and the proper machinery must be selected. Large infrastructure projects require millions or even billions of dollars, involving long-term financing, usually through issuance of bonds.
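The critical path method mentioned above reduces to a longest-path computation over the task dependency graph. A minimal sketch; the tasks, durations, and dependencies are purely illustrative:

```python
# A minimal critical path method (CPM) sketch for tunnel project planning.
# Task names, durations (months), and dependencies are purely illustrative.
tasks = {
    "site_investigation": (4, []),
    "design":             (6, ["site_investigation"]),
    "procure_tbm":        (12, ["design"]),
    "access_shafts":      (8, ["design"]),
    "boring":             (18, ["procure_tbm", "access_shafts"]),
    "fit_out":            (6, ["boring"]),
}

earliest_finish = {}

def finish(task):
    """Earliest finish time = duration + latest earliest-finish of predecessors."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max(
            (finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print(project_duration)  # 46 months along the critical path
```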
The costs and benefits for an infrastructure such as a tunnel must be identified. Political disputes can occur, as in 2005 when the US House of Representatives approved a $100 million federal grant to build a tunnel under New York Harbor. However, the Port Authority of New York and New Jersey was not aware of this bill and had not asked for a grant for such a project. Increased taxes to finance a large project may cause opposition.
Construction
Tunnels are dug in types of materials varying from soft clay to hard rock. The method of tunnel construction depends on such factors as the ground conditions, the groundwater conditions, the length and diameter of the tunnel drive, the depth of the tunnel, the logistics of supporting the tunnel excavation, the final use and the shape of the tunnel and appropriate risk management.
There are three basic types of tunnel construction in common use. Cut-and-cover tunnels are constructed in a shallow trench and then covered over. Bored tunnels are constructed in situ, without removing the ground above. Finally, a tube can be sunk into a body of water, which is called an immersed tunnel.
Cut-and-cover
Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support system strong enough to carry the load of what is to be built above the tunnel.
There are two basic forms of cut-and-cover tunnelling:
Bottom-up method: A trench is excavated, with ground support as necessary, and the tunnel is constructed in it. The tunnel may be of in situ concrete, precast concrete, precast arches, or corrugated steel arches; in early days brickwork was used. The trench is then carefully back-filled and the surface is reinstated.
Top-down method: Side support walls and capping beams are constructed from ground level by such methods as slurry walling or contiguous bored piling. Only a shallow excavation is needed to construct the tunnel roof using precast beams or in situ concrete sitting on the walls. The surface is then reinstated except for access openings. This allows early reinstatement of roadways, services, and other surface features. Excavation then takes place under the permanent tunnel roof, and the base slab is constructed.
Shallow tunnels are often of the cut-and-cover type (if under water, of the immersed-tube type), while deep tunnels are excavated, often using a tunnelling shield. For intermediate levels, both methods are possible.
Large cut-and-cover boxes are often used for underground metro stations, such as Canary Wharf tube station in London. This construction form generally has two levels, which allows economical arrangements for ticket hall, station platforms, passenger access and emergency egress, ventilation and smoke control, staff rooms, and equipment rooms. The interior of Canary Wharf station has been likened to an underground cathedral, owing to the sheer size of the excavation. This contrasts with many traditional stations on London Underground, where bored tunnels were used for stations and passenger access. Nevertheless, the original parts of the London Underground network, the Metropolitan and District Railways, were constructed using cut-and-cover. These lines pre-dated electric traction and the proximity to the surface was useful to ventilate the inevitable smoke and steam.
A major disadvantage of cut-and-cover is the widespread disruption generated at the surface level during construction. This, and the availability of electric traction, brought about London Underground's switch to bored tunnels at a deeper level towards the end of the 19th century.
Prior to the replacement of manual excavation by boring machines, Victorian tunnel excavators developed a specialized method called clay-kicking for digging tunnels in clay-based soils. The clay-kicker lies on a plank at a 45-degree angle away from the working face and, rather than working a mattock with his hands, inserts with his feet a tool with a cup-like rounded end, then turns the tool with his hands to extract a section of soil, which is then placed on the waste extract.
Clay-kicking is a specialized method developed in the United Kingdom of digging tunnels in strong clay-based soil structures. This method of cut and cover construction required relatively little disturbance of property during the renewal of the United Kingdom's then ancient sewerage systems. It was also used during the First World War by Royal Engineer tunnelling companies placing mines beneath German lines, because it was almost silent and so not susceptible to listening methods of detection.
Boring machines
Tunnel boring machines (TBMs) and associated back-up systems are used to highly automate the entire tunnelling process, reducing tunnelling costs. In certain predominantly urban applications, tunnel boring is viewed as a quick and cost-effective alternative to laying surface rails and roads. Expensive compulsory purchase of buildings and land, with potentially lengthy planning inquiries, is eliminated. Disadvantages of TBMs arise from their usually large size – the difficulty of transporting the large TBM to the site of tunnel construction, or (alternatively) the high cost of assembling the TBM on-site, often within the confines of the tunnel being constructed.
There are a variety of TBM designs that can operate in a variety of conditions, from hard rock to soft water-bearing ground. Some TBMs, the bentonite slurry and earth-pressure balance types, have pressurized compartments at the front end, allowing them to be used in difficult conditions below the water table. This pressurizes the ground ahead of the TBM cutter head to balance the water pressure. The operators work in normal air pressure behind the pressurized compartment, but may occasionally have to enter that compartment to renew or repair the cutters. This requires special precautions, such as local ground treatment or halting the TBM at a position free from water. Despite these difficulties, TBMs are now preferred over the older method of tunnelling in compressed air, with an airlock/decompression chamber some way back from the TBM, which required operators to work in high pressure and go through decompression procedures at the end of their shifts, much like deep-sea divers.
In February 2010, Aker Wirth delivered a TBM to Switzerland, for the expansion of the Linth–Limmern Power Stations located south of Linthal in the canton of Glarus. The borehole has a diameter of . The four TBMs used for excavating the Gotthard Base Tunnel, in Switzerland, had a diameter of about 9 m. A larger TBM was built to bore the Green Heart Tunnel (Dutch: Tunnel Groene Hart) as part of the HSL-Zuid in the Netherlands, with a diameter of 14.87 m. This in turn was superseded by the Madrid M30 ringroad, Spain, and the Chong Ming tunnels in Shanghai, China. All of these machines were built at least partly by Herrenknecht. As of 2013, the world's largest TBM was "Big Bertha", a 17.45 m diameter machine built by Hitachi Zosen Corporation, which dug the Alaskan Way Viaduct replacement tunnel in Seattle, Washington (US).
Shafts
A temporary access shaft is sometimes necessary during the excavation of a tunnel. Such shafts are usually circular and go straight down until they reach the level at which the tunnel is to be built. A shaft normally has concrete walls and is usually built to be permanent. Once the access shafts are complete, TBMs are lowered to the bottom and excavation can start. Shafts are the main entrance to and exit from the tunnel until the project is completed. If a tunnel is going to be long, multiple shafts at various locations may be bored so that access to the tunnel workings remains close to the unexcavated area.
Once construction is complete, construction access shafts are often used as ventilation shafts, and may also be used as emergency exits.
Sprayed concrete techniques
The new Austrian tunnelling method (NATM)—also referred to as the Sequential Excavation Method (SEM)—was developed in the 1960s.
The main idea of this method is to use the geological stress of the surrounding rock mass to stabilize the tunnel, by allowing a measured relaxation and stress reassignment into the surrounding rock to prevent full loads becoming imposed on the supports. Based on geotechnical measurements, an optimal cross section is computed. The excavation is protected by a layer of sprayed concrete, commonly referred to as shotcrete. Other support measures can include steel arches, rock bolts, and mesh. Developments in sprayed concrete technology have resulted in steel and polypropylene fibers being added to the concrete mix to improve lining strength. This creates a natural load-bearing ring, which minimizes the rock's deformation.
Because conditions are monitored continuously, the NATM is flexible even when the geomechanical consistency of the rock changes unexpectedly during tunnelling; the measured rock properties determine the appropriate measures for strengthening the tunnel.
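The monitor-and-respond loop at the heart of the method can be caricatured in a few lines. The sketch below is a toy illustration only: the convergence readings and alert thresholds are invented, and real projects use instrumented trigger levels set by the designer.

```python
# Toy illustration of NATM-style convergence monitoring: daily tunnel-wall
# convergence readings (mm) are checked against assumed alert thresholds,
# and the support class is escalated if deformation is not stabilizing.
# All thresholds and readings here are invented for demonstration.

ALERT_RATE_MM_PER_DAY = 2.0   # assumed alert level for convergence rate
STABLE_RATE_MM_PER_DAY = 0.5  # assumed rate below which the ring is stable

def assess(readings_mm: list[float]) -> str:
    # Rate of change between successive daily readings.
    rates = [b - a for a, b in zip(readings_mm, readings_mm[1:])]
    latest_rate = rates[-1]
    if latest_rate > ALERT_RATE_MM_PER_DAY:
        return "escalate support: add rock bolts / thicken shotcrete"
    if latest_rate < STABLE_RATE_MM_PER_DAY:
        return "stable: proceed with next excavation round"
    return "monitor: hold advance rate, continue measurements"

print(assess([0.0, 3.1, 5.8, 7.9, 9.2]))  # latest rate 1.3 mm/day -> monitor
```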
Pipe jacking
In pipe jacking, hydraulic jacks are used to push specially made pipes through the ground behind a TBM or shield. This method is commonly used to create tunnels under existing structures, such as roads or railways. Tunnels constructed by pipe jacking are normally small diameter bores with a maximum size of around .
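Since friction acts over the whole pipe surface, the required jacking capacity grows roughly linearly with drive length, which is one reason drives are limited in size. A minimal sketch of the estimate follows, assuming a uniform skin-friction value and illustrative dimensions; a real design would use site-specific geotechnical data.

```python
# Minimal estimate of the total jacking force for a pipe-jacked drive:
# face resistance plus skin friction over the pipe's outer surface.
# The friction value, face pressure, and dimensions are assumed.

import math

def jacking_force_kn(outer_diameter_m: float, drive_length_m: float,
                     skin_friction_kpa: float, face_pressure_kpa: float) -> float:
    face_area = math.pi * (outer_diameter_m / 2) ** 2          # m^2
    skin_area = math.pi * outer_diameter_m * drive_length_m    # m^2
    return face_pressure_kpa * face_area + skin_friction_kpa * skin_area

# A 1.5 m pipe jacked 100 m with 5 kPa skin friction and 150 kPa face pressure:
print(f"{jacking_force_kn(1.5, 100.0, 5.0, 150.0):.0f} kN")  # ~2621 kN
```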
Box jacking
Box jacking is similar to pipe jacking, but instead of jacking tubes, a box-shaped tunnel is used. Jacked boxes can be a much larger span than a pipe jack, with the span of some box jacks in excess of . A cutting head is normally used at the front of the box being jacked, and spoil removal is normally by excavator from within the box. Recent developments of the jacked arch and jacked deck have enabled longer and larger structures to be installed to close accuracy.
Underwater tunnels
There are also several approaches to underwater tunnels, the two most common being bored tunnels and immersed tubes; examples include the Bjørvika Tunnel and the Marmaray. Submerged floating tunnels are a novel approach under consideration; however, no such tunnels have been constructed to date.
Temporary way
During construction of a tunnel, it is often convenient to install a temporary railway, particularly to remove excavated spoil. This temporary way is often narrow gauge, so that a double track can be laid, allowing empty and loaded trains to operate at the same time. The temporary way is replaced by the permanent way at completion, hence the term "perway".
Enlargement
The vehicles or traffic using a tunnel can outgrow it, requiring replacement or enlargement:
The original single line Gib Tunnel near Mittagong was replaced with a double-track tunnel, with the original tunnel used for growing mushrooms.
The 1832 double-track -long tunnel from Edge Hill to Lime Street in Liverpool was almost totally removed, apart from a section at Edge Hill and a section nearer to Lime Street, as four tracks were required. The tunnel was dug out into a very deep four-track cutting, with short tunnels in places along the cutting. Train services were not interrupted as the work progressed. There are other occurrences of tunnels being replaced by open cuts, for example the Auburn Tunnel.
The Farnworth Tunnel in England was enlarged using a tunnel boring machine (TBM) in 2015. The Rhyndaston Tunnel was enlarged using a borrowed TBM so as to be able to take ISO containers.
Tunnels can also be enlarged by lowering the floor.
Open building pit
An open building pit consists of a horizontal and a vertical boundary that keep groundwater and soil out of the pit. There are several potential alternatives and combinations for (horizontal and vertical) building pit boundaries. The most important difference from cut-and-cover is that no roof is placed over an open building pit; the pit is simply backfilled once the tunnel has been built.
Other construction methods
Drilling and blasting
Hydraulic splitter
Slurry-shield machine
Wall-cover construction method
Variant tunnel types
Double-deck and multipurpose tunnels
Some tunnels are double-deck; for example, the two major segments of the San Francisco–Oakland Bay Bridge (completed in 1936) are linked by a double-deck tunnel section through Yerba Buena Island, the largest-diameter bored tunnel in the world. As built, this carried bidirectional rail and truck traffic on the lower deck with automobiles above; it has since been converted to one-way road vehicle traffic on each deck.
In Turkey, the Eurasia Tunnel under the Bosphorus, opened in 2016, has at its core a two-deck road tunnel with two lanes on each deck.
Additionally, in 2015 the Turkish government announced that it would build a three-level tunnel, also under the Bosphorus. The tunnel is intended to carry both the Istanbul metro and a two-level highway, over a length of .
The French A86 Duplex tunnel in west Paris consists of two bored tunnel tubes, the eastern one of which has two levels for light motorized vehicles, over a length of . Although each level offers a physical height of , only traffic up to tall is allowed in this tunnel tube, and motorcyclists are directed to the other tube. Each level was built with a three-lane roadway, but only two lanes per level are used; the third serves as a hard shoulder within the tunnel. The A86 Duplex is Europe's longest double-deck tunnel.
In Shanghai, China, a two-tube double-deck tunnel was built starting in 2002. In each tube, both decks are for motor vehicles: in each direction, only cars and taxis travel on the two-lane upper deck, while heavier vehicles such as trucks and buses, as well as cars, may use the single-lane lower level.
In the Netherlands, a two-storey, eight-lane, cut-and-cover road tunnel under the city of Maastricht was opened in 2016. Each level accommodates a full height, two by two-lane highway. The two lower tubes of the tunnel carry the A2 motorway, which originates in Amsterdam, through the city; and the two upper tubes take the N2 regional highway for local traffic.
The Alaskan Way Viaduct replacement tunnel is a $3.3 billion double-decker bored highway tunnel under Downtown Seattle. Construction began in July 2013 using "Bertha", at the time the world's largest earth pressure balance tunnel boring machine. After several delays, tunnel boring was completed in April 2017, and the tunnel opened to traffic on 4 February 2019.
New York City's 63rd Street Tunnel under the East River, between the boroughs of Manhattan and Queens, was intended to carry subway trains on the upper level and Long Island Rail Road commuter trains on the lower level. Construction started in 1969, and the two sides of the tunnel were bored through in 1972. The upper level, used by the IND 63rd Street Line () of the New York City Subway, was not opened for passenger service until 1989. The lower level, intended for commuter rail, saw passenger service after completion of the East Side Access project, in late 2022.
In the UK, the 1934 Queensway Tunnel under the River Mersey between Liverpool and Birkenhead was originally to have road vehicles running on the upper deck and trams on the lower. During construction the tram usage was cancelled. The lower section is only used for cables, pipes and emergency accident refuge enclosures.
Hong Kong's Lion Rock Tunnel, built in the mid-1960s and connecting New Kowloon and Sha Tin, carries a motorway but also serves as an aqueduct, with a gallery containing five water mains below the road section of the tunnel.
Wuhan's Yangtze River Highway and Railway Tunnel is a two-tube double-deck tunnel under the Yangtze River, completed in 2018. Each tube carries three lanes of local traffic on the top deck, with a single track of Wuhan Metro Line 7 on the lower deck.
The Mount Baker Tunnel has three levels. The bottom level is to be used by Sound Transit light rail, the middle level is used by car traffic, and the top level is for bicycle and pedestrian access.
Some tunnels have more than one purpose. The SMART Tunnel in Malaysia is the first multipurpose "Stormwater Management And Road Tunnel" in the world, created to convey both traffic and occasional flood waters in Kuala Lumpur. When necessary, floodwater is first diverted into a separate bypass tunnel located underneath the double-deck roadway tunnel, and traffic continues normally. Only during heavy, prolonged rains, when the threat of extreme flooding is high, is the roadway closed to vehicles and the automated flood control gates opened so that water can be diverted through both tunnels.
Covered passageways
Over-bridges can sometimes be built by covering a road, river, or railway with brick or steel arches and then levelling the surface with earth. In railway parlance, a surface-level track that has been built over in this way is normally called a "covered way".
Snow sheds are a kind of artificial tunnel built to protect a railway from avalanches of snow. Similarly the Stanwell Park, New South Wales "steel tunnel", on the Illawarra railway line, protects the line from rockfalls.
Underpass
An underpass is a road, railway, or other passageway that passes under another road or railway beneath an overpass; it is not strictly a tunnel.
Utility tunnels
A utility tunnel is built for the purpose of carrying one or more utilities in the same space; for this reason, such tunnels are also referred to as multi-utility tunnels, or MUTs. Through co-location of different utilities in one tunnel, organizations are able to reduce the financial and environmental costs of building and maintaining utilities. These tunnels can be used for many types of utilities, routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment.
Safety and security
Owing to the enclosed space of a tunnel, fires can have very serious effects on users. The main dangers are gas and smoke production, with even low concentrations of carbon monoxide being highly toxic. In the Gotthard tunnel fire of 2001, for example, 11 people were killed, all of the victims succumbing to smoke and gas inhalation. Over 400 passengers died in the Balvano train disaster in Italy in 1944, when the locomotive halted in a long tunnel; carbon monoxide poisoning was the main cause of death. In the Caldecott Tunnel fire of 1982, the majority of fatalities were caused by toxic smoke rather than by the initial crash. Likewise, 84 people were killed in the Paris Métro train fire of 1903.
Motor vehicle tunnels usually require ventilation shafts and powered fans to remove toxic exhaust gases during routine operation.
Rail tunnels usually require fewer air changes per hour, but still may require forced-air ventilation. Both types of tunnels often have provisions to increase ventilation under emergency conditions, such as a fire. Although there is a risk of increasing the rate of combustion through increased airflow, the primary focus is on providing breathable air to persons trapped in the tunnel, as well as firefighters.
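At its simplest, sizing such a system is an air-changes calculation: the required fan flow scales with tunnel volume. The sketch below uses invented tunnel dimensions and an assumed air-change target rather than figures from any ventilation standard.

```python
# Back-of-envelope fan sizing from an air-changes-per-hour (ACH) target.
# Tunnel dimensions and the ACH figure are assumed for illustration;
# real designs follow road and rail tunnel ventilation standards.

def required_flow_m3s(length_m: float, cross_section_m2: float,
                      air_changes_per_hour: float) -> float:
    volume = length_m * cross_section_m2            # tunnel air volume, m^3
    return volume * air_changes_per_hour / 3600.0   # per-hour -> per-second

# A 2 km tunnel with an 80 m^2 bore and a 6 ACH target:
print(f"{required_flow_m3s(2000.0, 80.0, 6.0):.0f} m^3/s")  # about 267 m^3/s
```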
The aerodynamic pressure waves produced by high-speed trains entering a tunnel reflect at its open ends and change sign (a compression wavefront becomes a rarefaction wavefront and vice versa). When two wavefronts of the same sign meet the train, the significant and rapid pressure change may cause ear discomfort for passengers and crew. When a high-speed train exits a tunnel, a loud "tunnel boom" may occur, which can disturb residents near the mouth of the tunnel; the effect is exacerbated in mountain valleys where the sound can echo.
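A rough model of the reflection geometry shows where a returning wavefront meets the train. The sketch below assumes a nominal speed of sound and illustrative train speed and tunnel length; it ignores the waves generated continuously along the train and other real gas-dynamic effects.

```python
# Where does the reflected pressure wave meet the train? The compression
# wavefront runs ahead at the speed of sound c, reflects (with sign change)
# at the far portal, and returns toward the oncoming train. The speeds and
# tunnel length below are assumed for illustration.

C_SOUND = 340.0  # nominal speed of sound in air, m/s

def meeting_point_m(tunnel_length_m: float, train_speed_ms: float) -> float:
    # Train position v*t equals returning wave position L - c*(t - L/c);
    # solving gives t = 2L / (v + c).
    t = 2 * tunnel_length_m / (train_speed_ms + C_SOUND)
    return train_speed_ms * t

# A train at 300 km/h (about 83 m/s) in a 5 km tunnel:
print(f"{meeting_point_m(5000.0, 300 / 3.6):.0f} m")  # roughly 1970 m in
```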
When there is a parallel, separate tunnel available, airtight but unlocked emergency doors are usually provided which allow trapped personnel to escape from a smoke-filled tunnel to the parallel tube.
Larger, heavily used tunnels, such as the Big Dig tunnel in Boston, Massachusetts, may have a dedicated 24-hour staffed operations center which monitors and reports on traffic conditions, and responds to emergencies. Video surveillance equipment is often used, and real-time pictures of traffic conditions for some highways may be viewable by the general public via the Internet.
A database of seismic damage to underground structures, drawing on 217 case histories, shows that the following general observations can be made regarding the seismic performance of underground structures:
Underground structures suffer appreciably less damage than surface structures.
Reported damage decreases with increasing overburden depth: deep tunnels seem to be safer and less vulnerable to earthquake shaking than shallow tunnels.
Underground facilities constructed in soils can be expected to suffer more damage compared to openings constructed in competent rock.
Lined and grouted tunnels are safer than unlined tunnels in rock. Shaking damage can be reduced by stabilizing the ground around the tunnel and by improving the contact between the lining and the surrounding ground through grouting.
Tunnels are more stable under a symmetric load, which improves ground-lining interaction. Improving the tunnel lining by placing thicker and stiffer sections without stabilizing surrounding poor ground may result in excess seismic forces in the lining. Backfilling with non-cyclically mobile material and rock-stabilizing measures may improve the safety and stability of shallow tunnels.
Damage may be related to peak ground acceleration and velocity, based on the magnitude and epicentral distance of the earthquake.
The duration of strong-motion shaking during earthquakes is of utmost importance because it may cause fatigue failure and, therefore, large deformations.
High frequency motions may explain the local spalling of rock or concrete along planes of weakness. These frequencies, which rapidly attenuate with distance, may be expected mainly at small distances from the causative fault.
Ground motion may be amplified upon incidence with a tunnel if the wavelengths are between one and four times the tunnel diameter (see the sketch after this list).
Damage at and near tunnel portals may be significant due to slope instability.
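As a rough illustration of the wavelength criterion in the list above, the relation lambda = c / f converts the one-to-four-diameter window into a frequency band. The shear-wave velocity and tunnel diameter below are assumed values for demonstration.

```python
# Restating the amplification criterion (wavelength between one and four
# tunnel diameters) as a frequency band via lambda = c / f.
# The wave speed and diameter are assumed, not taken from any case history.

def amplification_band_hz(shear_wave_velocity_ms: float, diameter_m: float):
    f_low = shear_wave_velocity_ms / (4 * diameter_m)   # lambda = 4 diameters
    f_high = shear_wave_velocity_ms / diameter_m        # lambda = 1 diameter
    return f_low, f_high

# A 10 m diameter tunnel in rock with a shear-wave velocity of 2000 m/s:
low, high = amplification_band_hz(2000.0, 10.0)
print(f"{low:.0f}-{high:.0f} Hz")  # 50-200 Hz
```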
Earthquakes are one of nature's most formidable threats. A magnitude 6.7 earthquake shook the San Fernando Valley in Los Angeles in 1994, causing extensive damage to various structures, including buildings, freeway overpasses, and road systems throughout the area; the National Centers for Environmental Information estimates total damages at 40 billion dollars. According to an article by Steve Hymon of The Source – Transportation News and Views, the LA subway system sustained no serious damage. Metro, the owner of the LA subway system, issued a statement through its engineering staff about the design considerations that go into a tunnel system: engineers and architects perform extensive analysis of how strongly earthquakes are expected to shake an area, and this feeds into the overall design and flexibility of the tunnel.
This same trend of limited subway damage following an earthquake can be seen in many other places. In 1985 a magnitude 8.1 earthquake shook Mexico City; there was no damage to the subway system, which in fact served as a lifeline for emergency personnel and evacuations. A magnitude 7.2 earthquake ripped through Kobe, Japan, in 1995, leaving the tunnels themselves undamaged; entry portals sustained minor damage, but this was attributed to inadequate earthquake design dating from the original construction in 1965. In 2010 a magnitude 8.8 earthquake, massive by any scale, struck Chile. Entrance stations to subway systems suffered minor damage, and the subway system was down for the rest of the day; by the next afternoon, it was operational again.
Examples
In history
The history of ancient tunnels and tunnelling is reviewed in various sources, which include many examples of structures built for different purposes. Some well-known ancient and modern tunnels are briefly introduced below:
The qanat or kareez of Persia are water management systems used to provide a reliable supply of water to human settlements or for irrigation in hot, arid, and semi-arid climates. The deepest known qanat is in the Iranian city of Gonabad, which, after 2,700 years, still provides drinking and agricultural water to nearly 40,000 people. Its main well depth is more than , and its length is .
The Siloam Tunnel was built before 701 BC for a reliable supply of water, to withstand siege attacks.
The Eupalinian aqueduct on the island of Samos (North Aegean, Greece) was built in 520 BC by the ancient Greek engineer Eupalinos of Megara under a contract with the local community. Eupalinos organised the work so that the tunnel was begun from both sides of Mount Kastro. The two teams advanced simultaneously and met in the middle with excellent accuracy, something that was extremely difficult at that time. The aqueduct was of utmost defensive importance, since it ran underground and was not easily found by an enemy who could otherwise cut off the water supply to Pythagoreion, the ancient capital of Samos. The tunnel's existence was recorded by Herodotus (as were the mole and harbour, and the third wonder of the island, the great temple to Hera, thought by many to be the largest in the Greek world). The precise location of the tunnel was only re-established in the 19th century by German archaeologists. The tunnel proper is , and visitors can still enter it.
One of the first known drainage and sewage networks in the form of tunnels was constructed at Persepolis in Iran at the same time as the construction of its foundation, in 518 BC. In most places the network was dug in the sound rock of the mountain and then covered by large pieces of rock and stone, followed by earth and piles of rubble to level the ground. During investigations and surveys, long sections of similar rock tunnels extending beneath the palace area were traced by Herzfeld and later by Schmidt and their archaeological teams.
The Via Flaminia, an important Roman road, penetrated the Furlo pass in the Apennines through a tunnel which Emperor Vespasian had ordered built in 76–77 AD. A modern road, the SS 3 Flaminia, still uses this tunnel, which had a precursor dating back to the 3rd century BC; remnants of this earlier tunnel (one of the first road tunnels) are also still visible.
The world's oldest tunnel traversing under a water body is claimed to be the Terelek kaya tüneli under Kızıl River, a little south of the towns of Boyabat and Durağan in Turkey, just downstream from where Kizil River joins its tributary Gökırmak. The tunnel is presently under a narrow part of a lake formed by a dam some kilometres further downstream. Estimated to have been built more than 2000 years ago, possibly by the same civilization that also built the royal tombs in a rock face nearby, it is assumed to have had a defensive purpose.
Sapperton Canal Tunnel on the Thames and Severn Canal in England, dug through hills and opened in 1789, was long and allowed boat transport of coal and other goods. Above it the Sapperton Long Tunnel was later constructed, carrying the "Golden Valley" railway line between Swindon and Gloucester.
The 1791 Dudley canal tunnel is on the Dudley Canal in Dudley, England. The tunnel is long. Closed in 1962, it was reopened in 1973. The series of tunnels was extended in 1984 and 1989.
Fritchley Tunnel, constructed in 1793 in Derbyshire by the Butterley Company to transport limestone to its ironworks, is the world's oldest railway tunnel traversed by rail wagons. The Butterley Company engineered and built its own railway, initially using gravity and horse haulage. The railway was converted to steam locomotion in 1813 using a Steam Horse locomotive engineered and built by the Butterley Company, but later reverted to horses. Steam trains then used the tunnel continuously from the 1840s, when the railway was converted to a narrow gauge. The line closed in 1933. In the Second World War, the tunnel was used as an air raid shelter. Sealed up in 1977, it was rediscovered and inspected in 2013, then resealed to preserve the construction, which has been designated an ancient monument. A victim of the recession, the Butterley Company closed in 2009 after 219 years.
The 1794 Butterley canal tunnel is in length, on the Cromford Canal in Ripley, Derbyshire, England. The tunnel was built simultaneously with the 1793 Fritchley railway tunnel. It partially collapsed in 1900, splitting the Cromford Canal, and has not been used since. The Friends of Cromford Canal, a group of volunteers, are working to fully restore the Cromford Canal and the Butterley Tunnel.
The 1796 Stoddart Tunnel in Chapel-en-le-Frith in Derbyshire is reputed to be the oldest rail tunnel in the world. The rail wagons were originally horse-drawn.
Derby Tunnels in Salem, Massachusetts, were built in 1801 to smuggle imports affected by President Thomas Jefferson's new customs duties. Jefferson had ordered local militias to help the Custom House in each port collect these dues, but the smugglers, led by Elias Derby, hired the Salem militia to dig the tunnels and hide the spoil.
A tunnel was created on the route of the first true steam locomotive, from Penydarren to Abercynon. The Penydarren locomotive was built by Richard Trevithick, and it made its historic journey from Penydarren to Abercynon in 1804. Part of this tunnel can still be seen at Pentrebach, Merthyr Tydfil, Wales. This is arguably the oldest railway tunnel in the world dedicated solely to self-propelled steam engines on rails.
The Montgomery Bell Tunnel in Tennessee, a water diversion tunnel built to power a water wheel, was excavated by slave labour in 1819; it was the first full-scale tunnel in North America.
Bourne's Tunnel, Rainhill, near Liverpool, England, is long. Built in the late 1820s (the exact date is unknown, but probably 1828 or 1829), it is the first tunnel in the world constructed under a railway line. The Liverpool to Manchester Railway was constructed over a horse-drawn tramway that ran from the Sutton collieries to the Liverpool-Warrington turnpike road, and a tunnel was bored under the railway for the tramway. The tunnel was made operational as the railway was being constructed, opening prior to the Liverpool tunnels on the Liverpool to Manchester line. It was made redundant in 1844 when the tramway was dismantled.
Crown Street station, Liverpool, England, 1829. Built by George Stephenson, a single-track railway tunnel was bored from Edge Hill to Crown Street to serve the world's first intercity passenger railway terminus station. The station was abandoned in 1836, being too far from Liverpool city centre, and the area was converted for freight use. Closed down in 1972, the tunnel is now disused; however, it is the oldest passenger rail tunnel running under streets in the world.
The 1829 Wapping Tunnel in Liverpool, England, a twin-track railway tunnel, was the first rail tunnel bored under a metropolis. The tunnel's path runs from Edge Hill in the east of the city to Wapping Dock in the south-end Liverpool docks. The tunnel was used only for freight terminating at the Park Lane goods terminal. Disused since 1972, the tunnel was to become part of the Merseyrail metro network; work was started but abandoned because of costs. The tunnel is in excellent condition and is still being considered for reuse by Merseyrail, possibly with an underground station cut into the tunnel to serve Liverpool's university. The river portal is opposite the new King's Dock Liverpool Arena, an ideal location for a station serving it. If reused, the tunnel will be the oldest underground rail tunnel in use in the world and the oldest section of any underground metro system.
1832, Lime Street railway station tunnel, Liverpool. A two-track rail tunnel was bored under the metropolis from Edge Hill in the east of the city to Lime Street in Liverpool's city centre. The tunnel was in use from 1832, transporting building materials to the new Lime Street station while it was under construction; the station and tunnel opened to passengers in 1836. In the 1880s the tunnel was converted to a deep four-track cutting, open to the atmosphere. This is the only occurrence of a major tunnel being removed. Two short sections of the original tunnel still exist at Edge Hill station and further towards Lime Street, giving these two tunnels the distinction of being the oldest rail tunnels in the world still in use, and the oldest in use under streets. Over time, sections of the deep cutting have been converted back into tunnel as buildings have been constructed over them.
Box Tunnel in England, which opened in 1841, was the longest railway tunnel in the world at the time of construction. It was dug by hand, and has a length of .
The 1842 Prince of Wales Tunnel, in Shildon near Darlington, England, is the oldest sizeable tunnel in the world still in use under a settlement.
The Victoria Tunnel in Newcastle, opened in 1842, is a subterranean wagonway with a maximum depth of , dropping from entrance to exit. The tunnel runs under Newcastle upon Tyne, England, and originally exited at the River Tyne; it remains largely intact. Originally designed to carry coal from Spital Tongues to the river, part of the tunnel was used as an air-raid shelter in the Second World War. Under the management of a charitable foundation called the Ouseburn Trust, it is currently used for heritage tours.
The Thames Tunnel, built by Marc Isambard Brunel and his son Isambard Kingdom Brunel and opened in 1843, was the first tunnel (after Terelek) traversing under a water body, and the first to be built using a tunnelling shield. Originally a foot tunnel, it was converted to a railway tunnel in 1869 and was part of the East London Line of the London Underground until 2007; it was the oldest section of the network, although not the oldest purpose-built rail section. From 2010 the tunnel became part of the London Overground network.
The Victoria Tunnel/Waterloo Tunnel in Liverpool, England, was bored under a metropolis, opening in 1848. The tunnel was initially used only for rail freight serving the Waterloo freight terminal, and later for freight and passengers serving the Liverpool ship liner terminal. Its path runs from Edge Hill in the east of the city to the north-end Liverpool docks at Waterloo Dock. The tunnel is split into two tunnels linked by a short open-air cutting, where the cable-hauled trains from Edge Hill were hitched and unhitched. The two tunnels are effectively one on the same centre line and are regarded as one; however, because the long Victoria section was originally cable-hauled while the shorter Waterloo section was locomotive-hauled, the two sections were given separate names. In 1895 both tunnels were converted to locomotive haulage. Used until 1972, the tunnel is still in excellent condition; a short section of the Victoria tunnel at Edge Hill is still used for shunting trains. The tunnel is being considered for reuse by the Merseyrail network, with stations cut into the tunnel under consideration, as is reuse by a monorail system serving the proposed Liverpool Waters redevelopment of Liverpool's Central Docks.
The summit tunnel of the Semmering railway, the first Alpine tunnel, was opened in 1848 and was long. It connected rail traffic between Vienna, the capital of the Austro-Hungarian Empire, and Trieste, its port.
The Giovi Rail Tunnel through the Apennine Mountains opened in 1854, linking Turin, the capital city of the Kingdom of Sardinia, to its port, Genoa. The tunnel was long.
The oldest underground sections of the London Underground were built using the cut-and-cover method in the 1860s, and opened in January 1863. What are now the Metropolitan, Hammersmith & City and Circle lines were the first to prove the success of a metro or subway system.
On 18 June 1868, the Central Pacific Railroad's Summit Tunnel (Tunnel #6) at Donner Pass in the California Sierra Nevada mountains was opened, permitting the establishment of the commercial mass transportation of passengers and freight over the Sierras for the first time. It remained in daily use until 1993, when the Southern Pacific Railroad closed it and transferred all rail traffic through the long Tunnel #41 (a.k.a. "The Big Hole") built a mile to the south in 1925.
In 1870, after fourteen years of work, the Fréjus Rail Tunnel was completed between France and Italy; it is the second-oldest Alpine tunnel, long. At that time it was the longest tunnel in the world.
The third Alpine tunnel, the Gotthard Rail Tunnel, between northern and southern Switzerland, opened in 1882 and was the longest rail tunnel in the world, measuring .
The 1882 Col de Tende Road Tunnel, at long, was one of the first long road tunnels under a pass, running between France and Italy.
The Mersey Railway tunnel opened in 1886, running from Liverpool to Birkenhead under the River Mersey. The Mersey Railway was the world's first deep-level underground railway. By 1892 the extensions on land from Birkenhead Park station to Liverpool Central Low Level station gave a tunnel in length. The under-river section is in length, and was the longest underwater tunnel in the world in January 1886.
The rail Severn Tunnel opened in late 1886, at long, although only of the tunnel is actually under the River Severn. It took the Mersey Railway tunnel's record as the longest underwater tunnel, a record the Mersey tunnel had held for less than a year.
James Greathead, in constructing the City & South London Railway tunnel beneath the Thames, opened in 1890, brought together three key elements of tunnel construction under water:
shield method of excavation;
permanent cast iron tunnel lining;
construction in a compressed air environment to inhibit water flowing through soft ground material into the tunnel heading.
Built in sections between 1890 and 1939, the section of London Underground's Northern line from Morden to East Finchley via Bank was the longest railway tunnel in the world at in length.
St. Clair Tunnel, also opened later in 1890, employed these elements of the Greathead tunnels on a larger scale.
In 1906 the fourth Alpine tunnel opened, the Simplon Tunnel, between Switzerland and Italy. It is long, and was the longest tunnel in the world until 1982. It was also the deepest tunnel in the world, with a maximum rock overlay of approximately .
The 1927 Holland Tunnel was the first underwater tunnel designed for automobiles. The construction required a novel ventilation system.
In 1945 the Delaware Aqueduct tunnel was completed, supplying water to New York City. At it is the longest tunnel in the world.
In 1988 the long Seikan Tunnel in Japan was completed under the Tsugaru Strait, linking the islands of Honshu and Hokkaido. It was the longest railway tunnel in the world at that time.
Ryfast is the longest undersea road tunnel. It is in length. The tunnel opened for use in 2020.
Longest
The Thirlmere Aqueduct in North West England, United Kingdom is sometimes considered the longest tunnel, of any type, in the world at , though the aqueduct's tunnel section is not continuous.
The Dahuofang Water Tunnel in China, opened in 2009, is the third longest water tunnel in the world at length.
The Gotthard Base Tunnel in Switzerland, opened in 2016, is the longest and deepest railway tunnel in the world at length and maximum depth below the Gotthard Massif. It provides a flat transit route between the North and South of Europe under the Swiss Alps, at a maximum elevation of .
The Seikan Tunnel in Japan connects the main island of Honshu with the northern island of Hokkaido by rail. It is long, of which are crossing the Tsugaru Strait undersea.
The Channel Tunnel crosses the English Channel between France and the United Kingdom. It has a total length of , of which are the world's longest undersea tunnel section.
The Lötschberg Base Tunnel in Switzerland was the longest land rail tunnel, with a length of , from its inauguration in 2007 until the completion of the Gotthard Base Tunnel in 2016.
The Lærdal Tunnel in Norway from Lærdal to Aurland is the world's longest road tunnel, intended for cars and similar vehicles, at .
The Zhongnanshan Tunnel in People's Republic of China opened in January 2007 is the world's second longest highway tunnel and the longest mountain road tunnel in Asia, at .
The longest canal tunnel is the Rove Tunnel in France, over long.
Notable
The Moffat Tunnel, opened in 1928, passes under the Continental Divide of the Americas in Colorado. The tunnel is long and at an elevation of is the highest active railroad tunnel in the U.S. (The inactive Tennessee Pass Line and the historic Alpine Tunnel are higher.)
Williamson's tunnels in Liverpool, built from 1804 and completed around 1840 by a wealthy eccentric, are probably the largest underground folly in the world; they were built with no functional purpose.
The Chicago freight tunnel network is the largest urban street tunnel network, comprising of tunnels beneath the majority of downtown Chicago streets. It operated between 1906 and 1956 as a freight network, connecting building basements and railway stations. Following a 1992 flood the network was sealed, although some parts still carry utility and communications infrastructure.
The Pennsylvania Turnpike opened in 1940 with seven tunnels, most of which were bored as part of the stillborn South Pennsylvania Railroad and giving the highway the nickname "Tunnel Highway". Four of the tunnels (Allegheny Mountain, Tuscarora Mountain, Kittatinny Mountain, and Blue Mountain) remain in active use, while the other three (Laurel Hill, Rays Hill, and Sideling Hill) were bypassed in the 1960s; the latter two tunnels are on a bypassed section of the Turnpike now commonly known as the Abandoned Pennsylvania Turnpike.
The Fredhälls road tunnel was opened in 1966, in Stockholm, Sweden, and the New Elbe road tunnel opened in 1975 in Hamburg, Germany. Both tunnels handle around 150,000 vehicles a day, making them two of the most trafficked tunnels in the world.
The Honningsvåg Tunnel ( long) opened in 1999 on European route E69 in Norway as the world's northernmost road tunnel, except for mines (which exist on Svalbard).
The Central Artery road tunnel in Boston, Massachusetts, is a part of the larger Big Dig completed around 2007, and carries approximately 200,000 vehicles/day under the city along Interstate 93, US Route 1, and Massachusetts Route 3, which share a concurrency through the tunnels. The Big Dig replaced Boston's old badly deteriorated I-93 elevated highway.
The Stormwater Management And Road Tunnel or SMART Tunnel, is a combined storm drainage and road structure opened in 2007 in Kuala Lumpur, Malaysia. The tunnel is the longest stormwater drainage tunnel in South East Asia and second longest in Asia. The facility can be operated as a simultaneous traffic and stormwater passage, or dedicated exclusively to stormwater when necessary.
The Eiksund Tunnel on national road Rv 653 in Norway is the world's deepest subsea road tunnel, measuring long, with deepest point at below the sea level, opened in February 2008.
Gerrards Cross railway tunnel, in England, opened in 2010, is notable in that it converted an existing railway cutting into a tunnel in order to create new ground on which to build a supermarket. The railway in the cutting had first opened around 1906, so some 104 years passed before it became a railway tunnel. The tunnel was built using the cover method, with craned-in prefabricated forms, in order to keep the busy railway operating. A branch of the Tesco supermarket chain occupies the newly created ground above the railway tunnel, with an adjacent existing railway station at the end of the tunnel. During construction, a portion of the tunnel collapsed when soil cover was added; the prefabricated forms were covered with a layer of reinforced concrete after the collapse.
The Fenghuoshan tunnel, completed in 2005 on the Qinghai-Tibet railway is the world's highest railway tunnel, about above sea level and long.
The La Linea Tunnel in Colombia (2016) is the longest mountain tunnel in South America, at . It crosses beneath a mountain at above sea level, carries six traffic lanes, and has a parallel emergency tunnel. The tunnel is subject to serious groundwater pressure. It will link Bogotá and its urban area with the coffee-growing region, and with the main port on the Colombian Pacific coast.
The Chicago Deep Tunnel Project is a network of of drainage tunnels designed to reduce flooding in the Chicago area. Started in the mid-1970s, the project is due to be completed in 2029.
New York City Water Tunnel No. 3, started in 1970, has an expected completion beyond 2026, and will measure more than .
Mining
The use of tunnels for mining is called drift mining. Drift mining is used to reach coal, gold, iron, and other minerals, as in other forms of mining.
Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits.
Military use
Some tunnels are not for transport at all but rather are fortifications, for example Mittelwerk and the Cheyenne Mountain Complex. Excavation techniques, as well as the construction of underground bunkers and other habitable areas, are often associated with military use during armed conflict, or with civilian responses to the threat of attack. Tunnels have also been used for the storage of chemical weapons.
Secret tunnels
Secret tunnels have given entrance to or escape from an area, such as the Cu Chi Tunnels or the smuggling tunnels in the Gaza Strip which connect it to Egypt. Although the Underground Railroad network used to transport escaped slaves was "underground" mostly in the sense of secrecy, hidden tunnels were occasionally used. Secret tunnels were also used during the Cold War, under the Berlin Wall and elsewhere, to smuggle refugees, and for espionage.
Smugglers use secret tunnels to transport or store contraband, such as illegal drugs and weapons. Elaborately engineered tunnels built to smuggle drugs across the Mexico-US border were estimated to require up to 9 months to complete, and an expenditure of up to $1 million. Some of these tunnels were equipped with lighting, ventilation, telephones, drainage pumps, hydraulic elevators, and in at least one instance, an electrified rail transport system. Secret tunnels have also been used by thieves to break into bank vaults and retail stores after hours. Several tunnels have been discovered by the Border Security Forces across the Line of Control along the India-Pakistan border, mainly to allow terrorists access to the Indian territory of Jammu and Kashmir.
The actual use of erdstall tunnels is unknown, but theories connect them to a rebirth ritual.
Natural tunnels
Lava tubes are emptied lava conduits, formed during volcanic eruptions by flowing and cooling lava.
Natural Tunnel State Park (Virginia, US) features a natural tunnel, really a limestone cave, that has been used as a railroad tunnel since 1890.
Punarjani Guha in Kerala, India. Hindus believe that crawling through the tunnel (which they believe was created by a Hindu god) from one end to the other will wash away all of one's sins and thus allow one to attain rebirth. Only men are permitted to crawl through the tunnel.
Torghatten, a Norwegian island with a hat-shaped silhouette, has a natural tunnel through the middle of the "hat", letting light come through. The long, high, and wide tunnel is said to be the hole made by an arrow of the angry troll Hestmannen, the hill being the hat of the troll-king of Sømna, who was trying to save the beautiful Lekamøya. The tunnel is actually thought to be the work of ice. The sun shines through the tunnel during two brief periods, each a few minutes long, every year.
Major accidents
Clayton Tunnel rail crash (1861) – confusion about block signals leading to collision, 23 killed.
Welwyn Tunnel rail crash (1866) – train failed in tunnel, guard did not protect train.
Paris Métro train fire (1903) – train fire in Couronnes underground station, 84 killed by smoke and gases.
Church Hill Tunnel collapse (1925) – tunnel collapse on a work train during renovation, killing four men and trapping a steam locomotive and ten flat cars.
Balvano train disaster (1944) – asphyxiation of about 500 "unofficial" passengers on freight train.
Caldecott Tunnel fire (1982) – major motor vehicle tunnel crash and fire.
Channel Tunnel fire (1996) – a train carrying heavy goods vehicles (HGVs) caught fire.
Princess Diana's death (1997) – car crash in the Pont de l'Alma tunnel, Paris.
Mont Blanc Tunnel fire (1999) – a transport truck caught fire inside the tunnel.
Big Dig ceiling collapse (2006) – a concrete ceiling panel fell in the Fort Point tunnel, Boston, causing parts of the Big Dig project to be closed for a year.
Felt

Felt is a textile that is produced by matting, condensing, and pressing fibers together. Felt can be made of natural fibers such as wool or animal fur, or from synthetic fibers such as petroleum-based acrylic or acrylonitrile or wood pulp–based rayon. Blended fibers are also common. Natural fiber felt has special properties that allow it to be used for a wide variety of purposes. It is "fire-retardant and self-extinguishing; it dampens vibration and absorbs sound; and it can hold large amounts of fluid without feeling wet..."
History
Felt from wool is one of the oldest known textiles. Many cultures have legends about the origins of felt-making. Sumerian legend claims that the secret of feltmaking was discovered by Urnamman of Lagash. The story of Saint Clement and Saint Christopher relates that the men packed their sandals with wool to prevent blisters while fleeing from persecution. At the end of their journey the movement and sweat had turned the wool into felt socks.
Felt's origins can most likely be found in Central Asia, where there is evidence of feltmaking in Siberia (the Altai Mountains) and in northern Mongolia, with more recent evidence dating back to the first century CE in Mongolia. Siberian tombs (7th to 2nd century BCE) show the broad uses of felt in that culture, including clothing, jewelry, wall hangings, and elaborate horse blankets. Employing careful color use, stitching, and other techniques, these feltmakers were able to use felt as an illustrative and decorative medium on which they could depict abstract designs and realistic scenes with great skill. Over time these makers became known for the beautiful abstract patterns they used, derived from plant, animal, and other symbolic designs.
From Siberia and Mongolia, feltmaking spread across the areas held by the Turkic-Mongolian tribes. Sheep and camel herds were central to the wealth and lifestyle of these tribes, and both animals were critical to producing the fibers needed for felting. For nomads traveling frequently and living on fairly treeless plains, felt provided housing (yurts, tents, etc.), insulation, floor coverings, and inside walling, as well as many household necessities, from bedding and coverings to clothing. Among nomadic peoples, an area where feltmaking was particularly visible was in trappings for their animals and for travel; felt was often featured in the blankets that went under saddles.
Dyes provided rich coloring, and colored slices of pre-felts (semi-felted sheets that could be cut in decorative ways) along with dyed yarns and threads were combined to create beautiful designs on the wool backgrounds. Felt was even used to create totems and amulets with protective functions. In traditional societies the patterns embedded in the felt were also imbued with significant religious and symbolic meaning.
Feltmaking is still practised by nomadic peoples (such as Mongols and Turkic people) in Central Asia, where rugs, tents and clothing are regularly made. Some of these are traditional items, such as the classic yurt, or ger, while others are designed for the tourist market, such as decorated slippers. In the Western world, felt is widely used as a medium for expression in both textile art and contemporary art and design, where it has significance as an ecologically responsible textile and building material.
In addition to Central Asian traditions of felting, Scandinavian countries have also supported feltmaking, particularly for clothing.
Manufacturing methods
Wet felting
In the wet felting process, hot water is applied to layers of animal hairs, while repeated agitation and compression causes the fibers to hook together or weave together into a single piece of fabric. Wrapping the properly arranged fiber in a sturdy, textured material, such as a bamboo mat or burlap, will speed up the felting process. The felted material may be finished by fulling.
Only certain types of fiber can be wet felted successfully. Most types of fleece, such as those taken from the alpaca or the Merino sheep, can be put through the wet felting process. One may also use mohair (goat), angora (rabbit), or hair from rodents such as beavers and muskrats. These types of fiber are covered in tiny scales, similar to the scales found on a strand of human hair. Heat, motion, and moisture cause the scales to open, and agitation causes them to latch onto each other, creating felt. There is an alternative theory that the fibers wind around each other during felting. Plant fibers and synthetic fibers will not wet felt.
In order to make multi-colored designs, felters use a two-step process in which they create pre-felts of specific colors; these semi-completed sheets of colored felt can then be cut with a sharp implement (knife or scissors) and the distinctive colors placed next to each other, as in making a mosaic. The felting process is then resumed, and the edges of the pieces attach to each other as the felting is completed. Shyrdak carpets (Kyrgyzstan) use a form of this method in which two pieces of contrasting color are cut out with the same pattern and the cut-outs are then switched, fitting one into the other, which makes a sharply defined and colorful patterned piece. In order to strengthen the joints of a mosaic-style felt, feltmakers often add a backing layer of fleece that is felted along with the other components. Feltmakers differ in their treatment of this added layer: some lay it on top of the design before felting, while others place the design on top of the strengthening layer.
The process of felting was adapted to the lifestyles of the different cultures in which it flourished. In Central Asia, it is common to conduct the rolling/friction process with the aid of a horse, donkey, or camel, which will pull the rolled felt until the process is complete. Alternately, a group of people in a line might roll the felt along, kicking it regularly with their feet. Further fulling can include throwing or slamming and working the edges with careful rolling. In Turkey, some baths had areas dedicated to feltmaking, making use of the steam and hot water that were already present for bathing.
Development of felting as a profession
As felting grew in importance to a society, so, too, did the knowledge about techniques and approaches. Amateur or community felting obviously continued in many communities at the same time that felting specialists and felting centers began to develop. However, the importance of felting to community life can be seen in the fact that, in many Central Asian communities, felt production is directed by a leader who oversees the process as a ritual that includes prayers—words and actions to bring good luck to the process. Successfully completing the creation of felt (certainly large felt pieces) is reason for celebration, feasting, and the sharing of traditional stories.
In Turkey, craft guilds called "ahi" came into being, and these groups were responsible for registering members and protecting the knowledge of felting. In Istanbul at one time, there were 1,000 felters working in 400 workshops registered in this ahi.
Needle felting
Needle felting is a method of creating felt that uses specially designed needles instead of water. Felting needles have angled notches along the shaft that catch fibers and tangle them together to produce felt. These notches are sometimes erroneously called "barbs", but barbs are protrusions (like barbed wire) and would be too difficult to thrust into the wool and nearly impossible to pull out. Felting needles are thin and sharp, with shafts of a variety of different gauges and shapes. Needle felting is used in industrial felt making as well as for individual art and craft applications.
Felting needles are sometimes fitted in holders that allow the use of 2 or more needles at one time to sculpt wool objects and shapes. Individual needles are often used for detail while multiple needles that are paired together are used for larger areas or to form the base of the project. At any point in time a variety of fibers and fiber colors may be added, using needles to incorporate them into the project.
Needle felting can be used to create both two-dimensional and three-dimensional artwork, including soft sculpture, dolls, figurines, jewelry, and two-dimensional wool paintings. Needle felting is popular with artists and craftspeople worldwide. One example is Ikuyo Fujita (藤田育代, Fujita Ikuyo), a Japanese artist who works primarily in needle-felt painting and mogol (pipe cleaner) art.
Recently, needle-felting machines have become popular for art or craft felters. Similar to a sewing machine, these tools have several needles that punch fibers together. These machines can be used to create felted products more efficiently. The embellishment machine allows the user to create unique combinations of fibers and designs.
Carroting
Invented in the mid-17th century and used until the mid-20th century, a process called "carroting" was used in the manufacture of good-quality felt for making men's hats. Beaver, rabbit or hare skins were treated with a dilute solution of the mercury compound mercuric nitrate. The skins were dried in an oven, where the thin fur at the sides turned orange, the color of carrots. Pelts were stretched over a bar in a cutting machine, and the skin was sliced off in thin shreds, with the fleece coming away entirely. The fur was blown onto a cone-shaped colander and then treated with hot water to consolidate it. The cone was then peeled off and passed through wet rollers to cause the fur to felt. These "hoods" were then dyed and blocked to make hats. The toxic solutions and vapours produced by carroting resulted in widespread cases of mercury poisoning among hatters. This may be the origin of the phrase "mad as a hatter", which was used to humorous effect by Lewis Carroll in the chapter "A Mad Tea-Party" of the novel Alice in Wonderland.
Uses
Felt is used in a wide range of industries and manufacturing processes, from the automotive industry and casinos to musical instruments and home construction, as well as in gun wadding, either inside cartridges or pushed down the barrel of a muzzleloader. Felt had many uses in ancient times and continues to be widely used today.
Industrial uses
Felt is frequently used in industry as a sound or vibration damper, as a non-woven fabric for air filtration, and in machinery for cushioning and padding moving parts.
Home decor
Felt can be used in home furnishings like table runners, placemats, coasters, and even as backing for area rugs. It can add a touch of warmth and texture to a space.
Clothing
During the 18th and 19th centuries, gentlemen's headwear made from beaver felt was popular. In the early part of the 20th century, cloth felt hats, such as fedoras, trilbies and homburgs, were worn by many men in the western world. Felt is often used in footwear as boot liners, with the Russian valenki being an example.
Musical instruments
Many musical instruments use felt. It is often used as a damper. On drum cymbal stands, it protects the cymbal from cracking and ensures a clean sound. It is used to wrap bass drum strikers and timpani mallets. Felt is used extensively in pianos; for example, piano hammers are made of wool felt around a wooden core. The density and springiness of the felt is a major part of what creates a piano's tone. As the felt becomes grooved and "packed" with use and age, the tone suffers. Felt is placed under the piano keys on accordions to control touch and key noise; it is also used on the pallets to silence notes not sounded by preventing air flow. Felt is used with other instruments, particularly stringed instruments, as a damper to reduce volume or eliminate unwanted sounds.
Arts and crafts
Felt is used for framing paintings. It is laid between the slip mount and picture as a protective measure to avoid damage from rubbing to the edge of the painting. This is commonly found as a preventive measure on paintings which have already been restored or professionally framed. It is widely used to protect paintings executed on various surfaces including canvas, wood panel and copper plate.
A felt-covered board can be used in storytelling to small children. Small felt cutouts or figures of animals, people, or other objects will adhere to a felt board, and in the process of telling the story, the storyteller also acts it out on the board with the animals or people. Puppets can also be made with felt. The best known example of felt puppets are Jim Henson's Muppets. Felt pressed dolls, such as Lenci dolls, were very popular in the nineteenth century and just after World War I.
As part of the overall renewal of interest in textile and fiber arts, beginning in the 1970s and continuing through today, felt has experienced a strong revival of interest, including in its historical roots. Polly Stirling, a fiber artist from New South Wales, Australia, is commonly associated with the development of nuno felting, a key technique for contemporary art felting. The German artist Joseph Beuys prominently integrated felt into his works. The English artist Jenny Cowern shifted from traditional drawing and painting media to using felt as her primary medium.
Modern-day felters with access to a broad range of sheep and other animal fibers have exploited knowledge of these different breeds to produce special effects in their felt. Fleece locks are classified by the Bradford or micron count, both of which designate the fineness or coarseness of the material. Fine wools range from 64 to 80 (Bradford); medium, 40–60 (Bradford); and coarse, 36–60 (Bradford). Merino, the finest and most delicate sheep fleece, is employed for clothing that goes next to the body. Claudy Jongstra raises traditional and rare breeds of sheep with much hardier coats (Drenthe, Heath, Gotland, Schoonbeek, and Wensleydale) on her property in Friesland, and these are used in her interior design projects. Exploiting these characteristics of the fleece in tandem with other techniques, such as stitching and the incorporation of other fibers, provides felters with a broad range of possibilities.
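Because the quoted Bradford ranges overlap, any mapping from count to category is approximate. The sketch below simply encodes the figures given above as a lookup; the handling of the overlapping boundaries is an assumed convention, not a standard.

```python
# Rough classification of fleece by Bradford count, using the ranges quoted
# above (fine 64-80, medium 40-60, coarse 36-60). The medium and coarse
# ranges overlap, so ties are resolved here by an assumed convention.

def fleece_category(bradford_count: int) -> str:
    if 64 <= bradford_count <= 80:
        return "fine (e.g. Merino; next-to-skin clothing)"
    if 40 <= bradford_count <= 60:
        return "medium"  # 40-60 also falls within the quoted coarse range
    if 36 <= bradford_count < 40:
        return "coarse"
    return "outside the ranges quoted here"

print(fleece_category(70))  # fine (e.g. Merino; next-to-skin clothing)
```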
Cockpit

A cockpit or flight deck is the area, on the front part of an aircraft, spacecraft, or submersible, from which a pilot controls the vehicle.
The cockpit of an aircraft contains flight instruments on an instrument panel, and the controls that enable the pilot to fly the aircraft. In most airliners, a door separates the cockpit from the aircraft cabin. After the September 11, 2001 attacks, all major airlines fortified their cockpits against access by hijackers.
Etymology
The word cockpit seems to have been used as a nautical term in the 17th century, without reference to cock fighting. It referred to an area in the rear of a ship where the cockswain's station was located, the cockswain being the pilot of a smaller "boat" that could be dispatched from the ship to board another ship or to bring people ashore. The word "cockswain" in turn derives from the old English terms for "boat-servant" (coque is the French word for "shell"; and swain was old English for boy or servant). The midshipmen and master's mates were later berthed in the cockpit, and it served as the action station for the ship's surgeon and his mates during battle. Thus by the 18th century, "cockpit" had come to designate an area in the rear lower deck of a warship where the wounded were taken. The same term later came to designate the place from which a sailing vessel is steered, because it is also located in the rear, and is often in a well or "pit".
However, a convergent etymology does involve reference to cock fighting. According to the Barnhart Concise Dictionary of Etymology, the buildings in London where the king's cabinet worked (the Treasury and the Privy Council) were called the "Cockpit" because they were built on the site of a theater called The Cockpit (torn down in 1635), which itself was built in the place where a "cockpit" for cock-fighting had once stood prior to the 1580s. Thus the word Cockpit came to mean a control center.
The original meaning of "cockpit", first attested in the 1580s, is "a pit for fighting cocks", referring to the place where cockfights were held. This meaning no doubt influenced both lines of evolution of the term, since a cockpit in this sense was a tight enclosure where a great deal of stress or tension would occur.
From about 1935, cockpit came to be used informally to refer to the driver's cabin, especially in high performance cars, and this is official terminology used to describe the compartment that the driver occupies in a Formula One car.
In an airliner, the cockpit is usually referred to as the flight deck, the term deriving from its use by the RAF for the separate, upper platform in large flying boats where the pilot and co-pilot sat. In the USA and many other countries, however, the term cockpit is also used for airliners.
The seat of a powerboat racing craft is also referred to as the cockpit.
Ergonomics
The first airplane with an enclosed cabin appeared in 1912 on the Avro Type F; however, during the early 1920s there were many passenger aircraft in which the crew remained open to the air while the passengers sat in a cabin. Military biplanes and the first single-engined fighters and attack aircraft also had open cockpits, some as late as the Second World War when enclosed cockpits became the norm.
The largest impediment to having closed cabins was the material used to make the windows. Prior to Perspex becoming available in 1933, windows were either safety glass, which was heavy, or cellulose nitrate (i.e.: guncotton), which yellowed quickly and was extremely flammable. In the mid-1920s many aircraft manufacturers began using enclosed cockpits for the first time. Early airplanes with closed cockpits include the 1924 Fokker F.VII, the 1926 German Junkers W 34 transport, the 1926 Ford Trimotor, the 1927 Lockheed Vega, the Spirit of St. Louis and the passenger aircraft manufactured by the Douglas and Boeing companies during the mid-1930s. Open-cockpit airplanes were almost extinct by the mid-1950s, with the exception of training planes, crop-dusters and homebuilt aircraft designs.
Cockpit windows may be equipped with a sun shield. Most cockpits have windows that can be opened when the aircraft is on the ground. Nearly all glass windows in large aircraft have an anti-reflective coating, and an internal heating element to melt ice. Smaller aircraft may be equipped with a transparent aircraft canopy.
In most cockpits the pilot's control column or joystick is located centrally (centre stick), although in some military fast jets the side-stick is located on the right-hand side. In some commercial airliners (e.g. Airbus, which features the glass cockpit concept) both pilots use a side-stick located on the outboard side, so the captain's side-stick is on the left and the first officer's on the right.
Except for some helicopters, the right seat in the cockpit of an aircraft is the seat used by the co-pilot. The captain or pilot in command sits in the left seat, so that they can operate the throttles and other pedestal instruments with their right hand. The tradition has been maintained to this day, with the co-pilot on the right-hand side.
The layout of the cockpit, especially in the military fast jet, has undergone standardisation, both within and between aircraft, manufacturers and even nations. An important development was the "Basic Six" pattern, later the "Basic T", developed from 1937 onwards by the Royal Air Force, designed to optimise pilot instrument scanning.
Ergonomics and human factors concerns are important in the design of modern cockpits. The layout and function of cockpit displays and controls are designed to increase pilot situation awareness without causing information overload. In the past, many cockpits, especially in fighter aircraft, limited the size of the pilots that could fit into them. Now, cockpits are being designed to accommodate from the 1st percentile female physical size to the 99th percentile male size.
In the design of the cockpit in a military fast jet, the traditional "knobs and dials" associated with the cockpit are mainly absent. Instrument panels are now almost wholly replaced by electronic displays, which are themselves often re-configurable to save space. While some hard-wired dedicated switches must still be used for reasons of integrity and safety, many traditional controls are replaced by multi-function re-configurable controls or so-called "soft keys". Controls are incorporated onto the stick and throttle to enable the pilot to maintain a head-up and eyes-out position – the Hands On Throttle And Stick or HOTAS concept. These controls may be then further augmented by control media such as head pointing with a Helmet Mounted Sighting System or Direct voice input (DVI). Advances in auditory displays allow for Direct Voice Output of aircraft status information and for the spatial localisation of warning sounds for improved monitoring of aircraft systems.
The layout of control panels in modern airliners has become largely unified across the industry. The majority of the systems-related controls (such as electrical, fuel, hydraulics and pressurization) for example, are usually located in the ceiling on an overhead panel. Radios are generally placed on a panel between the pilot's seats known as the pedestal. Automatic flight controls such as the autopilot are usually placed just below the windscreen and above the main instrument panel on the glareshield. A central concept in the design of the cockpit is the Design Eye Position or "DEP", from which point all displays should be visible.
Most modern cockpits will also include some kind of integrated warning system.
A study undertaken in 2013 to assess methods for cockpit-user menu navigation found that touchscreens produced the "best scores".
Flight instruments
In the modern electronic cockpit, the electronic flight instruments usually regarded as essential are MFD, PFD, ND, EICAS, FMS/CDU and back-up instruments.
MCP
A Mode control panel, usually a long narrow panel located centrally in front of the pilot, may be used to control heading, speed, altitude, vertical speed, vertical navigation and lateral navigation. It may also be used to engage or disengage both the autopilot and the autothrottle. The panel as an area is usually referred to as the "glareshield panel". MCP is a Boeing designation (which has been informally adopted as a generic name for the unit/panel) for a unit that allows for the selection and parameter setting of the different autoflight functions; the equivalent unit on an Airbus aircraft is referred to as the FCU (Flight Control Unit).
PFD
The primary flight display is usually located in a prominent position, either centrally or on either side of the cockpit. It will in most cases include a digitized presentation of the attitude indicator, air speed and altitude indicators (usually as a tape display) and the vertical speed indicator. It will in many cases include some form of heading indicator and ILS/VOR deviation indicators. In many cases an indicator of the engaged and armed autoflight system modes will be present along with some form of indication of the selected values for altitude, speed, vertical speed and heading. It may be pilot selectable to swap with the ND.
ND
A navigation display, which may be adjacent to the PFD, shows the route and information on the next waypoint, wind speed and wind direction. It may be pilot selectable to swap with the PFD.
EICAS/ECAM
The Engine Indication and Crew Alerting System (EICAS), used by Boeing and Embraer, or the Electronic Centralized Aircraft Monitor (ECAM), used by Airbus, allows the pilot to monitor the following information: values for N1, N2 and N3, fuel temperature, fuel flow, the electrical system, cockpit or cabin temperature and pressure, control surfaces and so on. The pilot may select which information to display at the press of a button.
FMS/MCDU
The flight management system control and display unit may be used by the pilot to enter and check the following information: flight plan, speed control, navigation control, etc.
Back-up instruments
In a less prominent part of the cockpit, in case of failure of the other instruments, there will be a battery-powered integrated standby instrument system along with a magnetic compass, showing essential flight information such as speed, altitude, attitude and heading.
Aerospace industry technologies
In the U.S. the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) have researched the ergonomic aspects of cockpit design and have conducted investigations of airline industry accidents. Cockpit design disciplines include cognitive science, neuroscience, human–computer interaction, human factors engineering, anthropometry and ergonomics.
Aircraft designs have adopted the fully digital "glass cockpit". In such designs, instruments and gauges, including navigational map displays, use a user interface markup language known as ARINC 661. This standard defines the interface between an independent cockpit display system, generally produced by a single manufacturer, and the avionics equipment and user applications it is required to support, by means of displays and controls, often made by different manufacturers. The separation between the overall display system, and the applications driving it, allows for specialization and independence.
| Technology | Aircraft components | null |
160891 | https://en.wikipedia.org/wiki/Fuselage | Fuselage | The fuselage (; from the French fuselé "spindle-shaped") is an aircraft's main body section. It holds crew, passengers, or cargo. In single-engine aircraft, it will usually contain an engine as well, although in some amphibious aircraft the single engine is mounted on a pylon attached to the fuselage, which in turn is used as a floating hull. The fuselage also serves to position the control and stabilization surfaces in specific relationships to lifting surfaces, which is required for aircraft stability and maneuverability.
Types of structures
Truss structure
This type of structure is still in use in many lightweight aircraft using welded steel tube trusses.
A box truss fuselage structure can also be built out of wood—often covered with plywood. Simple box structures may be rounded by the addition of supported lightweight stringers, allowing the fabric covering to form a more aerodynamic shape, or one more pleasing to the eye.
Geodesic construction
Geodesic structural elements were used by Barnes Wallis for British Vickers between the wars and into World War II to form the whole of the fuselage, including its aerodynamic shape. In this type of construction multiple flat strip stringers are wound about the formers in opposite spiral directions, forming a basket-like appearance. This proved to be light, strong, and rigid and had the advantage of being made almost entirely of wood. A similar construction using aluminum alloy was used in the Vickers Warwick with less material than would be required for other structural types. The geodesic structure is also redundant and so can survive localized damage without catastrophic failure. A fabric covering over the structure completed the aerodynamic shell (see the Vickers Wellington for an example of a large warplane which uses this process). The logical evolution of this is the creation of fuselages using molded plywood, in which several sheets are laid with the grain in differing directions to give the monocoque type below.
Monocoque shell
In this method, the exterior surface of the fuselage is also the primary structure. A typical early form of this (see the Lockheed Vega) was built using molded plywood, where the layers of plywood are formed over a "plug" or within a mold. A later form of this structure uses fiberglass cloth impregnated with polyester or epoxy resin as the skin, instead of plywood. A simple form of this used in some amateur-built aircraft uses rigid expanded foam plastic as the core, with a fiberglass covering, eliminating the necessity of fabricating molds, but requiring more effort in finishing (see the Rutan VariEze). An example of a larger molded plywood aircraft is the de Havilland Mosquito fighter/light bomber of World War II.
No plywood-skin fuselage is truly monocoque, since stiffening elements are incorporated into the structure to carry concentrated loads that would otherwise buckle the thin skin.
The use of molded fiberglass using negative ("female") molds (which give a nearly finished product) is prevalent in the series production of many modern sailplanes. The use of molded composites for fuselage structures is being extended to large passenger aircraft such as the Boeing 787 Dreamliner (using pressure-molding on female molds).
Semi-monocoque
This is the preferred method of constructing an all-aluminum fuselage. First, a series of formers in the shape of the fuselage cross sections are held in position on a rigid fixture. These formers are then joined with lightweight longitudinal elements called stringers. These are in turn covered with a skin of sheet aluminum, attached by riveting or by bonding with special adhesives. The fixture is then disassembled and removed from the completed fuselage shell, which is then fitted out with wiring, controls, and interior equipment such as seats and luggage bins. Most modern large aircraft are built using this technique, but use several large sections constructed in this fashion which are then joined with fasteners to form the complete fuselage. As the accuracy of the final product is determined largely by the costly fixture, this form is suitable for series production, where many identical aircraft are to be produced. Early examples of this type include the Douglas Aircraft DC-2 and DC-3 civil aircraft and the Boeing B-17 Flying Fortress. Most metal light aircraft are constructed using this process.
Both monocoque and semi-monocoque are referred to as "stressed skin" structures as all or a portion of the external load (i.e. from wings and empennage, and from discrete masses such as the engine) is taken by the surface covering. In addition, all the load from internal pressurization is carried (as skin tension) by the external skin.
The proportioning of loads between the components is a design choice dictated largely by the dimensions, strength, and elasticity of the components available for construction and whether or not a design is intended to be "self jigging", not requiring a complete fixture for alignment.
Materials
Early aircraft were constructed of wood frames covered in fabric. As monoplanes became popular, metal frames improved the strength, which eventually led to all-metal-structure aircraft, with metal covering all exterior surfaces; this was first pioneered in the second half of 1915. Some modern aircraft are constructed with composite materials for major control surfaces, wings, or the entire fuselage, such as the Boeing 787. On the 787, this makes possible higher pressurization levels and larger windows for passenger comfort, as well as lower weight to reduce operating costs. The Boeing 787 weighs less than it would as an all-aluminum assembly.
Windows
Cockpit windshields on the Airbus A320 must withstand bird strikes and are made of chemically strengthened glass. They are usually composed of three layers, or plies, of glass or plastic: the inner two, each 8 mm (0.3 in) thick, are structural, while the outer ply, about 3 mm thick, is a barrier against foreign object damage and abrasion, often with a hydrophobic coating. The windshield must prevent fogging inside the cabin and must provide de-icing. This was previously done with thin wires, similar to a rear car window, but is now accomplished with a transparent, nanometers-thick coating of indium tin oxide sitting between plies, which is electrically conductive and thus transmits heat. Curved glass improves aerodynamics, but visibility criteria also require larger panes. A cockpit windshield is composed of 4–6 panels, each weighing 35 kg (77 lb) on an Airbus A320. In its lifetime, an average aircraft goes through three or four windshields, and the market is shared evenly between the OEM segment and the higher-margin aftermarket.
Cabin windows, made from stretched acrylic glass, which is much lighter than glass, consist of multiple panes: an outer one built to support four times the maximum cabin pressure, an inner one for redundancy, and a scratch pane near the passenger. Acrylic is susceptible to crazing, in which a network of fine cracks appears; these can be polished out to restore optical transparency, and removal and polishing are typically performed every 2–3 years on uncoated windows.
Wing integration
"Flying wing" aircraft, such as the Northrop YB-49 Flying Wing and the Northrop B-2 Spirit bomber have no separate fuselage; instead what would be the fuselage is a thickened portion of the wing structure.
Conversely, there have been a small number of aircraft designs which have no separate wing, but use the fuselage to generate lift. Examples include National Aeronautics and Space Administration's experimental lifting body designs and the Vought XF5U-1 Flying Flapjack.
A blended wing body can be considered a mixture of the above. It carries the useful load in a fuselage that itself produces lift. A modern example is the Boeing X-48. One of the earliest aircraft using this design approach was the Burnelli CBY-3, whose fuselage was airfoil-shaped to produce lift.
Gallery
| Technology | Aircraft components | null |
160960 | https://en.wikipedia.org/wiki/Box%20plot | Box plot | In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread and skewness of groups of numerical data through their quartiles. In addition to the box on a box plot, there can be lines (which are called whiskers) extending from the box indicating variability outside the upper and lower quartiles; thus, the plot is also called the box-and-whisker plot or the box-and-whisker diagram. Outliers that differ significantly from the rest of the dataset may be plotted as individual points beyond the whiskers on the box-plot.
Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions of the underlying statistical distribution (though Tukey's boxplot assumes symmetry for the whiskers and normality for their length). The spacings in each subsection of the box-plot indicate the degree of dispersion (spread) and skewness of the data, which are usually described using the five-number summary. In addition, the box-plot allows one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically.
History
The range-bar method was first introduced by Mary Eleanor Spear in her book "Charting Statistics" in 1952 and again in her book "Practical Charting Techniques" in 1969. The box-and-whisker plot was first introduced in 1970 by John Tukey, who later published on the subject in his book "Exploratory Data Analysis" in 1977.
Elements
A boxplot is a standardized way of displaying the dataset based on the five-number summary: the minimum, the maximum, the sample median, and the first and third quartiles.
Minimum (Q0 or 0th percentile): the lowest data point in the data set excluding any outliers
Maximum (Q4 or 100th percentile): the highest data point in the data set excluding any outliers
Median (Q2 or 50th percentile): the middle value in the data set
First quartile (Q1 or 25th percentile): also known as the lower quartile qn(0.25), it is the median of the lower half of the dataset.
Third quartile (Q3 or 75th percentile): also known as the upper quartile qn(0.75), it is the median of the upper half of the dataset.
In addition to the minimum and maximum values used to construct a box-plot, another important element that can also be employed to obtain a box-plot is the interquartile range (IQR), as denoted below:
Interquartile range (IQR) : the distance between the upper and lower quartiles
A box-plot usually includes two parts, a box and a set of whiskers as shown in Figure 2.
Box
The box is drawn from Q1 to Q3 with a horizontal line drawn inside it to denote the median. Some box plots include an additional character to represent the mean of the data.
Whiskers
The whiskers must end at an observed data point, but can be defined in various ways. In the most straightforward method, the boundary of the lower whisker is the minimum value of the data set, and the boundary of the upper whisker is the maximum value of the data set. Because of this variability, it is appropriate to describe the convention that is being used for the whiskers and outliers in the caption of the box-plot.
Another popular choice for the boundaries of the whiskers is based on the 1.5 IQR value. From above the upper quartile (Q3), a distance of 1.5 times the IQR is measured out and a whisker is drawn up to the largest observed data point from the dataset that falls within this distance. Similarly, a distance of 1.5 times the IQR is measured out below the lower quartile (Q1) and a whisker is drawn down to the lowest observed data point from the dataset that falls within this distance. Because the whiskers must end at an observed data point, the whisker lengths can look unequal, even though 1.5 IQR is the same for both sides. All other observed data points outside the boundary of the whiskers are plotted as outliers. The outliers can be plotted on the box-plot as a dot, a small circle, a star, etc. (see example below).
There are other representations in which the whiskers can stand for several other things, such as:
One standard deviation above and below the mean of the data set
The 9th percentile and the 91st percentile of the data set
The 2nd percentile and the 98th percentile of the data set
Rarely, a box-plot can be plotted without the whiskers. This can be appropriate for sensitive information, as it avoids the whiskers (and outliers) disclosing the actual observed values.
The unusual percentiles 2%, 9%, 91%, 98% are sometimes used for whisker cross-hatches and whisker ends to depict the seven-number summary. If the data are normally distributed, the locations of the seven marks on the box plot will be equally spaced. On some box plots, a cross-hatch is placed before the end of each whisker.
Variations
Since the mathematician John W. Tukey first popularized this type of visual data display in 1969, several variations on the classical box plot have been developed, and the two most commonly found variations are the variable-width box plots and the notched box plots shown in Figure 4.
Variable-width box plots illustrate the size of each group whose data is being plotted by making the width of the box proportional to the size of the group. A popular convention is to make the box width proportional to the square root of the size of the group.
Notched box plots apply a "notch" or narrowing of the box around the median. Notches are useful in offering a rough guide of the significance of the difference of medians; if the notches of two boxes do not overlap, this will provide evidence of a statistically significant difference between the medians. The height of the notches is proportional to the interquartile range (IQR) of the sample and is inversely proportional to the square root of the size of the sample. However, there is an uncertainty about the most appropriate multiplier (as this may vary depending on the similarity of the variances of the samples). The width of the notch is arbitrarily chosen to be visually pleasing, and should be consistent amongst all box plots being displayed on the same page.
One convention for obtaining the boundaries of these notches is to use a distance of ±1.58 × IQR/√n around the median.
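As an illustration, the notch bounds can be computed directly. The following sketch (plain Python; the function name is illustrative, not any standard library API) assumes the ±1.58 × IQR/√n convention above, using the quartiles from the hourly-temperature example further below:

```python
# Sketch: notch bounds around the median using the common
# median +/- 1.58 * IQR / sqrt(n) convention (the constant varies by source).
import math

def notch_bounds(median, q1, q3, n):
    half_width = 1.58 * (q3 - q1) / math.sqrt(n)
    return median - half_width, median + half_width

# Quartiles from the hourly-temperature example below (n = 24 observations)
print(notch_bounds(70, 66, 75, 24))   # approximately (67.1, 72.9)
```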
Adjusted box plots are intended to describe skew distributions, and they rely on the medcouple statistic of skewness. For a medcouple value of MC, the lengths of the upper and lower whiskers on the box-plot are respectively defined to be 1.5 IQR × e^(3 MC) and 1.5 IQR × e^(−4 MC) when MC ≥ 0, and 1.5 IQR × e^(4 MC) and 1.5 IQR × e^(−3 MC) when MC < 0.
For a symmetrical data distribution, the medcouple will be zero, and this reduces the adjusted box-plot to the Tukey box-plot with equal whisker lengths of 1.5 IQR for both whiskers.
Other kinds of box plots, such as the violin plots and the bean plots, can show the difference between unimodal and multimodal distributions, a difference that cannot be observed from the original classical box-plot.
Examples
Example without outliers
A series of hourly temperatures were measured throughout the day in degrees Fahrenheit. The recorded values are listed in order as follows (°F): 57, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81.
A box plot of the data set can be generated by first calculating five relevant values of this data set: minimum, maximum, median (Q2), first quartile (Q1), and third quartile (Q3).
The minimum is the smallest number of the data set. In this case, the minimum recorded day temperature is 57°F.
The maximum is the largest number of the data set. In this case, the maximum recorded day temperature is 81°F.
The median is the "middle" number of the ordered data set. This means that exactly 50% of the elements are below the median and 50% of the elements are greater than the median. The median of this ordered data set is 70°F.
The first quartile value (Q1 or 25th percentile) is the number that marks one quarter of the ordered data set. In other words, there are exactly 25% of the elements that are less than the first quartile and exactly 75% of the elements that are greater than it. The first quartile value can be easily determined by finding the "middle" number between the minimum and the median. For the hourly temperatures, the "middle" number found between 57°F and 70°F is 66°F.
The third quartile value (Q3 or 75th percentile) is the number that marks three quarters of the ordered data set. In other words, there are exactly 75% of the elements that are less than the third quartile and 25% of the elements that are greater than it. The third quartile value can be easily obtained by finding the "middle" number between the median and the maximum. For the hourly temperatures, the "middle" number between 70°F and 81°F is 75°F.
The interquartile range, or IQR, can be calculated by subtracting the first quartile value (Q1) from the third quartile value (Q3):
IQR = Q3 − Q1 = 75°F − 66°F = 9°F.
Hence, 1.5 IQR = 1.5 × 9°F = 13.5°F.
1.5 IQR above the third quartile is: Q3 + 1.5 IQR = 75°F + 13.5°F = 88.5°F.
1.5 IQR below the first quartile is: Q1 − 1.5 IQR = 66°F − 13.5°F = 52.5°F.
The upper whisker boundary of the box-plot is the largest data value that is within 1.5 IQR above the third quartile. Here, 1.5 IQR above the third quartile is 88.5°F and the maximum is 81°F. Therefore, the upper whisker is drawn at the value of the maximum, which is 81°F.
Similarly, the lower whisker boundary of the box plot is the smallest data value that is within 1.5 IQR below the first quartile. Here, 1.5 IQR below the first quartile is 52.5°F and the minimum is 57°F. Therefore, the lower whisker is drawn at the value of the minimum, which is 57°F.
Example with outliers
Above is an example without outliers. Here is a follow-up example for generating a box-plot with outliers:
The ordered set for the recorded temperatures is (°F): 52, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 89.
In this example, only the first and the last number are changed. The median, third quartile, and first quartile remain the same.
In this case, the maximum value in this data set is 89°F, and 1.5 IQR above the third quartile is 88.5°F. The maximum is greater than 1.5 IQR plus the third quartile, so the maximum is an outlier. Therefore, the upper whisker is drawn at the greatest value smaller than 1.5 IQR above the third quartile, which is 79°F.
Similarly, the minimum value in this data set is 52°F, and 1.5 IQR below the first quartile is 52.5°F. The minimum is smaller than the first quartile minus 1.5 IQR, so the minimum is also an outlier. Therefore, the lower whisker is drawn at the smallest value greater than 1.5 IQR below the first quartile, which is 57°F.
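Both worked examples can be checked programmatically. The following sketch (plain Python; the function and variable names are illustrative, not any standard API) computes the quartiles by the median-of-halves method used above and applies the 1.5 IQR whisker rule:

```python
# A minimal sketch of the 1.5 IQR whisker rule, applied to the two
# temperature datasets above. Quartile conventions vary; this uses
# the median-of-halves method from the worked examples.

def box_plot_stats(data, k=1.5):
    xs = sorted(data)
    n = len(xs)

    def median(vals):
        m = len(vals)
        return vals[m // 2] if m % 2 else (vals[m // 2 - 1] + vals[m // 2]) / 2

    q1, q2, q3 = median(xs[:n // 2]), median(xs), median(xs[(n + 1) // 2:])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    inside = [x for x in xs if lo_fence <= x <= hi_fence]
    return {"q1": q1, "median": q2, "q3": q3, "iqr": iqr,
            "lower_whisker": inside[0], "upper_whisker": inside[-1],
            "outliers": [x for x in xs if x < lo_fence or x > hi_fence]}

temps = [57, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70,
         70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81]
print(box_plot_stats(temps))                      # whiskers at 57 and 81
print(box_plot_stats([52] + temps[1:-1] + [89]))  # whiskers at 57 and 79; outliers [52, 89]
```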
In the case of large datasets
An additional way of obtaining a box-plot from a data set containing a large number of data points uses a general equation to compute empirical quantiles:
q(p) = x(k) + α · (x(k+1) − x(k)), where k = ⌊np + 0.5⌋ and α = np + 0.5 − k.
Here x(k) stands for the kth entry in the general ordering of the data points (i.e. if i ≤ j, then x(i) ≤ x(j)).
Using the above example that has 24 data points (n = 24), one can calculate the median, first and third quartile either mathematically or visually.
Median: q(0.5) = 70°F
First quartile: q(0.25) = 66°F
Third quartile: q(0.75) = 75°F
Visualization
Although box plots may seem more primitive than histograms or kernel density estimates, they do have a number of advantages. First, the box plot enables statisticians to do a quick graphical examination of one or more data sets. Box-plots also take up less space and are therefore particularly useful for comparing distributions between several groups or sets of data in parallel (see Figure 1 for an example). Lastly, the overall structure of histograms and kernel density estimates can be strongly influenced by the choice of the number and width of bins and by the choice of bandwidth, respectively.
Although looking at a statistical distribution is more common than looking at a box plot, it can be useful to compare the box plot against the probability density function (theoretical histogram) for a normal N(0, σ²) distribution and observe their characteristics directly (as shown in Figure 7).
| Mathematics | Statistics | null |
160990 | https://en.wikipedia.org/wiki/Infinitesimal | Infinitesimal | In mathematics, an infinitesimal number is a non-zero quantity that is closer to 0 than any non-zero real number is. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinity-eth" item in a sequence.
Infinitesimals do not exist in the standard real number system, but they do exist in other number systems, such as the surreal number system and the hyperreal number system, which can be thought of as the real numbers augmented with both infinitesimal and infinite quantities; the augmentations are the reciprocals of one another.
Infinitesimal numbers were introduced in the development of calculus, in which the derivative was first conceived as a ratio of two infinitesimal quantities. This definition was not rigorously formalized. As calculus developed further, infinitesimals were replaced by limits, which can be calculated using the standard real numbers.
In the 3rd century BC Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the method of exhaustion.
Infinitesimals regained popularity in the 20th century with Abraham Robinson's development of nonstandard analysis and the hyperreal numbers, which, after centuries of controversy, showed that a formal treatment of infinitesimal calculus was possible. Following this, mathematicians developed the surreal numbers, a related formalization of infinite and infinitesimal numbers that includes both the hyperreal numbers and the ordinal numbers, and which forms the largest ordered field.
Vladimir Arnold wrote in 1990:
The crucial insight for making infinitesimals feasible mathematical entities was that they could still retain certain properties such as angle or slope, even if these entities were infinitely small.
Infinitesimals are a basic ingredient in calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective in mathematics, infinitesimal means infinitely small, smaller than any standard real number. Infinitesimals are often compared to other infinitesimals of similar size, as in examining the derivative of a function. An infinite number of infinitesimals are summed to calculate an integral.
The modern concept of infinitesimals was introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular, the calculation of the area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on the decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.
The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving unassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed nonstandard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.
History of the infinitesimal
The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c. 287 BC – c. 212 BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals. His Archimedean property defines a number x as infinite if it satisfies the conditions |x| > 1, |x| > 1 + 1, |x| > 1 + 1 + 1, ..., and infinitesimal if x ≠ 0 and a similar set of conditions holds for x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.
The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections. The symbol, which denotes the reciprocal, or inverse, of ∞, is the symbolic representation of the mathematical concept of an infinitesimal. In his Treatise on the Conic Sections, Wallis also discusses the concept of a relationship between the symbolic representation of infinitesimal 1/∞ that he introduced and the concept of infinity for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area. This concept was the predecessor to the modern method of integration used in integral calculus. The conceptual origins of the concept of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea, whose Zeno's dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching that of an infinitesimal-sized interval.
Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.
Prior to the invention of calculus mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and René Descartes' method of normals. There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals, Newton's fluxions and Leibniz' differential. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst. Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (ε, δ)-definition of limit and set theory.
While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts, Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals. The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita, Giuseppe Veronese, Paul du Bois-Reymond, and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis (see hyperreal numbers).
First-order properties
In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically, elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x, x + 0 = x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." However, statements of the form "for any set S of numbers ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic.
The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms that assert that a number is smaller than 1/2, 1/3, 1/4, and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism.
We can distinguish three levels at which a non-Archimedean number system could have first-order properties compatible with those of the reals:
An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom x + y = y + x holds.
A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, ×, and ≤. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. For example, every number must have a cube root.
The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed using +, ×, and ≤). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.
Number systems that include infinitesimals
Formal series
Laurent series
An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
The Levi-Civita field
The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating-point.
Transseries
The field of transseries is larger than the Levi-Civita field. An example of a transseries is:
where for purposes of ordering x is considered infinite.
Surreal numbers
Conway's surreal numbers fall into category 2, except that the surreal numbers form a proper class and not a set. They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis, in the sense that every ordered field is a subfield of the surreal numbers. There is a natural extension of the exponential function to the surreal numbers.
Hyperreals
The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers ℕ has a natural counterpart *ℕ, which contains both finite and infinite integers. A proposition such as "for every n in ℕ, sin(nπ) = 0" carries over to the hyperreals as "for every n in *ℕ, *sin(nπ) = 0".
Superreals
The superreal number system of Dales and Woodin is a generalization of the hyperreals. It is different from the super-real system defined by David Tall.
Dual numbers
In linear algebra, the dual numbers extend the reals by adjoining one infinitesimal, the new element ε with the property ε2 = 0 (that is, ε is nilpotent). Every dual number has the form z = a + bε with a and b being uniquely determined real numbers.
One application of dual numbers is automatic differentiation. This application can be generalized to polynomials in n variables, using the exterior algebra of an n-dimensional vector space.
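As a concrete illustration of this application, the sketch below implements dual-number arithmetic for forward-mode automatic differentiation in Python; the class and function names are illustrative, and only addition and multiplication are implemented:

```python
# A minimal sketch of dual numbers a + b*eps with eps**2 = 0, used for
# forward-mode automatic differentiation; not a standard library API.

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                 # real part, infinitesimal part

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps**2 = 0
        other = self._coerce(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps; the eps-coefficient of the result is f'(x)."""
    return f(Dual(x, 1.0)).b

f = lambda x: 3 * x * x + 2 * x + 1           # f'(x) = 6x + 2
print(derivative(f, 5.0))                     # 32.0
```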
Smooth infinitesimal analysis
Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle – i.e., not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x2 = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.
Infinitesimal delta functions
Cauchy used an infinitesimal to write down a unit impulse (an infinitely tall and narrow Dirac-type delta function) in a number of articles in 1827; see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals.
Logical properties
The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and which collection of axioms are used. We consider here systems where infinitesimals can be shown to exist.
In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/n. The possibility to switch "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory: for any positive integer n it is possible to find a real number between 1/n and zero, but this real number depends on n. Here, one chooses n first, then one finds the corresponding x. In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (R) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model?
There are in fact many ways to construct such a one-dimensional linearly ordered set of numbers, but fundamentally, there are two different approaches:
Extend the number system so that it contains more numbers than the real numbers.
Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard.
In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system, we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number.
In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level, there are no infinitesimals nor unlimited numbers. Infinitesimals are at a finer level and there are also infinitesimals with respect to this new level and so on.
Infinitesimals in teaching
Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can") and the German text Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff. Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1, and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1.
Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979. The authors introduce the language of first-order logic, and demonstrate the construction of a first order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an Appendix, they also treat the extension of their model to the hyperhyperreals, and demonstrate some applications for the extended model.
An elementary calculus text based on smooth infinitesimal analysis is Bell, John L. (2008). A Primer of Infinitesimal Analysis, 2nd Edition. Cambridge University Press. ISBN 9780521887182.
A more recent calculus text utilizing infinitesimals is Dawson, C. Bryan (2022), Calculus Set Free: Infinitesimals to the Rescue, Oxford University Press. ISBN 9780192895608.
Functions tending to zero
In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines a class of infinitesimal functions between normed vector spaces, namely those that vanish at zero and are continuous there, as well as two related classes (see Big-O notation): a function g is "big-O" if ‖g(x)‖ ≤ c‖x‖ for some constant c on a neighborhood of 0, and "little-o" if ‖g(x)‖/‖x‖ → 0 as x → 0. The set inclusions little-o ⊂ big-O ⊂ infinitesimal generally hold, and the inclusions are proper, as demonstrated by real-valued functions of a real variable such as the square-root, identity and squaring functions near 0. As an application of these definitions, a mapping F between normed vector spaces is defined to be differentiable at a point p if there is a bounded linear map T such that the remainder F(p + x) − F(p) − T(x) is little-o in x on a neighborhood of 0. If such a map exists, it is unique; this map is called the differential, coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of F. This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces.
Array of random variables
Let (Ω, ℱ, P) be a probability space and let n ∈ ℕ. An array of random variables {X_{n,k} : 1 ≤ k ≤ k_n} is called infinitesimal if for every ε > 0 we have:
max_{1 ≤ k ≤ k_n} P(|X_{n,k}| ≥ ε) → 0 as n → ∞.
The notion of infinitesimal array is essential in some central limit theorems and it is easily seen by monotonicity of the expectation operator that any array satisfying Lindeberg's condition is infinitesimal, thus playing an important role in Lindeberg's Central Limit Theorem (a generalization of the central limit theorem).
| Mathematics | Basics_2 | null |
160993 | https://en.wikipedia.org/wiki/Generating%20function | Generating function | In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations on the formal series.
There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed.
Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients.
History
Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem.
George Pólya writes in Mathematics and plausible reasoning:
The name "generating function" is due to Laplace. Yet, without giving it a name, Euler used the device of generating functions long before Laplace [..]. He applied this mathematical tool to several problems in Combinatory Analysis and the Theory of Numbers.
Definition
Convergence
Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. Thus generating functions are not functions in the formal sense of a mapping from a domain to a codomain.
These expressions in terms of the indeterminate may involve arithmetic operations, differentiation with respect to and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of . Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of , and which has the formal series as its series expansion; this explains the designation "generating functions". However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for .
Limitations
Not all expressions that are meaningful as functions of are meaningful as expressions designating formal series; for example, negative and fractional powers of are examples of functions that do not have a corresponding formal power series.
Types
Ordinary generating function (OGF)
When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. The ordinary generating function of a sequence a_n is:
$G(a_n; x) = \sum_{n=0}^{\infty} a_n x^n.$
If is the probability mass function of a discrete random variable, then its ordinary generating function is called a probability-generating function.
Exponential generating function (EGF)
The exponential generating function of a sequence a_n is
$\operatorname{EG}(a_n; x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}.$
Exponential generating functions are generally more convenient than ordinary generating functions for combinatorial enumeration problems that involve labelled objects.
Another benefit of exponential generating functions is that they are useful in transferring linear recurrence relations to the realm of differential equations. For example, take the Fibonacci sequence $\{f_n\}$ that satisfies the linear recurrence relation $f_{n+2} = f_{n+1} + f_n$. The corresponding exponential generating function has the form
$\operatorname{EF}(x) = \sum_{n=0}^{\infty} \frac{f_n}{n!} x^n$
and its derivatives can readily be shown to satisfy the differential equation $\operatorname{EF}''(x) = \operatorname{EF}'(x) + \operatorname{EF}(x)$ as a direct analogue with the recurrence relation above. In this view, the factorial term $n!$ is merely a counter-term to normalise the derivative operator acting on $x^n$.
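To make the differential-equation analogy concrete, here is a small hedged check (not part of the article) that a truncation of the Fibonacci exponential generating function satisfies $\operatorname{EF}''(x) = \operatorname{EF}'(x) + \operatorname{EF}(x)$ up to the truncation order, using sympy:

import sympy as sp

x = sp.symbols('x')
fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])
# Truncated EGF: sum of f_n * x**n / n! for n = 0..19.
EF = sum(f * x**n / sp.factorial(n) for n, f in enumerate(fib))
# The residual EF'' - EF' - EF vanishes to the shown order.
print(sp.series(sp.diff(EF, x, 2) - sp.diff(EF, x) - EF, x, 0, 15))  # O(x**15)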
Poisson generating function
The Poisson generating function of a sequence $a_n$ is
$\operatorname{PG}(a_n;x) = \sum_{n=0}^{\infty} a_n e^{-x} \frac{x^n}{n!}.$
Lambert series
The Lambert series of a sequence $a_n$ is
$\operatorname{LG}(a_n;x) = \sum_{n=1}^{\infty} a_n \frac{x^n}{1-x^n}.$
Note that in a Lambert series the index starts at 1, not at 0, as the first term would otherwise be undefined.
The Lambert series coefficients in the power series expansions
$b_n := [x^n] \operatorname{LG}(a_n;x)$
for integers $n \geq 1$ are related by the divisor sum
$b_n = \sum_{d \mid n} a_d.$
The main article provides several more classical, or at least well-known, examples related to special arithmetic functions in number theory. A standard special case is the identity for the generating function of the divisor function, $d(n) \equiv \sigma_0(n)$: for $|x| < 1$,
$\sum_{n=1}^{\infty} \frac{x^n}{1-x^n} = \sum_{n=1}^{\infty} d(n) x^n.$
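A quick numeric sanity check (an illustration, with the truncation order an assumption) confirms that the coefficients of the Lambert series $\sum_{n \geq 1} x^n/(1-x^n)$ are the divisor counts $d(n)$:

import sympy as sp

x = sp.symbols('x')
N = 12
lambert = sum(x**n / (1 - x**n) for n in range(1, N + 1))
coeffs = sp.Poly(sp.series(lambert, x, 0, N + 1).removeO(), x).all_coeffs()[::-1]
print(coeffs[1:])  # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]
assert coeffs[1:] == [sp.divisor_count(n) for n in range(1, N + 1)]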
Bell series
The Bell series of a sequence $a_n$ is an expression in terms of both an indeterminate $x$ and a prime $p$ and is given by
$\operatorname{BG}_p(a_n;x) = \sum_{n=0}^{\infty} a_{p^n} x^n.$
Dirichlet series generating functions (DGFs)
Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. The Dirichlet series generating function of a sequence $a_n$ is
$\operatorname{DG}(a_n;s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}.$
The Dirichlet series generating function is especially useful when $a_n$ is a multiplicative function, in which case it has an Euler product expression in terms of the function's Bell series:
$\operatorname{DG}(a_n;s) = \prod_{p} \operatorname{BG}_p(a_n; p^{-s}).$
If $a_n$ is a Dirichlet character then its Dirichlet series generating function is called a Dirichlet $L$-series. We also have a relation between the pair of coefficients in the Lambert series expansions above and their DGFs. Namely, we can prove that
$[x^n] \operatorname{LG}(a_n;x) = b_n$ if and only if $\operatorname{DG}(a_n;s) \zeta(s) = \operatorname{DG}(b_n;s),$
where $\zeta(s)$ is the Riemann zeta function.
The sequence $a_k$ generated by a Dirichlet series generating function (DGF) corresponding to
$\operatorname{DG}(a_k;s) = \zeta(s)^m$
has the ordinary generating function
$\sum_{k=1}^{\infty} a_k x^k = \sum_{n_1 = 1}^{\infty} \sum_{n_2 = 1}^{\infty} \cdots \sum_{n_m = 1}^{\infty} x^{n_1 n_2 \cdots n_m}.$
Polynomial sequence generating functions
The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by
$e^{x f(t)} = \sum_{n=0}^{\infty} \frac{p_n(x)}{n!} t^n,$
where $p_n(x)$ is a sequence of polynomials and $f(t)$ is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information.
Examples of polynomial sequences generated by more complex generating functions include:
Appell polynomials
Chebyshev polynomials
Difference polynomials
Generalized Appell polynomials
$q$-difference polynomials
Other generating functions
Other sequences generated by more complex generating functions include:
Double exponential generating functions. For example: Aitken's Array: Triangle of Numbers
Hadamard products of generating functions and diagonal generating functions, and their corresponding integral transformations
Convolution polynomials
Knuth's article titled "Convolution Polynomials" defines a generalized class of convolution polynomial sequences by their special generating functions of the form
$F(z)^x = \sum_{n \geq 0} f_n(x) z^n,$
for some analytic function $F$ with a power series expansion such that $F(0) = 1$.
We say that a family of polynomials, $f_0, f_1, f_2, \ldots$, forms a convolution family if $\deg f_n \leq n$ and if the following convolution condition holds for all $x$, $y$ and for all $n \geq 0$:
$f_n(x+y) = f_n(x) f_0(y) + f_{n-1}(x) f_1(y) + \cdots + f_1(x) f_{n-1}(y) + f_0(x) f_n(y).$
We see that for non-identically zero convolution families, this definition is equivalent to requiring that the sequence have an ordinary generating function of the first form given above.
A sequence of convolution polynomials defined in the notation above has the following properties:
The sequence $n! \cdot f_n(x)$ is of binomial type
Special values of the sequence include $f_n(1) = [z^n] F(z)$ and $f_n(0) = \delta_{n,0}$, and
For arbitrary (fixed) $t \in \mathbb{C}$, these polynomials satisfy convolution formulas of the form
$f_n(x+y) = \sum_{k=0}^{n} f_k(x) f_{n-k}(y)$ and $\frac{x+y}{x+y+tn} f_n(x+y+tn) = \sum_{k=0}^{n} \frac{x}{x+tk} f_k(x+tk) \cdot \frac{y}{y+t(n-k)} f_{n-k}(y+t(n-k)).$
For a fixed non-zero parameter $t \in \mathbb{C}$, we have modified generating functions for these convolution polynomial sequences given by
$\frac{x}{x+tn} f_n(x+tn) = [z^n] \mathcal{F}_t(z)^x,$
where $\mathcal{F}_t(z)$ is implicitly defined by a functional equation of the form $\mathcal{F}_t(z) = F(z \mathcal{F}_t(z)^t)$. Moreover, we can use matrix methods (as in the reference) to prove that given two convolution polynomial sequences, $f_n(x)$ and $g_n(x)$, with respective corresponding generating functions, $F(z)^x$ and $G(z)^x$, then for arbitrary $t$ we have the identity
$[z^n] \left(G(z) F(z G(z)^t)\right)^x = \sum_{k=0}^{n} f_k(x) g_{n-k}(x+tk).$
Examples of convolution polynomial sequences include the binomial power series, $\mathcal{B}_t(z) = 1 + z \mathcal{B}_t(z)^t$, the so-termed tree polynomials, the Bell numbers, $B(n)$, the Laguerre polynomials, and the Stirling convolution polynomials.
Ordinary generating functions
Examples for simple sequences
Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the Poincaré polynomial and others.
A fundamental generating function is that of the constant sequence $1, 1, 1, 1, \ldots$, whose ordinary generating function is the geometric series
$\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}.$
The left-hand side is the Maclaurin series expansion of the right-hand side. Alternatively, the equality can be justified by multiplying the power series on the left by $1-x$, and checking that the result is the constant power series 1 (in other words, that all coefficients except the one of $x^0$ are equal to 0). Moreover, there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of $1-x$ in the ring of power series.
Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution $x \mapsto ax$ gives the generating function for the geometric sequence $1, a, a^2, a^3, \ldots$ for any constant $a$:
$\sum_{n=0}^{\infty} (ax)^n = \frac{1}{1-ax}.$
(The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.) In particular,
$\sum_{n=0}^{\infty} (-1)^n x^n = \frac{1}{1+x}.$
One can also introduce regular gaps in the sequence by replacing $x$ by some power of $x$, so for instance for the sequence $1, 0, 1, 0, 1, 0, \ldots$ (which skips over $x, x^3, x^5, \ldots$) one gets the generating function
$\sum_{n=0}^{\infty} x^{2n} = \frac{1}{1-x^2}.$
By squaring the initial generating function, or by finding the derivative of both sides with respect to $x$ and making a change of running variable $n \to n+1$, one sees that the coefficients form the sequence $1, 2, 3, 4, 5, \ldots$, so one has
$\sum_{n=0}^{\infty} (n+1) x^n = \frac{1}{(1-x)^2},$
and the third power has as coefficients the triangular numbers $1, 3, 6, 10, \ldots$ whose term $n$ is the binomial coefficient $\binom{n+2}{2}$, so that
$\sum_{n=0}^{\infty} \binom{n+2}{2} x^n = \frac{1}{(1-x)^3}.$
More generally, for any non-negative integer $k$ and non-zero real value $a$, it is true that
$\sum_{n=0}^{\infty} a^n \binom{n+k}{k} x^n = \frac{1}{(1-ax)^{k+1}}.$
Since
$2\binom{n+2}{2} - 3\binom{n+1}{1} + \binom{n}{0} = (n+1)(n+2) - 3(n+1) + 1 = n^2,$
one can find the ordinary generating function for the sequence $0, 1, 4, 9, 16, \ldots$ of square numbers by linear combination of binomial-coefficient generating sequences:
$G(n^2; x) = \sum_{n=0}^{\infty} n^2 x^n = \frac{2}{(1-x)^3} - \frac{3}{(1-x)^2} + \frac{1}{1-x} = \frac{x(x+1)}{(1-x)^3}.$
We may also expand alternately to generate this same sequence of squares as a sum of derivatives of the geometric series in the following form:
$G(n^2; x) = \sum_{n=0}^{\infty} n^2 x^n = \sum_{n=0}^{\infty} n(n-1) x^n + \sum_{n=0}^{\infty} n x^n = x^2 \frac{d^2}{dx^2}\left[\frac{1}{1-x}\right] + x \frac{d}{dx}\left[\frac{1}{1-x}\right] = \frac{2x^2}{(1-x)^3} + \frac{x}{(1-x)^2}.$
By induction, we can similarly show for positive integers $m \geq 1$ that
$n^m = \sum_{j=0}^{m} \begin{Bmatrix} m \\ j \end{Bmatrix} \frac{n!}{(n-j)!},$
where $\begin{Bmatrix} n \\ k \end{Bmatrix}$ denote the Stirling numbers of the second kind and where the generating function
$\sum_{n \geq 0} \frac{n!}{(n-j)!} x^n = \frac{j! \, x^j}{(1-x)^{j+1}},$
so that we can form the analogous generating functions over the integral $m$th powers generalizing the result in the square case above. In particular, since we can write
$\frac{z^k}{(1-z)^{k+1}} = \sum_{i=0}^{k} \binom{k}{i} \frac{(-1)^{k-i}}{(1-z)^{i+1}},$
we can apply a well-known finite sum identity involving the Stirling numbers to obtain that
$\sum_{n \geq 0} n^m z^n = \sum_{j=0}^{m} \begin{Bmatrix} m+1 \\ j+1 \end{Bmatrix} \frac{(-1)^{m-j} j!}{(1-z)^{j+1}}.$
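As a hedged verification sketch (not from the article), one can compare the series coefficients of the closed form above against $n^m$ directly for a small case, say $m = 3$:

import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

z = sp.symbols('z')
m = 3
closed = sum(stirling(m + 1, j + 1) * (-1)**(m - j) * sp.factorial(j) / (1 - z)**(j + 1)
             for j in range(m + 1))
coeffs = sp.Poly(sp.series(closed, z, 0, 9).removeO(), z).all_coeffs()[::-1]
print(coeffs)  # [0, 1, 8, 27, 64, 125, 216, 343, 512]
assert coeffs == [n**m for n in range(9)]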
Rational functions
The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two finite-degree polynomials) if and only if the sequence is a linear recursive sequence with constant coefficients; this generalizes the examples above. Conversely, every sequence generated by a fraction of polynomials satisfies a linear recurrence with constant coefficients; these coefficients are identical to the coefficients of the fraction's denominator polynomial (so they can be directly read off). This observation shows that it is easy to solve for the generating functions of sequences defined by a linear finite difference equation with constant coefficients, and hence to obtain explicit closed-form formulas for the coefficients of these generating functions. The prototypical example here is to derive Binet's formula for the Fibonacci numbers via generating function techniques.
We also notice that the class of rational generating functions precisely corresponds to the generating functions that enumerate quasi-polynomial sequences of the form
$f_n = p_1(n) \rho_1^n + \cdots + p_\ell(n) \rho_\ell^n,$
where the reciprocal roots, $\rho_i \in \mathbb{C}$, are fixed scalars and where $p_i(n)$ is a polynomial in $n$ for all $1 \leq i \leq \ell$.
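For instance, the Binet-style derivation mentioned above can be sketched mechanically: partial fractions of the Fibonacci ordinary generating function $x/(1-x-x^2)$ expose the reciprocal roots, and the series coefficients reproduce the recurrence. The snippet below is illustrative only:

import sympy as sp

x = sp.symbols('x')
ogf = x / (1 - x - x**2)
# Full partial fractions over the (irrational) roots of the denominator.
print(sp.apart(ogf, x, full=True).doit())
coeffs = sp.Poly(sp.series(ogf, x, 0, 10).removeO(), x).all_coeffs()[::-1]
print(coeffs)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]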
In general, Hadamard products of rational functions produce rational generating functions. Similarly, if
$F(s, t) := \sum_{m,n \geq 0} f(m, n) s^m t^n$
is a bivariate rational generating function, then its corresponding diagonal generating function,
$\operatorname{diag}(F) := \sum_{n \geq 0} f(n, n) z^n,$
is algebraic. For example, if we let
$F(s, t) := \sum_{i,j \geq 0} \binom{i+j}{i} s^i t^j = \frac{1}{1-s-t},$
then this generating function's diagonal coefficient generating function is given by the well-known OGF formula
$\operatorname{diag}(F) = \sum_{n \geq 0} \binom{2n}{n} z^n = \frac{1}{\sqrt{1-4z}}.$
This result is computed in many ways, including Cauchy's integral formula or contour integration, taking complex residues, or by direct manipulations of formal power series in two variables.
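One more direct manipulation, as a hedged check of the example above: expanding $1/\sqrt{1-4z}$ reproduces the central binomial coefficients on the diagonal of $1/(1-s-t)$:

import sympy as sp

z = sp.symbols('z')
diag = sp.series(1 / sp.sqrt(1 - 4 * z), z, 0, 7).removeO()
coeffs = sp.Poly(diag, z).all_coeffs()[::-1]
print(coeffs)  # [1, 2, 6, 20, 70, 252, 924]
assert coeffs == [sp.binomial(2 * n, n) for n in range(7)]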
Operations on generating functions
Multiplication yields convolution
Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums (compare to the slightly more general Euler–Maclaurin formula)
$(a_0, \; a_0 + a_1, \; a_0 + a_1 + a_2, \; \ldots)$
of a sequence with ordinary generating function $G(a_n; x)$ has the generating function
$G(a_n; x) \cdot \frac{1}{1-x},$
because $\frac{1}{1-x}$ is the ordinary generating function for the sequence $(1, 1, \ldots)$.
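As a small illustration (the demo sequence is an arbitrary assumption), multiplying a polynomial OGF by $\frac{1}{1-x}$ produces the OGF of the partial sums:

import sympy as sp

x = sp.symbols('x')
a = [1, 3, 5, 7, 9]                            # demo sequence
A = sum(c * x**k for k, c in enumerate(a))     # its (polynomial) OGF
partial = sp.series(A / (1 - x), x, 0, 5).removeO()
print(sp.Poly(partial, x).all_coeffs()[::-1])  # [1, 4, 9, 16, 25]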
| Mathematics | Sequences and series | null
160995 | https://en.wikipedia.org/wiki/Statistical%20significance | Statistical significance | In statistical hypothesis testing, a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true. More precisely, a study's defined significance level, denoted by $\alpha$, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true; and the p-value of a result, $p$, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when $p \leq \alpha$. The significance level for a study is chosen before data collection, and is typically set to 5% or much lower, depending on the field of study.
In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone. But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population, thereby rejecting the null hypothesis.
This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research significance, theoretical significance, or practical significance. For example, the term clinical significance refers to the practical importance of a treatment effect.
History
Statistical significance dates to the 18th century, in the work of John Arbuthnot and Pierre-Simon Laplace, who computed the p-value for the human sex ratio at birth, assuming a null hypothesis of equal probability of male and female births; see the history of the p-value for details.
In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publication Statistical Methods for Research Workers. Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis. In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named $\alpha$. They recommended that $\alpha$ be set ahead of time, prior to any data collection.
Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances.
Related concepts
The significance level $\alpha$ is the threshold for $p$ below which the null hypothesis is rejected, even though by assumption it is true. This means that $\alpha$ is also the probability of mistakenly rejecting the null hypothesis, if the null hypothesis is true. This is also called a false positive and a type I error.
Sometimes researchers talk about the confidence level $\gamma = 1 - \alpha$ instead. This is the probability of not rejecting the null hypothesis given that it is true. Confidence levels and confidence intervals were introduced by Neyman in 1937.
Role in statistical hypothesis testing
Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the hypothesis that no effect exists in the phenomenon being studied. For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value is less than the pre-specified significance level $\alpha$.
To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true. The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level, $\alpha$. $\alpha$ is also called the significance level, and is the probability of rejecting the null hypothesis given that it is true (a type I error). It is usually set at or below 5%.
For example, when $\alpha$ is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%, and a statistically significant result is one where the observed p-value is less than (or equal to) 5%. When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution. These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.
The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better. A two-tailed test may still be used but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used. The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.
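The contrast between the two tests can be made concrete with a short sketch; the z-score used here is an arbitrary assumed value, and the snippet relies on scipy's normal distribution:

from scipy.stats import norm

z = 1.8                          # assumed observed z-score
p_one = norm.sf(z)               # one-tailed: P(Z >= z)
p_two = 2 * norm.sf(abs(z))      # two-tailed: P(|Z| >= |z|)
print(round(p_one, 4), round(p_two, 4))  # 0.0359 0.0719
# At alpha = 0.05, the one-tailed test rejects while the two-tailed test does not.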
Significance thresholds in specific fields
In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (for example 5σ). For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.
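The quoted 5σ correspondence can be checked with a one-line computation (a sketch using the one-sided normal tail):

from scipy.stats import norm

p = norm.sf(5)      # one-sided tail probability beyond 5 sigma
print(p, 1 / p)     # ~2.87e-07, i.e. about 1 in 3.5 million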
In other fields of scientific research such as genome-wide association studies, significance levels as low as $5 \times 10^{-8}$ are not uncommon, as the number of tests performed is extremely large.
Limitations
Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive and not replicable. There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant.
Effect size
Effect size is a measure of a study's practical significance. A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation coefficient between two variables or its square, and other measures.
Reproducibility
A statistically significant result may not be easy to reproduce. In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.
Challenges
Overuse in some journals
Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold of $\alpha = 5\%$, was being relied on too heavily as the primary measure of validity of a hypothesis. Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the journal Basic and Applied Social Psychology banned the use of significance testing altogether from papers it published, requiring authors to use other measures to evaluate hypotheses and impact.
Other editors, commenting on this ban have noted: "Banning the reporting of p-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem. There is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly." Some statisticians prefer to use alternative measures of evidence, such as likelihood ratios or Bayes factors. Using Bayesian statistics can avoid confidence levels, but also requires making additional assumptions, and may not necessarily improve practice regarding statistical testing.
The widespread abuse of statistical significance represents an important topic of research in metascience.
Redefining significance
In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p ≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process". In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005. Other researchers responded that imposing a more stringent significance threshold would aggravate problems such as data dredging; alternative propositions are thus to select and justify flexible p-value thresholds before collecting data, or to interpret p-values as continuous indices, thereby discarding thresholds and statistical significance. Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.
In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science, and the ASA published a further official statement declaring (page 2):
| Mathematics | Statistics | null |
161019 | https://en.wikipedia.org/wiki/Negation | Negation | In logic, negation, also called the logical not or logical complement, is an operation that takes a proposition $P$ to another proposition "not $P$", written $\neg P$, $\mathord{\sim} P$, or $\overline{P}$. It is interpreted intuitively as being true when $P$ is false, and false when $P$ is true. For example, if $P$ is "Spot runs", then "not $P$" is "Spot does not run". An operand of a negation is called a negand or negatum.
Negation is a unary logical connective. It may furthermore be applied not only to propositions, but also to notions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity (and vice versa). In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition $P$ is the proposition whose proofs are the refutations of $P$.
Definition
Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true when its operand is false, and a value of false when its operand is true. Thus if statement $P$ is true, then $\neg P$ (pronounced "not P") would then be false; and conversely, if $\neg P$ is true, then $P$ would be false.
The truth table of $\neg P$ is as follows:
{| class="wikitable" style="text-align:center; background-color: #ddffdd;"
|- bgcolor="#ddeeff"
! $P$ !! $\neg P$
|-
| True || False
|-
| False || True
|}
Negation can be defined in terms of other logical operations. For example, $\neg P$ can be defined as $P \rightarrow \bot$ (where $\rightarrow$ is logical consequence and $\bot$ is absolute falsehood). Conversely, one can define $\bot$ as $Q \land \neg Q$ for any proposition $Q$ (where $\land$ is logical conjunction). The idea here is that any contradiction is false, and while these ideas work in both classical and intuitionistic logic, they do not work in paraconsistent logic, where contradictions are not necessarily false. As a further example, negation can be defined in terms of NAND and can also be defined in terms of NOR.
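A minimal sketch (illustrative, not from the article) checks these NAND and NOR definitions of negation over both truth values:

def nand(p, q):
    return not (p and q)

def nor(p, q):
    return not (p or q)

for p in (True, False):
    assert nand(p, p) == (not p)  # NOT defined from NAND
    assert nor(p, p) == (not p)   # NOT defined from NOR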
Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic.
Notation
The negation of a proposition is notated in different ways, in various contexts of discussion and fields of application. The following table documents some of these variants:
The notation $Np$ is Polish notation.
In set theory, $\setminus$ is also used to indicate 'not in the set of': $U \setminus A$ is the set of all members of $U$ that are not members of $A$.
Regardless how it is notated or symbolized, the negation $\neg P$ can be read as "it is not the case that $P$", "not that $P$", or usually more simply as "not $P$".
Precedence
As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than →. So for example, $P \vee Q \wedge \neg R \rightarrow S$ is short for $(P \vee (Q \wedge (\neg R))) \rightarrow S$.
Here is a table that shows a commonly used precedence of logical operators.
Properties
Double negation
Within a system of classical logic, double negation, that is, the negation of the negation of a proposition $P$, is logically equivalent to $P$. Expressed in symbolic terms, $\neg \neg P \equiv P$. In intuitionistic logic, a proposition implies its double negation, but not conversely. This marks one important difference between classical and intuitionistic negation. Algebraically, classical negation is called an involution of period two.
However, in intuitionistic logic, the weaker equivalence $\neg \neg \neg P \equiv \neg P$ does hold. This is because in intuitionistic logic, $\neg P$ is just a shorthand for $P \rightarrow \bot$, and we also have $P \rightarrow \neg \neg P$. Composing that last implication with triple negation $\neg \neg P \rightarrow \bot$ implies that $\neg \neg \neg P \rightarrow \neg P$.
As a result, in the propositional case, a sentence is classically provable if its double negation is intuitionistically provable. This result is known as Glivenko's theorem.
Distributivity
De Morgan's laws provide a way of distributing negation over disjunction and conjunction:
$\neg(P \lor Q) \equiv (\neg P \land \neg Q)$, and
$\neg(P \land Q) \equiv (\neg P \lor \neg Q)$.
Linearity
Let $\oplus$ denote the logical xor operation. In Boolean algebra, a linear function is one such that:
If there exists $a_0, a_1, \ldots, a_n \in \{0, 1\}$ such that
$f(b_1, \ldots, b_n) = a_0 \oplus (a_1 \land b_1) \oplus \cdots \oplus (a_n \land b_n)$,
for all $b_1, \ldots, b_n \in \{0, 1\}$.
Another way to express this is that each variable always makes a difference in the truth-value of the operation, or it never makes a difference. Negation is a linear logical operator.
Self dual
In Boolean algebra, a self dual function is a function such that
$f(a_1, \ldots, a_n) = \neg f(\neg a_1, \ldots, \neg a_n)$
for all $a_1, \ldots, a_n \in \{0, 1\}$.
Negation is a self dual logical operator.
Negations of quantifiers
In first-order logic, there are two quantifiers, one is the universal quantifier $\forall$ (means "for all") and the other is the existential quantifier $\exists$ (means "there exists"). The negation of one quantifier is the other quantifier ($\neg \forall x \, P(x) \equiv \exists x \, \neg P(x)$ and $\neg \exists x \, P(x) \equiv \forall x \, \neg P(x)$). For example, with the predicate P as "x is mortal" and the domain of x as the collection of all humans, $\forall x \, P(x)$ means "a person x in all humans is mortal" or "all humans are mortal". The negation of it is $\exists x \, \neg P(x)$, meaning "there exists a person x in all humans who is not mortal", or "there exists someone who lives forever".
Rules of inference
There are a number of equivalent ways to formulate rules for negation. One usual way to formulate classical negation in a natural deduction setting is to take as primitive rules of inference negation introduction (from a derivation of $P$ to both $Q$ and $\neg Q$, infer $\neg P$; this rule also being called reductio ad absurdum), negation elimination (from $P$ and $\neg P$ infer $Q$; this rule also being called ex falso quodlibet), and double negation elimination (from $\neg \neg P$ infer $P$). One obtains the rules for intuitionistic negation the same way but by excluding double negation elimination.
Negation introduction states that if an absurdity can be drawn as conclusion from $P$ then $P$ must not be the case (i.e. $P$ is false (classically) or refutable (intuitionistically) or etc.). Negation elimination states that anything follows from an absurdity. Sometimes negation elimination is formulated using a primitive absurdity sign $\bot$. In this case the rule says that from $P$ and $\neg P$ follows an absurdity. Together with double negation elimination one may infer our originally formulated rule, namely that anything follows from an absurdity.
Typically the intuitionistic negation $\neg P$ of $P$ is defined as $P \rightarrow \bot$. Then negation introduction and elimination are just special cases of implication introduction (conditional proof) and elimination (modus ponens). In this case one must also add as a primitive rule ex falso quodlibet.
Programming language and ordinary language
As in mathematics, negation is used in computer science to construct logical statements.
if (!(r == t))
{
/*...statements executed when r does NOT equal t...*/
}
The exclamation mark "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl, and PHP. "NOT" is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired syntax such as Pascal, Ada, Eiffel and Seed7. Some languages (C++, Perl, etc.) provide more than one operator for negation. A few languages like PL/I and Ratfor use ¬ for negation. Most modern languages allow the above statement to be shortened from if (!(r == t)) to if (r != t), which sometimes, when the compiler/interpreter is not able to optimize it, allows faster programs.
In computer science there is also bitwise negation. This takes the value given and switches all the binary 1s to 0s and 0s to 1s. This is often used to create ones' complement (or "~" in C or C++) and two's complement (just simplified to "-" or the negative sign, as this is equivalent to taking the arithmetic negation of the number).
To get the absolute (positive equivalent) value of a given integer, the following would work, as the "-" changes a value from negative to positive (the branch is taken because "x < 0" yields true):
unsigned int abs(int x)
{
    if (x < 0)       /* x is negative ("x < 0" yields true) */
        return -x;   /* negating a negative value gives its magnitude */
    else
        return x;
}
To demonstrate logical negation:
unsigned int abs(int x)
{
    if (!(x < 0))    /* logical negation: true when x is NOT negative */
        return x;
    else
        return -x;
}
Inverting the condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e. will have identical results for any input (depending on the compiler used, the actual instructions performed by the computer may differ).
In C (and some other languages descended from C), double negation (!!x) is used as an idiom to convert x to a canonical Boolean, i.e. an integer with a value of either 0 or 1 and no other. Although any integer other than 0 is logically true in C and 1 is not special in this regard, it is sometimes important to ensure that a canonical value is used, for example for printing or if the number is subsequently used for arithmetic operations.
The convention of using ! to signify negation occasionally surfaces in ordinary written speech, as computer-related slang for not. For example, the phrase !voting means "not voting". Another example is the phrase !clue which is used as a synonym for "no-clue" or "clueless".
Kripke semantics
In Kripke semantics where the semantic values of formulae are sets of possible worlds, negation can be taken to mean set-theoretic complementation (see also possible world semantics for more).
| Mathematics | Specific functions | null |
161228 | https://en.wikipedia.org/wiki/Document | Document | A document is a written, drawn, presented, or memorialized representation of thought, often the manifestation of non-fictional, as well as fictional, content. The word originates from the Latin documentum, which denotes a "teaching" or "lesson": the verb docēre denotes "to teach". In the past, the word was usually used to denote written proof useful as evidence of a truth or fact. In the Computer Age, "document" usually denotes a primarily textual computer file, including its structure and format, e.g. fonts, colors, and images. Contemporarily, "document" is not defined by its transmission medium, e.g., paper, given the existence of electronic documents. "Documentation" is distinct because it has more denotations than "document". Documents are also distinguished from "realia", which are three-dimensional objects that would otherwise satisfy the definition of "document" because they memorialize or represent thought; documents are considered more as two-dimensional representations. While documents vary widely in form and customization, they can be shared, and they can represent creativity; history, events, examples, opinions, and stories can all be expressed in documents.
Abstract definitions
The concept of "document" has been defined by Suzanne Briet as "any concrete or symbolic indication, preserved or recorded, for reconstructing or for proving a phenomenon, whether physical or mental."
An often-cited article concludes that "the evolving notion of document" among Jonathan Priest, Paul Otlet, Briet, Walter Schürmeyer, and the other documentalists increasingly emphasized whatever functioned as a document rather than traditional physical forms of documents. The shift to digital technology would seem to make this distinction even more important. David M. Levy has said that an emphasis on the technology of digital documents has impeded our understanding of digital documents as documents.
A conventional document, such as a mail message or a technical report, exists physically in digital technology as a string of bits, as does everything else in a digital environment.
"Document" is defined in library and information science and documentation science as a fundamental, abstract idea: the word denotes everything that may be represented or memorialized to serve as evidence. The classic example provided by Briet is an antelope: "An antelope running wild on the plains of Africa should not be considered a document[;] she rules. But if it were to be captured, taken to a zoo and made an object of study, it has been made into a document. It has become physical evidence being used by those who study it. Indeed, scholarly articles written about the antelope are secondary documents, since the antelope itself is the primary document." This opinion has been interpreted as an early expression of actor–network theory.
Kinds
A document can be structured, like tabular documents, lists, forms, or scientific charts, semi-structured like a book or a newspaper article, or unstructured like a handwritten note. Documents are sometimes classified as secret, private, or public. They may also be described as drafts or proofs. When a document is copied, the source is denominated the "original".
Documents are used in numerous fields, e.g.:
Academia:
manuscript,
thesis,
paper,
journal,
chart,
and technical drawing
Media:
mock-up,
script,
image,
photography,
and newspaper article
Administration, law, and politics:
application,
brief,
certificate,
commission,
constitutional document,
form,
gazette,
identity document,
license,
manifesto,
summons,
census,
and white paper
Business:
invoice,
request for proposal,
proposal,
contract,
packing slip,
manifest,
report (detailed and summary),
spreadsheet,
material safety data sheet,
waybill,
bill of lading,
financial statement,
nondisclosure agreement (NDA),
mutual nondisclosure agreement,
and user guide
Geography and planning:
topographic map,
cadastre,
legend,
and architectural plan
Such standard documents can be drafted based on a template.
Drafting
The page layout of a document is how information is graphically arranged in the space of the document, e.g., on a page. If the appearance of the document is of concern, the page layout is generally the responsibility of a graphic designer. Typography concerns the design of letter and symbol forms and their physical arrangement in the document (see typesetting). Information design concerns the effective communication of information, especially in industrial documents and public signs. Simple textual documents may not require visual design and may be drafted only by an author, clerk, or transcriber. Forms may require a visual design for their initial fields, but not to complete the forms.
Media
Traditionally, the medium of a document was paper and the information was applied to it in ink, either by handwriting (to make a manuscript) or by a mechanical process (e.g., a printing press or laser printer). Today, some short documents also may consist of sheets of paper stapled together.
Historically, documents were inscribed with ink on papyrus (starting in ancient Egypt) or parchment; scratched as runes or carved on stone using a sharp tool, e.g., the Tablets of Stone described in the Bible; stamped or incised in clay and then baked to make clay tablets, e.g., in the Sumerian and other Mesopotamian civilizations. The papyrus or parchment was often rolled into a scroll or cut into sheets and bound into a codex (book).
Contemporary electronic means of memorializing and displaying documents include:
Monitor of a desktop computer, laptop, tablet; optionally with a printer to produce a hard copy;
Personal digital assistant;
Dedicated e-book device;
Electronic paper, typically, using the Portable Document Format (PDF);
Information appliance;
Digital audio player; and
Radio and television service provider.
Digital documents usually require a specific file format to be presentable in a specific medium.
In law
Documents in all forms frequently serve as material evidence in criminal and civil proceedings. The forensic analysis of such a document is within the scope of questioned document examination. To catalog and manage the large number of documents that may be produced during litigation, Bates numbering is often applied to all documents in the lawsuit so that each document has a unique, arbitrary, identification number.
| Technology | Media and communication: Basics | null |
161253 | https://en.wikipedia.org/wiki/Quantum%20fluctuation | Quantum fluctuation | In quantum physics, a quantum fluctuation (also known as a vacuum state fluctuation or vacuum fluctuation) is the temporary random change in the amount of energy in a point in space, as prescribed by Werner Heisenberg's uncertainty principle. They are minute random fluctuations in the values of the fields which represent elementary particles, such as electric and magnetic fields which represent the electromagnetic force carried by photons, W and Z fields which carry the weak force, and gluon fields which carry the strong force.
The uncertainty principle states the uncertainty in energy and time can be related by $\Delta E \, \Delta t \geq \frac{\hbar}{2}$, where $\hbar \approx 1.05 \times 10^{-34}\,\mathrm{J \cdot s}$ is the reduced Planck constant. This means that pairs of virtual particles with energy $\Delta E$ and lifetime shorter than $\Delta t$ are continually created and annihilated in empty space. Although the particles are not directly detectable, the cumulative effects of these particles are measurable. For example, without quantum fluctuations, the "bare" mass and charge of elementary particles would be infinite; from renormalization theory the shielding effect of the cloud of virtual particles is responsible for the finite mass and charge of elementary particles.
Another consequence is the Casimir effect. One of the first observations which was evidence for vacuum fluctuations was the Lamb shift in hydrogen. In July 2020, scientists reported that quantum vacuum fluctuations can influence the motion of macroscopic, human-scale objects by measuring correlations below the standard quantum limit between the position/momentum uncertainty of the mirrors of LIGO and the photon number/phase uncertainty of light that they reflect.
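As a rough, order-of-magnitude sketch (an illustration, not a claim from the article), the uncertainty relation bounds the lifetime of a virtual electron-positron pair with borrowed energy $\Delta E = 2 m_e c^2$:

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

delta_E = 2 * m_e * c**2         # energy of the pair
delta_t = hbar / (2 * delta_E)   # allowed lifetime from Delta E * Delta t >= hbar/2
print(f"{delta_t:.2e} s")        # ~3.2e-22 s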
Field fluctuations
In quantum field theory, fields undergo quantum fluctuations. A reasonably clear distinction can be made between quantum fluctuations and thermal fluctuations of a quantum field (at least for a free field; for interacting fields, renormalization substantially complicates matters). An illustration of this distinction can be seen by considering quantum and classical Klein–Gordon fields: For the quantized Klein–Gordon field in the vacuum state, we can calculate the probability density that we would observe a configuration $\phi_t(x)$ at a time $t$ in terms of its Fourier transform $\tilde{\phi}_t(k)$ to be (in units where $c = 1$)
$\rho_0[\phi_t] \propto \exp\left[-\frac{1}{\hbar} \int \frac{d^3k}{(2\pi)^3} \, \tilde{\phi}_t^*(k) \sqrt{|k|^2 + m^2} \, \tilde{\phi}_t(k)\right].$
In contrast, for the classical Klein–Gordon field at non-zero temperature $T$, the Gibbs probability density that we would observe a configuration $\phi_t(x)$ at a time $t$ is
$\rho_T[\phi_t] \propto \exp\left[-H[\phi_t]/k_\mathrm{B} T\right].$
These probability distributions illustrate that every possible configuration of the field is possible, with the amplitude of quantum fluctuations controlled by the Planck constant $\hbar$, just as the amplitude of thermal fluctuations is controlled by $k_\mathrm{B} T$, where $k_\mathrm{B}$ is the Boltzmann constant. Note that the following three points are closely related:
the Planck constant $\hbar$ has units of action (joule-seconds) instead of units of energy (joules),
the quantum kernel is $\sqrt{|k|^2 + m^2}$ instead of $\tfrac{1}{2}(|k|^2 + m^2)$ (the quantum kernel is nonlocal from a classical heat kernel viewpoint, but it is local in the sense that it does not allow signals to be transmitted),
the quantum vacuum state is Lorentz-invariant (although not manifestly in the above), whereas the classical thermal state is not (the classical dynamics is Lorentz-invariant, but the Gibbs probability density is not a Lorentz-invariant initial condition).
A classical continuous random field can be constructed that has the same probability density as the quantum vacuum state, so that the principal difference from quantum field theory is the measurement theory (measurement in quantum theory is different from measurement for a classical continuous random field, in that classical measurements are always mutually compatible – in quantum-mechanical terms they always commute).
| Physical sciences | Quantum mechanics | Physics |
161278 | https://en.wikipedia.org/wiki/Period%205%20element | Period 5 element | A period 5 element is one of the chemical elements in the fifth row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The fifth period contains 18 elements, beginning with rubidium and ending with xenon. As a rule, period 5 elements fill their 5s shells first, then their 4d, and 5p shells, in that order; however, there are exceptions, such as rhodium.
Physical properties
This period contains technetium, one of the only two elements below lead that have no stable isotopes (the other being promethium), as well as molybdenum and iodine, two of the heaviest elements with a known biological role. Niobium has the largest known magnetic penetration depth of all the elements. Zirconium is one of the main components of zircon crystals, currently the oldest known minerals in the Earth's crust. Many later transition metals, such as rhodium, are very commonly used in jewelry as they are very shiny.
This period is known to have a large number of exceptions to the Madelung rule.
Elements and their properties
{| class="wikitable sortable"
! colspan="3" | Chemical element
! Block
! Electron configuration
|-
!
!
!
!
!
|- bgcolor=""
|| 37 || Rb || Rubidium || s-block || [Kr] 5s1
|- bgcolor=""
|| 38 || Sr || Strontium || s-block || [Kr] 5s2
|- bgcolor=""
|| 39 || Y || Yttrium || d-block || [Kr] 4d1 5s2
|- bgcolor=""
|| 40 || Zr || Zirconium || d-block || [Kr] 4d2 5s2
|- bgcolor=""
|| 41 || Nb || Niobium || d-block || [Kr] 4d4 5s1 (*)
|- bgcolor=""
|| 42 || Mo || Molybdenum || d-block || [Kr] 4d5 5s1 (*)
|- bgcolor=""
|| 43 || Tc || Technetium || d-block || [Kr] 4d5 5s2
|- bgcolor=""
|| 44 || Ru || Ruthenium || d-block || [Kr] 4d7 5s1 (*)
|- bgcolor=""
|| 45 || Rh || Rhodium || d-block|| [Kr] 4d8 5s1 (*)
|- bgcolor=""
|| 46 || Pd || Palladium || d-block || [Kr] 4d10 (*)
|- bgcolor=""
|| 47 || Ag || Silver || d-block || [Kr] 4d10 5s1 (*)
|- bgcolor=""
|| 48 || Cd || Cadmium || d-block || [Kr] 4d10 5s2
|- bgcolor=""
|| 49 || In || Indium || p-block || [Kr] 4d10 5s2 5p1
|- bgcolor=""
|| 50 || Sn || Tin || p-block || [Kr] 4d10 5s2 5p2
|- bgcolor=""
|| 51 || Sb || Antimony || p-block || [Kr] 4d10 5s2 5p3
|- bgcolor=""
|| 52 || Te || Tellurium || p-block || [Kr] 4d10 5s2 5p4
|- bgcolor=""
|| 53 || I || Iodine || p-block || [Kr] 4d10 5s2 5p5
|- bgcolor=""
|| 54 || Xe || Xenon || p-block || [Kr] 4d10 5s2 5p6
|}
(*) Exception to the Madelung rule
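The filling order described above follows the Madelung rule (order subshells by $n + \ell$, breaking ties by smaller $n$). A small illustrative sketch, not from the article, generates that order:

# Generate the Madelung (n + l, then n) subshell-filling order.
def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

labels = {0: 's', 1: 'p', 2: 'd', 3: 'f'}
order = [f"{n}{labels[l]}" for n, l in madelung_order() if l in labels]
print(order[:12])  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', '6s']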
s-block elements
Rubidium
Rubidium is the first element placed in period 5. It is an alkali metal, the most reactive group in the periodic table, having properties and similarities with both other alkali metals and other period 5 elements. For example, rubidium has 5 electron shells, a property found in all other period 5 elements, whereas its electron configuration's ending is similar to all other alkali metals: s1. Rubidium also follows the trend of increasing reactivity as the atomic number increases in the alkali metals, for it is more reactive than potassium, but less so than caesium. In addition, both potassium and rubidium yield almost the same hue when ignited, so researchers must use different methods to differentiate between these two 1st group elements. Rubidium is very susceptible to oxidation in air, similar to most of the other alkali metals, so it readily transforms into rubidium oxide, a yellow solid with the chemical formula Rb2O.
Strontium
Strontium is the second element placed in the 5th period. It is an alkaline earth metal, a relatively reactive group, although not nearly as reactive as the alkali metals. Like rubidium, it has 5 electron shells or energy levels, and in accordance with the Madelung rule it has two electrons in its 5s subshell. Strontium is a soft metal that reacts vigorously with water, combining with oxygen and hydrogen to form strontium hydroxide and hydrogen gas, which quickly diffuses into the air (Sr + 2 H2O → Sr(OH)2 + H2). In addition, strontium, like rubidium, oxidizes in air and takes on a yellowish color. When ignited, it burns with a strong red flame.
d-block elements
Yttrium
Yttrium is a chemical element with symbol Y and atomic number 39. It is a silvery-metallic transition metal chemically similar to the lanthanides and it has often been classified as a "rare earth element". Yttrium is almost always found combined with the lanthanides in rare earth minerals and is never found in nature as a free element. Its only stable isotope, 89Y, is also its only naturally occurring isotope.
In 1787, Carl Axel Arrhenius found a new mineral near Ytterby in Sweden and named it ytterbite, after the village. Johan Gadolin discovered yttrium's oxide in Arrhenius' sample in 1789, and Anders Gustaf Ekeberg named the new oxide yttria. Elemental yttrium was first isolated in 1828 by Friedrich Wöhler.
The most important use of yttrium is in making phosphors, such as the red ones used in television set cathode-ray tube (CRT) displays and in LEDs. Other uses include the production of electrodes, electrolytes, electronic filters, lasers and superconductors; various medical applications; and as traces in various materials to enhance their properties. Yttrium has no known biological role, and exposure to yttrium compounds can cause lung disease in humans.
Zirconium
Zirconium is a chemical element with the symbol Zr and atomic number 40. The name of zirconium is taken from the mineral zircon. Its atomic mass is 91.224. It is a lustrous, gray-white, strong transition metal that resembles titanium. Zirconium is mainly used as a refractory and opacifier, although minor amounts are used as alloying agent for its strong resistance to corrosion. Zirconium is obtained mainly from the mineral zircon, which is the most important form of zirconium in use.
Zirconium forms a variety of inorganic and organometallic compounds such as zirconium dioxide and zirconocene dichloride, respectively. Five isotopes occur naturally, three of which are stable. Zirconium compounds have no biological role.
Niobium
Niobium, or columbium, is a chemical element with the symbol Nb and atomic number 41. It is a soft, grey, ductile transition metal, which is often found in the pyrochlore mineral, the main commercial source for niobium, and columbite. The name comes from Greek mythology: Niobe, daughter of Tantalus.
Niobium has physical and chemical properties similar to those of the element tantalum, and the two are therefore difficult to distinguish. The English chemist Charles Hatchett reported a new element similar to tantalum in 1801, and named it columbium. In 1809, the English chemist William Hyde Wollaston wrongly concluded that tantalum and columbium were identical. The German chemist Heinrich Rose determined in 1846 that tantalum ores contain a second element, which he named niobium. In 1864 and 1865, a series of scientific findings clarified that niobium and columbium were the same element (as distinguished from tantalum), and for a century both names were used interchangeably. The name of the element was officially adopted as niobium in 1949.
It was not until the early 20th century that niobium was first used commercially. Brazil is the leading producer of niobium and ferroniobium, an alloy of niobium and iron. Niobium is used mostly in alloys, the largest part in special steel such as that used in gas pipelines. Although alloys contain only a maximum of 0.1%, that small percentage of niobium improves the strength of the steel. The temperature stability of niobium-containing superalloys is important for its use in jet and rocket engines. Niobium is used in various superconducting materials. These superconducting alloys, also containing titanium and tin, are widely used in the superconducting magnets of MRI scanners. Other applications of niobium include its use in welding, nuclear industries, electronics, optics, numismatics and jewelry. In the last two applications, niobium's low toxicity and ability to be colored by anodization are particular advantages.
Molybdenum
Molybdenum is a Group 6 chemical element with the symbol Mo and atomic number 42. The name is from Neo-Latin molybdaenum, from Ancient Greek molybdos, meaning lead, itself proposed as a loanword from Anatolian Luvian and Lydian languages, since its ores were confused with lead ores. The free element, which is a silvery metal, has the sixth-highest melting point of any element. It readily forms hard, stable carbides, and for this reason it is often used in high-strength steel alloys. Molybdenum does not occur as a free metal on Earth, but rather in various oxidation states in minerals. Industrially, molybdenum compounds are used in high-pressure and high-temperature applications, as pigments and catalysts.
Molybdenum minerals have long been known, but the element was "discovered" (in the sense of differentiating it as a new entity from the mineral salts of other metals) in 1778 by Carl Wilhelm Scheele. The metal was first isolated in 1781 by Peter Jacob Hjelm.
Most molybdenum compounds have low solubility in water, but the molybdate ion MoO42− is soluble and forms when molybdenum-containing minerals are in contact with oxygen and water.
Technetium
Technetium is the chemical element with atomic number 43 and symbol Tc. It is the lowest atomic number element without any stable isotopes; every form of it is radioactive. Nearly all technetium is produced synthetically and only minute amounts are found in nature. Naturally occurring technetium occurs as a spontaneous fission product in uranium ore or by neutron capture in molybdenum ores. The chemical properties of this silvery gray, crystalline transition metal are intermediate between rhenium and manganese.
Many of technetium's properties were predicted by Dmitri Mendeleev before the element was discovered. Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937 technetium (specifically the technetium-97 isotope) became the first predominantly artificial element to be produced, hence its name (from the Greek , meaning "artificial").
Its short-lived gamma ray-emitting nuclear isomer—technetium-99m—is used in nuclear medicine for a wide variety of diagnostic tests. Technetium-99 is used as a gamma ray-free source of beta particles. Long-lived technetium isotopes produced commercially are by-products of fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because no isotope of technetium has a half-life longer than 4.2 million years (technetium-98), its detection in 1952 in red giants, which are billions of years old, helped bolster the theory that stars can produce heavier elements.
Ruthenium
Ruthenium is a chemical element with symbol Ru and atomic number 44. It is a rare transition metal belonging to the platinum group of the periodic table. Like the other metals of the platinum group, ruthenium is inert to most chemicals. The Russian scientist Karl Ernst Claus discovered the element in 1844 and named it after Ruthenia, the Latin word for Rus'. Ruthenium usually occurs as a minor component of platinum ores and its annual production is only about 12 tonnes worldwide. Most ruthenium is used for wear-resistant electrical contacts and the production of thick-film resistors. A minor application of ruthenium is its use in some platinum alloys.
Rhodium
Rhodium is a chemical element that is a rare, silvery-white, hard, and chemically inert transition metal and a member of the platinum group. It has the chemical symbol Rh and atomic number 45. It is composed of only one isotope, 103Rh. Naturally occurring rhodium is found as the free metal, alloyed with similar metals, and never as a chemical compound. It is one of the rarest precious metals and among the most costly, although gold has since overtaken it in cost per ounce.
Rhodium is a so-called noble metal, resistant to corrosion, found in platinum or nickel ores together with the other members of the platinum group metals. It was discovered in 1803 by William Hyde Wollaston in one such ore, and named for the rose color of one of its chlorine compounds, produced after it reacted with the powerful acid mixture aqua regia.
The element's major use (about 80% of world rhodium production) is as one of the catalysts in the three-way catalytic converters of automobiles. Because rhodium metal is inert against corrosion and most aggressive chemicals, and because of its rarity, rhodium is usually alloyed with platinum or palladium and applied in high-temperature and corrosion-resistive coatings. White gold is often plated with a thin rhodium layer to improve its optical impression while sterling silver is often rhodium plated for tarnish resistance.
Rhodium detectors are used in nuclear reactors to measure the neutron flux level.
Palladium
Palladium is a chemical element with the chemical symbol Pd and an atomic number of 46. It is a rare and lustrous silvery-white metal discovered in 1803 by William Hyde Wollaston. He named it after the asteroid Pallas, which was itself named after the epithet of the Greek goddess Athena, acquired by her when she slew Pallas. Palladium, platinum, rhodium, ruthenium, iridium and osmium form a group of elements referred to as the platinum group metals (PGMs). These have similar chemical properties, but palladium has the lowest melting point and is the least dense of them.
The unique properties of palladium and other platinum group metals account for their widespread use. A quarter of all goods manufactured today either contain PGMs or have a significant part in their manufacturing process played by PGMs. Over half of the supply of palladium and its congener platinum goes into catalytic converters, which convert up to 90% of harmful gases from auto exhaust (hydrocarbons, carbon monoxide, and nitrogen dioxide) into less-harmful substances (nitrogen, carbon dioxide and water vapor). Palladium is also used in electronics, dentistry, medicine, hydrogen purification, chemical applications, and groundwater treatment. Palladium plays a key role in the technology used for fuel cells, which combine hydrogen and oxygen to produce electricity, heat, and water.
Ore deposits of palladium and other PGMs are rare, and the most extensive deposits have been found in the norite belt of the Bushveld Igneous Complex covering the Transvaal Basin in South Africa, the Stillwater Complex in Montana, United States, the Thunder Bay District of Ontario, Canada, and the Norilsk Complex in Russia. Recycling is also a source of palladium, mostly from scrapped catalytic converters. The numerous applications and limited supply sources of palladium result in the metal attracting considerable investment interest.
Silver
Silver is a metallic chemical element with the chemical symbol Ag (from Latin argentum, from the Indo-European root *arg- for "grey" or "shining") and atomic number 47. A soft, white, lustrous transition metal, it has the highest electrical conductivity of any element and the highest thermal conductivity of any metal. The metal occurs naturally in its pure, free form (native silver), as an alloy with gold and other metals, and in minerals such as argentite and chlorargyrite. Most silver is produced as a byproduct of copper, gold, lead, and zinc refining.
Silver has long been valued as a precious metal, and it is used to make ornaments, jewelry, high-value tableware, utensils (hence the term silverware), and currency coins. Today, silver metal is also used in electrical contacts and conductors, in mirrors and in catalysis of chemical reactions. Its compounds are used in photographic film, and dilute silver nitrate solutions and other silver compounds are used as disinfectants and microbiocides. While many medical antimicrobial uses of silver have been supplanted by antibiotics, further research into clinical potential continues.
Cadmium
Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, bluish-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it prefers oxidation state +2 in most of its compounds and like mercury it shows a low melting point compared to transition metals. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in the Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate.
Cadmium occurs as a minor component in most zinc ores and therefore is a byproduct of zinc production. It was used for a long time as a pigment and for corrosion resistant plating on steel while cadmium compounds were used to stabilize plastic. With the exception of its use in nickel–cadmium batteries and cadmium telluride solar panels, the use of cadmium is generally decreasing. These declines have been due to competing technologies, cadmium's toxicity in certain forms and concentration and resulting regulations.
p-block elements
Indium
Indium is a chemical element with the symbol In and atomic number 49. This rare, very soft, malleable and easily fusible post-transition metal is chemically similar to gallium and thallium, and shows intermediate properties between these two. Indium was discovered in 1863 and named for the indigo blue line in its spectrum that was the first indication of its existence in zinc ores, as a new and unknown element. The metal was first isolated in the following year. Zinc ores continue to be the primary source of indium, where it is found in compound form. Very rarely the element can be found as grains of native (free) metal, but these are not of commercial importance.
Indium's current primary application is to form transparent electrodes from indium tin oxide in liquid crystal displays and touchscreens, and this use largely determines its global mining production. It is widely used in thin-films to form lubricated layers (during World War II it was widely used to coat bearings in high-performance aircraft). It is also used for making particularly low melting point alloys, and is a component in some lead-free solders.
Indium is not known to be used by any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to the kidney when given by injection, but oral indium compounds do not have the chronic toxicity of salts of heavy metals, probably due to poor absorption in basic conditions. Radioactive indium-111 (in very small amounts on a chemical basis) is used in nuclear medicine tests, as a radiotracer to follow the movement of labeled proteins and white blood cells in the body.
Tin
Tin is a chemical element with the symbol Sn (from the Latin stannum) and atomic number 50. It is a main-group metal in group 14 of the periodic table. Tin shows chemical similarity to both neighboring group 14 elements, germanium and lead, and has two possible oxidation states, +2 and the slightly more stable +4. Tin is the 49th most abundant element and has, with 10 stable isotopes, the largest number of stable isotopes of any element in the periodic table. Tin is obtained chiefly from the mineral cassiterite, where it occurs as tin dioxide, SnO2.
This silvery, malleable post-transition metal is not easily oxidized in air and is used to coat other metals to prevent corrosion. The first alloy used on a large scale, since 3000 BC, was bronze, an alloy of tin and copper. After 600 BC pure metallic tin was produced. Pewter, an alloy of 85–90% tin with the remainder commonly consisting of copper, antimony and lead, was used for tableware from the Bronze Age until the 20th century. In modern times tin is used in many alloys, most notably tin/lead soft solders, typically containing 60% or more of tin. Another large application for tin is corrosion-resistant tin plating of steel. Because of its low toxicity, tin-plated metal is also used for food packaging, giving the name to tin cans, which are made mostly of steel.
Antimony
Antimony () is a toxic chemical element with the symbol Sb and an atomic number of 51. A lustrous grey metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were used for cosmetics; metallic antimony was also known, but was mostly misidentified as lead.
For some time China has been the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. Antimony compounds are prominent additives for chlorine- and bromine-containing fire retardants found in many commercial and domestic products. The largest application for metallic antimony is as an alloying material for lead and tin, improving the properties of alloys used in solders, bullets and ball bearings. An emerging application is the use of antimony in microelectronics.
Tellurium
Tellurium is a chemical element that has the symbol Te and atomic number 52. A brittle, mildly toxic, rare, silver-white metalloid which looks similar to tin, tellurium is chemically related to selenium and sulfur. It is occasionally found in native form, as elemental crystals. Tellurium is far more common in the universe than on Earth. Its extreme rarity in the Earth's crust, comparable to that of platinum, is partly due to its high atomic number, but also due to its formation of a volatile hydride which caused the element to be lost to space as a gas during the hot nebular formation of the planet.
Tellurium was discovered in Transylvania (today part of Romania) in 1782 by Franz-Joseph Müller von Reichenstein in a mineral containing tellurium and gold. Martin Heinrich Klaproth named the new element in 1798 after the Latin word for "earth", tellus. Gold telluride minerals (responsible for the name of Telluride, Colorado) are the most notable natural gold compounds. However, they are not a commercially significant source of tellurium itself, which is normally extracted as by-product of copper and lead production.
Tellurium is primarily used commercially in alloys, foremost in steel and copper, to improve machinability. Applications in solar panels and as a semiconductor material also consume a considerable fraction of tellurium production.
Iodine
Iodine is a chemical element with the symbol I and atomic number 53. The name is from Greek ioeidēs, meaning violet or purple, due to the color of elemental iodine vapor.
Iodine and its compounds are primarily used in nutrition, and industrially in the production of acetic acid and certain polymers. Iodine's relatively high atomic number, low toxicity, and ease of attachment to organic compounds have made it a part of many X-ray contrast materials in modern medicine. Iodine has only one stable isotope. A number of iodine radioisotopes are also used in medical applications.
Iodine is found on Earth mainly as the highly water-soluble iodide I−, which concentrates it in oceans and brine pools. Like the other halogens, free iodine occurs mainly as a diatomic molecule I2, and then only momentarily after being oxidized from iodide by an oxidant like free oxygen. In the universe and on Earth, iodine's high atomic number makes it a relatively rare element. However, its presence in ocean water has given it a role in biology (see below).
Xenon
Xenon is a chemical element with the symbol Xe and atomic number 54. A colorless, heavy, odorless noble gas, xenon occurs in the Earth's atmosphere in trace amounts. Although generally unreactive, xenon can undergo a few chemical reactions such as the formation of xenon hexafluoroplatinate, the first noble gas compound to be synthesized.
Naturally occurring xenon consists of nine stable isotopes. There are also over 40 unstable isotopes that undergo radioactive decay. The isotope ratios of xenon are an important tool for studying the early history of the Solar System. Radioactive xenon-135 is produced from iodine-135 as a result of nuclear fission, and it acts as the most significant neutron absorber in nuclear reactors.
Xenon is used in flash lamps and arc lamps, and as a general anesthetic. The first excimer laser design used a xenon dimer molecule (Xe2) as its lasing medium, and the earliest laser designs used xenon flash lamps as pumps. Xenon is also being used to search for hypothetical weakly interacting massive particles and as the propellant for ion thrusters in spacecraft.
Biological role
Rubidium, strontium, yttrium, zirconium, and niobium have no biological role. Yttrium can cause lung disease in humans.
Molybdenum-containing enzymes are used as catalysts by some bacteria to break the chemical bond in atmospheric molecular nitrogen, allowing biological nitrogen fixation. At least 50 molybdenum-containing enzymes are now known in bacteria and animals, though only the bacterial and cyanobacterial enzymes are involved in nitrogen fixation. Owing to the diverse functions of the remainder of the enzymes, molybdenum is a required element for life in higher organisms (eukaryotes), though not in all bacteria.
Technetium, ruthenium, rhodium, palladium, and silver have no biological role. Although cadmium has no known biological role in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. Rats fed a tin-free diet exhibited impaired growth, but the evidence for tin's essentiality is otherwise limited. Indium and antimony have no biological role, and both can be toxic.
Tellurium has no biological role, although fungi can incorporate it in place of sulfur and selenium into amino acids such as tellurocysteine and telluromethionine. In humans, tellurium is partly metabolized into dimethyl telluride, (CH3)2Te, a gas with a garlic-like odor which is exhaled in the breath of victims of tellurium toxicity or exposure.
Iodine is the heaviest essential element utilized widely by life in biological functions (only tungsten, employed in enzymes by a few species of bacteria, is heavier). Iodine's rarity in many soils, due to its initially low abundance in the crust and to the leaching of soluble iodide by rainwater, has led to many deficiency problems in land animals and inland human populations. Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities. Iodine is required by higher animals, which use it to synthesize thyroid hormones, which contain the element. Because of this function, radioisotopes of iodine are concentrated in the thyroid gland along with nonradioactive iodine. The radioisotope iodine-131, which has a high fission product yield, concentrates in the thyroid, and is one of the most carcinogenic of nuclear fission products.
Xenon has no biological role, and is used as a general anaesthetic.
| Physical sciences | Periods | Chemistry |
161291 | https://en.wikipedia.org/wiki/Noble%20metal | Noble metal | A noble metal is ordinarily regarded as a metallic element that is generally resistant to corrosion and is usually found in nature in its raw form. Gold, platinum, and the other platinum group metals (ruthenium, rhodium, palladium, osmium, iridium) are most often so classified. Silver, copper, and mercury are sometimes included as noble metals, but each of these usually occurs in nature combined with sulfur.
In more specialized fields of study and applications the number of elements counted as noble metals can be smaller or larger. It is sometimes used for the three metals copper, silver, and gold which have filled d-bands, while it is often used mainly for silver and gold when discussing surface-enhanced Raman spectroscopy involving metal nanoparticles. It is sometimes applied more broadly to any metallic or semimetallic element that does not react with a weak acid and give off hydrogen gas in the process. This broader set includes copper, mercury, technetium, rhenium, arsenic, antimony, bismuth, polonium, gold, the six platinum group metals, and silver.
Many of the noble metals are used in alloys for jewelry or coinage. In dentistry, silver is not always considered a noble metal because it is subject to corrosion when present in the mouth. All the metals are important heterogeneous catalysts.
Meaning and history
While lists of noble metals can differ, they tend to cluster around gold and the six platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum.
In addition to this term's function as a compound noun, there are circumstances where noble is used as an adjective for the noun metal. A galvanic series is a hierarchy of metals (or other electrically conductive materials, including composites and semimetals) that runs from noble to active, and allows one to predict how materials will interact in the environment used to generate the series. In this sense of the word, graphite is more noble than silver and the relative nobility of many materials is highly dependent upon context, as for aluminium and stainless steel in conditions of varying pH.
The term noble metal can be traced back to at least the late 14th century and has slightly different meanings in different fields of study and application.
Prior to Mendeleev's publication in 1869 of the first (eventually) widely accepted periodic table, Odling published a table in 1864, in which the "noble metals" rhodium, ruthenium, palladium; and platinum, iridium, and osmium were grouped together, and adjacent to silver and gold.
Properties
Geochemical
The noble metals are siderophiles (iron-lovers). They tend to sink into the Earth's core because they dissolve readily in iron either as solid solutions or in the molten state. Most siderophile elements have practically no affinity whatsoever for oxygen: indeed, oxides of gold are thermodynamically unstable with respect to the elements.
Copper, silver, gold, and the six platinum group metals are the only native metals that occur naturally in relatively large amounts.
Corrosion resistance
Noble metals tend to be resistant to oxidation and other forms of corrosion, and this corrosion resistance is often considered to be a defining characteristic. Some exceptions are described below.
Copper is dissolved by nitric acid and aqueous potassium cyanide.
Ruthenium can be dissolved in aqua regia, a highly concentrated mixture of hydrochloric acid and nitric acid, only in the presence of oxygen, while rhodium must be in finely pulverized form. Palladium and silver are soluble in nitric acid, while silver's solubility in aqua regia is limited by the formation of silver chloride precipitate.
Rhenium reacts with oxidizing acids and hydrogen peroxide, and is said to be tarnished by moist air. Osmium and iridium are chemically inert in ambient conditions. Platinum and gold can be dissolved in aqua regia. Mercury reacts with oxidizing acids.
In 2010, US researchers discovered that an organic "aqua regia" in the form of a mixture of thionyl chloride SOCl2 and the organic solvent pyridine C5H5N achieved "high dissolution rates of noble metals under mild conditions, with the added benefit of being tunable to a specific metal" for example, gold but not palladium or platinum.
Gold can, however, be dissolved in selenic acid (H2SeO4).
Anion (-ide) formation
The noble metals gold and platinum have comparatively high electronegativities for metallic elements, allowing them to exist as monometallic anions.
For example:
Cs + Au -> CsAu
(caesium auride, a yellow crystalline salt containing the Au− ion). Platinum exhibits similar properties, forming
BaPt, BaPt2, and Cs2Pt (barium and caesium platinides, which are reddish salts).
Electronic
The expression noble metal is sometimes confined to copper, silver, and gold since their full d-subshells can contribute to their noble character. There are also known to be significant contributions from how readily there is overlap of the d-electron states with the orbitals of other elements, particularly for gold. Relativistic contributions are also important, playing a role in the catalytic properties of gold.
The elements to the left of gold and silver have incompletely filled d-bands, which is believed to play a role in their catalytic properties. A common explanation is the d-band model of Hammer and Nørskov, where the total d-bands are considered, not just the unoccupied states.
The low-energy plasmon properties are also of some importance, particularly those of silver and gold nanoparticles for surface-enhanced Raman spectroscopy, localized surface plasmons and other plasmonic properties.
Electrochemical
Standard reduction potentials in aqueous solution are also a useful way of predicting the non-aqueous chemistry of the metals involved. Thus, metals with highly negative potentials, such as sodium or potassium, will ignite in air, forming the respective oxides. These fires cannot be extinguished with water, which also reacts with the metals involved to give hydrogen, which is itself explosive. Noble metals, in contrast, are disinclined to react with oxygen and, for that reason (as well as their scarcity), have been valued for millennia and used in jewellery and coins.
The adjacent table lists standard reduction potential in volts; electronegativity (revised Pauling); and electron affinity values (kJ/mol), for some metals and metalloids.
The simplified entries in the reaction column can be read in detail from the Pourbaix diagrams of the considered element in water. Noble metals have large positive potentials; elements not in this table have a negative standard potential or are not metals.
Electronegativity is included since it is reckoned to be "a major driver of metal nobleness and reactivity".
The black tarnish commonly seen on silver arises from its sensitivity to sulfur-containing gases such as hydrogen sulfide:
4 Ag + 2 H2S + O2 → 2 Ag2S + 2 H2O.
Rayner-Canham contends that "silver is so much more chemically-reactive and has such a different chemistry, that it should not be considered as a 'noble metal'." In dentistry, silver is not regarded as a noble metal due to its tendency to corrode in the oral environment.
The relevance of the entry for water is addressed by Li et al. in the context of galvanic corrosion. Such a process will only occur when:
"(1) two metals which have different electrochemical potentials are...connected, (2) an aqueous phase with electrolyte exists, and (3) one of the two metals has...potential lower than the potential of the reaction ( + 4e + = 4 OH•) which is 0.4 V...The...metal with...a potential less than 0.4 V acts as an anode...loses electrons...and dissolves in the aqueous medium. The noble metal (with higher electrochemical potential) acts as a cathode and, under many conditions, the reaction on this electrode is generally − 4 e• − = 4 OH•)."
The superheavy elements from hassium (element 108) to livermorium (116) inclusive are expected to be "partially very noble metals"; chemical investigations of hassium have established that it behaves like its lighter congener osmium, and preliminary investigations of nihonium and flerovium have suggested but not definitively established noble behavior. Copernicium's behaviour seems to partly resemble both its lighter congener mercury and the noble gas radon.
Oxides
As long ago as 1890, Hiorns observed as follows:
"Noble Metals. Gold, Platinum, Silver, and a few rare metals. The members of this class have little or no tendency to unite with oxygen in the free state, and when placed in water at a red heat do not alter its composition. The oxides are readily decomposed by heat in consequence of the feeble affinity between the metal and oxygen."
Smith, writing in 1946, continued the theme:
"There is no sharp dividing line [between 'noble metals' and 'base metals'] but perhaps the best definition of a noble metal is a metal whose oxide is easily decomposed at a temperature below a red heat."
"It follows from this that noble metals...have little attraction for oxygen and are consequently not oxidised or discoloured at moderate temperatures."
Such nobility is mainly associated with the relatively high electronegativity values of the noble metals, resulting in only weakly polar covalent bonding with oxygen. The table lists the melting points of the oxides of the noble metals, and for some of those of the non-noble metals, for the elements in their most stable oxidation states.
Catalytic properties
All the noble metals can act as catalysts. For example, platinum is used in catalytic converters, devices which convert toxic gases produced in car engines, such as the oxides of nitrogen, into non-polluting substances.
Gold has many industrial applications; it is used as a catalyst in hydrogenation and the water gas shift reaction.
| Physical sciences | d-Block | Chemistry |
161293 | https://en.wikipedia.org/wiki/Amphoterism | Amphoterism | In chemistry, an amphoteric compound () is a molecule or ion that can react both as an acid and as a base. What exactly this can mean depends on which definitions of acids and bases are being used.
Etymology and terminology
Amphoteric is derived from the Greek word amphoteroi, meaning "both". Related words in acid-base chemistry are amphichromatic and amphichroic, both describing substances such as acid-base indicators which give one colour on reaction with an acid and another colour on reaction with a base.
Amphiprotism
Amphiprotism is exhibited by compounds with both Brønsted acidic and basic properties. A prime example is H2O.
Amphiprotic molecules can either donate or accept a proton (H+). Amino acids (and proteins) are amphiprotic molecules because of their amine (-NH2) and carboxylic acid (-COOH) groups.
Ampholytes
Ampholytes are molecules or ions that contain both acidic and basic functional groups, and they can exist as zwitterions. Amino acids have both a basic amine group and an acidic carboxylic acid group. Such species often exist as several structures in chemical equilibrium:
H2N-RCH-CO2H + H2O <=> H2N-RCH-COO- + H3O+ <=> H3N+-RCH-COOH + OH- <=> H3N+-RCH-COO- + H2O
In approximately neutral aqueous solution (pH ≅ 7), the basic amino group is mostly protonated and the carboxylic acid is mostly deprotonated, so that the predominant species is the zwitterion H3N+-RCH-COO-. The pH at which the average charge is zero is known as the molecule's isoelectric point. Ampholytes are used to establish a stable pH gradient for use in isoelectric focusing.
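As a quick numerical illustration of the isoelectric point (a sketch, not taken from this article): for a simple ampholyte with exactly one acidic and one basic group, pI is the average of the two pKa values. The glycine pKa values below are approximate literature values, used here only as an assumption for the example.

```python
# Minimal sketch, assuming a simple amino acid with exactly two ionizable
# groups; its isoelectric point is then the average of the two pKa values.
def isoelectric_point(pka_cooh: float, pka_nh3: float) -> float:
    """pH at which the ampholyte's average charge is zero."""
    return (pka_cooh + pka_nh3) / 2.0

# Glycine, with approximate literature pKa values of 2.34 (-COOH) and
# 9.60 (-NH3+), has a pI near 5.97.
print(isoelectric_point(2.34, 9.60))
```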
Metal oxides which react with both acids and bases to produce salts and water are known as amphoteric oxides. Many metals (such as zinc, tin, lead, aluminium, and beryllium) form amphoteric oxides or hydroxides. Aluminium oxide (Al2O3) is an example of an amphoteric oxide. Amphoterism depends on the oxidation state of the oxide. Amphoteric oxides include lead(II) oxide and zinc oxide, among many others.
Amphiprotic molecules
According to the Brønsted-Lowry theory of acids and bases, acids are proton donors and bases are proton acceptors. An amphiprotic molecule (or ion) can either donate or accept a proton, thus acting either as an acid or a base. Water, amino acids, the hydrogencarbonate (bicarbonate) ion HCO3-, the dihydrogen phosphate ion H2PO4-, and the hydrogensulfate (bisulfate) ion HSO4- are common examples of amphiprotic species. Since they can donate a proton, all amphiprotic substances contain a hydrogen atom. Also, since they can act like an acid or a base, they are amphoteric.
Examples
The water molecule is amphoteric in aqueous solution. It can either gain a proton to form a hydronium ion H3O+, or else lose a proton to form a hydroxide ion OH-.
Another possibility is the molecular autoionization reaction between two water molecules, in which one water molecule acts as an acid and another as a base.
H2O + H2O <=> H3O+ + OH-
The bicarbonate ion, HCO3-, is amphoteric as it can act as either an acid or a base:
As an acid, losing a proton: HCO3- + OH- <=> CO3^2- + H2O
As a base, accepting a proton: HCO3- + H+ <=> H2CO3
Note: in dilute aqueous solution the formation of the hydronium ion, H3O+, is effectively complete, so that hydration of the proton can be ignored in relation to the equilibria.
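Which of the two roles bicarbonate actually plays depends on the pH of the solution. The sketch below applies that idea to decide which carbonate species predominates; the carbonic acid pKa values are approximate literature values assumed for illustration, not figures from this article.

```python
# Hedged sketch, assuming approximate pKa values of 6.35 (H2CO3/HCO3-)
# and 10.33 (HCO3-/CO3^2-) for the carbonic acid system.
PKA1, PKA2 = 6.35, 10.33

def dominant_carbonate_species(ph: float) -> str:
    """Which carbonate species predominates at a given pH."""
    if ph < PKA1:
        return "H2CO3"   # bicarbonate has accepted a proton (acted as a base)
    if ph < PKA2:
        return "HCO3-"   # the amphiprotic ion itself predominates
    return "CO3^2-"      # bicarbonate has donated a proton (acted as an acid)

print(dominant_carbonate_species(5.0))   # -> H2CO3
print(dominant_carbonate_species(7.4))   # -> HCO3-
print(dominant_carbonate_species(12.0))  # -> CO3^2-
```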
Other examples of inorganic polyprotic acids include anions of sulfuric acid, phosphoric acid and hydrogen sulfide that have lost one or more protons. In organic chemistry and biochemistry, important examples include amino acids and derivatives of citric acid.
Although an amphiprotic species must be amphoteric, the converse is not true. For example, a metal oxide such as zinc oxide, ZnO, contains no hydrogen and so cannot donate a proton. Nevertheless, it can act as an acid by reacting with the hydroxide ion, a base:
ZnO + 2 OH- + H2O -> [Zn(OH)4]^2-
Zinc oxide can also act as a base, reacting with acids:
ZnO + 2 H+ -> Zn^2+ + H2O
Oxides
Zinc oxide (ZnO) reacts both with acids and with bases:
ZnO + \overset{acid}{H2SO4} -> ZnSO4 + H2O
ZnO + \overset{base}{2 NaOH} + H2O -> Na2[Zn(OH)4]
This reactivity can be used to separate different cations, for instance zinc(II), which dissolves in base, from manganese(II), which does not dissolve in base.
Lead oxide (PbO):
PbO + \overset{acid}{2 HCl} -> PbCl2 + H2O
PbO + \overset{base}{2 NaOH} + H2O -> Na2[Pb(OH)4]
Lead dioxide (PbO2):
PbO2 + \overset{acid}{4 HCl} -> PbCl4 + 2H2O
PbO2 + \overset{base}{2 NaOH} + 2H2O -> Na2[Pb(OH)6]
Aluminium oxide (Al2O3):
Al2O3 + \overset{acid}{6 HCl} -> 2 AlCl3 + 3 H2O
Al2O3 + \overset{base}{2 NaOH} + 3 H2O -> 2 Na[Al(OH)4] (hydrated sodium aluminate)
Stannous oxide (SnO):
SnO + \overset{acid}{2 HCl} <=> SnCl2 + H2O
SnO + \overset{base}{4 NaOH} + H2O <=> Na4[Sn(OH)6]
Stannic oxide (SnO2):
SnO2 + \overset{acid}{4 HCl} <=> SnCl4 + 2H2O
SnO2 + \overset{base}{4 NaOH} + 2H2O <=> Na4[Sn(OH)8]
Vanadium dioxide (VO2):
VO2 + \overset{acid}{2 HCl} -> VOCl2 + H2O
4 VO2 + \overset{base}{2 NaOH} -> Na2V4O9 + H2O
Some other elements which form amphoteric oxides are gallium, indium, scandium, titanium, zirconium, chromium, iron, cobalt, copper, silver, gold, germanium, antimony, bismuth, beryllium, and tellurium.
Hydroxides
Aluminium hydroxide is also amphoteric:
Al(OH)3 + \overset{acid}{3 HCl} -> AlCl3 + 3 H2O
Al(OH)3 + \overset{base}{NaOH} -> Na[Al(OH)4]
Beryllium hydroxide:
Be(OH)2 + \overset{acid}{2 HCl} -> BeCl2 + 2 H2O
Be(OH)2 + \overset{base}{2 NaOH} -> Na2[Be(OH)4]
Chromium hydroxide:
Cr(OH)3 + \overset{acid}{3 HCl} -> CrCl3 + 3H2O
Cr(OH)3 + \overset{base}{NaOH} -> Na[Cr(OH)4]
| Physical sciences | Concepts | Chemistry |
161296 | https://en.wikipedia.org/wiki/Colony%20%28biology%29 | Colony (biology) | In biology, a colony is composed of two or more conspecific individuals living in close association with, or connected to, one another. This association is usually for mutual benefit such as stronger defense or the ability to attack bigger prey.
Colonies can form in various shapes and ways depending on the organism involved. For instance, the bacterial colony is a cluster of identical cells (clones). These colonies often form and grow on the surface of (or within) a solid medium, usually derived from a single parent cell.
Colonies, in the context of development, may be composed of two or more unitary (or solitary) organisms or be modular organisms. Unitary organisms have determinate development (set life stages) from zygote to adult form and individuals or groups of individuals (colonies) are visually distinct. Modular organisms have indeterminate growth forms (life stages not set) through repeated iteration of genetically identical modules (or individuals), and it can be difficult to distinguish between the colony as a whole and the modules within. In the latter case, modules may have specific functions within the colony.
In contrast, solitary organisms do not form colonies; each individual lives independently and has all of the functions needed to survive and reproduce.
Some organisms are primarily independent and form facultative colonies in response to environmental conditions, while others must live in a colony to survive (obligate colonial organisms). For example, some carpenter bees will form colonies when a dominant hierarchy is formed between two or more nest foundresses (a facultative colony), while corals are animals physically connected by living tissue (the coenosarc) that contains a shared gastrovascular cavity.
Colony types
Social colonies
Unicellular and multicellular unitary organisms may aggregate to form colonies. For example,
Protists such as slime molds are unicellular organisms that aggregate to form colonies when food resources are hard to come by, as together they are more reactive to chemical cues released by preferred prey.
Eusocial insects like ants and honey bees are multicellular animals that live in colonies with a highly organized social structure. Colonies of some social insects may be deemed superorganisms.
Animals, such as humans and rodents, form breeding or nesting colonies, potentially for more successful mating and to better protect offspring.
The Bracken Cave is the summer home to a colony of around 20 million Mexican free-tailed bats, making it the largest known concentration of mammals.
Modular organisms
Modular organisms are those in which a genet (or genetic individual formed from a sexually-produced zygote) asexually reproduces to form genetically identical clones called ramets.
A clonal colony is when the ramets of a genet live in close proximity or are physically connected. Ramets may have all of the functions needed to survive on their own or be interdependent on other ramets. For example, some sea anemones go through the process of pedal laceration in which a genetically identical individual is asexually produced from tissue broken off from the anemone's pedal disc. In plants, clonal colonies are created through the propagation of genetically identical individuals by stolons or rhizomes.
Colonial organisms are clonal colonies composed of many physically connected, interdependent individuals. The subunits of colonial organisms can be unicellular, as in the alga Volvox (a coenobium), or multicellular, as in the phylum Bryozoa. Colonial organisms may have been the first step toward multicellular organisms. Individuals within a multicellular colonial organism may be called ramets, modules, or zooids. Structural and functional variation (polymorphism), when present, designates ramet responsibilities such as feeding, reproduction, and defense. To that end, being physically connected allows the colonial organism to distribute nutrients and energy obtained by feeding zooids throughout the colony. The hydrozoan Portuguese man o' war is a classic example of a colonial organism, one of many in the taxonomic class.
Microbial colonies
A microbial colony is defined as a visible cluster of microorganisms growing on the surface of or within a solid medium, presumably cultured from a single cell. Because the colony is clonal, with all organisms in it descending from a single ancestor (assuming no contamination), they are genetically identical, except for any mutations (which occur at low frequencies). Obtaining such genetically identical organisms (or pure strains) can be useful; this is done by spreading organisms on a culture plate and starting a new stock from a single resulting colony.
A biofilm is a colony of microorganisms often comprising several species, with properties and capabilities greater than the aggregate of capabilities of the individual organisms.
Colony ontogeny for eusocial insects
Colony ontogeny refers to the developmental process and progression of a colony. It describes the various stages and changes that occur within a colony from its initial formation to its mature state. The exact duration and dynamics of colony ontogeny can vary greatly depending on the species and environmental conditions. Factors such as resource availability, competition, and environmental cues can influence the progression and outcome of colony development.
During colony ontogeny for eusocial insects such as ants and bees, a colony goes through several distinct phases, each characterised by specific behavioural patterns, division of labor, and structural modifications. While the exact details can vary depending on the species, the general progression typically involves a number of well-defined stages, detailed below.
Founding stage
In this initial stage, a single female individual or small group of female individuals, often called the foundress(es), queen(s) (and kings for termites) or primary reproductive(s), establish a new colony. The foundresses build a basic nest structure and begin to lay eggs. The foundresses can also perform non-reproductive tasks at this early stage, such as nursing these first eggs and leaving the nest to gather resources.
Worker emergence
This is also known as the ergonomic stage. As the eggs laid by the foundresses develop, they give rise to the first generation of workers. These workers can assume various tasks, such as foraging, brood care, and nest maintenance. Initially, the worker population is relatively small, and their tasks are not as specialised. As the colony grows, more workers emerge, and the division of labor becomes more pronounced. Some individuals may specialise in tasks like foraging, defense, or tending to the brood, while others may take on general tasks within the nest. These specialised tasks can change throughout the life of a worker.
Reproductive phase
At a certain point in colony ontogeny, usually after a period of growth and maturation, the colony produces reproductives, including new virgin queens (princesses) and males. These individuals have the potential to leave the nest and start new colonies, ensuring the transmission of the gene pool of their natal colony.
Colony death
Over time, colonies may go through a senescence phase where the reproductive output declines, and the colony's overall vitality diminishes. Eventually, the colony may die off or be replaced by a new generation of reproductives. After the death of the queen in a monogyne colony, possible fates other than colony death include serial polygyny (when a virgin queen of the colony replaces the dead queen as the primary reproductive) or colony inheritance (when a worker takes over as primary reproductive).
Life history
Individuals in social colonies and modular organisms benefit from such a lifestyle. For example, it may be easier to seek out food, defend a nesting site, or increase competitive ability against other species. The ability of modular organisms to reproduce asexually in addition to sexually gives them unique benefits that social colonies do not have.
The energy required for sexual reproduction varies based on the frequency and length of reproductive activity, number and size of offspring, and parental care. While solitary individuals bear all of those energy costs, individuals in some social colonies share a portion of those costs.
Modular organisms save energy by using asexual reproduction during their life. Energy reserved in this way allows them to put more energy towards colony growth, regenerating lost modules (due to predation or other cause of death), or response to environmental conditions.
| Biology and health sciences | Ecology | Biology |
15988913 | https://en.wikipedia.org/wiki/Stripping%20%28chemistry%29 | Stripping (chemistry) | Stripping is a physical separation process where one or more components are removed from a liquid stream by a vapor stream. In industrial applications the liquid and vapor streams can have co-current or countercurrent flows. Stripping is usually carried out in either a packed or trayed column.
Theory
Stripping works on the basis of mass transfer. The idea is to make the conditions favorable for the component, A, in the liquid phase to transfer to the vapor phase. This involves a gas–liquid interface that A must cross. The total amount of A that has moved across this boundary can be defined as the flux of A, NA.
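As a rough numerical illustration of the flux idea (a sketch under assumed conditions, not a formula stated in this article): with a linear equilibrium y* = m·x and an overall gas-phase mass-transfer coefficient K_y, the flux can be written N_A = K_y(y* − y).

```python
# Minimal sketch, assuming a linear equilibrium y* = m * x and an overall
# gas-phase mass-transfer coefficient K_y; all numbers are illustrative.
def flux_A(K_y: float, m: float, x_bulk: float, y_bulk: float) -> float:
    """N_A = K_y * (y* - y). A positive value means component A moves
    from the liquid into the vapor, i.e. it is being stripped."""
    y_star = m * x_bulk  # vapor fraction in equilibrium with the bulk liquid
    return K_y * (y_star - y_bulk)

# A dilute volatile solute (m = 50) contacted with clean stripping gas:
print(flux_A(K_y=2.0e-3, m=50.0, x_bulk=1.0e-4, y_bulk=0.0))
```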
Equipment
Stripping is mainly conducted in trayed towers (plate columns) and packed columns, and less often in spray towers, bubble columns, and centrifugal contactors.
Trayed towers consist of a vertical column with liquid flowing in the top and out the bottom. The vapor phase enters in the bottom of the column and exits out of the top. Inside of the column are trays or plates. These trays force the liquid to flow back and forth horizontally while the vapor bubbles up through holes in the trays. The purpose of these trays is to increase the amount of contact area between the liquid and vapor phases.
Packed columns are similar to trayed columns in that the liquid and vapor flows enter and exit in the same manner. The difference is that in packed towers there are no trays. Instead, packing is used to increase the contact area between the liquid and vapor phases. There are many different types of packing used and each one has advantages and disadvantages.
Variables
The variables and design considerations for strippers are many. Among them are the entering conditions, the degree of recovery of the solute needed, the choice of the stripping agent and its flow, the operating conditions, the number of stages, the heat effects, and the type and size of the equipment.
The degree of recovery is often determined by environmental regulations, such as for volatile organic compounds like chloroform.
Frequently, steam, air, inert gases, and hydrocarbon gases are used as stripping agents. This is based on solubility, stability, degree of corrosiveness, cost, and availability. As stripping agents are gases, operation at nearly the highest temperature and lowest pressure that will maintain the components and not vaporize the liquid feed stream is desired. This allows for the minimization of flow. As with all other variables, minimizing cost while achieving efficient separation is the ultimate goal.
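The trade-off between stripping-agent flow and the number of stages can be sketched with the Kremser equation, a standard result for countercurrent staged separations that is not derived in this article. With stripping factor S = K·V/L and solute-free stripping gas entering the column:

```python
# Hedged sketch of the Kremser equation for a countercurrent staged
# stripper fed with solute-free stripping gas; S = K*V/L is the
# stripping factor.
def fraction_remaining(S: float, n_stages: int) -> float:
    """Fraction of solute left in the liquid leaving an n-stage stripper."""
    if abs(S - 1.0) < 1e-12:
        return 1.0 / (n_stages + 1)      # limiting case S = 1
    return (S - 1.0) / (S ** (n_stages + 1) - 1.0)

# Doubling the stripping-agent flow (S: 1 -> 2) does far more for recovery
# here than the same column would gain from one extra stage:
print(fraction_remaining(S=1.0, n_stages=5))  # ~0.167 remaining
print(fraction_remaining(S=2.0, n_stages=5))  # ~0.016 remaining (~98% stripped)
```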
The size of the equipment, and particularly the height and diameter, is important in determining the possibility of flow channeling that would reduce the contact area between the liquid and vapor streams. If flow channeling is suspected to be occurring, a redistribution plate is often necessary to, as the name indicates, redistribute the liquid flow evenly to reestablish a higher contact area.
As mentioned previously, strippers can be trayed or packed. Packed columns, particularly when random packing is used, are usually favored for smaller columns with a diameter less than 2 feet and a packed height of not more than 20 feet. Packed columns can also be advantageous for corrosive fluids, high-foaming fluids, when fluid velocity is high, and when a particularly low pressure drop is desired. Trayed strippers are advantageous because of their ease of design and scale-up. Structured packing can be used in a role similar to trays, even though it may be made of the same material as dumped (random) packing. Using structured packing is a common way to increase the capacity for separation or to replace damaged trays.
Trayed strippers can have sieve, valve, or bubble cap trays, while packed strippers can have either structured or random packing. Trays and packing are used to increase the contact area over which mass transfer can occur, as mass transfer theory dictates. Packings vary in material, surface area, flow area, and associated pressure drop. Older-generation packings include ceramic Raschig rings and Berl saddles. More common packing materials are metal and plastic Pall rings, metal and plastic Białecki rings, and ceramic Intalox saddles. Each packing material of this newer generation improves the surface area, the flow area, and/or the associated pressure drop across the packing. Also important is the ability of the packing material not to stack on top of itself; if such stacking occurs, it drastically reduces the surface area of the material. Work on lattice designs that further improve these characteristics has been increasing of late.
During operation, monitoring the pressure drop across the column can help to determine the performance of the stripper. A changed pressure drop over a significant range of time can be an indication that the packing may need to be replaced or cleaned.
Typical applications
Stripping is commonly used in industrial applications to remove harmful contaminants from waste streams. One example is the removal of TBT and PAH contaminants from harbor soils. The soils are dredged from the bottom of contaminated harbors, mixed with water to make a slurry, and then stripped with steam. The cleaned soil and the contaminant-rich steam are then separated. This process is able to decontaminate the soils almost completely.
Steam is also frequently used as a stripping agent for water treatment. Volatile organic compounds are partially soluble in water and because of environmental considerations and regulations, must be removed from groundwater, surface water, and wastewater. These compounds can be present because of industrial, agricultural, and commercial activity.
| Physical sciences | Other separations | Chemistry |
7167911 | https://en.wikipedia.org/wiki/Pelletizing | Pelletizing | Pelletizing is the process of compressing or molding a material into the shape of a pellet. A wide range of different materials are pelletized including chemicals, iron ore, animal compound feed, plastics, waste materials, and more. The process is considered an excellent option for the storage and transport of said materials. The technology is widely used in the powder metallurgy engineering and medicine industries.
Pelletizing of iron ore
Edward W. Davis of the University of Minnesota is credited with devising the process of pelletizing iron ore.
Pelletizing iron ore is undertaken because of the excellent physical and metallurgical properties of iron ore pellets. Iron ore pellets are small spheres used as raw material for blast furnaces. They typically contain 64–72% Fe and various additional materials adjusting the chemical composition and the metallurgical properties of the pellets. Typically limestone, dolomite and olivine are added, and bentonite is used as a binder.
The process of pelletizing combines mixing of the raw material and forming the pellet with a thermal treatment that bakes the soft raw pellet into a hard sphere. The raw material is rolled into a ball, then fired in a kiln or on a travelling grate to sinter the particles into a hard sphere.
The configuration of iron ore pellets as packed spheres in the blast furnace allows air to flow between the pellets, decreasing the resistance to the air that flows up through the layers of material during the smelting. The configuration of iron ore powder in a blast furnace is more tightly-packed and restricts the air flow. This is the reason that iron ore is preferred in the form of pellets rather than in the form of finer particles. The quality of the iron ore pellets depends on different factors, which include feed particle size, amount of water used, disc rotating speed, inclination angle of the disc bottom, residence time in the disc as well as the quality and quantity of the binder(s) used.
Preparation of raw materials
Additional materials are added to the iron ore (pellet feed) to meet the requirements of the final pellets. This is done by placing the mixture in the pelletizer, which can hold different types of ores and additives, and mixing to adjust the chemical composition and the metallurgical properties of the pellets. In general, the following stages are included in this part of the processing: concentration/separation, homogenization of the substance ratios, milling, classification, thickening, homogenization of the pulp, and filtering.
Formation of the raw pellets
The formation of raw iron ore pellets, also known as pelletizing, has the objective of producing pellets in an appropriate band of sizes and with mechanical properties that let them withstand the stresses of transfer, transport, and use. For example, waste materials are ground before being heated and introduced into a press for compression. Both mechanical force and thermal processes are used to produce the correct pellet properties. From an equipment point of view there are two alternatives for industrial production of iron ore pellets: the drum and the pelletizing disk.
Thermal processing
In order to confer high mechanical and metallurgical resistance and appropriate characteristics to the pellets, they are subjected to thermal processing, which involves stages of drying, preheating, firing, after-firing and cooling. The duration of each stage and the temperature to which the pellets are subjected have a strong influence on the final product quality.
Pharmaceutical industry
In the field of medicine, pelletization refers to the agglomeration process that converts fine powders or granules into more or less spherical pellets. Use of the technology has increased because it allows for controlled release of the dosage form, which also leads to uniform absorption with less mucosal irritation within the gastrointestinal tract. There are different pelletization processes applied in the pharmaceutical industry, and these typically vary according to the bonding forces. Some examples of the processes include balling, compression, and spray congealing. Balling is similar to the wet (or green) pelletization used in the iron ore industry.
Pelletizing of animal feeds
Pelletizing of animal feeds can produce pellets ranging in size from those used in shrimp feeds, through poultry feeds, up to stock feeds. The pelletizing of stock feed is done with pellet mill machinery in a feed mill.
Preparation of raw ingredients
Feed ingredients are normally first hammered to reduce the particle size of the ingredients. Ingredients are then batched, and then combined and mixed thoroughly by a feed mixer. Once the feed has been prepared to this stage the feed is ready to be pelletized.
Formation of the feed pellets
Pelletizing is done in a pellet mill, where feed is normally conditioned and thermally treated in the fitted conditioners of the mill. The feed is then pushed through the holes and exits the pellet mill as pelleted feed.
Pelletizing of wood
Wood pellets, made by compressing sawdust or other ground woody materials, are used in a variety of energy and non-energy applications. In the energy sector, wood pellets are often used to replace coal, with power plants such as Drax in England replacing most of their coal use with wood pellets. As sustainably harvested wood does not lead to a long-term increase in atmospheric carbon dioxide levels, wood fuels are considered to be a low-carbon form of energy. Wood pellets are also used for domestic and commercial heating, either in automated boilers or in pellet stoves. Compared to other fuels made from wood, pellets have the advantages of higher energy density, simpler handling (they flow similarly to grain), and low moisture content.
Concerns have been raised about the short-term carbon balance of wood pellet production, particularly if it drives the harvesting of old or mature forests that would otherwise not be logged. Areas of concern include the inland rainforests of British Columbia. These claims are contested by the pellet and forest industries.
After pelleting processes
After pelleting, the pellets are cooled with a cooler to bring the temperature of the feed down. Other post-pelleting operations include post-pelleting conditioning, sorting via a screen, and coating if required.
| Technology | Industry: General | null |
9320833 | https://en.wikipedia.org/wiki/Portlandite | Portlandite | Portlandite is a hydroxide-bearing mineral typically included in the oxide mineral class. It is the naturally occurring form of calcium hydroxide (Ca(OH)2) and the calcium analogue of brucite (Mg(OH)2).
Occurrence
Portlandite occurs in a variety of environments. At the type location in Northern Ireland it occurs as an alteration of calc–silicate rocks by contact metamorphism of larnite–spurrite. It occurs as fumarole deposits in the Vesuvius area. In Jebel Awq, Oman, it occurs as precipitates from an alkaline spring emanating from ultramafic bedrock. In the Chelyabinsk coal basin of Russia it is produced by combustion of coal seams and similarly by spontaneous combustion of bitumen in the Hatrurim Formation of the Negev desert in Israel and the Maqarin area, Jordan. It also occurs in the manganese mining area of Kuruman, Cape Province, South Africa in the Kalahari Desert where it occurs as large crystals and masses.
It occurs in association with afwillite, calcite, larnite, spurrite, halite, brownmillerite, hydrocalumite, mayenite and ettringite.
It was first described in 1933 for an occurrence at Scawt Hill, Larne, County Antrim, Northern Ireland. It was named portlandite because the chemical calcium hydroxide is a common hydrolysis product of Portland cement.
| Physical sciences | Minerals | Earth science |
11811193 | https://en.wikipedia.org/wiki/Shingle%20beach | Shingle beach | A shingle beach, also known as either a cobble beach or gravel beach, is a commonly narrow beach that is composed of coarse, loose, well-rounded, and waterworn gravel, called shingle. The gravel (shingle) typically consists of smooth, spheroidal to flattened pebbles, cobbles, and sometimes small boulders. Shingle beaches typically have a steep slope on both their landward and seaward sides. Shingle beaches form in wave-dominated locations where resistant bedrock cliffs provide gravel-sized rock debris. They are also found in high latitudes and temperate shores where the erosion of Quaternary glacial deposits provides gravel-size rock fragments. This term is most widely used in Great Britain.
While this type of beach is most commonly found in Europe, examples are also found in Bahrain, North America, and a number of other world regions, such as the west coast of New Zealand's South Island, where they are associated with the shingle fans of braided rivers. Though created at shorelines, post-glacial rebound can raise shingle beaches well above sea level, as on the High Coast in Sweden.
The ecosystems formed by this association of rock and sand allow colonization by a variety of rare and endangered species.
Formation
Shingle beaches are typically steep, because the waves easily flow through the coarse, porous surface of the beach, decreasing the effect of backwash erosion and increasing the formation of sediment into a steeply sloping beach.
Tourism
Shingle beaches are rare, made up of thousands of smooth rocks of varying geological character that the ocean has naturally smoothed over time with crashing waves. Shingle beaches are popular for the variety of rock types that can be found on them.
| Physical sciences | Oceanic and coastal landforms | Earth science |
3007794 | https://en.wikipedia.org/wiki/Working%20set | Working set | Working set is a concept in computer science which defines the amount of memory that a process requires in a given time interval.
Definition
Peter Denning (1968) defines "the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t)".
Typically the units of information in question are considered to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process.
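A minimal sketch of this definition (the page numbers and window size are made up for illustration): the working set at time t is simply the set of distinct pages referenced during the last τ references.

```python
# Minimal sketch of W(t, tau): the distinct pages referenced in the window
# of the last tau references ending at time t (time measured in references).
def working_set(trace: list, t: int, tau: int) -> set:
    """Return the working set of a page-reference trace at time t."""
    return set(trace[max(0, t - tau):t])

trace = [1, 2, 1, 3, 2, 2, 4, 1, 4, 4]   # hypothetical reference string
print(working_set(trace, t=10, tau=4))   # -> {1, 4}
```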
Rationale
The effect of the choice of what pages to be kept in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then its page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the number of pages the process needs grows and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use.
Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes—even processes that were temporarily removed from memory—finish much sooner than they would if the computer attempted to run them all at once. The processes also finish much sooner than they would if the computer only ran one process at a time to completion since it allows other processes to run and make progress during times that one process is waiting on the hard drive or some other global resource.
In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. Thus it optimizes CPU utilization and throughput.
Implementation
The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window. At each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced in the working set window.
To avoid the overhead of keeping a list of the last k referenced pages, the working set is often implemented by keeping track of the time t of the last reference, and considering the working set to be all pages referenced within a certain period of time.
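A sketch of this timestamp-based approximation (the class and variable names are invented for illustration):

```python
# Hedged sketch: track only each page's last-reference time and treat pages
# referenced within the last `tau` ticks of virtual time as the working set.
class WorkingSetTracker:
    def __init__(self, tau: int):
        self.tau = tau
        self.last_ref = {}   # page -> virtual time of most recent reference
        self.clock = 0       # virtual time, advanced once per reference

    def reference(self, page: int) -> None:
        self.clock += 1
        self.last_ref[page] = self.clock

    def working_set(self) -> set:
        return {page for page, t in self.last_ref.items()
                if self.clock - t < self.tau}

tracker = WorkingSetTracker(tau=4)
for page in [1, 2, 1, 3, 2, 2, 4, 1, 4, 4]:
    tracker.reference(page)
print(tracker.working_set())  # -> {1, 4}, matching the direct definition
```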
The working set isn't a page replacement algorithm, but page-replacement algorithms can be designed to only remove pages that aren't in the working set for a particular process. One example is a modified version of the clock algorithm called WSClock.
Variants
Working set can be divided into code working set and data working set. This distinction is important when code and data are separate at the relevant level of the memory hierarchy, since, if either working set does not fit in that level of the hierarchy, thrashing will occur. In addition to the code and data themselves, on systems with virtual memory, the memory map (of virtual memory to physical memory) entries of the pages of the working set must be cached in the translation lookaside buffer (TLB) for the process to progress efficiently. This distinction exists because code and data are cached in small blocks (cache lines), not entire pages, but address lookup is done at the page level. Thus even if the code and data working sets fit into cache, if the working sets are split across many pages, the virtual address working set may not fit into TLB, causing TLB thrashing.
Analogs of working set exist for other limited resources, most significantly processes. If a set of processes requires frequent interaction between multiple processes, then it has a process working set that must be coscheduled in order to progress:
If the processes are not scheduled simultaneously – for example, if there are two processes but only one core on which to execute them – then the processes can only advance at the rate of one interaction per time slice.
Other resources include file handles or network sockets – for example, copying one file to another is most simply done with two file handles: one for input, one for output, and thus has a "file handle working set" size of two. If only one file handle is available, copying can still be done, but requires acquiring a file handle for the input, reading from it (say into a buffer), releasing it, then acquiring a file handle for the output, writing to it, releasing it, then acquiring the input file handle again and repeating. Similarly a server may require many sockets, and if it is limited would need to repeatedly release and re-acquire sockets. Rather than thrashing, these resources are typically required for the program, and if it cannot acquire enough resources, it simply fails.
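A sketch of the single-file-handle copy loop described above (the file names and chunk size are arbitrary, and the destination file is assumed not to exist yet):

```python
# Illustrative sketch: with a "file handle working set" of one, each chunk
# requires releasing and re-acquiring handles, unlike the two-handle copy.
def copy_with_one_handle(src: str, dst: str, chunk: int = 4096) -> None:
    offset = 0
    while True:
        with open(src, "rb") as f:     # acquire the input handle
            f.seek(offset)
            data = f.read(chunk)       # read into a buffer
        # ...input handle released here
        if not data:
            break
        with open(dst, "ab") as f:     # acquire the output handle
            f.write(data)              # write the buffered chunk
        # ...output handle released; the loop re-acquires the input handle
        offset += len(data)
```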
| Technology | Operating systems | null |
5501614 | https://en.wikipedia.org/wiki/Radio-quiet%20neutron%20star | Radio-quiet neutron star | A radio-quiet neutron star is a neutron star that does not seem to emit radio emissions, but is still visible to Earth through electromagnetic radiation at other parts of the spectrum, particularly X-rays and gamma rays.
Background
Most detected neutron stars are pulsars, and emit radio-frequency electromagnetic radiation. About 700 radio pulsars are listed in the Princeton catalog, and all but one emit radio waves at the 400 MHz and 1400 MHz frequencies. That exception is Geminga, which is radio quiet at frequencies above 100 MHz, but is a strong emitter of X-rays and gamma rays.
In all, ten bodies have been proposed as rotation-powered neutron stars that are not visible as radio sources, but are visible as X-ray and gamma ray sources. Indicators that they are indeed neutron stars include them having a high X-ray to lower frequencies emission ratio, a constant X-ray emission profile, and coincidence with a gamma ray source.
Hypotheses
Quark stars, hypothetical neutron star-like objects composed of quark matter, have been proposed to be radio-quiet.
More plausibly, however, radio-quiet neutron stars may simply be pulsars which do not pulse in our direction. As pulsars spin, it is hypothesized that they emit radiation from their magnetic poles. When the magnetic poles do not lie on the axis of rotation, and cross the line of sight of the observer, one can detect radio emission emitted near the star's magnetic poles. Due to the star's rotation this radiation appears to pulse, colloquially called the "lighthouse effect". Radio-quiet neutron stars may be neutron stars whose magnetic poles do not point towards the Earth during their rotation.
The group of radio-quiet neutron stars informally known as the Magnificent Seven are thought to emit mainly thermal radiation.
Some powerful neutron star radio emissions may be caused by a positron–electron jet emanating from the star and blasting through outer material such as a cloud or accretion material. Note that some radio-quiet neutron stars listed in this article have no accretion material.
Magnetars
Magnetars, the most widely accepted explanation for soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs), are often characterized as being radio-quiet. However, magnetars can produce radio emission, although their radio spectra tend to be flat, with only intermittent broad pulses of variable length.
List of radio-quiet neutron stars
X-ray Dim Isolated Neutron Stars
Can be classified as XDINS (X-ray Dim Isolated Neutron Stars), XTINS (X-ray Thermal Isolated Neutron Stars), XINS (X-ray Isolated Neutron Stars), TEINS (Thermally Emitting Neutron Star), INS (Isolated Neutron Stars).
Defined as thermally emitting neutron stars with high magnetic fields, although lower than those of magnetars. Identified in thermal X-rays, and thought to be radio-quiet.
A group of seven individual, physically similar and relatively nearby neutron stars nicknamed The Magnificent Seven, consisting of:
RX J185635-3754
RX J0720.4-3125
RBS1556
RBS1223
RX J0806.4-4123
RX J0420.0-5022
MS 0317.7-6647
1RXS J214303.7+065419/RBS 1774
Compact Central Objects in Supernova remnants
Compact Central Objects in Supernova remnants (CCOs in SNRs) are identified as being radio-quiet compact X-ray sources surrounded by supernova remnants. They have thermal emission spectra, and lower magnetic fields than XDINSs and magnetars.
RX J0822-4300 (1E 0820–4247) in the Puppis A supernova remnant (SNR G260.4-3.4).
1E 1207.4-5209 in the PKS 1209-51/52 supernova remnant (SNR G296.5+10).
RX J0007.0+7302 (in SNR G119.5+10.2, CTA 1)
RX J0201.8+6435 (in SNR G130.7+3.1, 3C 58)
1E 161348–5055 (in SNR G332.4-0.4, RCW 103)
RX J2020.2+4026 (in SNR G078.2+2.1, γ Cyg)
Other neutron stars
IGR J11014-6103: a runaway pulsar ejected from a supernova remnant.
| Physical sciences | Stellar astronomy | Astronomy |
5504707 | https://en.wikipedia.org/wiki/Antilocapridae | Antilocapridae | The Antilocapridae are a family of ruminant artiodactyls endemic to North America. Their closest extant relatives are the giraffids. Only one species, the pronghorn (Antilocapra americana), is living today; all other members of the family are extinct. The living pronghorn is a small ruminant mammal resembling an antelope.
Description
In most respects, antilocaprids resemble other ruminants. They have a complex, four-chambered stomach for digesting tough plant matter, cloven hooves, and small, forked horns. Their horns resemble those of the bovids, in that they have a true horny sheath, but, uniquely, they are shed outside the breeding season, and subsequently regrown. Their lateral toes are even further diminished than in bovids, with the digits themselves being entirely lost, and only the cannon bones remaining. Antilocaprids have the same dental formula as most other ruminants.
Classification
The antilocaprids are ruminants of the clade Pecora. Other extant pecorans are the families Giraffidae (giraffes), Cervidae (deer), Moschidae (musk deer), and Bovidae (cattle, goats and sheep, wildebeests and allies, and antelopes). The exact interrelationships among the pecorans have been debated, mainly focusing on the placement of Giraffidae, but a large-scale ruminant genome sequencing study in 2019 suggested that Antilocapridae are the sister taxon to Giraffidae, as shown in the cladogram below.
Evolution
The ancestors of pronghorn diverged from the giraffids in the Early Miocene. This was part of a relatively late mammal diversification following a climate change that transformed subtropical woodlands into open savannah grasslands.
The antilocaprids evolved in North America, where they filled a niche similar to that of the bovids that evolved in the Old World. During the Miocene and Pliocene, they were a diverse and successful group, with many different species. Some had horns with bizarre shapes, or had four, or even six, horns. Examples include Osbornoceros, with smooth, slightly curved horns, Paracosoryx, with flattened horns that widened to forked tips, Ramoceros, with fan-shaped horns, and Hayoceros, with four horns.
Species
Subfamily Antilocaprinae
Tribe Antilocaprini
Genus Antilocapra
Antilocapra americana - pronghorn
A. a. americana - Common pronghorn
A. a. mexicana - Mexican pronghorn
A. a. peninsularis - Baja California pronghorn
A. a. sonoriensis - Sonoran pronghorn
A. a. oregona - Oregon pronghorn
†Antilocapra pacifica
Genus †Texoceros
Texoceros altidens
Texoceros edensis
Texoceros guymonensis
Texoceros minorei
Texoceros texanus
Texoceros vaughani
Tribe †Ilingoceratini
Genus †Ilingoceros
Ilingoceros alexandrae
Ilingoceros schizoceros
Genus †Ottoceros
Ottoceros peacevalleyensis
Genus †Plioceros
Plioceros blicki
Plioceros dehlini
Plioceros floblairi
Genus †Sphenophalos
Sphenophalos garciae
Sphenophalos middleswarti
Sphenophalos nevadanus
Tribe †Proantilocaprini
Genus †Proantilocapra
Proantilocapra platycornea
Genus †Osbornoceros
Osbornoceros osborni
Tribe †Stockoceratini
Genus †Capromeryx - (junior synonym Breameryx)
Capromeryx arizonensis - (junior synonym B. arizonensis)
Capromeryx furcifer - (junior synonyms B. minimus, C. minimus)
Capromeryx gidleyi - (junior synonym B. gidleyi)
Capromeryx mexicana - (junior synonym B. mexicana)
Capromeryx minor - (junior synonym B. minor)
Capromeryx tauntonensis
Genus †Ceratomeryx
Ceratomeryx prenticei
Genus †Hayoceros
Hayoceros barbouri
Hayoceros falkenbachi
Genus †Hexameryx
Hexameryx simpsoni
Genus †Hexobelomeryx
Hexobelomeryx fricki
Hexobelomeryx simpsoni
Genus †Stockoceros
Stockoceros conklingi (junior synonym S. onusrosagris)
Genus †Tetrameryx
Tetrameryx irvingtonensis
Tetrameryx knoxensis
Tetrameryx mooseri
Tetrameryx shuleri
Tetrameryx tacubayensis
Subfamily †Merycodontinae
Genus †Cosoryx
Cosoryx cerroensis
Cosoryx furcatus
Cosoryx ilfonensis
Genus †Merriamoceros
Merriamoceros coronatus
Genus †Merycodus (syn. Meryceros and Submeryceros)
Merycodus crucensis
Merycodus hookwayi
Merycodus joraki
Merycodus major
Merycodus minimus
Merycodus minor
Merycodus necatus
Merycodus nenzelensis
Merycodus prodromus
Merycodus sabulonis
Merycodus warreni
Genus †Paracosoryx
Paracosoryx alticornis
Paracosoryx burgensis
Paracosoryx dawesensis
Paracosoryx furlongi
Paracosoryx loxoceros
Paracosoryx nevadensis
Paracosoryx wilsoni
Genus †Ramoceros
Ramoceros brevicornis
Ramoceros marthae
Ramoceros merriami
Ramoceros osborni
Ramoceros palmatus
Ramoceros ramosus
| Biology and health sciences | Other artiodactyla | Animals |
13313000 | https://en.wikipedia.org/wiki/Substellar%20object | Substellar object | A substellar object, sometimes called a substar, is an astronomical object, the mass of which is smaller than the smallest mass at which hydrogen fusion can be sustained (approximately 0.08 solar masses). This definition includes brown dwarfs and former stars similar to EF Eridani B, and can also include objects of planetary mass, regardless of their formation mechanism and whether or not they are associated with a primary star.
Assuming that a substellar object has a composition similar to the Sun's and at least the mass of Jupiter (approximately 0.001 solar masses), its radius will be comparable to that of Jupiter (approximately 0.1 solar radii) regardless of the mass of the substellar object (brown dwarfs are less than 75 Jupiter masses). This is because the center of such a substellar object at the top of the mass range (just below the hydrogen-burning limit) is quite degenerate, with a density of ≈10³ g/cm³, but this degeneracy lessens with decreasing mass until, at the mass of Jupiter, a substellar object has a central density less than 10 g/cm³. The density decrease balances the mass decrease, keeping the radius approximately constant.
Substellar objects like brown dwarfs do not have enough mass to fuse hydrogen and helium, hence do not undergo the usual stellar evolution that limits the lifetime of stars.
A substellar object with a mass just below the hydrogen-fusing limit may ignite hydrogen fusion temporarily at its center. Although this will provide some energy, it will not be enough to overcome the object's ongoing gravitational contraction. Likewise, although an object with mass above approximately 0.013 solar masses will be able to fuse deuterium for a time, this source of energy will be exhausted in approximately 1–100 million years. Apart from these sources, the radiation of an isolated substellar object comes only from the release of its gravitational potential energy, which causes it to gradually cool and shrink. A substellar object in orbit around a star will shrink more slowly as it is kept warm by the star, evolving towards an equilibrium state where it emits as much energy as it receives from the star.
Substellar objects are cool enough to have water vapor in their atmosphere. Infrared spectroscopy can detect the distinctive color of water in gas giant size substellar objects, even if they are not in orbit around a star.
Classification
William Duncan MacMillan proposed in 1918 the classification of substellar objects into three categories based on their density and phase state: solid, transitional and dark (non-stellar) gaseous. Solid objects include Earth, smaller terrestrial planets and moons; with Uranus and Neptune (as well as later mini-Neptune and Super Earth planets) as transitional objects between solid and gaseous. Saturn, Jupiter and large gas giant planets are in a fully "gaseous" state.
Substellar companion
A substellar object may be a companion of a star, such as an exoplanet or brown dwarf that is orbiting a star. Objects as low as 8–23 Jupiter masses have been called substellar companions.
Objects orbiting a star are often called planets below 13 Jupiter masses and brown dwarfs above that. Companions at that planet-brown dwarf borderline have been called Super-Jupiters, such as the one around the star Kappa Andromedae. Nevertheless, objects as small as 8 Jupiter masses have been called brown dwarfs.
| Physical sciences | Astronomy basics | Astronomy |
13320206 | https://en.wikipedia.org/wiki/Vapor%E2%80%93liquid%20separator | Vapor–liquid separator | In chemical engineering, a vapor–liquid separator is a device used to separate a vapor–liquid mixture into its constituent phases. It can be a vertical or horizontal vessel, and can act as a 2-phase or 3-phase separator.
A vapor–liquid separator may also be referred to as a flash drum, breakpot, knock-out drum or knock-out pot, compressor suction drum, suction scrubber or compressor inlet drum, or vent scrubber. When used to remove suspended water droplets from streams of air, it is often called a demister.
Method of operation
In vapor-liquid separators, gravity is utilized to cause the denser fluid (liquid) to settle to the bottom of the vessel, where it is withdrawn, while the less dense fluid (vapor) is withdrawn from the top of the vessel.
In low gravity environments such as a space station, a common liquid separator will not function because gravity is not usable as a separation mechanism. In this case, centrifugal force needs to be utilised in a spinning centrifugal separator to drive liquid towards the outer edge of the chamber for removal. Gaseous components migrate towards the center.
An inlet diffuser reduces the velocity and spreads the incoming mixture across the full cross-section of the vessel. A mesh pad in the upper part of the vessel aids separation and prevents liquid from being carried over with the vapor. The pad or mist mat traps entrained liquid droplets and allows them to coalesce until they are large enough to fall through the up-flowing vapor to the bottom of the vessel. Vane packs and cyclonic separators are also used to remove liquid from the outlet vapor.
The gas outlet may itself be surrounded by a spinning mesh screen or grating, so that any liquid that does approach the outlet strikes the grating, is accelerated, and thrown away from the outlet.
The vapor travels through the gas outlet at a design velocity which minimises the entrainment of any liquid droplets in the vapor as it exits the vessel.
A vortex breaker on the liquid outlet prevents the formation of vortices and of vapor being drawn into the liquid outlet.
Liquid level monitoring
The separator is only effective as long as there is a vapor space inside the chamber. The separator can fail if either the mixed inlet is overwhelmed with supply material, or the liquid drain is unable to handle the volume of liquid being collected. The separator may therefore be combined with some other liquid level sensing mechanism such as a sight glass or float sensor. In this manner, both the supply and drain flow can be regulated to prevent the separator from becoming overloaded.
Applications
Vertical separators are generally used when the gas-liquid ratio is high or gas volumes are high. Horizontal separators are used where large volumes of liquid are involved.
A vapor-liquid separator may operate as a 3-phase separator, with two immiscible liquid phases of different densities, for example natural gas (vapor), water and oil/condensate. The two liquids settle at the bottom of the vessel with oil floating on the water. Separate liquid outlets are provided.
The feed to a vapor–liquid separator may also be a liquid that is being partially or totally flashed into a vapor and liquid as it enters the separator.
A slug catcher is a type of vapor–liquid separator that is able to receive a large inflow of liquid at random times. It is usually found at the end of gas pipelines where condensate may be present as slugs of liquid. It is usually a horizontal vessel or array of large diameter pipes.
The liquid capacity of a separator is usually defined by the residence time of the liquid in the vessel.
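As a minimal sizing sketch based on that definition (the flow rate and residence time below are illustrative assumptions, not values from this article):

```python
def liquid_holdup_volume(flow_m3_per_h: float, residence_min: float) -> float:
    """Required liquid holdup volume: V = Q * t_residence."""
    return flow_m3_per_h * (residence_min / 60.0)

# Example: 30 m^3/h of liquid with a 5-minute residence time
# requires 2.5 m^3 of holdup below the normal liquid level.
print(liquid_holdup_volume(30.0, 5.0))  # -> 2.5
```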
Where vapor–liquid separators are used
Vapor–liquid separators are very widely used in a great many industries and applications, such as:
Oil refineries
Offshore platforms
Natural-gas processing plants (NGL)
Petrochemical and chemical plants
Refrigeration systems
Air conditioning
Compressor systems
Gas pipelines
Steam condensate flash drums
Geothermal power plants
Combined cycle power plants
Flare stacks
Soil vapor extraction
Paper mills
Liquid ring pumps
Preventing pump damage
In refrigeration systems, it is common for the system to contain a mixture of liquid and gas, but for the mechanical gas compressor to be intolerant of liquid.
Some compressor types such as the scroll compressor use a continuously shrinking compression volume. Once liquid completely fills this volume, the pump may either stall and overload, or the pump chamber may be warped or otherwise damaged by the fluid that cannot fit into a smaller space.
| Physical sciences | Phase separations | Chemistry |
14485893 | https://en.wikipedia.org/wiki/Praya%20dubia | Praya dubia | Praya dubia, the giant siphonophore, lives deep below sea level, in the mesopelagic to bathypelagic zone. It has been found off the coasts around the world, from Iceland in the North Atlantic to Chile in the South Pacific.
Praya dubia is a member of the order Siphonophorae within the class Hydrozoa. Its body length makes it the second-longest sea organism after the bootlace worm. Its length also rivals that of the blue whale, the sea's largest mammal, although Praya dubia is only as thin as a broomstick.
A siphonophore is not a single, multi-cellular organism, but a colony of tiny biological components called zooids, each having evolved with a specific function. Zooids cannot survive on their own, relying on symbiosis in order for a complete Praya dubia specimen to survive.
Description
Praya dubia zooids arrange themselves in a long stalk, usually whitish and transparent (though other colours have been seen), known as a physonect colony. The larger end features a transparent, dome-like float known as a pneumatophore, filled with gas which provides buoyancy, allowing the organism to remain at its preferred ocean depth. Next to it are the nectophores, powerful medusae that pulsate in rhythmic coordination to propel Praya dubia through ocean waters. Together, the array is known as the nectosome.
Beneath the nectosome is the siphosome which extends to the far end of Praya dubia, containing several types of specialized zooids in repeating patterns. Some have a long tentacle used for catching and immobilizing food and distributing their digested nutrients to the rest of the colony. Other zooids known as palpons, or dactylozooids, appear to contain an excretory system that may also assist in defense, though little is known about their precise function in Praya dubia. Transparent bracts (also called hydrophyllia), are leaf-shaped organs generally thought to be another type of zooid which covers and forces other zooids to contract in times of danger.
Because their hydrostatic skeleton is held together by the high water pressure of their deep habitat, these animals burst when brought to the surface. The remains of Praya dubia dredged up in fishing nets resemble a blob of gelatin, which prevented their identification as a unique creature until the 19th century. In 1987, the Monterey Bay Aquarium Research Institute observed living Praya dubia during a systematic study of a water column, the animal's natural habitat, in Monterey Bay.
Behavior
Praya dubia is an active swimmer that attracts its prey with bright blue bioluminescent light. When it finds itself in a region abundant with food, it holds its position and deploys a curtain of tentacles covered with nematocysts which produce a powerful, toxic sting that can paralyze or kill prey that happen to bump into it. Praya dubia's diet includes gelatinous sea life, small crustaceans, and possibly small fish and fish larvae. It has no known predators.
A Praya dubia specimen, filmed in its native habitat, was featured in Episode 2 of the David Attenborough television series Blue Planet II, produced for the BBC.
| Biology and health sciences | Cnidarians | Animals |
1527574 | https://en.wikipedia.org/wiki/Rotamer | Rotamer | In chemistry, rotamers are chemical species that differ from one another primarily due to rotations about one or more single bonds. Various arrangements of atoms in a molecule that differ by rotation about single bonds can also be referred to as different conformations. Conformers/rotamers differ little in their energies, so they are almost never separable in a practical sense. Rotations about single bonds are subject to small energy barriers. When the time scale for interconversion is long enough for isolation of individual rotamers (usually arbitrarily defined as a half-life of interconversion of 1000 seconds or longer), the species are termed atropisomers (see: atropisomerism). The ring-flip of substituted cyclohexanes is a common example of conformational interconversion.
The study of the energetics of bond rotation is referred to as conformational analysis. In some cases, conformational analysis can be used to predict and explain product selectivity, mechanisms, and rates of reactions. Conformational analysis also plays an important role in rational, structure-based drug design.
Types
With respect to rotation about their carbon–carbon bonds, the molecules ethane and propane each have three local energy minima. These minima are structurally and energetically equivalent, and are called the staggered conformers. For each molecule, the three substituents emanating from each carbon–carbon bond are staggered, with each H–C–C–H dihedral angle (and H–C–C–CH3 dihedral angle in the case of propane) equal to 60° (or approximately equal to 60° in the case of propane). The three eclipsed conformations, in which the dihedral angles are zero, are transition states (energy maxima) connecting two equivalent energy minima, the staggered conformers.
The butane molecule is the simplest molecule for which single bond rotations result in two types of nonequivalent structures, known as the anti- and gauche-conformers (see figure).
For example, butane has three conformers relating to its two methyl (CH3) groups: two gauche conformers, which have the methyls ±60° apart and are enantiomeric, and an anti conformer, where the four carbon centres are coplanar and the substituents are 180° apart (refer to free energy diagram of butane). The energy difference between gauche and anti is 0.9 kcal/mol associated with the strain energy of the gauche conformer. The anti conformer is, therefore, the most stable (≈ 0 kcal/mol). The three eclipsed conformations with dihedral angles of 0°, 120°, and 240° are transition states between conformers. Note that the two eclipsed conformations have different energies: at 0° the two methyl groups are eclipsed, resulting in higher energy (≈ 5 kcal/mol) than at 120°, where the methyl groups are eclipsed with hydrogens (≈ 3.5 kcal/mol).
While simple molecules can be described by these types of conformations, more complex molecules require the use of the Klyne–Prelog system to describe the different conformers.
More specific examples of conformations are detailed elsewhere:
Ring conformation
Cyclohexane conformations, including with chair and boat conformations among others.
Cycloalkane conformations, including medium rings and macrocycles
Carbohydrate conformation, which includes cyclohexane conformations as well as other details.
Allylic strain – energetics related to rotation about the single bond between an sp2 carbon and an sp3 carbon.
Atropisomerism – due to restricted rotation about a bond.
Folding, including the secondary and tertiary structure of biopolymers (nucleic acids and proteins).
Akamptisomerism – due to restricted inversion of a bond angle.
Equilibrium of conformers
Conformers generally exist in a dynamic equilibrium.
Three isotherms are given in the diagram depicting the equilibrium distribution of two conformers at different temperatures. At a free energy difference of 0 kcal/mol, this gives an equilibrium constant of 1, meaning that two conformers exist in a 1:1 ratio. The two have equal free energy; neither is more stable, so neither predominates compared to the other. A negative difference in free energy means that a conformer interconverts to a thermodynamically more stable conformation, thus the equilibrium constant will always be greater than 1. For example, the ΔG° for the transformation of butane from the gauche conformer to the anti conformer is −0.47 kcal/mol at 298 K. This gives an equilibrium constant of about 2.2 in favor of the anti conformer, or a 31:69 mixture of gauche:anti conformers at equilibrium. Conversely, a positive difference in free energy means the conformer already is the more stable one, so the interconversion is an unfavorable equilibrium (K < 1). Even for highly unfavorable changes (large positive ΔG°), the equilibrium constant between two conformers can be increased by increasing the temperature, so that the amount of the less stable conformer present at equilibrium increases (although it always remains the minor conformer).
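A minimal Python check of the butane figures quoted above, using K = e^(−ΔG°/RT):

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # temperature in K
dG = -0.47     # ΔG° for gauche -> anti in butane, kcal/mol

K = math.exp(-dG / (R * T))   # equilibrium constant, ~2.2
anti = K / (1 + K)            # fraction of the anti conformer
print(f"K = {K:.2f}, gauche:anti = {1 - anti:.0%}:{anti:.0%}")
# -> K = 2.21, gauche:anti = 31%:69%
```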
Population distribution of conformers
The fractional population distribution of different conformers follows a Boltzmann distribution, with the fraction p_i of conformer i given by

$$p_i = \frac{e^{-E_i/RT}}{\sum_{k=1}^{M} e^{-E_k/RT}}$$
The left hand side is the proportion of conformer i in an equilibrating mixture of M conformers in thermodynamic equilibrium. On the right side, Ek (k = 1, 2, ..., M) is the energy of conformer k, R is the molar ideal gas constant (approximately equal to 8.314 J/(mol·K) or 1.987 cal/(mol·K)), and T is the absolute temperature. The denominator of the right side is the partition function.
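A short sketch of this distribution for an arbitrary set of conformers; the example energies reuse the 0.9 kcal/mol gauche strain quoted earlier for butane (one anti plus two equivalent gauche conformers):

```python
import math

def boltzmann_populations(energies_kcal, T=298.0):
    """p_i = exp(-E_i/RT) / sum_k exp(-E_k/RT); R in kcal/(mol*K)."""
    R = 1.987e-3
    weights = [math.exp(-E / (R * T)) for E in energies_kcal]
    Z = sum(weights)                 # the partition function
    return [w / Z for w in weights]

# anti (0.0 kcal/mol) and two equivalent gauche (+0.9 kcal/mol) conformers
print(boltzmann_populations([0.0, 0.9, 0.9]))
# -> roughly [0.70, 0.15, 0.15]
```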
Factors contributing to the free energy of conformers
The effects of electrostatic and steric interactions of the substituents as well as orbital interactions such as hyperconjugation are responsible for the relative stability of conformers and their transition states. The contributions of these factors vary depending on the nature of the substituents and may either contribute positively or negatively to the energy barrier. Computational studies of small molecules such as ethane suggest that electrostatic effects make the greatest contribution to the energy barrier; however, the barrier is traditionally attributed primarily to steric interactions.
In the case of cyclic systems, the steric effect and contribution to the free energy can be approximated by A values, which measure the energy difference when a substituent on cyclohexane is in the axial as compared to the equatorial position. In large (>14 atom) rings, there are many accessible low-energy conformations which correspond to the strain-free diamond lattice.
Observation of conformers
The short timescale of interconversion precludes the separation of conformers in most cases. Atropisomers are conformational isomers which can be separated due to restricted rotation. The equilibrium between conformational isomers can be observed using a variety of spectroscopic techniques.
Protein folding also generates conformers which can be observed. The Karplus equation relates the dihedral angle of vicinal protons to their J-coupling constants as measured by NMR. The equation aids in the elucidation of protein folding as well as the conformations of other rigid aliphatic molecules. Protein side chains exhibit rotamers, whose distribution is determined by their steric interaction with different conformations of the backbone. This is evident from statistical analysis of the conformations of protein side chains in the Backbone-dependent rotamer library.
Spectroscopy
Conformational dynamics can be monitored by variable-temperature NMR spectroscopy. The technique applies to barriers of 8–14 kcal/mol, and species exhibiting such dynamics are often called "fluxional". For example, in cyclohexane derivatives, the two chair conformers interconvert rapidly at room temperature. The ring-flip proceeds at a rate of approximately 10⁵ ring-flips/sec, with an overall energy barrier of 10 kcal/mol (42 kJ/mol); this barrier precludes separation at ambient temperatures. However, at low temperatures below the coalescence point, the equilibrium can be monitored directly by NMR spectroscopy, and the barrier to interconversion can be determined by dynamic, temperature-dependent NMR spectroscopy.
Besides NMR spectroscopy, IR spectroscopy is used to measure conformer ratios. For the axial and equatorial conformers of bromocyclohexane, ν(C–Br) differs by almost 50 cm⁻¹.
Conformation-dependent reactions
Reaction rates are highly dependent on the conformation of the reactants. In many cases the dominant product arises from the reaction of the less prevalent conformer, by virtue of the Curtin–Hammett principle. This is typical for situations where the conformational equilibration is much faster than the reaction to form the product. The dependence of a reaction on the stereochemical orientation is therefore usually only visible in configurational analysis, in which a particular conformation is locked by substituents. Prediction of the rates of many reactions involving the transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution, is possible if all conformers and their relative stability, governed by their strain, are taken into account.
One example where the rotamers become significant is elimination reactions, which involve the simultaneous removal of a proton and a leaving group from vicinal or antiperiplanar positions under the influence of a base.
The mechanism requires that the departing atoms or groups follow antiparallel trajectories. For open chain substrates this geometric prerequisite is met by at least one of the three staggered conformers. For some cyclic substrates such as cyclohexane, however, an antiparallel arrangement may not be attainable depending on the substituents which might set a conformational lock. Adjacent substituents on a cyclohexane ring can achieve antiperiplanarity only when they occupy trans diaxial positions (that is, both are in axial position, one going up and one going down).
One consequence of this analysis is that trans-4-tert-butylcyclohexyl chloride cannot easily eliminate but instead undergoes substitution (see diagram below) because the most stable conformation has the bulky t-Bu group in the equatorial position, therefore the chloride group is not antiperiplanar with any vicinal hydrogen (it is gauche to all four). The thermodynamically unfavored conformation has the t-Bu group in the axial position, which is higher in energy by more than 5 kcal/mol (see A value). As a result, the t-Bu group "locks" the ring in the conformation where it is in the equatorial position and substitution reaction is observed. On the other hand, cis-4-tert-butylcyclohexyl chloride undergoes elimination because antiperiplanarity of Cl and H can be achieved when the t-Bu group is in the favorable equatorial position.
The repulsion between an axial t-butyl group and hydrogen atoms in the 1,3-diaxial position is so strong that the cyclohexane ring will revert to a twisted boat conformation. The strain in cyclic structures is usually characterized by deviations from ideal bond angles (Baeyer strain), ideal torsional angles (Pitzer strain) or transannular (Prelog) interactions.
Alkane stereochemistry
Alkane conformers arise from rotation around sp3 hybridised carbon–carbon sigma bonds. The smallest alkane with such a chemical bond, ethane, exists as an infinite number of conformations with respect to rotation around the C–C bond. Two of these are recognised as energy minimum (staggered conformation) and energy maximum (eclipsed conformation) forms. The existence of specific conformations is due to hindered rotation around sigma bonds, although a role for hyperconjugation is proposed by a competing theory.
The importance of energy minima and energy maxima is seen by extension of these concepts to more complex molecules for which stable conformations may be predicted as minimum-energy forms. The determination of stable conformations has also played a large role in the establishment of the concept of asymmetric induction and the ability to predict the stereochemistry of reactions controlled by steric effects.
In the example of staggered ethane in Newman projection, a hydrogen atom on one carbon atom has a 60° torsional angle or torsion angle with respect to the nearest hydrogen atom on the other carbon so that steric hindrance is minimised. The staggered conformation is more stable by 12.5 kJ/mol than the eclipsed conformation, which is the energy maximum for ethane. In the eclipsed conformation the torsional angle is minimised.
In butane, the two staggered conformations are no longer equivalent and represent two distinct conformers: the anti-conformation (left-most, below) and the gauche conformation (right-most, below).
Both conformations are free of torsional strain, but, in the gauche conformation, the two methyl groups are in closer proximity than the sum of their van der Waals radii. The interaction between the two methyl groups is repulsive (van der Waals strain), and an energy barrier results.
A measure of the potential energy stored in butane conformers with greater steric hindrance than the 'anti'-conformer ground state is given by these values:
Gauche conformer – 3.8 kJ/mol
Eclipsed H and CH3 – 16 kJ/mol
Eclipsed CH3 and CH3 – 19 kJ/mol.
The eclipsed methyl groups exert a greater steric strain because of their greater electron density compared to lone hydrogen atoms.
The textbook explanation for the existence of the energy maximum for an eclipsed conformation in ethane is steric hindrance, but, with a C-C bond length of 154 pm and a Van der Waals radius for hydrogen of 120 pm, the hydrogen atoms in ethane are never in each other's way. The question of whether steric hindrance is responsible for the eclipsed energy maximum is a topic of debate to this day. One alternative to the steric hindrance explanation is based on hyperconjugation as analyzed within the Natural Bond Orbital framework. In the staggered conformation, one C-H sigma bonding orbital donates electron density to the antibonding orbital of the other C-H bond. The energetic stabilization of this effect is maximized when the two orbitals have maximal overlap, occurring in the staggered conformation. There is no overlap in the eclipsed conformation, leading to a disfavored energy maximum. On the other hand, an analysis within quantitative molecular orbital theory shows that 2-orbital-4-electron (steric) repulsions are dominant over hyperconjugation. A valence bond theory study also emphasizes the importance of steric effects.
Nomenclature
Naming alkanes per standards listed in the IUPAC Gold Book is done according to the Klyne–Prelog system for specifying angles (called either torsional or dihedral angles) between substituents around a single bond:
a torsion angle between 0° and ±90° is called syn (s)
a torsion angle between ±90° and 180° is called anti (a)
a torsion angle between 30° and 150° or between −30° and −150° is called clinal (c)
a torsion angle between 0° and ±30° or ±150° and 180° is called periplanar (p)
a torsion angle between 0° and ±30° is called synperiplanar (sp), also called syn- or cis- conformation
a torsion angle between 30° and 90° or between −30° and −90° is called synclinal (sc), also called gauche or skew
a torsion angle between 90° and 150° or −90° and −150° is called anticlinal (ac)
a torsion angle between ±150° and 180° is called antiperiplanar (ap), also called anti- or trans- conformation
Torsional strain or "Pitzer strain" refers to resistance to twisting about a bond.
Special cases
In n-pentane, the terminal methyl groups experience additional pentane interference.
Replacing hydrogen by fluorine in polytetrafluoroethylene changes the stereochemistry from the zigzag geometry to that of a helix due to electrostatic repulsion of the fluorine atoms in the 1,3 positions. Evidence for the helix structure in the crystalline state is derived from X-ray crystallography and from NMR spectroscopy and circular dichroism in solution.
| Physical sciences | Stereochemistry | Chemistry |
1527674 | https://en.wikipedia.org/wiki/Trip%20hammer | Trip hammer | A trip hammer, also known as a tilt hammer or helve hammer, is a massive powered hammer. Traditional uses of trip hammers include pounding, decorticating and polishing of grain in agriculture. In mining, trip hammers were used for crushing metal ores into small pieces, although a stamp mill was more usual for this. In finery forges they were used for drawing out blooms made from wrought iron into more workable bar iron. They were also used for fabricating various articles of wrought iron, latten (an early form of brass), steel and other metals.
One or more trip hammers were set up in a forge, also known variously as a hammer mill, hammer forge or hammer works. The hammers were usually raised by a cam and then released to fall under the force of gravity. Historically, trip hammers were often powered hydraulically by a water wheel.
Trip hammers are known to have been used in Imperial China since the Western Han dynasty. They also existed in the contemporary Greco-Roman world, with more evidence of their use in medieval Europe during the 12th century. During the Industrial Revolution the trip hammer fell out of favor and was replaced with the power hammer. Often multiple hammers were powered via a set of line shafts, pulleys and belts from a centrally located power supply.
Early history
China
In ancient China, the trip hammer evolved out of the use of the mortar and pestle, which in turn gave rise to the treadle-operated tilt-hammer (Chinese: 碓; pinyin: dui; Wade-Giles: tui). The latter was a simple device employing a lever and fulcrum (operated by pressure applied by the weight of one's foot to one end), which featured a series of catches or lugs on the main revolving shaft as well. This device enabled the labor of pounding, often in the decorticating and polishing of grain, and avoided manual pounding with hand and arm.
Although Chinese historians assert that its origins may span as far back as the Zhou dynasty (1050 BC–221 BC), the British sinologist Joseph Needham regards the earliest texts describing the device to be the Jijiupian dictionary of 40 BC and Yang Xiong's text known as the Fangyan of 15 BC, with the "best statement" being the Xin Lun written by Huan Tan about 20 AD (during the usurpation of Wang Mang). The latter book states that the legendary mythological king known as Fu Xi was the one responsible for the pestle and mortar (which evolved into the tilt-hammer and then trip hammer device). Although the author speaks of the mythological Fu Xi, a passage of his writing gives hint that the waterwheel and trip-hammer were in widespread use by the 1st century AD in China (for water-powered Chinese metallurgy, see Du Shi):
Fu Hsi invented the pestle and mortar, which is so useful, and later on it was cleverly improved in such a way that the whole weight of the body could be used for treading on the tilt-hammer (tui), thus increasing the efficiency ten times. Afterwards the power of animals—donkeys, mules, oxen, and horses—was applied by means of machinery, and water-power too used for pounding, so that the benefit was increased a hundredfold.
However, this passage as well as other early references from the Han era may rather refer to a water lever, not a trip hammer. Later research, pointing to two contemporary Han era funeral wares depicting hydraulic hammers, proved that vertical waterwheels were used to power batteries of trip hammers during the Han dynasty.
With his description, it is seen that the out-of-date Chinese term for pestle and mortar (dui, tui) would soon be replaced with the Chinese term for the water-powered trip-hammer. The Han dynasty scholar and poet Ma Rong (79–166 AD) mentioned in one of his poems hammers 'pounding in the water-echoing caves'. As described in the Hou Han Shu, in 129 AD the official Yu Xu gave a report to Emperor Shun of Han that trip hammers were being exported from Han China to the Western Qiang people by way of canals through the Qilian Mountains. In his Rou Xing Lun, the government official Kong Rong (153–208 AD) remarked that the invention of the trip hammer was an excellent example of a product created by intelligent men during his own age (comparing the relative achievements of the sages of old). During the 3rd century AD, the high government official and engineer Du Yu established the use of combined trip hammer batteries (lian zhi dui), which employed several shafts that were arranged to work off one large waterwheel. In Chinese texts of the 4th century, there are written accounts of men possessing and operating hundreds of trip hammer machines, such as the venerable mathematician Wang Rong (died 306 AD), Deng Yu (died 326 AD), and Shi Chong (died 300 AD), responsible for the operation of hundreds of trip hammers in over thirty governmental districts throughout China. There are numerous references to trip hammers during the Tang dynasty (618–907 AD) and Song dynasty (960–1279), and there are Ming dynasty (1368–1644) references that report the use of trip hammers in papermills of Fujian Province.
Although trip hammers in China were sometimes powered by the more efficient vertical-set waterwheel, the Chinese often employed the horizontal-set waterwheel in operating trip hammers, along with recumbent hammers. The recumbent hammer was found in Chinese illustrations by 1313 AD, with the publishing of Wang Zhen's Nong Shu book on ancient and contemporary (medieval) metallurgy in China. There were also illustrations of trip hammers in an encyclopedia of 1637, written by Song Yingxing (1587–1666).
The Chinese use of the cam remained confined to the horizontal type and was limited to a "small variety of machines" that included only rice hulling and much later mica-pounders, paper mills and saw mills, while fulling stocks, ore stamps or forge hammers were unknown.
Classical antiquity
The main components for water-powered trip hammers – water wheels, cams, and hammers – were known in the Hellenistic world. Early cams are in evidence in water-powered automata from the 3rd century BC. According to M.J.T. Lewis, a flute player whose mechanism was described by the Persian Banū Mūsā brothers in the 9th century AD and can be "reasonably" attributed to Apollonius of Perge, functions on the principle of water-powered trip hammers.
The Roman scholar Pliny (Natural History XVIII, 23.97) indicates that water-driven pestles had become fairly widespread in Italy by the first century AD:
While some scholars have viewed this passage to mean a watermill, later scholarship argued that mola must refer to water-powered trip hammers which were used for the pounding and hulling of grain. Their mechanical character is also suggested by an earlier reference of Lucius Pomponius (fl. 100–85 BC) to a fuller's mill, a type of mill that has been operated at all times with falling stocks. However, it has been pointed out that the translation of Pomponius' fragmentary text could be faulty, and relies on translating mola, which is often thought to mean either a mill or millstone, to instead refer to a water powered trip hammer. Grain-pounders with pestles, as well as ordinary watermills, are attested as late as the middle of the 5th century AD in a monastery founded by Romanus of Condat in the remote Jura region, indicating that the knowledge of trip hammers continued into the early Middle Ages.
At the Italian site of Saepinum excavators have recently unearthed a late antique water mill that may have employed trip hammers for tanning, the earliest evidence of its kind in a classical context.
The widest application of trip hammers seems to have occurred in Roman mining, where ore from deep veins was first crushed into small pieces for further processing. Here, the regularity and spacing of large indentations on stone anvils indicate the use of cam-operated ore stamps, much like the devices of later medieval mining. Such mechanically deformed anvils have been found at numerous Roman silver and gold mining sites in Western Europe, including at Dolaucothi (Wales), and on the Iberian peninsula, where the datable examples are from the 1st and 2nd century AD. At Dolaucothi, these trip-hammers were hydraulically driven, and possibly also at other Roman mining sites, where the large-scale use of the hushing and ground sluicing technique meant that large amounts of water were directly available for powering the machines. However, none of the Spanish and Portuguese anvils can be convincingly associated with mill sites, though most mines had water sources and leat systems which could easily be harnessed. Likewise, the dating of the Pumsaint stone to the Roman era did not address that the stone could have been moved, and relies on a series of interlinked probabilities which would jeopardize the conclusion of a Roman dating should any of them unravel.
Medieval Europe
Water-powered and mechanised trip hammers reappeared in medieval Europe by the 12th century. Their use was described in medieval written sources of Styria (in modern-day Austria), one written in 1135 and another in 1175 AD. Medieval French sources of the years 1116 and 1249 both record the use of mechanised trip hammers in the forging of wrought iron. By the 15th century, medieval European trip hammers were most often in the shape of the vertical pestle stamp-mill, although they employed more frequent use of the vertical waterwheel than earlier Chinese versions (which often used the horizontal waterwheel). The well-known Renaissance artist and inventor Leonardo da Vinci often sketched trip hammers for use in forges and even file-cutting machinery, those of the vertical pestle stamp-mill type. The oldest European illustration of a forge-hammer is perhaps in A Description of the Northern Peoples by Olaus Magnus, dated to 1565 AD. In this woodcut image, there is the scene of three martinets and a waterwheel working wood and leather bellows of the Osmund Bloomery furnace. The recumbent hammer was first depicted in European artwork in an illustration by Sandrart and Zonca (dated 1621 AD).
Types
A trip hammer has the head mounted at the end of a recumbent helve, hence the alternative name of helve hammer. The choice of which type was used in a particular context may have depended on the strain that its operation imposed on the helve. This was normally of wood, mounted in a cast-iron ring (called the hurst) where it pivoted. However, in the 19th century the heaviest helves were sometimes a single casting, incorporating the hurst.
The tilt hammer or tail helve hammer has a pivot at the centre of the helve on which it is mounted, and is lifted by pushing the opposite end to the head downwards. In practice, the head on such hammers seems to have been limited to one hundredweight (about 50 kg), but a very rapid stroke rate was possible. This made it suitable for drawing iron down to the small sizes required by the cutlery trades. There were therefore many such forges known as 'tilts' around Sheffield. They were also used in brass battery works for making brass (or copper) pots and pans. In battery works (at least) it was possible for one power source to operate several hammers. In Germany, tilt hammers of up to 300 kg were used in hammer mills to forge iron. Surviving, working hammers, powered by water wheels, may be seen, for example, at the Frohnauer Hammer in the Ore Mountains.
The belly helve hammer was the kind normally found in a finery forge, used for making pig iron into forgeable bar iron. It was lifted by cams striking the helve between the pivot and the head. The head usually weighed a quarter of a ton, probably because the strain on a wooden helve would have been too great if the head were heavier.
The nose helve hammer seems to have been unusual until the late 18th or early 19th century. This was lifted beyond the head. Surviving nose helves and those in pictures appear to be of cast iron.
Demise
The steam-powered drop hammer replaced the trip hammer (at least for the largest forgings). James Nasmyth invented it in 1839 and patented it in 1842. However, by then forging had become less important for the iron industry, following the improvements to the rolling mill that accompanied the adoption of puddling from the end of the 18th century. Nevertheless, hammers continued to be needed for shingling.
| Technology | Industrial machinery | null |
1528221 | https://en.wikipedia.org/wiki/Sheet%20metal | Sheet metal | Sheet metal is metal formed into thin, flat pieces, usually by an industrial process.
Thicknesses can vary significantly; extremely thin sheets are considered foil or leaf, and pieces thicker than 6 mm (0.25 in) are considered plate, such as plate steel, a class of structural steel.
Sheet metal is available in flat pieces or coiled strips. The coils are formed by running a continuous sheet of metal through a roll slitter.
In most of the world, sheet metal thickness is consistently specified in millimeters. In the U.S., the thickness of sheet metal is commonly specified by a traditional, non-linear measure known as its gauge. The larger the gauge number, the thinner the metal. Commonly used steel sheet metal ranges from 30 gauge to about 7 gauge. Gauge differs between ferrous (iron-based) metals and nonferrous metals such as aluminum or copper. Copper thickness, for example, is measured in ounces, representing the weight of copper contained in an area of one square foot. Parts manufactured from sheet metal must maintain a uniform thickness for ideal results.
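Since the ounce rating is a weight per square foot, it converts to an ordinary thickness through the density of copper (about 8.96 g/cm³); a minimal sketch of the conversion:

```python
OZ_TO_G = 28.3495      # grams per ounce
FT2_TO_CM2 = 929.0304  # cm^2 per square foot
CU_DENSITY = 8.96      # density of copper, g/cm^3

def copper_thickness_um(oz_per_ft2: float) -> float:
    """Thickness implied by a copper weight given in oz per square foot."""
    thickness_cm = (oz_per_ft2 * OZ_TO_G) / (CU_DENSITY * FT2_TO_CM2)
    return thickness_cm * 1e4  # cm -> micrometres

print(f"{copper_thickness_um(1.0):.1f} um")  # ~34 um for "1 oz" copper
```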
There are many different metals that can be made into sheet metal, such as aluminium, brass, copper, steel, tin, nickel and titanium. For decorative uses, some important sheet metals include silver, gold, and platinum (platinum sheet metal is also utilized as a catalyst). These metal sheets are processed through different processing technologies, mainly including cold rolling and hot rolling. Sometimes hot-dip galvanizing process is adopted as needed to prevent it from rusting due to constant exposure to the outdoors. Sometimes a layer of color coating is applied to the surface of the cold-rolled sheet to obtain a decorative and protective metal sheet, generally called a color-coated metal sheet.
Sheet metal is used in automobile and truck (lorry) bodies, major appliances, airplane fuselages and wings, tinplate for tin cans, roofing for buildings (architecture), and many other applications. Sheet metal of iron and other materials with high magnetic permeability, also known as laminated steel cores, has applications in transformers and electric machines. Historically, an important use of sheet metal was in plate armor worn by cavalry, and sheet metal continues to have many decorative uses, including in horse tack. Sheet metal workers are also known as "tin bashers" (or "tin knockers"), a name derived from the hammering of panel seams when installing tin roofs.
History
Hand-hammered metal sheets have been used since ancient times for architectural purposes. Water-powered rolling mills replaced the manual process in the late 17th century. The process of flattening metal sheets required large rotating iron cylinders which pressed metal pieces into sheets. The metals suited for this were lead, copper, zinc, iron and later steel. Tin was often used to coat iron and steel sheets to prevent rusting. This tin-coated sheet metal was called "tinplate". Sheet metals appeared in the United States in the 1870s, being used for shingle roofing, stamped ornamental ceilings, and exterior façades. Sheet metal ceilings were only popularly known as "tin ceilings" later, as manufacturers of the period did not use the term. The popularity of both shingles and ceilings encouraged widespread production. With further advances in steel sheet metal production in the 1890s, the promise of being cheap, durable, easy to install, lightweight and fireproof gave the middle class a significant appetite for sheet metal products. During the 1930s and WWII, however, metals became scarce and the sheet metal industry began to collapse. Some American companies, such as the W.F. Norman Corporation, were able to stay in business by making other products until historic preservation projects aided the revival of ornamental sheet metal.
Materials
Stainless steel
Grade 304 is the most common of the three grades. It offers good corrosion resistance while maintaining formability and weldability. Available finishes are #2B, #3, and #4. Grade 303 is not available in sheet form.
Grade 316 possesses more corrosion resistance and strength at elevated temperatures than 304. It is commonly used for pumps, valves, chemical equipment, and marine applications. Available finishes are #2B, #3, and #4.
Grade 410 is a heat treatable stainless steel, but it has a lower corrosion resistance than the other grades. It is commonly used in cutlery. The only available finish is dull.
Grade 430 is a popular low-cost alternative to the 300-series grades. It is used when high corrosion resistance is not a primary criterion, and is a common grade for appliance products, often with a brushed finish.
Aluminium
Aluminium is widely used in sheet metal form due to its flexibility, wide range of options, cost effectiveness, and other properties. The four most common aluminium grades available as sheet metal are 1100-H14, 3003-H14, 5052-H32, and 6061-T6.
Grade 1100-H14 is commercially pure aluminium, highly chemical and weather resistant. It is ductile enough for deep drawing and weldable, but has low strength. It is commonly used in chemical processing equipment, light reflectors, and jewelry.
Grade 3003-H14 is stronger than 1100, while maintaining the same formability and low cost. It is corrosion resistant and weldable. It is often used in stampings, spun and drawn parts, mail boxes, cabinets, tanks, and fan blades.
Grade 5052-H32 is much stronger than 3003 while still maintaining good formability. It maintains high corrosion resistance and weldability. Common applications include electronic chassis, tanks, and pressure vessels.
Grade 6061-T6 is a common heat-treated structural aluminium alloy. It is weldable, corrosion resistant, and stronger than 5052, but not as formable. It loses some of its strength when welded. It is used in modern aircraft structures.
Brass
Brass is an alloy of copper, which is widely used as a sheet metal. It has more strength, corrosion resistance and formability when compared to copper while retaining its conductivity.
In sheet hydroforming, variation in incoming sheet coil properties is a common problem for the forming process, especially with materials for automotive applications. Even though an incoming sheet coil may meet tensile test specifications, a high rejection rate is often observed in production due to inconsistent material behavior. Thus there is a strong need for a discriminating method for testing the formability of incoming sheet material. The hydraulic sheet bulge test emulates the biaxial deformation conditions commonly seen in production operations.
Forming limit curves can be determined for materials such as aluminium, mild steel and brass. Theoretical analysis is carried out by deriving governing equations for the equivalent stress and equivalent strain, assuming the bulge to be spherical and applying Tresca's yield criterion with the associated flow rule. For experimental work, circular grid analysis is one of the most effective methods.
Gauge
Use of gauge numbers to designate sheet metal thickness is discouraged by numerous international standards organizations. For example, ASTM states in specification ASTM A480-10a: "The use of gauge number is discouraged as being an archaic term of limited usefulness not having general agreement on meaning."
Manufacturers' Standard Gauge for Sheet Steel is based on an average density of 41.82 lb per square foot per inch thick, equivalent to approximately 8.04 g/cm³. The older United States Standard Gauge is based upon 40 lb per square foot per inch thick. Gauge is defined differently for ferrous (iron-based) and non-ferrous metals (e.g. aluminium and brass).
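That equivalence is a direct unit conversion; a minimal Python check:

```python
LB_TO_G = 453.592      # grams per pound
FT2_TO_CM2 = 929.0304  # cm^2 per square foot
IN_TO_CM = 2.54        # cm per inch

# 41.82 lb per square foot per inch of thickness, expressed as g/cm^3
density = 41.82 * LB_TO_G / (FT2_TO_CM2 * IN_TO_CM)
print(f"{density:.2f} g/cm^3")  # -> 8.04 g/cm^3
```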
The gauge thicknesses shown in column 2 (U.S. standard sheet and plate iron and steel, decimal inch (mm)) seem somewhat arbitrary. The progression of thicknesses is clearer in column 3 (U.S. standard for sheet and plate iron and steel, 64ths inch): the thicknesses first vary by relatively large fractions of an inch at the higher thicknesses, then step down through progressively smaller fractional increments, with the final increments at decimal fractions of an inch.
Some steel tubes are manufactured by folding a single steel sheet into a square/circle and welding the seam together. Their wall thickness has a similar (but distinct) gauge to the thickness of steel sheets.
Tolerances
During the rolling process the rollers bow slightly, which results in the sheets being thinner on the edges. The tolerances in the table and attachments reflect current manufacturing practices and commercial standards and are not representative of the Manufacturer's Standard Gauge, which has no inherent tolerances.
Forming processes
Bending
The equation for estimating the maximum bending force is

$$F_{\max} = \frac{k\,T\,L\,t^{2}}{W},$$

where k is a factor taking into account several parameters including friction. T is the ultimate tensile strength of the metal. L and t are the length and thickness of the sheet metal, respectively. The variable W is the open width of a V-die or wiping die.
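A minimal sketch of this estimate; the material values, and the k of 1.33 often quoted for V-die bending, are illustrative assumptions rather than values from this article:

```python
def max_bending_force(k: float, uts: float, length_mm: float,
                      t_mm: float, die_width_mm: float) -> float:
    """F_max = k*T*L*t^2/W, with UTS in N/mm^2; result in newtons."""
    return k * uts * length_mm * t_mm**2 / die_width_mm

# Example: mild steel (UTS ~ 400 N/mm^2), 1000 mm bend length,
# 2 mm thick sheet, 20 mm V-die opening, k = 1.33 for V-die bending.
F = max_bending_force(1.33, 400.0, 1000.0, 2.0, 20.0)
print(f"{F / 1000:.0f} kN")  # -> ~106 kN
```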
Curling
The curling process is used to form an edge on a ring. This process is used to remove sharp edges. It also increases the moment of inertia near the curled end.
The flare/burr should be turned away from the die. The process is used to curl a material of specific thickness. Tool steel is generally used because of the wear imposed by the operation.
Decambering
Decambering is a metalworking process that removes camber, the horizontal bend, from strip-shaped material. It may be done on a finite-length section or on coils. It resembles the flattening or leveling process, but acts on a deformed edge.
Deep drawing
Drawing is a forming process in which the metal is stretched over a form or die. In deep drawing the depth of the part being made is more than half its diameter. Deep drawing is used for making automotive fuel tanks, kitchen sinks, two-piece aluminum cans, etc. Deep drawing is generally done in multiple steps called draw reductions. The greater the depth, the more reductions are required. Deep drawing may also be accomplished with fewer reductions by heating the workpiece, for example in sink manufacture.
In many cases, material is rolled at the mill in both directions to aid in deep drawing. This leads to a more uniform grain structure which limits tearing and is referred to as "draw quality" material.
Expanding
Expanding is a process of cutting or stamping slits in an alternating pattern, much like the stretcher bond in brickwork, and then stretching the sheet open in an accordion-like fashion. It is used in applications where air and water flow are desired, as well as when light weight is desired at the cost of a solid flat surface. A similar process is used in other materials such as paper to create a low-cost packing paper with better supportive properties than flat paper alone.
Hemming and seaming
Hemming is a process of folding the edge of sheet metal onto itself to reinforce that edge. Seaming is a process of folding two sheets of metal together to form a joint.
Hydroforming
Hydroforming is a process that is analogous to deep drawing, in that the part is formed by stretching the blank over a stationary die. The force required is generated by the direct application of extremely high hydrostatic pressure to the workpiece or to a bladder that is in contact with the workpiece, rather than by the movable part of a die in a mechanical or hydraulic press. Unlike deep drawing, hydroforming usually does not involve draw reductions—the piece is formed in a single step.
Incremental sheet forming
Incremental sheet forming (ISF) is a sheet metal forming process in which the sheet is formed into its final shape by a series of small, incremental deformations.
Ironing
Ironing is a sheet metal forming process that uniformly thins the workpiece in a specific area. It is used to produce parts with a uniform wall thickness and a high height-to-diameter ratio.
It is used in making aluminium beverage cans.
Laser cutting
Sheet metal can be cut in various ways, from hand tools called tin snips up to very large powered shears. With the advances in technology, sheet metal cutting has turned to computers for precise cutting. Many sheet metal cutting operations are based on computer numerically controlled (CNC) laser cutting or multi-tool CNC punch press.
CNC laser cutting involves moving a lens assembly carrying a beam of laser light over the surface of the metal. Oxygen, nitrogen or air is fed through the same nozzle from which the laser beam exits. The metal is heated and burnt by the laser beam, cutting the metal sheet. The quality of the edge can be mirror smooth, and very high precision can be obtained. Cutting speeds on thin sheet can be very high. Most laser cutting systems use a CO2-based laser source with a wavelength of around 10 μm; some more recent systems use a YAG-based laser with a wavelength of around 1 μm.
Photochemical machining
Photochemical machining, also known as photo etching, is a tightly controlled corrosion process which is used to produce complex metal parts from sheet metal with very fine detail. The photo etching process involves photo sensitive polymer being applied to a raw metal sheet. Using CAD designed photo-tools as stencils, the metal is exposed to UV light to leave a design pattern, which is developed and etched from the metal sheet.
Perforating
Perforating is a cutting process that punches multiple small holes close together in a flat workpiece. Perforated sheet metal is used to make a wide variety of surface cutting tools, such as the surform.
Press brake forming
This is a form of bending used to produce long, thin sheet metal parts. The machine that bends the metal is called a press brake. The lower part of the press contains a V-shaped groove called the die. The upper part of the press contains a punch that presses the sheet metal down into the V-shaped die, causing it to bend. There are several techniques used, but the most common modern method is "air bending". Here, the die has a sharper angle than the required bend (typically 85 degrees for a 90 degree bend) and the upper tool is precisely controlled in its stroke to push the metal down the required amount to bend it through 90 degrees. Typically, a general purpose machine has an available bending force of around 25 tons per meter of length. The opening width of the lower die is typically 8 to 10 times the thickness of the metal to be bent (for example, 5 mm material could be bent in a 40 mm die). The inner radius of the bend formed in the metal is determined not by the radius of the upper tool, but by the lower die width. Typically, the inner radius is equal to 1/6 of the V-width used in the forming process.
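The rules of thumb just quoted (die opening of 8 to 10 times the sheet thickness, inner radius of about 1/6 of the V-width) can be captured in a short Python sketch; the function name and the choice of 8 as the multiplier within the quoted range are illustrative assumptions.

```python
# Sketch of the air-bending rules of thumb quoted above: a die opening of
# roughly 8-10x the sheet thickness and an inner bend radius of about 1/6 of
# the V-width. The multiplier of 8 is one choice within the quoted range.

def air_bend_geometry(thickness_mm: float, die_factor: float = 8.0):
    """Return (V-die opening, expected inner radius) in mm for air bending."""
    v_width = die_factor * thickness_mm  # e.g. 5 mm sheet -> 40 mm die
    inner_radius = v_width / 6.0         # rule of thumb from the text
    return v_width, inner_radius

v, r = air_bend_geometry(5.0)
print(f"V-die opening: {v:.0f} mm, inner bend radius: about {r:.1f} mm")
# -> V-die opening: 40 mm, inner bend radius: about 6.7 mm
```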
The press usually has some sort of back gauge to position the depth of the bend along the workpiece. The back gauge can be computer controlled to allow the operator to make a series of bends in a component to a high degree of accuracy. Simple machines control only the backstop; more advanced machines control the position and angle of the stop, its height, and the position of the two reference pegs used to locate the material. The machine can also record the exact position and pressure required for each bending operation, allowing the operator to achieve a perfect 90 degree bend across a variety of operations on the part.
Punching
Punching is performed by placing the sheet of metal stock between a punch and a die mounted in a press. The punch and die are made of hardened steel and are the same shape. The punch is sized to be a very close fit in the die. The press pushes the punch against and into the die with enough force to cut a hole in the stock. In some cases the punch and die "nest" together to create a depression in the stock. In progressive stamping, a coil of stock is fed into a long die/punch set with many stages. Multiple simple shaped holes may be produced in one stage, but complex holes are created in multiple stages. In the final stage, the part is punched free from the "web".
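The force a press must deliver can be estimated with the common engineering approximation force = cut perimeter × sheet thickness × shear strength; this formula is a standard rule of thumb, not one stated in the text above, and all example values in the sketch below are illustrative assumptions.

```python
# Common engineering approximation for punching force (not stated in the
# text above): F = cut perimeter x sheet thickness x shear strength.
# All example values below are illustrative assumptions.
import math

def punch_force(perimeter_m: float, thickness_m: float,
                shear_strength_pa: float) -> float:
    """Estimated force in newtons to shear a hole of the given perimeter."""
    return perimeter_m * thickness_m * shear_strength_pa

# Example: a 20 mm diameter round hole in 1.5 mm steel (~300 MPa shear).
force = punch_force(math.pi * 0.020, 0.0015, 300e6)
print(f"Approximate punch force: {force / 1000:.0f} kN")  # ~28 kN
```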
A typical CNC turret punch has a choice of up to 60 tools in a "turret" that can be rotated to bring any tool to the punching position. A simple shape (e.g. a square, circle, or hexagon) is cut directly from the sheet. A complex shape can be cut out by making many square or rounded cuts around the perimeter. A punch is less flexible than a laser for cutting compound shapes, but faster for repetitive shapes (for example, the grille of an air-conditioning unit). A CNC punch can achieve 600 strokes per minute.
A typical component (such as the side of a computer case) can be cut to high precision from a blank sheet in under 15 seconds by either a press or a laser CNC machine.
Roll forming
A continuous bending operation for producing open profiles or welded tubes with long lengths or in large quantities.
Rolling
Rolling is a metal forming process in which stock passes through one or more pairs of rolls to reduce its thickness and make it uniform. It is classified by the temperature at which it is performed:
Hot rolling: the temperature is above the recrystallisation temperature.
Cold rolling: the temperature is below the recrystallisation temperature.
Warm rolling: the temperature is between those of hot and cold rolling.
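This classification can be illustrated with a minimal sketch; the warm-rolling boundary at half the recrystallisation temperature is an assumption chosen for illustration, not a standard definition.

```python
# Toy illustration of the classification above: rolling is labelled hot,
# warm or cold relative to the recrystallisation temperature of the stock.
# The warm-rolling band used here is an assumption for illustration only.

def classify_rolling(work_temp_c: float, recryst_temp_c: float) -> str:
    if work_temp_c >= recryst_temp_c:
        return "hot rolling"   # above the recrystallisation temperature
    if work_temp_c >= 0.5 * recryst_temp_c:
        return "warm rolling"  # between hot and cold (assumed boundary)
    return "cold rolling"      # well below the recrystallisation temperature

# Example for a steel with a recrystallisation temperature of about 600 C.
for t in (900, 400, 25):
    print(t, "->", classify_rolling(t, 600))
```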
Spinning
Spinning is used to make tubular (axis-symmetric) parts by fixing a piece of sheet stock to a rotating form (mandrel). Rollers or rigid tools press the stock against the form, stretching it until the stock takes the shape of the form. Spinning is used to make rocket motor casings, missile nose cones, satellite dishes and metal kitchen funnels.
Stamping
Stamping includes a variety of operations such as punching, blanking, embossing, bending, flanging, and coining; simple or complex shapes can be formed at high production rates; tooling and equipment costs can be high, but labor costs are low.
Alternatively, the related techniques repoussé and chasing have low tooling and equipment costs, but high labor costs.
Water jet cutting
A water jet cutter, also known as a waterjet, is a tool capable of a controlled erosion into metal or other materials using a jet of water at high velocity and pressure, or a mixture of water and an abrasive substance.
Wheeling
The process of using an English wheel is called wheeling. An English wheel is used by a craftsperson to form compound curves in a flat sheet of aluminium or steel. It is a costly process, as highly skilled labour is required, but many different panels can be produced with the same tooling. A stamping press is used instead for high production volumes.
Sheet metal fabrication
Sheet metal fabrication is the use of sheet metal, through a comprehensive cold-working process including bending, shearing, punching, laser cutting, water jet cutting, riveting and splicing, to make a final product such as a computer chassis, washing machine shell or refrigerator door panel. There is currently no uniform academic definition, but a common feature of the process is that the material is generally a thin sheet and the thickness of most of the part is not changed.
Fasteners
Fasteners that are commonly used on sheet metal include: clecos, rivets, and sheet metal screws.
| Technology | Metallurgy | null |
1528883 | https://en.wikipedia.org/wiki/Aircraft%20fairing | Aircraft fairing | An aircraft fairing is a structure whose primary function is to produce a smooth outline and reduce drag.
These structures are covers for gaps and spaces between parts of an aircraft to reduce form drag and interference drag, and to improve appearance.
Types
On aircraft, fairings are commonly found on:
Belly fairing
Also called a "ventral fairing", it is located on the underside of the fuselage between the main wings. It can also cover additional cargo storage or fuel tanks.
Cockpit fairing
Also called a "cockpit pod", it protects the crew on ultralight trikes. Commonly made from fiberglass, it may also incorporate a windshield.
Elevator and horizontal stabilizer tips
Elevator and horizontal stabilizer tip fairings smooth out the airflow at the tips.
Fin and rudder tip fairings
Fin and rudder tip fairings reduce drag at low angles of attack, but also reduce the stall angle, so the fairing of control surface tips depends on the application.
Fillets
Fillets smooth the airflow at the junction between two components, such as the fuselage and wing.
Fixed landing gear junctions
Landing gear fairings reduce drag at these junctions.
Flap track fairings
Fairings are needed to enclose the flap operating mechanism when the flap is up. They open up as the flap comes down and may also pivot to allow the necessary sideways movement of the extending mechanism which occurs on swept-wing installations.
Spinner
To protect and streamline the propeller hub.
Strut-to-wing and strut-to-fuselage junctions
Strut end fairings reduce drag at these junctions.
Tail cones
Tail cones streamline the rear extremity of a fuselage by eliminating any base area which is the source of base drag.
Wing root
Wing roots are often faired to reduce interference drag between the wing and the fuselage. On top of and below the wing, the fairing consists of a small rounded edge that reduces the exposed surface and thus friction drag. At the leading and trailing edges it consists of a much larger taper that smooths out the pressure differences: high pressure at the leading and trailing edges, low pressure on top of the wing and around the fuselage.
Wing tips
Wing tips are often formed as complex shapes to reduce vortex generation and so also drag, especially at low speed.
Wheels on fixed gear aircraft
Wheel fairings are often called "wheel pants" or "speed fairings" in North America, or "wheel spats" or "trousers" in the United Kingdom, the latter enclosing both the wheel and the landing gear leg. These fairings are a trade-off in advantages: they increase the frontal and surface area, but also provide a smooth surface and a faired nose and tail for laminar flow, in an attempt to reduce the turbulence created by the round wheel and its associated gear legs and brakes. They also serve the important function of preventing mud and stones from being thrown upwards against the wings or fuselage, or into the propeller on a pusher craft.
| Technology | Aircraft components | null |
1529114 | https://en.wikipedia.org/wiki/Betula%20papyrifera | Betula papyrifera | Betula papyrifera (paper birch, also known as (American) white birch and canoe birch) is a short-lived species of birch native to northern North America. Paper birch is named after the tree's thin white bark, which often peels in paper-like layers from the trunk. Paper birch is often one of the first species to colonize a burned area within the northern latitudes, and is an important species for moose browsing. Primary commercial uses for paper birch wood are as boltwood and sawlogs, while secondary products include firewood and pulpwood. It is the provincial tree of Saskatchewan and the state tree of New Hampshire.
Description
Betula papyrifera is a medium-sized deciduous tree. Within forests, it often grows with a single trunk, but when grown as a landscape tree it may develop multiple trunks or branch close to the ground.
Paper birch is a typically short-lived species. It handles heat and humidity poorly and may live only 30 years in zones six and up, while trees in colder-climate regions can grow for more than 100 years. B. papyrifera will grow in many soil types, from steep rocky outcrops to flat muskegs of the boreal forest. Best growth occurs in deeper, well drained to dry soils, depending on the location.
In older trees, the bark is white, commonly brightly so, flaking in fine horizontal strips to reveal a pinkish or salmon-colored inner bark. It often has small black marks and scars. In individuals younger than five years, the bark appears a brown red color with white lenticels, making the tree much harder to distinguish from other birches. The bark is highly weather-resistant. It has a high oil content and this gives it its waterproof and weather-resistant characteristics. Often, the wood of a downed paper birch will rot away, leaving the hollow bark intact.
The leaves are dark green and smooth on the upper surface; the lower surface is often pubescent on the veins. They are alternately arranged on the stem, oval to triangular in shape, and about two-thirds as wide as they are long. The leaf is rounded at the base, tapering to an acutely pointed tip, and has a doubly serrated margin with relatively sharp teeth. Each leaf is connected to the stem by a petiole.
The fall foliage is a bright yellow that contributes to the vivid colors within the northern deciduous forest.
The leaf buds are conical and small and green-colored with brown edges.
The stems are a reddish-brown color and may be somewhat hairy when young.
The flowers are wind-pollinated catkins; the female flowers are greenish and grow from the tips of twigs, while the male (staminate) flowers are brownish. The tree flowers from mid-April to June depending on location. Paper birch is monoecious, meaning that one plant has both male and female flowers.
The fruit matures in the fall. The mature fruit is composed of numerous tiny winged seeds packed between the catkin bracts. They drop between September and spring. The tree starts producing seeds at about 15 years of age, with peak seed production between 40 and 70 years. Seed production is irregular: a heavy seed crop is typically produced every other year, at least some seeds are produced every year, and far more seeds are produced in bumper years than in average years. The seeds are light and blow in the wind to new areas; they also may blow along the surface of snow.
The roots are generally shallow, occupying the upper layers of the soil, and do not form taproots. High winds are more likely to break the trunk than to uproot the tree.
Genetics and taxonomy
B. papyrifera hybridizes with other species within the genus Betula.
Several varieties are recognized:
B. p. var. papyrifera, the typical paper birch
B. p. var. cordifolia, the eastern paper birch (now a separate species); see Betula cordifolia
B. p. var. kenaica, the Alaskan paper birch (also treated as a separate species by some authors); see Betula kenaica
B. p. var. subcordata, the northwestern paper birch
B. p. var. neoalaskana, the Alaska paper birch (often treated as a separate species); see Betula neoalaskana
Distribution
Betula papyrifera is mostly confined to Canada and the far northern United States. It is found in interior (var. humilus) and south-central (var. kenaica) Alaska and in all provinces and territories of Canada, except Nunavut, as well as the far northern continental United States. Isolated patches are found as far south as the Hudson Valley of New York and Pennsylvania, northern Connecticut, and Washington. High elevation stands are also in mountains to North Carolina, New Mexico, and Colorado. The most southerly stand in the Western United States is located in Long Canyon in the City of Boulder Open Space and Mountain Parks. This is an isolated Pleistocene relict that most likely reflects the southern reach of boreal vegetation into the area during the last Ice Age.
Ecology
In Alaska, paper birch often naturally grows in pure stands by itself or with black or white spruce. In the eastern and central regions of its range, it is often associated with red spruce and balsam fir. It may also be associated with big-toothed aspen, yellow birch, Betula populifolia, and maples.
Shrubs often associated with paper birch in the eastern part of its range include beaked hazel (Corylus cornuta), common bearberry (Arctostaphylos uva-ursi), dwarf bush-honeysuckle (Diervilla lonicera), wintergreen (Gaultheria procumbens), wild sarsaparilla (Aralia nudicaulis), blueberries (Vaccinium spp.), raspberries and blackberries (Rubus spp.), elderberry (Sambucus spp.), and hobblebush (Viburnum alnifolium).
Successional relationships
Betula papyrifera is a pioneer species, meaning it is often one of the first trees to grow in an area after other trees are removed by some sort of disturbance. Typical disturbances colonized by paper birch are wildfire, avalanche, or windthrow areas where the wind has blown down all trees. When it grows in these pioneer, or early successional, woodlands, it often forms stands of trees where it is the only species.
Paper birch is considered well adapted to fires because it recovers quickly by means of reseeding the area or regrowth from the burned tree. The lightweight seeds are easily carried by the wind to burned areas, where they quickly germinate and grow into new trees. Paper birch is adapted to ecosystems where fires occur every 50 to 150 years. For example, it is frequently an early invader after fire in black spruce boreal forests. As paper birch is a pioneer species, finding it within mature or climax forests is rare, because it will be overcome by more shade-tolerant trees as secondary succession progresses.
For example, in Alaskan boreal forests, a paper birch stand 20 years after a fire may be densely stocked, but after 60 to 90 years the number of trees decreases as spruce replaces the birch. After approximately 75 years, the birch will start dying, and by 125 years most paper birch will have disappeared unless another fire burns the area.
Paper birch trees themselves have varied reactions to wildfire. A group, or stand, of paper birch is not particularly flammable. The canopy often has a high moisture content and the understory is often lush green. As such, conifer crown fires often stop once they reach a stand of paper birch or become slower-moving ground fires. Since these stands are fire-resistant, they may become seed trees to reseed the area around them that was burned. However, in dry periods, paper birch is flammable and will burn rapidly. As the bark is flammable, it often will burn and may girdle the tree.
Wildlife
Birch bark is a winter staple food for moose. The nutritional quality is poor because of the large quantities of lignin, which make digestion difficult, but is important to wintering moose because of its sheer abundance. Moose prefer paper birch over aspen, alder, and balsam poplar, but they prefer willow (Salix spp.) over birch and the other species listed. Although moose consume large amounts of paper birch in the winter, if they were to eat only paper birch, they may starve.
Although white-tailed deer consider birch a "secondary-choice food," it is an important dietary component. In Minnesota, white-tailed deer eat considerable amounts of paper birch leaves in the fall. Snowshoe hares browse paper birch seedlings, and grouse eat the buds. Porcupines and beavers feed on the inner bark. The seeds of paper birch are an important part of the diet of many birds and small mammals, including chickadees, redpolls, voles, and ruffed grouse. Yellow-bellied sapsuckers drill holes in the bark of paper birch to get at the sap; this is one of their favorite trees for feeding on.
Conservation
As of 2023, the conservation status of paper birch is considered of least concern according to the International Union for Conservation of Nature (IUCN). However, the species is considered vulnerable in Indiana and Nebraska, imperiled in Illinois, Virginia, and West Virginia, and critically imperiled in Colorado and Tennessee. These areas represent the southerly and southwesterly edge of the paper birch's range.
Uses
Betula papyrifera has a moderately heavy white wood, which makes excellent high-yielding firewood if seasoned properly. Although paper birch does not have a very high overall economic value, it is used in furniture, flooring, popsicle sticks, pulpwood (for paper), plywood, and oriented strand board. The wood can also be made into spears, bows, arrows, snowshoes, sleds, and other items. When used as pulp for paper, the stems and other non-trunk wood are lower in quantity and quality of fibers, and consequently have less mechanical strength; nonetheless, this wood is still suitable for use in paper.
The sap is boiled down to produce birch syrup. The raw sap contains 0.9% carbohydrates (glucose, fructose, sucrose), compared with 2–3% in sugar maple sap. The sap flows later in the season than that of maples. Currently, only a few small-scale operations in Alaska and Yukon produce birch syrup from this species.
Bark
Its bark is an excellent fire starter; it ignites at high temperatures even when wet. Of 24 species tested, its bark has the highest energy density per unit weight.
Birch bark is used in a number of crafts by various Native American tribes (e.g. the Ojibwe). In the Anishinaabe language, birch bark is called wiigwaas. Panels of bark can be fitted or sewn together to make cartons and boxes. The bark is also used to create a durable waterproof layer in the construction of sod-roofed houses. Many indigenous groups, such as the Wabanaki peoples, use birchbark for making various items, including canoes, containers, and wigwams. It is also used as a backing for porcupine quillwork and moosehair embroidery. Thin sheets can be employed as a medium for the art of birchbark biting.
Plantings
Paper birch is planted to reclaim old mines and other disturbed sites; bare-root stock or small saplings are often used for this purpose. Since paper birch is an adaptable pioneer species, it is a prime candidate for reforesting drastically disturbed areas.
Paper birch is frequently planted as an ornamental because of its graceful form and attractive bark. The bark changes to the white color at about 3 years of growth. Paper birch grows best in USDA zones 2–6, due to its intolerance of high temperatures. Betula nigra, or river birch, is recommended for warm-climate areas warmer than zone 6, where paper birch is rarely successful. B. papyrifera is more resistant to the bronze birch borer than Betula pendula, which is similarly planted as a landscape tree.
Pests
Bronze birch borer is a major pest among birch species. Under repeated infestation or stress to the tree from other sources, bronze birch borers may kill the tree. The insect bores into the sapwood, beginning at the top of the tree and causing death of the tree crown. The insect has a D-shaped emergence hole where it chews out of the tree. Healthy trees are resistant to the borer, but when grown in less than ideal conditions, the defense mechanisms of the tree may not function properly. Chemical controls exist.
Birch skeletonizers are moths which lay their eggs on the surfaces of birch leaves. Upon hatching, the larvae feed on the undersides of the leaves and cause browning.
Birch leafminer is a species of sawfly and a common pest that feeds from the inside of the leaf and causes the leaf to turn brown. It was introduced to the United States in the 1920s. The first generation appears in May but there will be several generations per year. Severe infestations may stress the tree and make it more vulnerable to the bronze birch borer.
| Biology and health sciences | Fagales | Plants |
1529187 | https://en.wikipedia.org/wiki/INTEGRAL | INTEGRAL | The INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) is a space telescope for observing gamma rays of energies up to 8 MeV. It was launched by the European Space Agency (ESA) into Earth orbit in 2002, and is designed to provide imaging and spectroscopy of cosmic sources. In the MeV energy range, it is the most sensitive gamma ray observatory in space. It is sensitive to higher energy photons than X-ray instruments such as NuSTAR, the Neil Gehrels Swift Observatory, and XMM-Newton, and to lower energies than other gamma-ray instruments such as Fermi and HESS.
Photons in INTEGRAL's energy range are emitted by relativistic and supra-thermal particles in violent sources, radioactivity from unstable isotopes produced during nucleosynthesis, X-ray binaries, and astronomical transients of all types, including gamma-ray bursts. The spacecraft's instruments have very wide fields of view, which is particularly useful for detecting gamma-ray emission from transient sources as they can continuously monitor large parts of the sky.
INTEGRAL is an ESA mission with additional contributions from European member states including Italy, France, Germany, and Spain. Cooperation partners are the Russian Space Agency with IKI (military CP Command Punkt KW) and NASA.
As of June 2023, INTEGRAL continues to operate despite the loss of its thrusters through the use of its reaction wheels and solar radiation pressure.
Mission
Radiation more energetic than optical light, such as ultraviolet, X-rays, and gamma rays, cannot penetrate Earth's atmosphere, so direct observations must be made from space. INTEGRAL operates as an observatory: scientists can propose observing time on their desired target regions, and data become public after a proprietary period of up to one year.
INTEGRAL was launched from the Russian Baikonur spaceport, in Kazakhstan. The 2002 launch aboard a Proton-DM2 rocket achieved a 3-day elliptical orbit with an apogee of nearly 160,000 km and a perigee above 2,000 km, hence mostly beyond the radiation belts, which would otherwise lead to high instrumental backgrounds from charged-particle activation. The spacecraft and instruments are controlled from ESOC in Darmstadt, Germany, ESA's control centre, through ground stations in Belgium (Redu) and California (Goldstone).
By 2015, fuel usage was much lower than predicted. INTEGRAL has far exceeded its planned lifetime of 2+3 years, and is set to enter Earth's atmosphere in 2029 as a definite end of the mission. Its orbit was adjusted in January/February 2015, using half of the then-remaining fuel, to ensure a safe (southern) reentry, predicted for 2029 as a result of lunar and solar perturbations.
In July 2020, INTEGRAL put itself into safe mode, and it appeared that the thrusters had failed. Since then, alternative algorithms to slew the spacecraft and unload the reaction wheels have been developed and tested.
In September 2021 a single event upset triggered a sequence of events that put INTEGRAL into an uncontrolled tumbling state, considered to be a 'mission critical anomaly'. The operations team used the reaction wheels to recover attitude control.
In March 2023, INTEGRAL science operations were extended to the end of 2024, which will be followed by a two-year post-operations phase and further monitoring of the spacecraft until its estimated reentry in February 2029.
Also in March 2023, a new software based safe mode was tested that would use reaction wheels (rather than the failed thrusters).
Spacecraft
The spacecraft body ("service module") is a copy of the XMM-Newton body. This saved development costs and simplified integration with infrastructure and ground facilities. An adapter was necessary to mate with the different launch vehicle, though. However, the denser instruments used for gamma rays and hard X-rays make INTEGRAL the heaviest scientific payload ever flown by ESA.
The body is constructed largely of composites. Propulsion is by a hydrazine monopropellant system, containing 544 kg of fuel in four exposed tanks. The titanium tanks were charged with gas to 24 bar (2.4 MPa) at 30 °C, and have tank diaphragms. Attitude control is via a star tracker, multiple Sun sensors (ESM), and multiple momentum wheels. The dual solar arrays, spanning 16 meters when deployed and producing 2.4 kW at beginning of life (BoL), are backed up by dual nickel-cadmium battery sets.
The instrument structure ("payload module") is also composite. A rigid base supports the detector assemblies, and an H-shaped structure holds the coded masks approximately 4 meters above their detectors. The payload module can be built and tested independently from the service module, reducing cost.
Alenia Spazio (now Thales Alenia Space Italia) was the spacecraft prime contractor.
Instruments
Four instruments with large fields-of-view are co-aligned on this platform, to study targets across such a wide energy range of almost two orders of magnitude in energy (other astronomy instruments in X-rays or optical cover much smaller ranges of factors of a few at most). Imaging is achieved by coded masks casting a shadowgram onto pixelised cameras; the tungsten masks were provided by the University of Valencia, Spain.
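As a toy illustration of the coded-mask principle just described, the following one-dimensional Python sketch casts a shadowgram of a random mask and recovers a source position by cross-correlation; it is a didactic sketch, not INTEGRAL's actual reconstruction pipeline, and all names and values in it are assumptions.

```python
# Toy 1-D illustration of coded-mask imaging as described above: a source
# casts a shadowgram of the mask on the detector, and the sky is recovered
# by cross-correlating the shadowgram with the mask pattern. This is a
# didactic sketch, not INTEGRAL's actual reconstruction pipeline.
import numpy as np

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=64)   # random open (1) / closed (0) elements
sky = np.zeros(64)
sky[20] = 100.0                      # one point source at sky position 20

# Each source position casts a cyclically shifted shadow of the mask.
shadowgram = sum(sky[s] * np.roll(mask, s) for s in range(64))

# Decode by correlating the shadowgram with a mean-subtracted mask.
decode = np.array([np.dot(shadowgram, np.roll(mask - mask.mean(), s))
                   for s in range(64)])
print("Reconstructed source position:", int(np.argmax(decode)))  # -> 20
```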
The INTEGRAL imager, IBIS (Imager on-Board the INTEGRAL Satellite) observes from 15 keV (hard X-rays) to 10 MeV (gamma rays). Angular resolution is 12 arcmin, enabling a bright source to be located to better than 1 arcmin. A 95 x 95 mask of rectangular tungsten tiles sits 3.2 meters above the detectors. The detector system contains a forward plane of 128 x 128 Cadmium-Telluride tiles (ISGRI- Integral Soft Gamma-Ray Imager), backed by a 64 x 64 plane of Caesium-Iodide tiles (PICsIT- Pixellated Caesium-Iodide Telescope). ISGRI is sensitive up to 1 MeV, while PICsIT extends to 10 MeV. Both are surrounded by passive shields of tungsten and lead. IBIS was provided by PI institutes in Rome/Italy and Paris/France.
The spectrometer aboard INTEGRAL is SPI, the SPectrometer of INTEGRAL. It was conceived and assembled by the French space agency CNES, with PI institutes in Toulouse, France, and Garching, Germany. It observes radiation between 20 keV and 8 MeV. SPI has a coded mask of hexagonal tungsten tiles above a detector plane of 19 germanium crystals (also packed hexagonally). The high energy resolution of 2 keV at 1 MeV is sufficient to resolve all candidate gamma-ray lines. The Ge crystals are actively cooled to about 80 K by a mechanical system of Stirling coolers.
IBIS and SPI use active detectors to detect and veto charged particles that lead to background radiation. The SPI ACS (AntiCoincidence Shield) consists of BGO scintillator blocks surrounding the camera and aperture, detecting all charged particles, as well as photons exceeding an energy of about 75 keV, that would hit the instrument from directions other than the aperture. A thin layer of plastic scintillator behind the tungsten tiles serves as an additional charged-particle detector within the aperture.
The large effective area of the ACS turned out to be useful as an instrument in its own right. Its all-sky coverage and sensitivity make it a natural gamma-ray burst detector, and a valued component of the IPN (InterPlanetary Network).
Dual JEM-X units provide additional information on sources at soft and hard X-rays, from 3 to 35 keV. Aside from broadening the spectral coverage, imaging is more precise due to the shorter wavelength. Detectors are gas scintillators (xenon plus methane) in a microstrip layout, below a mask of hexagonal tiles.
INTEGRAL includes an Optical Monitor (OMC) instrument, sensitive from 500 to 580 nm. It acts as a framing aid and can monitor the activity and state of some brighter targets; for example, it was useful for monitoring the light of supernova SN 2014J over several months.
The spacecraft also includes a radiation monitor, INTEGRAL Radiation Environment Monitor (IREM), to note the orbital background for calibration purposes. IREM has an electron and a proton channel, though radiation up to cosmic rays can be sensed. Should the background exceed a preset threshold, IREM can shut down the instruments.
Scientific results
INTEGRAL contributes to multi-messenger astronomy, detecting gamma rays from the first merger of two neutron stars observed in gravitational waves, and from a fast radio burst. By 2018, approximately 5,600 scientific papers had been published, averaging one every 29 hours since the launch.
| Technology | Space-based observatories | null |
1529485 | https://en.wikipedia.org/wiki/Neighbourhood%20%28mathematics%29 | Neighbourhood (mathematics) | In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set.
Definitions
Neighbourhood of a point
If $X$ is a topological space and $p$ is a point in $X,$ then a neighbourhood of $p$ is a subset $V$ of $X$ that includes an open set $U$ containing $p,$ that is, $p \in U \subseteq V \subseteq X.$
This is equivalent to the point $p$ belonging to the topological interior of $V$ in $X.$
The neighbourhood $V$ need not be an open subset of $X.$ When $V$ is open (resp. closed, compact, etc.) in $X,$ it is called an open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.) of $p.$ Some authors require neighbourhoods to be open, so it is important to note their conventions.
A set that is a neighbourhood of each of its points is open since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle.
The collection of all neighbourhoods of a point is called the neighbourhood system at the point.
Neighbourhood of a set
If $S$ is a subset of a topological space $X,$ then a neighbourhood of $S$ is a set $V$ that includes an open set $U$ containing $S,$ that is, $S \subseteq U \subseteq V \subseteq X.$ It follows that a set $V$ is a neighbourhood of $S$ if and only if it is a neighbourhood of all the points in $S.$ Furthermore, $V$ is a neighbourhood of $S$ if and only if $S$ is a subset of the interior of $V.$
A neighbourhood of $S$ that is also an open subset of $X$ is called an open neighbourhood of $S.$
The neighbourhood of a point is just a special case of this definition.
In a metric space
In a metric space $M = (X, d),$ a set $V$ is a neighbourhood of a point $p$ if there exists an open ball with center $p$ and radius $r > 0$ such that $B_r(p) = B(p; r) = \{x \in X : d(x, p) < r\}$ is contained in $V.$
$V$ is called a uniform neighbourhood of a set $S$ if there exists a positive number $r$ such that for all elements $p$ of $S,$ the ball $B_r(p)$ is contained in $V.$
Under the same condition, for $r > 0,$ the $r$-neighbourhood $S_r$ of a set $S$ is the set of all points in $X$ that are at distance less than $r$ from $S$ (or equivalently, $S_r$ is the union of all the open balls of radius $r$ that are centered at a point in $S$): $S_r = \bigcup_{p \in S} B_r(p).$
It directly follows that an $r$-neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an $r$-neighbourhood for some value of $r.$
Examples
Given the set of real numbers $\mathbb{R}$ with the usual Euclidean metric and a subset $V$ defined as $V := \bigcup_{n \in \mathbb{N}} B\left(n; \tfrac{1}{n}\right),$
then $V$ is a neighbourhood for the set $\mathbb{N}$ of natural numbers, but it is not a uniform neighbourhood of this set.
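A small numeric check of this example, assuming the standard construction $V := \bigcup_{n \in \mathbb{N}} B(n; 1/n),$ can be sketched in Python; the helper name in_V is of course an illustrative assumption.

```python
# Numeric sketch of the example above, with V the union of the open balls
# B(n; 1/n) around the natural numbers. The helper name in_V is illustrative.

def in_V(x: float) -> bool:
    """Is x inside some open ball B(n; 1/n) for a natural number n?"""
    n = round(x)
    # Checking the nearest integer is exact for n >= 2 (radii <= 1/2);
    # the n = 1 ball has radius 1, so it is checked separately.
    return abs(x - 1) < 1.0 or (n >= 2 and abs(x - n) < 1.0 / n)

# V is a neighbourhood of every natural number...
print(all(in_V(n) for n in range(1, 1000)))   # True

# ...but no single radius r > 0 works for all of them: for fixed r, the
# points n + r/2 eventually fall outside V as the ball radii 1/n shrink.
r = 0.1
print([in_V(n + r / 2) for n in (2, 5, 50)])  # [True, True, False]
```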
Topology from neighbourhoods
The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points.
A neighbourhood system on $X$ is the assignment of a filter $N(x)$ of subsets of $X$ to each $x$ in $X,$ such that
the point $x$ is an element of each $U$ in $N(x);$
each $U$ in $N(x)$ contains some $V$ in $N(x)$ such that for each $y$ in $V,$ $U$ is in $N(y).$
One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system.
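The construction of open sets from a neighbourhood system can be made concrete on a finite set; in the following Python sketch the three-point neighbourhood system is an assumption chosen for illustration.

```python
# Finite sketch of the correspondence described above: given a neighbourhood
# assignment N on X = {0, 1, 2}, the open sets are exactly those sets that
# are a neighbourhood of each of their points. The particular system below
# is an assumption chosen for illustration.
from itertools import combinations

X = (0, 1, 2)
N = {  # N[x] lists the neighbourhoods of x (each family is a filter on X)
    0: [{0}, {0, 1}, {0, 2}, {0, 1, 2}],
    1: [{0, 1}, {0, 1, 2}],
    2: [{0, 1, 2}],
}

def is_open(S: set) -> bool:
    """S is open iff it is a neighbourhood of every point it contains."""
    return all(S in N[x] for x in S)

opens = [set(c) for k in range(len(X) + 1)
         for c in combinations(X, k) if is_open(set(c))]
print(opens)  # -> [set(), {0}, {0, 1}, {0, 1, 2}]
```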
Uniform neighbourhoods
In a uniform space $S = (X, \Phi),$ $V$ is called a uniform neighbourhood of $P$ if there exists an entourage $U \in \Phi$ such that $V$ contains all points of $X$ that are $U$-close to some point of $P;$ that is, $U[x] \subseteq V$ for all $x \in P.$
Deleted neighbourhood
A deleted neighbourhood of a point $p$ (sometimes called a punctured neighbourhood) is a neighbourhood of $p$ without $\{p\}.$ For instance, the interval $(-1, 1) = \{y : -1 < y < 1\}$ is a neighbourhood of $p = 0$ in the real line, so the set $(-1, 0) \cup (0, 1) = (-1, 1) \setminus \{0\}$ is a deleted neighbourhood of $0.$ A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things).
| Mathematics | Topology | null |
1529518 | https://en.wikipedia.org/wiki/Aspergillus | Aspergillus | Aspergillus () is a genus consisting of several hundred mold species found in various climates worldwide.
Aspergillus was first catalogued in 1729 by the Italian priest and biologist Pier Antonio Micheli. Viewing the fungi under a microscope, Micheli was reminded of the shape of an aspergillum (holy water sprinkler), from Latin spargere (to sprinkle), and named the genus accordingly. The aspergillum is an asexual spore-forming structure common to all Aspergillus species; around one-third of species are also known to have a sexual stage. While some species of Aspergillus are known to cause fungal infections, others are of commercial importance.
Taxonomy
Species
As of March 2010, Aspergillus comprised 837 species of fungi. Notable species placed in Aspergillus include:
Aspergillus flavus is a notable plant pathogen impacting crop yields and a common cause of aspergillosis.
Aspergillus fumigatus is the most common cause of aspergillosis in individuals with an immunodeficiency.
Aspergillus nidulans has seen heavy use as research organism in cell biology.
Aspergillus niger is used in the chemical industry for a variety of applications, while also being a known food contaminant and a possible pathogen to humans.
Aspergillus oryzae and A. sojae are used in East Asian cuisine in the production of sake, soy sauce and other fermented food products.
Aspergillus terreus is used in the production of organic acids but can also cause opportunistic infections in humans.
Inner taxonomy
The expansive genus Aspergillus is currently divided into six subgenera, many of which are further split into a total of 27 sections.
Subgenus Circumdati, divided into 10 sections.
Subgenus Nidulantes, divided into 9 sections.
Subgenus Fumigati, divided into 4 sections.
Subgenus Aspergillus, divided into 2 sections.
Subgenus and section Cremei
Subgenus and section Polypaecilum
Growth and distribution
Aspergillus is defined as a group of conidial fungi—that is, fungi in an asexual state. Some of them, however, are known to have a teleomorph (sexual state) in the Ascomycota. DNA evidence shows that all members of the genus Aspergillus belong to the phylum Ascomycota.
Members of the genus possess the ability to grow where a high osmotic pressure exists (high concentration of sugar, salt, etc.). Aspergillus species are highly aerobic and are found in almost all oxygen-rich environments, where they commonly grow as molds on the surface of a substrate, as a result of the high oxygen tension. Commonly, fungi grow on carbon-rich substrates like monosaccharides (such as glucose) and polysaccharides (such as amylose). Aspergillus species are common contaminants of starchy foods (such as bread and potatoes), and grow in or on many plants and trees.
In addition to growth on carbon sources, many species of Aspergillus demonstrate oligotrophy where they are capable of growing in nutrient-depleted environments, or environments with a complete lack of key nutrients. Aspergillus niger is a prime example of this; it can be found growing on damp walls, as a major component of mildew.
Several species of Aspergillus, including A. niger and A. fumigatus, will readily colonise buildings, favouring warm and damp or humid areas such as bathrooms and around window frames.
Aspergillus species are found in millions of pillows.
Commercial importance
Species of Aspergillus are important medically and commercially. Some species can cause infection in humans and other animals. Some infections found in animals have been studied for years; some species found in animals have been described as new and specific to the investigated disease, while others correspond to names already in use for organisms such as saprophytes. More than 60 Aspergillus species are medically relevant pathogens. In humans, they cause a range of diseases, such as infections of the external ear, skin lesions, and ulcers classed as mycetomas.
Other species are important in commercial microbial fermentations. For example, alcoholic beverages such as Japanese sake are often made from rice or other starchy ingredients (like manioc), rather than from grapes or malted barley. Typical microorganisms used to make alcohol, such as yeasts of the genus Saccharomyces, cannot ferment these starches. Therefore, koji mold such as Aspergillus oryzae is used to first break down the starches into simpler sugars.
Members of the genus are also sources of natural products that can be used in the development of medications to treat human disease. Aspergillus spp. are known to produce anthraquinone which has commercial importance due to its antibacterial and antifungal properties.
Perhaps the largest application of Aspergillus niger is as the major source of citric acid; this organism accounts for over 99% of global citric acid production, or more than 1.4 million tonnes (>1.5 million US tons) per year. A. niger is also commonly used for the production of native and foreign enzymes, including glucose oxidase, lysozyme, and lactase. In these instances, the culture is rarely grown on a solid substrate, although this is still common practice in Japan, but is more often grown as a submerged culture in a bioreactor. In this way, the most important parameters can be strictly controlled, and maximal productivity can be achieved. This process also makes it far easier to separate the chemical or enzyme of importance from the medium, and is therefore far more cost-effective.
Research
A. nidulans (Emericella nidulans) has been used as a research organism for many years and was used by Guido Pontecorvo to demonstrate parasexuality in fungi. Recently, A. nidulans was one of the pioneering organisms to have its genome sequenced by researchers at the Broad Institute. As of 2008, a further seven Aspergillus species have had their genomes sequenced: the industrially useful A. niger (two strains), A. oryzae, and A. terreus, and the pathogens A. clavatus, A. fischerianus (Neosartorya fischeri), A. flavus, and A. fumigatus (two strains). A. fischerianus is hardly ever pathogenic, but is very closely related to the common pathogen A. fumigatus; it was sequenced in part to better understand A. fumigatus pathogenicity.
Sexual reproduction
Of the 250 species of aspergilli, about 64% have no known sexual state. However, many of these species likely have an as yet unidentified sexual stage. Sexual reproduction occurs in two fundamentally different ways in fungi. These are outcrossing (in heterothallic fungi) in which two different individuals contribute nuclei, and self-fertilization or selfing (in homothallic fungi) in which both nuclei are derived from the same individual. In recent years, sexual cycles have been discovered in numerous species previously thought to be asexual. These discoveries reflect recent experimental focus on species of particular relevance to humans.
A. fumigatus is the most common species to cause disease in immunodeficient humans. In 2009, A. fumigatus was shown to have a heterothallic, fully functional sexual cycle. Isolates of complementary mating types are required for sex to occur.
A. flavus is the major producer of carcinogenic aflatoxins in crops worldwide. It is also an opportunistic human and animal pathogen, causing aspergillosis in immunocompromised individuals. In 2009, a sexual state of this heterothallic fungus was found to arise when strains of opposite mating types were cultured together under appropriate conditions.
A. lentulus is an opportunistic human pathogen that causes invasive aspergillosis with high mortality rates. In 2013, A. lentulus was found to have a heterothallic functional sexual breeding system.
A. terreus is commonly used in industry to produce important organic acids and enzymes, and was the initial source for the cholesterol-lowering drug lovastatin. In 2013, A. terreus was found to be capable of sexual reproduction when strains of opposite mating types were crossed under appropriate culture conditions.
These findings with Aspergillus species are consistent with accumulating evidence, from studies of other eukaryotic species, that sex was likely present in the common ancestor of all eukaryotes.
A. nidulans, a homothallic fungus, is capable of self-fertilization. Selfing involves activation of the same mating pathways characteristic of sex in outcrossing species, i.e. self-fertilization does not bypass required pathways for outcrossing sex, but instead requires activation of these pathways within a single individual.
Among those Aspergillus species that exhibit a sexual cycle, the overwhelming majority in nature are homothallic (self-fertilizing). This observation suggests that Aspergillus species can generally maintain sex even though little genetic variability is produced by homothallic self-fertilization. A. fumigatus, a heterothallic (outcrossing) fungus that occurs in areas with widely different climates and environments, also displays little genetic variability, either within geographic regions or on a global scale, again suggesting that sex, in this case outcrossing sex, can be maintained even when little genetic variability is produced.
Genomics
The simultaneous publication of three Aspergillus genome manuscripts in Nature in December 2005 established the genus as the leading filamentous fungal genus for comparative genomic studies. Like most major genome projects, these efforts were collaborations between a large sequencing centre and the respective community of scientists. For example, the Institute for Genome Research (TIGR) worked with the A. fumigatus community. A. nidulans was sequenced at the Broad Institute. A. oryzae was sequenced in Japan at the National Institute of Advanced Industrial Science and Technology. The Joint Genome Institute of the Department of Energy has released sequence data for a citric acid-producing strain of A. niger. TIGR, now renamed the J. Craig Venter Institute, is currently spearheading a project on the A. flavus genome.
Aspergillus is characterized by high levels of genetic diversity and, using protostome divergence as a scale, is as diverse as the vertebrates, although both inter- and intra-specific genome structure is relatively plastic. The genomes of some Aspergillus species, such as A. flavus and A. oryzae, are gene-rich and around 20% larger than others, such as A. nidulans and A. fumigatus. Several mechanisms could explain this difference; a combination of segmental duplication, genome duplication, and horizontal gene transfer acting in a piecemeal fashion is well supported.
Genome sizes for sequenced species of Aspergillus range from about 29.3 Mb for A. fumigatus to 37.1 Mb for A. oryzae, while the numbers of predicted genes vary from about 9926 for A. fumigatus to about 12,071 for A. oryzae. The genome size of an enzyme-producing strain of A. niger is of intermediate size at 33.9 Mb.
Pathogens
Some Aspergillus species cause serious disease in humans and animals. The most common pathogenic species are A. fumigatus and A. flavus; the latter produces aflatoxin, which is both a toxin and a carcinogen and can contaminate foods such as nuts. The most common species causing allergic disease are A. fumigatus and A. clavatus. Other species are important as agricultural pathogens: Aspergillus spp. cause disease on many grain crops, especially maize, and some variants synthesize mycotoxins, including aflatoxin. Aspergillus can also cause neonatal infections.
A. fumigatus (the most common species) infections are primary pulmonary infections that can potentially become a rapidly necrotizing pneumonia and disseminate. The organism can be differentiated from other common mold infections by the fact that it takes a mold form both in the environment and in the host (unlike Candida albicans, which is a dimorphic mold in the environment and a yeast in the body).
Aspergillosis
Aspergillosis is the group of diseases caused by Aspergillus. The most common species among paranasal sinus infections associated with aspergillosis is A. fumigatus. The symptoms include fever, cough, chest pain, or breathlessness, which also occur in many other illnesses, so diagnosis can be difficult. Usually, only patients with already weakened immune systems or who suffer other lung conditions are susceptible.
In humans, the major forms of disease are:
Acute invasive aspergillosis, a form that grows into surrounding tissue, more common in those with weakened immune systems such as AIDS or chemotherapy patients
Allergic bronchopulmonary aspergillosis, which affects patients with respiratory diseases such as asthma, cystic fibrosis, and sinusitis
Aspergilloma, a "fungus ball" that can form within cavities such as the lung
Disseminated invasive aspergillosis, an infection spread widely through the body
Fungal infections from Aspergillus spores remain one theory for the sickness and untimely deaths of some early Egyptologists and tomb explorers. Ancient spores that grew on the remains of food offerings and mummies sealed in tombs and chambers may have been blown around and inhaled by the excavators; this idea has been linked to the notion of the curse of the pharaohs.
Aspergillosis of the air passages is also frequently reported in birds, and certain species of Aspergillus have been known to infect insects.
Most people inhale Aspergillus into their lungs every day, but generally only the immunocompromised become sick with aspergillosis.
| Biology and health sciences | Basics | Plants |
1529533 | https://en.wikipedia.org/wiki/Caddisfly | Caddisfly | The caddisflies (order Trichoptera) are a group of insects with aquatic larvae and terrestrial adults. There are approximately 14,500 described species, most of which can be divided into the suborders Integripalpia and Annulipalpia on the basis of the adult mouthparts. Integripalpian larvae construct a portable casing to protect themselves as they move around looking for food, while annulipalpian larvae make themselves a fixed retreat in which they remain, waiting for food to come to them. The affinities of the small third suborder Spicipalpia are unclear, and molecular analysis suggests it may not be monophyletic. Also called sedge-flies or rail-flies, the adults are small moth-like insects with two pairs of hairy membranous wings. They are closely related to the Lepidoptera (moths and butterflies) which have scales on their wings; the two orders together form the superorder Amphiesmenoptera.
The aquatic larvae are found in a wide variety of habitats such as streams, rivers, lakes, ponds, spring seeps and temporary waters (vernal pools), and even the ocean. The larvae of many species use silk to make protective cases, which are often strengthened with gravel, sand, twigs, bitten-off pieces of plants, or other debris. The larvae exhibit various feeding strategies, with different species being predators, leaf shredders, algal grazers, or collectors of particles from the water column and benthos. Most adults have short lives during which they do not feed.
In fly fishing, artificial flies called dry flies are tied to imitate adults, while larvae and pupae are imitated with artificial flies called wet flies or nymphs. It is also possible to use them as bait, though this is not as common as artificial flies and is known as bait fishing. Common and widespread genera such as Helicopsyche and Hydropsyche are important in the sport, where caddisflies are known as "sedges". Caddisflies are useful as bioindicators, as they are sensitive to water pollution and are large enough to be assessed in the field. In art, the French artist Hubert Duprat has created works by providing caddis larvae with small grains of gold and precious stones for them to build into decorative cases.
Etymology
The name of the order "Trichoptera" derives from the Greek thrix ("hair"), genitive trichos, and pteron ("wing"), and refers to the fact that the wings of these insects are bristly. The origin of the word "caddis" is unclear, but it dates back to at least as far as Izaak Walton's 1653 book The Compleat Angler, where "cod-worms or caddis" were mentioned as being used as bait. The term cadyss was being used in the fifteenth century for silk or cotton cloth, and "cadice-men" were itinerant vendors of such materials, but a connection between these words and the insects has not been established.
Evolution and phylogeny
Fossil history
Fossil caddisflies have been found in rocks dating back to the Triassic. The largest numbers of fossilised remains are those of larval cases, which are made of durable materials that preserve well. Body fossils of caddisflies are extremely rare, the oldest being from the Early and Middle Triassic, some 230 million years ago, and wings are another source of fossils. The evolution of the group to one with fully aquatic larvae seems to have taken place sometime during the Triassic. The finding of fossils resembling caddisfly larval cases in marine deposits in Brazil may push back the origins of the order to the Early Permian period.
Evolution
Nearly all adult caddisflies are terrestrial, but their larvae and pupae are aquatic. They share this characteristic with several distantly-related groups, namely the dragonflies, mayflies, stoneflies, alderflies and lacewings. The ancestors of all these groups were terrestrial, with open tracheal systems, convergently evolving different types of gills for their aquatic larvae as they took to the water to avoid predation. Caddisflies are the only one of these groups to use silk as part of their lifestyle, a contributing factor to their success and to their status as the most species-rich order of aquatic insects.
About 14,500 species of caddisfly in 45 families have been recognised worldwide, but many more species remain to be described. Most can be divided into the suborders Integripalpia and Annulipalpia on the basis of the adult mouthparts. Adults are characterised by their palps, wing venation and the genitalia of both sexes. The latter two characters have undergone such extensive differentiation among the different superfamilies that the differences between the suborders are not clear-cut. The larvae of annulipalpians are campodeiform (free-living, well-sclerotized, long-legged predators with dorso-ventrally flattened bodies and protruding mouthparts). The larvae of integripalpians are polypod (poorly sclerotized detritivores, with abdominal prolegs in addition to thoracic legs, living permanently in tight-fitting cases). The affinities of the third suborder, Spicipalpia, are unclear; the larvae are free-living with no cases, instead creating net-like traps from silk.
Phylogeny
The cladogram of external relationships, based on molecular analysis, shows the order as a clade, sister to the Lepidoptera, and more distantly related to the Diptera (true flies) and Mecoptera (scorpionflies).
The cladogram of relationships within the order is based on a 2002 molecular phylogeny using ribosomal RNA, a nuclear elongation factor gene, and mitochondrial cytochrome oxidase. The Annulipalpia and Integripalpia are clades, but the relationships within the Spicipalpia are unclear.
Distribution
Caddisflies are found worldwide, with greater diversity in warmer regions. They are associated with bodies of freshwater, the larvae being found in lakes, ponds, rivers, streams and other water bodies. The land caddis, Enoicyla pusilla (family Limnephilidae), lives in the damp litter of the woodland floor. In the United Kingdom, it is found in oakwoods in and around the county of Worcestershire.
Ecology
Caddisfly larvae can be found in all feeding guilds in freshwater habitats. Most early stage larvae and some late stage ones are collector-gatherers, picking up fragments of organic matter from the benthos. Other species are collector-filterers, sieving organic particles from the water using silken nets, or hairs on their legs. Some species are scrapers, feeding on the film of algae and other periphyton that grows on underwater objects in sunlight. Others are shredder-herbivores, chewing fragments off living plant material while others are shredder-detritivores, gnawing at rotting wood or chewing dead leaves that have been pre-processed by bacteria and fungi; most of the nutrients of the latter group come from consumption of the bacteria and fungi. The predatory species either actively hunt their prey, typically other insects, tiny crustaceans and worms, or lie in wait for unwary invertebrates to come too close. A few species feed opportunistically on dead animals or fish, and some Leptoceridae larvae feed on freshwater sponges.
One such opportunistic species is Gumaga nigricula (family: Sericostomatidae), which has been observed scavenging fish carcasses and even bits of deer flesh. This family of caddisflies is typically classified among the shredders, a reminder that caution is needed when assigning macroinvertebrates to strict ecological functional groups, as some may shift their diets opportunistically.
Like mayflies, stoneflies and dragonflies, but to a somewhat lesser extent, caddisflies are an indicator of good water quality; they disappear from streams with polluted water. They are an important part of the food web, both larvae and adults being eaten by many fish. The newly hatched adult is particularly vulnerable as it struggles to the surface after emerging from the submerged pupa, and as it dries its wings. The fish find these new adults easy pickings, and fishing flies resembling them can be successful for anglers at the right time of year.
The adult stage of a caddisfly may survive for only a few weeks; many species do not feed as adults and die soon after breeding, but some species are known to feed on nectar. The winged insects are nocturnal and provide food for night-flying birds, bats, small mammals, amphibians and arthropods. The larval stage lasts much longer, often for one or more years, and has a bigger impact on the environment. The larvae form an important part of the diet of fish such as the trout. The fish acquire them in two ways: either plucking them off vegetation or the stream-bed as the larvae move about, or during the daily behavioural drift; this drift happens during the night for many species of aquatic larvae, or around midday for some cased caddisfly species, and may result from population pressures or serve as a dispersal mechanism. The larvae may drift in great numbers either close to the bottom, in mid-water or just below the surface. The fish swallow them whole, case and all.
Underwater structures
Cases
Caddisflies are best known for the portable cases created by their larvae. About thirty families of caddisfly, members of the suborder Integripalpia, adopt this stratagem. These larvae eat detritus, largely decaying vegetable material, and the dead leaf fragments on which they feed tend to accumulate in hollows, in slow-moving sections of streams and behind stones and tree roots. The cases provide protection to the larvae as they make their way between these resources.
The case is a tubular structure made of silk, secreted from salivary glands near the mouth of the larva, and is started soon after the egg hatches. Various reinforcements may be incorporated into its structure, the nature of the materials and design depending on the larva's genetic makeup; this means that caddisfly larvae can be recognised by their cases down to family, and even genus level. The materials used include grains of sand, larger fragments of rock, bark, sticks, leaves, seeds and mollusc shells. These are neatly arranged and stuck onto the outer surface of the silken tube. As the larva grows, more material is added at the front, and the larva can turn round in the tube and trim the rear end so that it does not drag along the substrate.
Caddisfly cases are open at both ends, the larvae drawing oxygenated water through the posterior end, over their gills, and pumping it out of the wider, anterior end. The larvae move around inside the tubes, and this helps maintain the water current; the lower the oxygen content of the water, the more active the larvae need to be. This mechanism enables caddisfly larvae to live in waters too low in oxygen content to support stonefly and mayfly larvae.
Fixed retreats
In contrast to larvae that have portable cases, members of the Annulipalpia have a completely different feeding strategy. They make fixed retreats in which they remain stationary, waiting for food to come to them. Members of the Psychomyiidae, Ecnomidae and Xiphocentronidae families construct simple tubes of sand and other particles held together by silk and anchored to the bottom, and feed on the accumulations of silt formed when suspended material is deposited. The tube can be lengthened when the growing larva needs to feed in new areas. More complex tubes, short and flattened, are built by Polycentropodidae larvae in hollows in rocks or other submerged objects, sometimes with strands of silk suspended across the nearby surface. These larvae are carnivorous, resembling spiders in their feeding habits and rushing out of their retreat to attack any unwary small prey crawling across the surface.
Silk domes
Larvae of members of the family Glossosomatidae in the suborder Spicipalpia create dome-shaped enclosures of silk which enable them to graze on the periphyton, the biological film that grows on stones and other objects, while carrying their enclosure around like turtles. In the family Philopotamidae, by contrast, the silk is spun into sac-like nets with an intricate structure and a tiny mesh. The larvae have specialised mouthparts to scrape off the microflora trapped in the net as water flows through.
Nets
The larvae of other species of caddisfly make nets rather than cases. These are silken webs stretching between aquatic vegetation and over stones. These net-making larvae usually live in running water, different species occupying different habitats with varying water speeds. There is a constant drift of invertebrates washed downstream by the current, and these animals, and bits of debris, accumulate in the nets which serve both as food traps and as retreats.
Development and morphology
Caddisfly larvae are aquatic, with six pairs of tracheal gills on the underside of the abdomen. The eggs are laid above water on emergent twigs or vegetation or on the water surface, although females of some species enter the water to choose sites. Although most species lay eggs, a few in the genus Triplectides are ovoviviparous. Some species lay eggs on land, and although most are associated with freshwater, a few, such as Symphitoneuria, are found in coastal saline water. Philanisus plebeius females lay their eggs into the coelomic cavity of intertidal starfish. The larvae are long and roughly cylindrical, very similar to those of lepidoptera but lacking abdominal prolegs. In case-bearing species, the head is heavily sclerotised while the abdomen is soft; the antennae are short and the mouthparts are adapted for biting. The abdomen usually has ten segments, and each of the three thoracic segments bears a pair of legs with a single tarsal joint. In case-bearing species, the first abdominal segment bears three papillae, one above and two at the sides, which anchor the larva centrally in the tube. The posterior segment bears a pair of hooks for grappling. There are five to seven larval instars, followed by an aquatic pupa which has functional mandibles (to cut through the case), gills, and swimming legs.
The pupal cocoon is spun from silk, but like the larval case, often has other materials attached. When pupating, species that build portable cases attach them to some underwater object, seal the front and back apertures against predators while still allowing water to flow through, and pupate within them. Once fully developed, most pupal caddisflies cut through their cases with a special pair of mandibles, swim up to the water surface, moult using the exuviae as a floating platform, and emerge as fully formed adults. They can often fly immediately after breaking from their pupal cuticle. Emergence is mainly univoltine (once per year), with all the adults of a species emerging at the same time. Development is completed within a year in warm places, but takes more than a year at high latitudes and at high elevations in mountain lakes and streams.
The adult caddisfly is a medium-sized insect with membranous, hairy wings, which are held tent-wise over the body when the insect is at rest. The antennae are fairly long and threadlike, the mouthparts are reduced in size, and each leg has five tarsal segments. Adults are nocturnal and are attracted to light. Some species are strong fliers and can disperse to new localities, but many fly only weakly. Adults are usually short-lived, most being non-feeders and equipped only to breed. Once mated, the female caddisfly lays eggs in a gelatinous mass, attaching them above or below the water surface depending on the species. The eggs hatch in a few weeks.
Relationship with humans
In angling
Adult caddisflies are called sedges by anglers. Individual species emerge en masse at different times, and are used one after the other, often for only a few days each year, as models for artificial fishing flies for fly fishing in trout streams. A mass emergence is known as a hatch. Each type has its own angling name, so for example Mystacides is the dancer; Sericostoma the caperer; Leptocerus the silverhorn; Phryganea the murragh or great red sedge; Brachycentrus subnubilis the grannom; Lepidostoma the silver sedge; Oecetis the longhorn sedge; Cheumatopsyche the little sister sedge; Helicopsyche the speckled Peter, an important fishing fly in North America; and Hydropsyche the speckled sedge, perhaps the most important caddisfly genus for anglers, with over 50 species of net-makers.
As bioindicators
Caddisflies are useful as bioindicators (of good water quality), since they are sensitive to water pollution, and are large enough to be assessed conveniently in the field. Some species indicate undisturbed habitat, and some indicate degraded habitat. Although caddisflies may be found in waterbodies of varying qualities, species-rich caddisfly assemblages are generally thought to indicate clean water bodies, such as lakes, ponds, and marshes. Together with stoneflies and mayflies, caddisflies feature importantly in bioassessment surveys of streams and other water bodies.
In art
While caddisflies in the wild construct their cases out of twigs, sand, aquatic plants, and rocks, the French artist Hubert Duprat makes art by providing wild caddisflies with precious stones and other materials. He collects caddisfly larvae from the wild and keeps them in climate-controlled tanks, removes them from their original cases, and adds precious and semi-precious items such as grains of gold to the tank. The larvae then build new cases out of these precious items, creating a unique form of artwork. The resulting works are sold across the world.
As food
In Japan the larvae of Stenopsyche marmorata are eaten as a delicacy called Zazamushi.
Taxonomy
Some 16,266 extant species in 618 genera and 51 families have been described worldwide.
Suborder Annulipalpia
Superfamily Hydropsychoidea
Family Hydropsychidae
Superfamily Psychomyioidea
Family Dipseudopsidae
Family Ecnomidae
Family †Electralbertidae
Family Polycentropodidae
Family Psychomyiidae
Family Xiphocentronidae
Superfamily Philopotamoidea
Family Philopotamidae
Family Stenopsychidae
Suborder Integripalpia
Superfamily Leptoceroidea
Family Atriplectididae
Family Calamoceratidae
Family Molannidae
Family Leptoceridae
Family Limnocentropodidae
Family Odontoceridae
Family Philorheithridae
Superfamily Limnephiloidea
Family Apataniidae
Family Brachycentridae
Family Goeridae
Family Limnephilidae
Family Lepidostomatidae
Family Oeconesidae
Family Pisuliidae
Family Rossianidae
Family †Taymyrelectronidae
Family Uenoidae
Superfamily †Necrotaulioidea
Family †Necrotauliidae
Superfamily Phyrganeoidea
Family †Baissoferidae
Family †Dysoneuridae
Family †Kalophryganeidae
Family Phryganeidae
Family Phryganopsychidae
Family Plectrotarsidae
Superfamily Sericostomatoidea
Family Anomalopsychidae
Family Antipodoeciidae
Family Barbarochthonidae
Family Beraeidae
Family Calocidae
Family Chathamiidae
Family Conoesucidae
Family Helicophidae
Family Helicopsychidae
Family Hydrosalpingidae
Family Kokiriidae
Family Petrothrincidae
Family Sericostomatidae
Superfamily Tasimioidea
Family Tasimiidae
Superfamily †Vitimotaulioidea
Family †Vitimotauliidae
Family †Cladochoristidae
Family †Microptysmatidae
Family †Prosepididontidae
Family †Protomeropidae
Family †Uraloptysmatidae
Suborder Spicipalpia
Superfamily Hydroptiloidea
Family Glossosomatidae
Family Hydroptilidae
Family Ptilocolepidae
Superfamily Rhyacophiloidea
Family Hydrobiosidae
Family Rhyacophilidae
| Biology and health sciences | Insects: General | Animals |
1530205 | https://en.wikipedia.org/wiki/Conidium | Conidium | A conidium (plural: conidia), sometimes termed an asexual chlamydospore or chlamydoconidium (plural: chlamydoconidia), is an asexual, non-motile spore of a fungus. The word conidium comes from the Ancient Greek word for dust, konis (κόνις). Conidia are also called mitospores because they are generated through the cellular process of mitosis, and they are produced exogenously. The two haploid cells produced are genetically identical to the haploid parent; they can develop into new organisms if conditions are favorable, and serve in biological dispersal.
Asexual reproduction in ascomycetes (the phylum Ascomycota) is by the formation of conidia, which are borne on specialized stalks called conidiophores. The morphology of these specialized conidiophores is often distinctive between species and, before the development of molecular techniques at the end of the 20th century, was widely used for the identification of species (e.g. in Metarhizium).
The terms microconidia and macroconidia are sometimes used to distinguish the smaller and larger conidia produced by a species.
Conidiogenesis
There are two main types of conidium development:
Blastic conidiogenesis, in which the spore is already evident before it separates from the conidiogenous hypha that gives rise to it, and
Thallic conidiogenesis, in which a cross-wall first forms and the cell thus created develops into a spore.
Conidia germination
A conidium may form germ tubes (germination tubes) and/or conidial anastomosis tubes (CATs) under specific conditions. These are two of the specialized hyphae formed by fungal conidia. The germ tubes grow to form the hyphae and fungal mycelia. The conidial anastomosis tubes are morphologically and physiologically distinct from germ tubes. After conidia are induced to form conidial anastomosis tubes, the tubes grow toward each other, homing in on one another, and fuse. Once fusion happens, nuclei can pass through the fused CATs. These are events of fungal vegetative growth, not sexual reproduction. Fusion between these cells seems to be important for some fungi during the early stages of colony establishment. The production of these cells has been suggested to occur in 73 different species of fungi.
Germination in Aspergillus
Recent literature has paid particular attention to the germination of the conidia of Aspergillus, a common mold. Aspergillus is not only a familiar fungus found across a wide variety of settings worldwide, but it also poses a danger to immunocompromised individuals: inhaled Aspergillus conidia can germinate inside the respiratory tract and cause aspergillosis, a form of pulmonary infection whose significance continues to grow with the emergence of new risk groups and of resistance to antifungal drugs.
Stages of Germination: Dormancy
Germination in Aspergillus follows a sequence of three stages: dormancy, isotropic growth, and polarized growth. Dormant conidia are able to germinate even after a year at room temperature, owing to resilient intracellular and extracellular characteristics that enable them to withstand harsh conditions such as dehydration, variation in osmotic pressure, oxidation, temperature extremes, UV exposure, and changes in acidity. These abilities of the dormant conidia are dictated by a few central regulatory proteins, which are the main drivers of conidium and conidiophore formation. One of these, the developmental regulatory protein wetA, has been found to be particularly essential: wetA-defective mutants have reduced tolerance to the external factors mentioned above and exhibit impaired synthesis of the conidial cell wall. In addition to these central regulators, notable groups of genes and proteins include other regulatory proteins, such as the velvet regulator proteins, which contribute to fungal growth, and molecules that counter specific unfavorable intra- and extracellular conditions, such as heat shock proteins.
Stages of Germination: Isotropic and Polarized Growth
The phases following dormancy include isotropic growth, in which increased intracellular osmotic pressure and water uptake cause swelling of the conidia and an increase in cellular diameter, and polarized growth, in which the swelling from isotropic growth is directed to one side of the cell, leading to the formation of a germ tube. First, however, the conidia must break dormancy. In some species of Aspergillus, dormancy is broken when the dormant conidia are exposed to a carbon source in the presence of water and air, while in other species the mere presence of glucose is enough to trigger it. The dense outer layer of the dormant conidium is shed and the growth of hyphal cells begins; these have a significantly different composition from the dormant conidial cell. Breaking of dormancy involves transcription, but not translation; protein synthesis inhibitors prevent isotropic growth, while DNA and RNA synthesis inhibitors do not, and the start of dormancy breaking is accompanied by an increase in transcripts of genes for protein biosynthesis and by immediate protein synthesis. Following the expansion of the cell via isotropic growth, studies have observed many new proteins emerging from the processes of dormancy breaking, along with transcripts associated with remodeling of the cell wall, suggesting that cell-wall remodeling is a central process during isotropic growth. In the polarized growth stage, upregulated and overexpressed proteins and transcripts include those involved in the synthesis of chitin (a major component of the fungal cell wall), in mitosis and DNA processing, in remodeling of cell morphology, and in germ tube formation related to infection and virulence factors.
Structures for release of conidia
Conidiogenesis is an important mechanism of spread of plant pathogens. In some cases, specialized macroscopic fruiting structures perhaps 1 mm or so in diameter containing masses of conidia are formed under the skin of the host plant and then erupt through the surface, allowing the spores to be distributed by wind and rain. One of these structures is called a conidioma (plural: conidiomata).
Two important types of conidiomata, distinguished by their form, are:
pycnidia (singular: pycnidium), which are flask-shaped, and
acervuli (singular: acervulus), which have a simpler cushion-like form.
Pycnidial conidiomata or pycnidia form in the fungal tissue itself, and are shaped like a bulging vase. The conidia are released through a small opening at the apex, the ostiole.
Acervular conidiomata, or acervuli, are cushion-like structures that form within the tissues of a host organism:
subcuticular, lying under the outer layer of the plant (the cuticle),
intraepidermal, inside the outer cell layer (the epidermis),
subepidermal, under the epidermis, or deeper inside the host.
They mostly develop as a flat layer of relatively short conidiophores that produce masses of spores. The increasing pressure leads to the splitting of the epidermis and cuticle and allows the release of the conidia from the tissue.
Health issues
Conidia are always present in the air, but levels fluctuate from day to day and with the seasons. An average person inhales at least 40 conidia per hour.
Exposure to conidia from certain species, such as those of Cryptostroma corticale, is known to cause hypersensitivity pneumonitis, an occupational hazard for forest workers and paper mill employees.
Conidia are often the means by which normally harmless but thermotolerant (heat-tolerant) common fungi establish infection in certain types of severely immunocompromised patients (usually acute leukemia patients on induction chemotherapy, AIDS patients with superimposed B-cell lymphoma, bone marrow transplantation patients taking immunosuppressants, or major organ transplant patients with graft-versus-host disease). The immune system of these patients is not strong enough to fight off the fungus, which may, for example, colonise the lung, resulting in a pulmonary infection. Especially with species of the genus Aspergillus, germination in the respiratory tract can lead to aspergillosis, which is quite common, can vary in severity, and has shown signs of developing new risk groups and antifungal drug resistance.
| Biology and health sciences | Fungal morphology and anatomy | Biology |
1530478 | https://en.wikipedia.org/wiki/Bioturbation | Bioturbation | Bioturbation is defined as the reworking of soils and sediments by animals or plants. It includes burrowing, ingestion, and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s with Charles Darwin's experiments in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services. These include the alteration of nutrients in aquatic sediment and overlying water, shelter to other species in the form of burrows in terrestrial and water ecosystems, and soil production on land.
Bioturbators are deemed ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation; however, these processes differ in what they mix: bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation.
Walruses, salmon, and pocket gophers are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as earthworms, polychaetes, ghost shrimp, mud shrimp, and midge larvae. The activities of these small invertebrates, which include burrowing and ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure.
Functional groups
Bioturbators have been organized by a variety of functional groupings based on either ecological characteristics or biogeochemical effects. While the prevailing categorization is based on the way bioturbators transport and interact with sediments, the various groupings likely stem from the relevance of a categorization mode to a field of study (such as ecology or sediment biogeochemistry) and an attempt to concisely organize the wide variety of bioturbating organisms in classes that describe their function. Examples of categorizations include those based on feeding and motility, feeding and biological interactions, and mobility modes. The most common set of groupings are based on sediment transport and are as follows:
Gallery-diffusers create complex tube networks within the upper sediment layers and transport sediment through feeding, burrow construction, and general movement throughout their galleries. Gallery-diffusers are heavily associated with burrowing polychaetes, such as Nereis diversicolor and Marenzelleria spp.
Biodiffusers transport sediment particles randomly over short distances as they move through sediments. Animals mostly attributed to this category include bivalves such as clams, and amphipod species, but can also include larger vertebrates, such as bottom-dwelling fish and rays that feed along the sea floor. Biodiffusers can be further divided into two subgroups, which include epifaunal (organisms that live on the surface sediments) biodiffusers and surface biodiffusers. This subgrouping may also include gallery-diffusers, reducing the number of functional groups.
Upward-conveyors are oriented head-down in sediments, where they feed at depth and transport sediment through their guts to the sediment surface. Major upward-conveyor groups include burrowing polychaetes like the lugworm, Arenicola marina, and thalassinid shrimps.
Downward-conveyor species are oriented with their heads towards the sediment-water interface and defecation occurs at depth. Their activities transport sediment from the surface to deeper sediment layers as they feed. Notable downward-conveyors include those in the peanut worm family, Sipunculidae.
Regenerators are categorized by their ability to release sediment to the overlying water column, which is then dispersed as they burrow. After regenerators abandon their burrows, water flow at the sediment surface can push in and collapse the burrow. Examples of regenerator species include fiddler and ghost crabs.
Ecological roles
The evaluation of the ecological role of bioturbators has largely been species-specific. However, their ability to transport solutes, such as dissolved oxygen, enhance organic matter decomposition and diagenesis, and alter sediment structure has made them important for the survival and colonization by other macrofaunal and microbial communities.
Microbial communities are greatly influenced by bioturbator activities, as increased transport of more energetically favorable oxidants, such as oxygen, to typically highly reduced sediments at depth alters the microbial metabolic processes occurring around burrows. As bioturbators burrow, they also increase the surface area of sediments across which oxidized and reduced solutes can be exchanged, thereby increasing the overall sediment metabolism. This increase in sediment metabolism and microbial activity further results in enhanced organic matter decomposition and sediment oxygen uptake. In addition to the effects of burrowing activity on microbial communities, studies suggest that bioturbator fecal matter provides a highly nutritious food source for microbes and other macrofauna, thus enhancing benthic microbial activity. This increased microbial activity by bioturbators can contribute to increased nutrient release to the overlying water column. Nutrients released from enhanced microbial decomposition of organic matter, notably limiting nutrients, such as ammonium, can have bottom-up effects on ecosystems and result in increased growth of phytoplankton and bacterioplankton.
Burrows offer protection from predation and harsh environmental conditions. For example, termites (Macrotermes bellicosus) burrow and create mounds that have a complex system of air ducts and evaporation devices that create a suitable microclimate in an unfavorable physical environment. Many species are attracted to bioturbator burrows because of their protective capabilities. The shared use of burrows has enabled the evolution of symbiotic relationships between bioturbators and the many species that utilize their burrows. For example, gobies, scale-worms, and crabs live in the burrows made by innkeeper worms. Social interactions provide evidence of co-evolution between hosts and their burrow symbionts. This is exemplified by shrimp-goby associations. Shrimp burrows provide shelter for gobies and gobies serve as a scout at the mouth of the burrow, signaling the presence of potential danger. In contrast, the blind goby Typhlogobius californiensis lives within the deep portion of Callianassa shrimp burrows where there is not much light. The blind goby is an example of a species that is an obligate commensalist, meaning their existence depends on the host bioturbator and its burrow. Although newly hatched blind gobies have fully developed eyes, their eyes become withdrawn and covered by skin as they develop. They show evidence of commensal morphological evolution because it is hypothesized that the lack of light in the burrows where the blind gobies reside is responsible for the evolutionary loss of functional eyes.
Bioturbators can also inhibit the presence of other benthic organisms by smothering, exposing other organisms to predators, or resource competition. While thalassinidean shrimps can provide shelter for some organisms and cultivate interspecies relationships within burrows, they have also been shown to have strong negative effects on other species, especially those of bivalves and surface-grazing gastropods, because thalassinidean shrimps can smother bivalves when they resuspend sediment. They have also been shown to exclude or inhibit polychaetes, cumaceans, and amphipods. This has become a serious issue in the northwestern United States, as ghost and mud shrimp (thalassinidean shrimp) are considered pests to bivalve aquaculture operations. The presence of bioturbators can have both negative and positive effects on the recruitment of larvae of conspecifics (those of the same species) and those of other species, as the resuspension of sediments and alteration of flow at the sediment-water interface can affect the ability of larvae to burrow and remain in sediments. This effect is largely species-specific, as species differences in resuspension and burrowing modes have variable effects on fluid dynamics at the sediment-water interface. Deposit-feeding bioturbators may also hamper recruitment by consuming recently settled larvae.
Biogeochemical effects
Since its onset around 539 million years ago, bioturbation has been responsible for changes in ocean chemistry, primarily through nutrient cycling. Bioturbators played, and continue to play, an important role in nutrient transport across sediments.
For example, bioturbating animals are hypothesized to have affected the cycling of sulfur in the early oceans. According to this hypothesis, bioturbating activities had a large effect on the sulfate concentration in the ocean. Around the Cambrian-Precambrian boundary (539 million years ago), animals began to mix reduced sulfur from ocean sediments into the overlying water, causing sulfide to oxidize, which increased the sulfate concentration in the ocean. During large extinction events, the sulfate concentration in the ocean was reduced. Although this is difficult to measure directly, seawater sulfur isotope compositions during these times indicate that bioturbators influenced sulfur cycling on the early Earth.
Bioturbators have also altered phosphorus cycling on geologic scales. Bioturbators mix readily available particulate organic phosphorus (P) deeper into ocean sediment layers, which prevents the precipitation of phosphorus (mineralization) by increasing its sequestration above normal chemical rates. The sequestration of phosphorus limits oxygen concentrations by decreasing production on a geologic time scale. This decrease in production results in an overall decrease in oxygen levels, and it has been proposed that the rise of bioturbation corresponds to a decrease in oxygen levels at that time. The negative feedback of animals sequestering phosphorus in the sediments and subsequently reducing oxygen concentrations limited the intensity of bioturbation in this early environment.
Organic contaminants
Bioturbation can either enhance or reduce the flux of contaminants from the sediment to the water column, depending on the mechanism of sediment transport. In polluted sediments, bioturbating animals can mix the surface layer and cause the release of sequestered contaminants into the water column. Upward-conveyor species, like polychaete worms, are efficient at moving contaminated particles to the surface. Invasive animals can remobilize contaminants previously considered to be buried at a safe depth. In the Baltic Sea, the invasive Marenzelleria species of polychaete worms can burrow to 35–50 centimeters, deeper than native animals, thereby releasing previously sequestered contaminants. However, bioturbating animals that live in the sediment (infauna) can also reduce the flux of contaminants to the water column by burying hydrophobic organic contaminants in the sediment. Burial of uncontaminated particles by bioturbating organisms provides more absorptive surfaces to sequester chemical pollutants in the sediments.
Ecosystem impacts
Nutrient cycling is still affected by bioturbation in the modern Earth. Some examples from terrestrial and aquatic ecosystems are given below.
Terrestrial
Plants and animals utilize soil for food and shelter, disturbing the upper soil layers and transporting chemically weathered rock called saprolite from the lower soil depths to the surface. Terrestrial bioturbation is important in soil production, burial, organic matter content, and downslope transport. Tree roots are sources of soil organic matter, with root growth and stump decay also contributing to soil transport and mixing. Death and decay of tree roots first delivers organic matter to the soil and then creates voids, decreasing soil density. Tree uprooting causes considerable soil displacement by producing mounds, mixing the soil, or inverting vertical sections of soil.
Burrowing animals, such as earthworms and small mammals, form passageways for air and water transport, which changes soil properties such as the vertical particle-size distribution, soil porosity, and nutrient content. Invertebrates that burrow and consume plant detritus help produce an organic-rich topsoil known as the soil biomantle, and thus contribute to the formation of soil horizons. Small mammals such as pocket gophers also play an important role in the production of soil, possibly with a magnitude equal to that of abiotic processes. Pocket gophers form above-ground mounds, which move soil from the lower soil horizons to the surface, exposing minimally weathered rock to surface erosion processes and speeding soil formation. Pocket gophers are thought to play an important role in the downslope transport of soil, as the soil that forms their mounds is more susceptible to erosion and subsequent transport. Similar to tree root effects, the construction of burrows, even when backfilled, decreases soil density. The formation of surface mounds also buries surface vegetation, creating nutrient hotspots when the vegetation decomposes and increasing soil organic matter. Due to the high metabolic demands of their burrow-excavating subterranean lifestyle, pocket gophers must consume large amounts of plant material. Though this has a detrimental effect on individual plants, the net effect of pocket gophers is increased plant growth from their positive effects on soil nutrient content and physical soil properties.
Freshwater
Important sources of bioturbation in freshwater ecosystems include benthivorous (bottom-dwelling) fish, macroinvertebrates such as worms, insect larvae, crustaceans and molluscs, and seasonal influences from anadromous (migrating) fish such as salmon. Anadromous fish migrate from the sea into fresh-water rivers and streams to spawn. Macroinvertebrates act as biological pumps for moving material between the sediments and water column, feeding on sediment organic matter and transporting mineralized nutrients into the water column. Both benthivorous and anadromous fish can affect ecosystems by decreasing primary production through sediment re-suspension, the subsequent displacement of benthic primary producers, and recycling nutrients from the sediment back into the water column.
Lakes and ponds
The sediments of lake and pond ecosystems are rich in organic matter, with higher organic matter and nutrient contents in the sediments than in the overlying water. Nutrient regeneration through sediment bioturbation moves nutrients into the water column, thereby enhancing the growth of aquatic plants and phytoplankton (primary producers). The major nutrients of interest in this flux are nitrogen and phosphorus, which often limit the levels of primary production in an ecosystem. Bioturbation increases the flux of mineralized (inorganic) forms of these elements, which can be directly used by primary producers. In addition, bioturbation increases the water-column concentrations of nitrogen- and phosphorus-containing organic matter, which can then be consumed by fauna and mineralized.
Lake and pond sediments often transition from the aerobic (oxygen-containing) character of the overlying water to the anaerobic (oxygen-free) conditions of the lower sediment over sediment depths of only a few millimeters; therefore, even bioturbators of modest size can affect this transition in the chemical characteristics of sediments. By mixing anaerobic sediments into the water column, bioturbators allow aerobic processes to interact with the re-suspended sediments and the newly exposed bottom sediment surfaces.
Macroinvertebrates, including chironomid (non-biting midge) larvae and tubificid worms (detritus worms), are important agents of bioturbation in these ecosystems and have different effects based on their respective feeding habits. Tubificid worms do not form burrows; they are upward conveyors. Chironomids, on the other hand, form burrows in the sediment, acting as bioirrigators and aerating the sediments; they are downward conveyors. This activity, combined with the chironomids' respiration within their burrows, decreases available oxygen in the sediment and increases the loss of nitrates through enhanced rates of denitrification.
The increased oxygen input to sediments by macroinvertebrate bioirrigation, coupled with bioturbation at the sediment-water interface, complicates the total flux of phosphorus. While bioturbation results in a net flux of phosphorus into the water column, the bioirrigation of the sediments with oxygenated water enhances the adsorption of phosphorus onto iron-oxide compounds, thereby reducing the total flux of phosphorus into the water column.
The presence of macroinvertebrates in sediment can initiate bioturbation due to their status as an important food source for benthivorous fish such as carp. Of the bioturbating, benthivorous fish species, carp in particular are important ecosystem engineers, and their foraging and burrowing activities can alter the water quality characteristics of ponds and lakes. Carp increase water turbidity by re-suspending benthic sediments. This increased turbidity limits light penetration and, coupled with increased nutrient flux from the sediment into the water column, inhibits the growth of macrophytes (aquatic plants), favoring the growth of phytoplankton in the surface waters. Surface phytoplankton colonies benefit both from increased suspended nutrients and from the recruitment of buried phytoplankton cells released from the sediments by the fish bioturbation. Macrophyte growth has also been shown to be inhibited by displacement from the bottom sediments due to fish burrowing.
Rivers and streams
River and stream ecosystems show similar responses to bioturbation activities, with chironomid larvae and tubificid worm macroinvertebrates remaining important benthic agents of bioturbation. These environments can also be subject to strong seasonal bioturbation effects from anadromous fish.
Salmon function as bioturbators at scales ranging from gravel- and sand-sized sediment to nutrients, moving and re-working sediments in the construction of redds (gravel depressions or "nests" containing eggs buried under a thin layer of sediment) in rivers and streams, and mobilizing nutrients. The construction of salmon redds increases the ease of fluid movement (hydraulic conductivity) and the porosity of the stream bed. In select rivers, if salmon congregate in large enough concentrations in a given area of the river, the total sediment transport from redd construction can equal or exceed the sediment transport from flood events. The net effect on sediment movement is the downstream transfer of gravel, sand and finer materials and the enhancement of water mixing within the river substrate.
The construction of salmon redds increases sediment and nutrient fluxes through the hyporheic zone (the area between surface water and groundwater) of rivers and affects the dispersion and retention of marine-derived nutrients (MDN) within the river ecosystem. MDN are delivered to river and stream ecosystems by the fecal matter of spawning salmon and the decaying carcasses of salmon that have completed spawning and died. Numerical modeling suggests that the residence time of MDN within a salmon spawning reach is inversely proportional to the amount of redd construction within the river. Measurements of respiration within a salmon-bearing river in Alaska further suggest that salmon bioturbation of the river bed plays a significant role in mobilizing MDN and limiting primary productivity while salmon spawning is active. The river ecosystem was found to switch from a net autotrophic to a heterotrophic system in response to decreased primary production and increased respiration. The decreased primary production in this study was attributed to the loss of benthic primary producers that were dislodged by bioturbation, while increased respiration was thought to be due to increased respiration of organic carbon, also attributed to sediment mobilization from salmon redd construction. While marine-derived nutrients are generally thought to increase productivity in riparian and freshwater ecosystems, several studies have suggested that temporal effects of bioturbation should be considered when characterizing salmon influences on nutrient cycles.
Marine
Major marine bioturbators range from small infaunal invertebrates to fish and marine mammals. In most marine sediments, however, they are dominated by small invertebrates, including polychaetes, bivalves, burrowing shrimp, and amphipods.
Shallow and coastal
Coastal ecosystems, such as estuaries, are generally highly productive, which results in the accumulation of large quantities of detritus (organic waste). These large quantities, in addition to typically small sediment grain size and dense populations, make bioturbators important in estuarine respiration. Bioturbators enhance the transport of oxygen into sediments through irrigation and increase the surface area of oxygenated sediments through burrow construction. Bioturbators also transport organic matter deeper into sediments through general reworking activities and production of fecal matter. This ability to replenish oxygen and other solutes at sediment depth allows for enhanced respiration by both bioturbators as well as the microbial community, thus altering estuarine elemental cycling.
The effects of bioturbation on the nitrogen cycle are well-documented. Coupled denitrification and nitrification are enhanced due to increased oxygen and nitrate delivery to deep sediments and the increased surface area across which oxygen and nitrate can be exchanged. The enhanced nitrification-denitrification coupling contributes to greater removal of biologically available nitrogen in shallow and coastal environments, which can be further enhanced by the excretion of ammonium by bioturbators and other organisms residing in bioturbator burrows. While both nitrification and denitrification are enhanced by bioturbation, the effects of bioturbators on denitrification rates have been found to be greater than those on rates of nitrification, further promoting the removal of biologically available nitrogen. This increased removal of biologically available nitrogen has been suggested to be linked to increased rates of nitrogen fixation in microenvironments within burrows, as indicated by evidence of nitrogen fixation by sulfate-reducing bacteria via the presence of nifH (nitrogenase) genes.
Bioturbation by walrus feeding is a significant source of sediment and biological community structure and nutrient flux in the Bering Sea. Walruses feed by digging their muzzles into the sediment and extracting clams through powerful suction. By digging through the sediment, walruses rapidly release large amounts of organic material and nutrients, especially ammonium, from the sediment to the water column. Additionally, walrus feeding behavior mixes and oxygenates the sediment and creates pits in the sediment which serve as new habitat structures for invertebrate larvae.
Deep sea
Bioturbation is important in the deep sea because deep-sea ecosystem functioning depends on the use and recycling of nutrients and organic inputs from the photic zone. In low energy regions (areas with relatively still water), bioturbation is the only force creating heterogeneity in solute concentration and mineral distribution in the sediment. It has been suggested that higher benthic diversity in the deep sea could lead to more bioturbation which, in turn, would increase the transport of organic matter and nutrients to benthic sediments. Through the consumption of surface-derived organic matter, animals living on the sediment surface facilitate the incorporation of particulate organic carbon (POC) into the sediment where it is consumed by sediment dwelling animals and bacteria. Incorporation of POC into the food webs of sediment dwelling animals promotes carbon sequestration by removing carbon from the water column and burying it in the sediment. In some deep-sea sediments, intense bioturbation enhances manganese and nitrogen cycling.
Mathematical modelling
The role of bioturbators in sediment biogeochemistry makes bioturbation a common parameter in sediment biogeochemical models, which are often numerical models built using ordinary and partial differential equations. Bioturbation is typically represented as DB, or the biodiffusion coefficient, and is described by a diffusion and, sometimes, an advective term. This representation and subsequent variations account for the different modes of mixing by functional groups and bioirrigation that results from them. The biodiffusion coefficient is usually measured using radioactive tracers such as Pb210, radioisotopes from nuclear fallout, introduced particles including glass beads tagged with radioisotopes or inert fluorescent particles, and chlorophyll a. Biodiffusion models are then fit to vertical distributions (profiles) of tracers in sediments to provide values for DB.
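To make the profile-fitting step concrete, consider the common one-dimensional biodiffusion model for a decaying tracer, ∂C/∂t = DB ∂²C/∂x² − w ∂C/∂x − λC, where C is tracer activity, x is depth, w is an advection (burial) velocity, and λ is the decay constant. At steady state with negligible burial this reduces to DB C'' = λC, whose solution decays exponentially with depth, C(x) = C(0) exp(−x √(λ/DB)), so the log-linear slope of a measured profile yields DB. The sketch below illustrates this fit; the profile values are invented for illustration, and the steady-state, mixing-only assumptions are simplifications rather than anything reported here.

```python
# Minimal sketch: estimating the biodiffusion coefficient D_B from a
# steady-state excess 210Pb profile, assuming mixing only (no advection,
# no burial). Depth/activity values below are hypothetical.
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant, 1/yr (22.3 yr half-life)

depth_cm = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])        # hypothetical depths
activity = np.array([50.0, 38.0, 29.0, 22.0, 17.0, 12.8])  # hypothetical dpm/g

# Steady state: D_B * C'' - lambda * C = 0
#   => C(x) = C(0) * exp(-x * sqrt(lambda / D_B)),
# so ln(C) is linear in depth with slope -sqrt(lambda / D_B).
slope, intercept = np.polyfit(depth_cm, np.log(activity), 1)
D_B = LAMBDA_PB210 / slope**2  # cm^2/yr

print(f"fitted log-linear slope: {slope:.3f} per cm")
print(f"estimated D_B: {D_B:.2f} cm^2/yr")
```

The numerical result reflects only the invented data; in practice the same log-linear fit is applied to measured tracer profiles.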
Parameterization of bioturbation, however, can vary, as newer and more complex models can be used to fit tracer profiles. Unlike the standard biodiffusion model, these more complex models, such as expanded versions of the biodiffusion model, random walk, and particle-tracking models, can provide more accuracy, incorporate different modes of sediment transport, and account for more spatial heterogeneity.
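As a hedged illustration of the particle-tracking alternative mentioned above (all parameters assumed, not taken from any study): a random walk whose depth steps are Gaussian with variance 2·DB·Δt converges to the same mixing as the biodiffusion model, while making it straightforward to add depth-dependent mixing or non-local transport later.

```python
# Minimal particle-tracking sketch of diffusive bioturbation.
# All parameter values are assumptions for illustration; a reflecting
# boundary at depth zero represents the sediment-water interface.
import numpy as np

rng = np.random.default_rng(seed=1)
D_B = 0.4          # biodiffusion coefficient, cm^2/yr (assumed)
dt = 0.01          # time step, yr
years = 10.0
n_particles = 5000

depths = np.zeros(n_particles)  # tracer particles start at the surface
for _ in range(int(years / dt)):
    depths += rng.normal(0.0, np.sqrt(2.0 * D_B * dt), n_particles)
    depths = np.abs(depths)     # reflect particles back below the interface

# Pure diffusion predicts a half-Gaussian profile of scale sqrt(2*D_B*t),
# giving a mean depth of sqrt(2*D_B*t)*sqrt(2/pi) ~ 2.3 cm for these values.
print(f"mean mixing depth after {years:.0f} yr: {depths.mean():.2f} cm")
```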
Evolution
The onset of bioturbation had a profound effect on the environment and the evolution of other organisms. Bioturbation is thought to have been an important co-factor of the Cambrian Explosion, during which most major animal phyla appeared in the fossil record over a short time. Predation arose during this time and promoted the development of hard skeletons, for example bristles, spines, and shells, as a form of armored protection. It is hypothesized that bioturbation resulted from this skeleton formation. These new hard parts enabled animals to dig into the sediment to seek shelter from predators, which created an incentive for predators to search for prey in the sediment (see Evolutionary Arms Race). Burrowing species fed on buried organic matter in the sediment, which resulted in the evolution of deposit feeding (consumption of organic matter within sediment). Prior to the development of bioturbation, laminated microbial mats were the dominant biological structures of the ocean floor and drove many ecosystem functions. As bioturbation increased, burrowing animals disturbed the microbial mat system and created a mixed sediment layer with greater biological and chemical diversity. This greater biological and chemical diversity is thought to have led to the evolution and diversification of seafloor-dwelling species.
An alternate, less widely accepted hypothesis for the origin of bioturbation exists. The trace fossil Nenoxites is thought to be the earliest record of bioturbation, predating the Cambrian Period. The fossil is dated to 555 million years ago, which places it in the Ediacaran Period. The fossil indicates bioturbation to a depth of 5 centimeters in muddy sediments by a burrowing worm. This is consistent with food-seeking behavior, as there tended to be more food resources in the mud than in the water column. However, this hypothesis requires more precise geological dating to rule out an early Cambrian origin for this specimen.
The evolution of trees during the Devonian Period enhanced soil weathering and increased the spread of soil due to bioturbation by tree roots. Root penetration and uprooting also enhanced soil carbon storage by enabling mineral weathering and the burial of organic matter.
Fossil record
Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are the fossils left behind by digging or burrowing animals. This can be compared to the footprints left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossil to assess the activity that occurred in old sediments. Typically, the deeper the fossil, the better preserved and the more well-defined the specimen.
Important trace fossils from bioturbation have been found in marine sediments from tidal, coastal and deep-sea environments. In addition, sand dune, or Eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores, including in long records, although the act of extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods in particular are important to the geologic record of bioturbation of Eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic (250 million years ago), although bioturbation in other sediments has been seen as far back as 550 Ma.
Research history
Bioturbation's importance for soil processes and geomorphology was first realized by Charles Darwin, who devoted his last scientific book to the subject (The Formation of Vegetable Mould through the Action of Worms). Darwin spread chalk dust over a field to observe changes in the depth of the chalk layer over time. Excavations 30 years after the initial deposit of chalk revealed that the chalk was buried 18 centimeters under the sediment, which indicated a burial rate of 6 millimeters per year. Darwin attributed this burial to the activity of earthworms in the sediment and determined that these disruptions were important in soil formation. In 1891, geologist Nathaniel Shaler expanded Darwin's concept to include soil disruption by ants and trees. The term "bioturbation" was later coined by Rudolf Richter in 1952 to describe structures in sediment caused by living organisms. Since the 1980s, the term "bioturbation" has been widely used in soil and geomorphology literature to describe the reworking of soil and sediment by plants and animals.
| Physical sciences | Sedimentology | Earth science |
1530575 | https://en.wikipedia.org/wiki/Fungal%20infection | Fungal infection | Fungal infection, also known as mycosis, is a disease caused by fungi. Different types are traditionally divided according to the part of the body affected: superficial, subcutaneous, and systemic. Superficial fungal infections include common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and yeast infections such as pityriasis versicolor. Subcutaneous types include eumycetoma and chromoblastomycosis, which generally affect tissues in and beneath the skin. Systemic fungal infections are more serious and include cryptococcosis, histoplasmosis, pneumocystis pneumonia, aspergillosis and mucormycosis. Signs and symptoms range widely. There is usually a rash with superficial infection. Fungal infection within the skin or under the skin may present with a lump and skin changes. Pneumonia-like symptoms or meningitis may occur with a deeper or systemic infection.
Fungi are everywhere, but only some cause disease. Fungal infection occurs after spores are either breathed in, come into contact with skin or enter the body through the skin such as via a cut, wound or injection. It is more likely to occur in people with a weak immune system. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. Fungi that cause infections in people include yeasts, molds and fungi that are able to exist as both a mold and yeast. The yeast Candida albicans can live in people without producing symptoms, and is able to cause both superficial mild candidiasis in healthy people, such as oral thrush or vaginal yeast infection, and severe systemic candidiasis in those who cannot fight infection themselves.
Diagnosis is generally based on signs and symptoms, microscopy, culture, sometimes requiring a biopsy and the aid of medical imaging. Some superficial fungal infections of the skin can appear similar to other skin conditions such as eczema and lichen planus. Treatment is generally performed using antifungal medicines, usually in the form of a cream or by mouth or injection, depending on the specific infection and its extent. Some require surgically cutting out infected tissue.
Fungal infections have a worldwide distribution and are common, affecting more than one billion people every year. An estimated 1.7 million deaths from fungal disease were reported in 2020. Several, including sporotrichosis, chromoblastomycosis and mycetoma, are neglected.
A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people.
Classification
Mycoses are traditionally divided into superficial, subcutaneous, and systemic, where infection is deep, more widespread, and involves internal body organs. They can affect the nails, vagina, skin and mouth. Some types, such as blastomycosis, cryptococcosis, coccidioidomycosis and histoplasmosis, affect people who live in or visit certain parts of the world. Others, such as aspergillosis, pneumocystis pneumonia, candidiasis, mucormycosis and talaromycosis, tend to affect people who are unable to fight infection themselves. Mycoses might not always conform strictly to the three divisions of superficial, subcutaneous and systemic. Some superficial fungal infections can cause systemic infections in people who are immunocompromised. Some subcutaneous fungal infections can invade deeper structures, resulting in systemic disease. Candida albicans can live in people without producing symptoms, and is able to cause both mild candidiasis in healthy people and severe invasive candidiasis in those who cannot fight infection themselves.
ICD-11 codes
ICD-11 codes include:
1F20 Aspergillosis
1F21 Basidiobolomycosis
1F22 Blastomycosis
1F23 Candidosis
1F24 Chromoblastomycosis
1F25 Coccidioidomycosis
1F26 Conidiobolomycosis
1F27 Cryptococcosis
1F28 Dermatophytosis
1F29 Eumycetoma
1F2A Histoplasmosis
1F2B Lobomycosis
1F2C Mucormycosis
1F2D Non-dermatophyte superficial dermatomycoses
1F2E Paracoccidioidomycosis
1F2F Phaeohyphomycosis
1F2G Pneumocystosis
1F2H Scedosporiosis
1F2J Sporotrichosis
1F2K Talaromycosis
1F2L Emmonsiosis
Superficial mycoses
Superficial mycoses include candidiasis in healthy people, common tinea of the skin, such as tinea of the body, groin, hands, feet and beard, and malassezia infections such as pityriasis versicolor.
Subcutaneous
Subcutaneous fungal infections include sporotrichosis, chromoblastomycosis, and eumycetoma.
Systemic
Systemic fungal infections include histoplasmosis, cryptococcosis, coccidioidomycosis, blastomycosis, mucormycosis, aspergillosis, pneumocystis pneumonia and systemic candidiasis.
Systemic mycoses due to primary pathogens normally originate in the lungs and may spread to other organ systems. Organisms that cause systemic mycoses are inherently virulent. Systemic mycoses due to opportunistic pathogens are infections of people with immune deficiencies who would otherwise not be infected. Examples of immunocompromised conditions include AIDS, alteration of normal flora by antibiotics, immunosuppressive therapy, and metastatic cancer. Examples of opportunistic mycoses include candidiasis, cryptococcosis and aspergillosis.
Signs and symptoms
Most common mild mycoses present with a rash. Infections within the skin or under the skin may present with a lump and skin changes. Less common deeper fungal infections may present with pneumonia-like symptoms or meningitis.
Causes
Mycoses are caused by certain fungi: yeasts, molds and some fungi that can exist as both a mold and a yeast. Fungi are everywhere, and infection occurs after spores are either breathed in, come into contact with the skin, or enter the body through the skin, such as via a cut, wound or injection. Candida albicans is the most common cause of fungal infection in people, particularly as oral or vaginal thrush, often following a course of antibiotics.
Risk factors
Fungal infections are more likely in people with weak immune systems. This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments. People with diabetes also tend to develop fungal infections, and the very young and the very old are also at risk.
Individuals being treated with antibiotics are at higher risk of fungal infections.
Children whose immune systems are not functioning properly (such as children with cancer) are at risk of invasive fungal infections.
COVID-19
During the COVID-19 pandemic some fungal infections have been associated with COVID-19. Fungal infections can mimic COVID-19, occur at the same time as COVID-19 and more serious fungal infections can complicate COVID-19. A fungal infection may occur after antibiotics for a bacterial infection which has occurred following COVID-19. The most common serious fungal infections in people with COVID-19 include aspergillosis and invasive candidiasis. COVID-19–associated mucormycosis is generally less common, but in 2021 was noted to be significantly more prevalent in India.
Mechanism
Fungal infections occur after spores are either breathed in, come into contact with skin or enter the body through a wound.
Diagnosis
Diagnosis is generally by signs and symptoms, microscopy, biopsy, culture and sometimes with the aid of medical imaging.
Differential diagnosis
Some tinea and candidiasis infections of the skin can appear similar to eczema and lichen planus. Pityriasis versicolor can look like seborrheic dermatitis, pityriasis rosea, pityriasis alba and vitiligo.
Some fungal infections such as coccidioidomycosis, histoplasmosis, and blastomycosis can present with fever, cough, and shortness of breath, thereby resembling COVID-19.
Prevention
Keeping the skin clean and dry, as well as maintaining good hygiene, will help prevent topical mycoses. Because some fungal infections are contagious, it is important to wash hands after touching other people or animals. Sports clothing should also be washed after use.
Treatment
Treatment depends on the type of fungal infection, and usually requires topical or systemic antifungal medicines. Pneumocystosis that does not respond to anti-fungals is treated with co-trimoxazole. Sometimes, infected tissue needs to be surgically cut away.
Epidemiology
Worldwide, every year fungal infections affect more than one billion people. An estimated 1.6 million deaths from fungal disease were reported in 2017. The figure has been rising, with an estimated 1.7 million deaths from fungal disease reported in 2020. Fungal infections also constitute a significant cause of illness and mortality in children.
According to the Global Action Fund for Fungal Infections, every year there are over 10 million cases of fungal asthma, around 3 million cases of long-term aspergillosis of lungs, 1 million cases of blindness due to fungal keratitis, more than 200,000 cases of meningitis due to cryptococcus, 700,000 cases of invasive candidiasis, 500,000 cases of pneumocystosis of lungs, 250,000 cases of invasive aspergillosis, and 100,000 cases of histoplasmosis.
History
Around 500 BC, Hippocrates gave an apparent account of mouth ulcers that may have been thrush. David Gruby, a Hungarian microscopist based in Paris, first reported in the early 1840s that human disease could be caused by fungi.
SARS 2003
During the 2003 SARS outbreak, fungal infections were reported in 14.8–33% of people affected by SARS, and fungal infection was the cause of death in 25–73.7% of people with SARS.
Other animals
A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people, such as Microsporum canis from cats.
| Biology and health sciences | Infectious disease | null |
8676888 | https://en.wikipedia.org/wiki/Sodium%20permanganate | Sodium permanganate | Sodium permanganate is the inorganic compound with the formula NaMnO4. It is closely related to the more commonly encountered potassium permanganate, but it is generally less desirable, because it is more expensive to produce. It is mainly available as the monohydrate. This salt absorbs water from the atmosphere and has a low melting point. Being about 15 times more soluble than KMnO4, sodium permanganate finds some applications where very high concentrations of MnO4− are sought.
Preparation and properties
Sodium permanganate cannot be prepared analogously to the route to KMnO4 because the required intermediate manganate salt, Na2MnO4, does not form. Thus, less direct routes are used, including conversion from KMnO4.
Sodium permanganate behaves similarly to potassium permanganate. It dissolves readily in water to give deep purple solutions, evaporation of which gives prismatic purple-black glistening crystals of the monohydrate NaMnO4·H2O. The potassium salt does not form a hydrate. Because of its hygroscopic nature, it is less useful in analytical chemistry than its potassium counterpart.
It can be prepared by the reaction of manganese dioxide with sodium hypochlorite:
2 MnO2 + 3 NaClO + 2 NaOH → 2 NaMnO4 + 3 NaCl + H2O
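As a quick consistency check (not from the source), the stoichiometry of the equation above can be verified by counting atoms on each side. A minimal Python sketch, with each formula written out by hand:

    # Verify the atom balance of:
    # 2 MnO2 + 3 NaClO + 2 NaOH -> 2 NaMnO4 + 3 NaCl + H2O
    from collections import Counter

    def count_atoms(species):
        """species: list of (coefficient, {element: count}) pairs."""
        total = Counter()
        for coeff, formula in species:
            for element, n in formula.items():
                total[element] += coeff * n
        return total

    reactants = [(2, {"Mn": 1, "O": 2}),           # MnO2
                 (3, {"Na": 1, "Cl": 1, "O": 1}),  # NaClO
                 (2, {"Na": 1, "O": 1, "H": 1})]   # NaOH
    products  = [(2, {"Na": 1, "Mn": 1, "O": 4}),  # NaMnO4
                 (3, {"Na": 1, "Cl": 1}),          # NaCl
                 (1, {"H": 2, "O": 1})]            # H2O

    assert count_atoms(reactants) == count_atoms(products)  # balanced
    print(dict(count_atoms(reactants)))  # {'Mn': 2, 'O': 9, 'Na': 5, 'Cl': 3, 'H': 2}

Both sides give 2 Mn, 9 O, 5 Na, 3 Cl and 2 H, confirming the equation is balanced.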
Applications
Because of its high solubility, its aqueous solutions are used as a drilled-hole debris remover and etchant in printed circuitry, though with limited utility. It is gaining popularity in water treatment for taste, odor, and zebra mussel control. The V-2 rocket used it in combination with hydrogen peroxide to drive a steam turbopump.
As an oxidizer, sodium permanganate is used in environmental remediation of soil and groundwater contaminated with chlorinated solvents using the remediation technology in situ chemical oxidation, also referred to as ISCO.
| Physical sciences | Metallic oxyanions | Chemistry |
16001916 | https://en.wikipedia.org/wiki/Zero-day%20vulnerability | Zero-day vulnerability | A zero-day (also known as a 0-day) is a vulnerability in software or hardware that is typically unknown to the vendor and for which no patch or other fix is available. The vendor thus has zero days to prepare a patch, as the vulnerability has already been described or exploited.
Despite developers' goal of delivering a product that works entirely as intended, virtually all software and hardware contains bugs. Many of these impair the security of the system and are thus vulnerabilities. Although the basis of only a minority of cyberattacks, zero-days are considered more dangerous than known vulnerabilities because there are fewer countermeasures possible.
States are the primary users of zero-day vulnerabilities, not only because of the high cost of finding or buying them, but also the significant cost of writing the attack software. Many vulnerabilities are discovered by hackers or security researchers, who may disclose them to the vendor (often in exchange for a bug bounty) or sell them to states or criminal groups. The use of zero-days increased after many popular software companies began to encrypt messages and data, meaning that the unencrypted data could only be obtained by hacking into the software before it was encrypted.
Definition
Despite developers' goal of delivering a product that works entirely as intended, virtually all software and hardware contain bugs. If a bug creates a security risk, it is called a vulnerability. Vulnerabilities vary in their ability to be exploited by malicious actors. Some are not usable at all, while others can be used to disrupt the device with a denial of service attack. The most valuable allow the attacker to inject and run their own code, without the user being aware of it. Although the term "zero-day" initially referred to the time since the vendor had become aware of the vulnerability, zero-day vulnerabilities can also be defined as the subset of vulnerabilities for which no patch or other fix is available. A zero-day exploit is any exploit that takes advantage of such a vulnerability.
Exploits
An exploit is the delivery mechanism that takes advantage of the vulnerability to penetrate the target's systems, for such purposes as disrupting operations, installing malware, or exfiltrating data. Researchers Lillian Ablon and Andy Bogart write that "little is known about the true extent, use, benefit, and harm of zero-day exploits". Exploits based on zero-day vulnerabilities are considered more dangerous than those that take advantage of a known vulnerability. However, it is likely that most cyberattacks use known vulnerabilities, not zero-days.
States are the primary users of zero-day exploits, not only because of the high cost of finding or buying vulnerabilities, but also the significant cost of writing the attack software. Nevertheless, anyone can use a vulnerability, and according to research by the RAND Corporation, "any serious attacker can always get an affordable zero-day for almost any target". Many targeted attacks and most advanced persistent threats rely on zero-day vulnerabilities.
The average time to develop an exploit from a zero-day vulnerability was estimated at 22 days. The difficulty of developing exploits has been increasing over time due to increased anti-exploitation features in popular software.
Window of vulnerability
Zero-day vulnerabilities are often classified as alive—meaning that there is no public knowledge of the vulnerability—and dead—the vulnerability has been disclosed, but not patched. If the software's maintainers are actively searching for vulnerabilities, it is a living vulnerability; such vulnerabilities in unmaintained software are called immortal. Zombie vulnerabilities can be exploited in older versions of the software but have been patched in newer versions.
Even publicly known and zombie vulnerabilities are often exploitable for an extended period. Security patches can take months to develop, or may never be developed. A patch can have negative effects on the functionality of software and users may need to test the patch to confirm functionality and compatibility. Larger organizations may fail to identify and patch all dependencies, while smaller enterprises and personal users may not install patches.
Research suggests that risk of cyberattack increases if the vulnerability is made publicly known or a patch is released. Cybercriminals can reverse engineer the patch to find the underlying vulnerability and develop exploits, often faster than users install the patch.
According to research by RAND Corporation published in 2017, zero-day exploits remain usable for 6.9 years on average, although those purchased from a third party only remain usable for 1.4 years on average. The researchers were unable to determine if any particular platform or software (such as open-source software) had any relationship to the life expectancy of a zero-day vulnerability. Although the RAND researchers found that 5.7 percent of a stockpile of secret zero-day vulnerabilities will have been discovered by someone else within a year, another study found a higher overlap rate, as high as 10.8 percent to 21.9 percent per year.
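To make the quoted figures concrete, the following illustrative Python sketch assumes (an assumption, not a claim from the source) that the 5.7 percent annual rediscovery rate applies independently each year, and computes what fraction of a stockpile would remain undiscovered by others over time:

    # Illustrative only: erosion of a zero-day stockpile if ~5.7% of it is
    # independently rediscovered by someone else each year (RAND estimate).
    annual_rediscovery = 0.057  # from the RAND figure quoted above

    for years in (1, 5, 7, 10):
        still_secret = (1 - annual_rediscovery) ** years
        print(f"after {years:2d} years: ~{still_secret:.0%} still undiscovered by others")

Under this simple model roughly two-thirds of a stockpile would remain undiscovered after 7 years, comparable to the 6.9-year average usable life reported by RAND, although the two figures measure different things.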
Countermeasures
Because, by definition, there is no patch that can block a zero-day exploit, all systems employing the software or hardware with the vulnerability are at risk. This includes secure systems such as banks and governments that have all patches up to date. Security systems are designed around known vulnerabilities, and repeated exploitations of a zero-day exploit could continue undetected for an extended period of time. Although there have been many proposals for a system that is effective at detecting zero-day exploits, this remains an active area of research in 2023.
Many organizations have adopted defense-in-depth tactics, so that attacks are likely to require breaching multiple levels of security, which makes them more difficult to achieve. Conventional cybersecurity measures such as training and access controls, including multifactor authentication, least-privilege access, and air-gapping, make it harder to compromise systems with a zero-day exploit. Since writing perfectly secure software is impossible, some researchers argue that driving up the cost of exploits is a good strategy to reduce the burden of cyberattacks.
Market
Zero-day exploits can fetch millions of dollars. There are three main types of buyers:
White: sales to the vendor, or to third parties such as the Zero Day Initiative that disclose to the vendor. Often such disclosure is in exchange for a bug bounty. Not all companies respond positively to disclosures, as they can cause legal liability and operational overhead. It is not uncommon to receive cease-and-desist letters from software vendors after disclosing a vulnerability for free.
Gray: the largest and most lucrative market. Government or intelligence agencies buy zero-days and may use them in an attack, stockpile the vulnerability, or notify the vendor. The United States federal government is one of the largest buyers. As of 2013, the Five Eyes (United States, United Kingdom, Canada, Australia, and New Zealand) captured the plurality of the market, and other significant purchasers included Russia, India, Brazil, Malaysia, Singapore, North Korea, and Iran. Middle Eastern countries were poised to become the biggest spenders.
Black: organized crime, which typically prefers exploit software rather than just knowledge of a vulnerability. These users are more likely to employ "half-days" where a patch is already available.
In 2015, the markets for government and crime were estimated to be at least ten times larger than the white market. Sellers are often hacker groups that seek out vulnerabilities in widely used software for financial reward. Some will only sell to certain buyers, while others will sell to anyone. White-market sellers are more likely to be motivated by non-pecuniary rewards such as recognition and intellectual challenge. Selling zero-day exploits is legal. Despite calls for more regulation, law professor Mailyn Fidler says there is little chance of an international agreement because key players such as Russia and Israel are not interested.
The sellers and buyers that trade in zero-days tend to be secretive, relying on non-disclosure agreements and classified information laws to keep the exploits secret. If the vulnerability becomes known, it can be patched and its value consequently crashes. Because the market lacks transparency, it can be hard for parties to find a fair price. Sellers might not be paid if the vulnerability was disclosed before it was verified, or if the buyer declined to purchase it but used it anyway. With the proliferation of middlemen, sellers could never know to what use the exploits could be put. Buyers could not guarantee that the exploit was not sold to another party. Both buyers and sellers advertise on the dark web.
Research published in 2022 based on maximum prices paid as quoted by a single exploit broker found a 44 percent annualized inflation rate in exploit pricing. Remote zero-click exploits could fetch the highest price, while those that require local access to the device are much cheaper. Vulnerabilities in widely used software are also more expensive. They estimated that around 400 to 1,500 people sold exploits to that broker and they made around $5,500 to $20,800 annually.
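As a rough illustration of compounding at that rate (the starting price below is hypothetical, not from the study):

    # What a 44% annualized inflation rate implies for a quoted exploit price.
    base_price = 100_000  # assumed starting price in dollars (illustrative)
    rate = 0.44

    for year in range(6):
        print(year, round(base_price * (1 + rate) ** year))

At 44 percent per year, a price doubles roughly every ln(2)/ln(1.44), or about 1.9, years.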
Disclosure and stockpiling
There is an ongoing debate as to whether the United States should disclose the vulnerabilities it is aware of, so that they can be patched, or keep them secret for its own use. Reasons that states keep a vulnerability secret include wanting to use it offensively, or defensively in penetration testing. Disclosing the vulnerability reduces the risk that consumers and all users of the software will be victimized by malware or data breaches.
The phases of zero-day vulnerability disclosure, along with a typical timeline, are as follows:
Discovery: A researcher identifies the vulnerability, marking "Day 0."
Reporting: The researcher notifies the vendor or a third party, starting remediation efforts.
Patch Development: The vendor develops a fix, which can take weeks to months depending on the complexity.
Public Disclosure: Once a patch is released, details are shared publicly. If no patch is issued within an agreed period (commonly 90 days), some researchers disclose it to push for action.
History
Zero-day exploits increased in significance after services such as Apple, Google, Facebook, and Microsoft encrypted servers and messages, meaning that the most feasible way to access a user's data was to intercept it at the source before it was encrypted. One of the best-known uses of zero-day exploits was the Stuxnet worm, which used four zero-day vulnerabilities to damage Iran's nuclear program in 2010. The worm showed what could be achieved by zero-day exploits, unleashing an expansion in the market.
The United States National Security Agency (NSA) increased its search for zero-day vulnerabilities after large tech companies refused to install backdoors into the software, tasking the Tailored Access Operations (TAO) with discovering and purchasing zero-day exploits. In 2007, former NSA employee Charlie Miller publicly revealed for the first time that the United States government was buying zero-day exploits. Some information about the NSA involvement with zero-days was revealed in the documents leaked by NSA contractor Edward Snowden in 2013, but details were lacking. Reporter Nicole Perlroth concluded that "either Snowden’s access as a contractor didn’t take him far enough into the government’s systems for the intel required, or some of the government’s sources and methods for acquiring zero-days were so confidential, or controversial, that the agency never dared put them in writing".
One of the most infamous vulnerabilities discovered after 2013, Heartbleed (CVE-2014-0160), was not a zero-day when publicly disclosed but underscored the critical impact that software bugs can have on global cybersecurity. This flaw in the OpenSSL cryptographic library could have been exploited as a zero-day prior to its discovery, allowing attackers to steal sensitive information such as private keys and passwords.
In 2016 the hacking group known as Shadow Brokers released a trove of sophisticated zero-day exploits reportedly stolen from the United States National Security Agency (NSA). These included tools such as EternalBlue, which leveraged a vulnerability in Microsoft Windows' Server Message Block (SMB) protocol. EternalBlue was later weaponized in high-profile attacks like WannaCry and NotPetya, causing widespread global damage and highlighting the risks of stockpiling vulnerabilities.
The year 2020 saw one of the most sophisticated cyber espionage campaigns to date, in which attackers exploited multiple vulnerabilities, including zero-day vulnerabilities, to compromise SolarWinds' Orion software. This allowed access to numerous government and corporate networks.
In 2021 Chinese state-sponsored group, Hafnium, exploited zero-day vulnerabilities in Microsoft Exchange Server to conduct cyber espionage. Known as ProxyLogon, these flaws allowed attackers to bypass authentication and execute arbitrary code, compromising thousands of systems globally.
In 2022 the spyware Pegasus, developed by Israel's NSO Group, was found to exploit zero-click vulnerabilities in messaging apps like iMessage and WhatsApp. These exploits allowed attackers to access targets' devices without requiring user interaction, heightening concerns over surveillance and privacy.
| Technology | Computer security | null |
2177275 | https://en.wikipedia.org/wiki/Domesticated%20hedgehog | Domesticated hedgehog | The domesticated hedgehog kept as a pet is typically the African pygmy hedgehog (Atelerix albiventris). Other species kept as pets include the long-eared hedgehog (Hemiechinus auritus) and the Indian long-eared hedgehog (Hemiechinus collaris).
In the ancient era
Although ancient humans were familiar with the hedgehog, hunting it for food and using its spines in the processing of wool, it was not likely kept as a pet. Aristotle described the behaviour of "pet" hedgehogs kept in the home as a means for predicting weather by someone in Byzantium—Plutarch describes the same, but refers to the man as living in Cyzicus—but this is probably an unusual situation, as hedgehogs were generally not regarded as valuable animals. Other sources suggest that the Ancient Greeks may have kept hedgehogs around the home for their potential to eat beetles and other pests.
Guinness World Records describes the Romans as having domesticated a relative of the Algerian hedgehog in the 4th century BCE, using it for meat and quills as well as keeping it as a pet.
The Romans did use the quill-covered hedgehog skins to clean their shawls, making them important to commerce, which resulted in the Roman Senate regulating the trade in hedgehog skins. The quills were used in the training of other animals, such as keeping a calf from suckling after it had been weaned.
Modern domestication
In the early 1980s, hedgehog domestication became popular in the United States. Some U.S. states, however, ban them, or require a license to own one.
Since domestication restarted, several new colors of hedgehogs have been cultivated or become common, including albino and pinto hedgehogs. "Pinto" is a color pattern rather than a color: a total lack of color on the quills and the skin beneath, in distinct patches.
Currently, the species most common among domestic hedgehogs are African species from warm climates. They do not hibernate in the wild, and if one of these African hedgehogs begins hibernation in response to lowered body temperature, the result can be its death. The process is easily reversed by warming, if caught within a few days of onset.
Legality
Because a hedgehog is commonly kept in a cage or similar enclosure, it is allowed in some residences where cats and dogs are not allowed.
It is illegal to own a hedgehog as a pet in some jurisdictions in North America, and a license is needed to legally breed them. These restrictions may have been enacted due to the ability of some hedgehog species to carry foot and mouth disease, a highly contagious disease of cloven-hooved animals.
The European hedgehog is a protected species in all countries that have signed the Berne Convention; this includes all member states of the Council of Europe, as well as the European Union and a small number of other states. In these countries, it is illegal to capture the European hedgehog or keep it as a pet.
Legal status internationally
Austria: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs (African Pygmy hedgehogs) may legally be kept as pets.
Australia: All hedgehogs are classified as exotic pets that are illegal to import.
Canada:
In Quebec – European hedgehogs are illegal. Four-toed hedgehogs are legal.
In Ontario – European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
Denmark: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
Finland: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
Germany: European hedgehogs are protected and cannot be kept as pets. They may be removed from their habitat if injured or sick, but only for the purposes of restoring their health. If, for some reason, they cannot be released back into the wild (due to neurological conditions or permanent damage to limbs, for example) keeping them is still illegal and they must be put to sleep by a qualified vet. Four-toed hedgehogs may legally be kept as pets.
Italy: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
Latvia: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
Netherlands: European hedgehogs are protected and cannot be kept as pets. Since 1 January 2024, the selling, trading, and breeding of four-toed hedgehogs has been illegal.
Poland: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
France: European hedgehogs are protected; no species of hedgehog may be kept as a pet.
Spain: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs are illegal and considered an exotic invasive species.
Sweden: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
United Kingdom: European hedgehogs are protected and cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
United States:
In Idaho and Oregon – European hedgehogs cannot be kept as pets. Four-toed hedgehogs may legally be kept as pets.
In New Jersey and Wyoming – a permit is required.
In Wisconsin – an import permit from the state department of agriculture is required to bring a hedgehog into the state.
In Fairfax County, Virginia, it became legal to keep hedgehogs as pets in 2019.
In Pennsylvania – hedgehogs may not be imported into the state, but hedgehogs in the state as of 1992 and their descendants are allowed.
It is currently illegal to own a hedgehog in California, Georgia, Hawaii, New York City, and Washington, D.C.
Singapore: Hedgehogs of all kinds are illegal, along with other exotic pets such as iguanas, tarantulas, scorpions, and snakes.
Turkey: European hedgehogs are protected and cannot be kept as pets; four-toed hedgehogs also may not legally be kept as pets.
Enclosures
In the wild, a hedgehog will cover many miles each night. A hedgehog with insufficient range may show signs of depression, such as excessive sleeping, refusal to eat, repetitious behaviour, and self-mutilation. Hedgehogs require a fair amount of exercise to avoid liver problems due to excess weight. Therefore, a domesticated hedgehog must have access to a running wheel. Running wheels must be selected carefully to avoid foot injury. Running wheels made of solid material that are approximately 1 foot (12 inches) in diameter are recommended.
Food
In the wild, a hedgehog is opportunistic and will eat many things, but the majority of the diet comprises insects. As insectivores, hedgehogs need a diet that is high in protein and low in fat. They also require chitin, which comes from the exoskeleton of insects; fiber in the diet may be a substitute for the chitin component. There are prepared foods specifically for pet hedgehogs and insectivores, including foods made from insect components. Also available are alimentary powders to sprinkle on other food which provide chitin and other nutrients.
Pet hedgehogs may eat such table foods as cooked, lean chicken, turkey, beef or pork. They will often eat small amounts of vegetables and fruit. Hedgehogs are lactose-intolerant and will have stomach problems after consuming most dairy products, though occasional plain low-fat yogurt or cottage cheese seem to be well tolerated.
Allergies
Hedgehogs pose very little danger to their owners and handlers. It is possible to be allergic to items surrounding the hedgehog, such as the hedgehog's food or bedding, but it is rare for a person to be allergic to the hedgehog itself.
After handling hedgehogs, some have claimed that pink dots on their hands are an allergic reaction. These are more likely caused by small pricks from the hedgehog's spines. If a hedgehog is not clean, the pricks can become infected. The infection is from contaminants on the hedgehog or on the surface of the hands, not from an allergic reaction to the hedgehog. As is true with most animal handling, one should wash one's hands after handling a hedgehog.
Hedgehogs are commonly allergic to wood oils. Wood bedding, specifically cedar, should be avoided: the oil found in cedar can cause severe upper respiratory problems. Aspen, however, is widely accepted as a safe substitute.
Diseases
Hedgehogs can easily become obese; if they can no longer roll completely into a ball, it is a clear sign of obesity. Conversely, hedgehogs often stop eating under situations of stress, such as when adjusting to a new home.
Hedgehogs are prone to many diseases, including cancer, which spreads quickly in hedgehogs, and wobbly hedgehog syndrome (WHS), a neurological problem. Some symptoms of WHS resemble those of multiple sclerosis (MS) in humans, so the condition the animal experiences can be compared with what MS patients experience. A possible cause of WHS is a genetic flaw allowing a virus to attack the hedgehog's nervous system.
The nose can display a variety of symptoms of a troubled hedgehog, especially respiratory illnesses, such as pneumonia. In many cases, the form of pneumonia that affects hedgehogs is bacterial in nature. If acted upon quickly, antibiotics can have a very positive effect. Signs to watch for include bubbles, excessive dripping, or constant sneezing.
Hedgehogs usually react to stress with temporary digestive disorders that include vomiting and green feces.
| Biology and health sciences | Eulipotyphla | Animals |
2178570 | https://en.wikipedia.org/wiki/Cosmic%20dust | Cosmic dust | Cosmic dust, also called extraterrestrial dust, space dust, or star dust, is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and about 0.1 mm across, such as micrometeoroids (<30 μm) and meteoroids (>30 μm). Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). Several methods exist for measuring space dust.
In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10−16 kg (0.1 pg) and 10−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10−6 dust grains/m3.
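As a sanity check on those figures, a back-of-the-envelope estimate can combine the quoted grain number density with Earth's cross-section. The grain speed and typical grain mass below are assumptions chosen for illustration, not values from the source:

    # Order-of-magnitude estimate of the annual dust mass swept up by Earth.
    import math

    n = 1e-6           # dust grains per m^3 (from the text)
    v = 20e3           # assumed Earth-relative grain speed, m/s
    m = 1e-12          # assumed typical grain mass, kg
    R_earth = 6.371e6  # Earth's radius, m
    seconds_per_year = 3.156e7

    area = math.pi * R_earth**2  # Earth's collecting cross-section
    mass_per_year = n * v * m * area * seconds_per_year
    print(f"~{mass_per_year/1000:.0f} tonnes per year")

With these assumptions the estimate lands in the tens of thousands of tonnes per year, broadly consistent with the "thousands of tons" figure above and with the 40,000-ton estimate cited later in this article.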
Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust" consisting of larger refractory minerals that condensed as matter left by stars.
Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006.
Study and importance
Cosmic dust was once solely an annoyance to astronomers, as it obscured objects they wished to observe. When infrared astronomy began, the dust particles were found to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System, dust plays a major role in the zodiacal light, Saturn's B Ring spokes, the outer diffuse planetary rings at Jupiter, Saturn, Uranus and Neptune, and comets.
The interdisciplinary study of dust brings together different scientific fields: physics (solid-state, electromagnetic theory, surface physics, statistical physics, thermal physics), fractal mathematics, surface chemistry on dust grains, meteoritics, as well as every branch of astronomy and astrophysics. These disparate research areas can be linked by the following theme: cosmic dust particles evolve cyclically, in their chemistry, their physics and their dynamics. The evolution of dust traces out paths in which the Universe recycles material, in processes analogous to the daily recycling steps with which many people are familiar: production, storage, processing, collection, consumption, and discarding.
Observations and measurements of cosmic dust in different regions provide an important insight into the Universe's recycling processes: in the clouds of the diffuse interstellar medium, in molecular clouds, in the circumstellar dust of young stellar objects, and in planetary systems such as the Solar System, where astronomers consider dust to be in its most recycled state. Astronomers accumulate observational 'snapshots' of dust at different stages of its life and, over time, form a more complete movie of the Universe's complicated recycling steps.
Parameters such as the particle's initial motion, its material properties, and the intervening plasma and magnetic field determine the dust particle's arrival at the dust detector. Slightly changing any of these parameters can give significantly different dust dynamical behavior. Therefore, one can learn about where the particle came from and about the intervening medium.
Detection methods
A wide range of methods is available to study cosmic dust. Cosmic dust can be detected by remote-sensing methods that utilize the radiative properties of cosmic dust particles, cf. zodiacal light measurement.
Cosmic dust can also be detected directly ('in-situ') using a variety of collection methods and from a variety of collection locations. Estimates of the daily influx of extraterrestrial material entering the Earth's atmosphere range between 5 and 300 tonnes.
NASA collects samples of star dust particles in the Earth's atmosphere using plate collectors under the wings of stratospheric-flying airplanes. Dust samples are also collected from surface deposits on the large Earth ice-masses (Antarctica and Greenland/the Arctic) and in deep-sea sediments.
Don Brownlee at the University of Washington in Seattle first reliably identified the extraterrestrial nature of collected dust particles in the late 1970s. Another source is meteorites, from which stardust can be extracted. Stardust grains are solid refractory pieces of individual presolar stars. They are recognized by their extreme isotopic compositions, which can only have originated within evolved stars, prior to any mixing with the interstellar medium. These grains condensed from the stellar matter as it cooled while leaving the star.
In interplanetary space, dust detectors on planetary spacecraft have been built and flown, some are presently flying, and more are presently being built to fly. The large orbital velocities of dust particles in interplanetary space (typically 10–40 km/s) make intact particle capture problematic. Instead, in-situ dust detectors are generally devised to measure parameters associated with the high-velocity impact of dust particles on the instrument, and then derive physical properties of the particles (usually mass and velocity) through laboratory calibration (i.e., impacting accelerated particles with known properties onto a laboratory replica of the dust detector). Over the years dust detectors have measured, among others, the impact light flash, acoustic signal and impact ionisation. Recently the dust instrument on Stardust captured particles intact in low-density aerogel.
Dust detectors in the past flew on the HEOS 2, Helios, Pioneer 10, Pioneer 11, Giotto, Galileo, Ulysses and Cassini space missions, on the Earth-orbiting LDEF, EURECA, and Gorid satellites, and some scientists have utilized the Voyager 1 and 2 spacecraft as giant Langmuir probes to directly sample the cosmic dust. Presently dust detectors are flying on the Ulysses, Proba, Rosetta, Stardust, and the New Horizons spacecraft. The collected dust at Earth or collected further in space and returned by sample-return space missions is then analyzed by dust scientists in their respective laboratories all over the world. One large storage facility for cosmic dust exists at the NASA Houston JSC.
Infrared light can penetrate cosmic dust clouds, allowing us to peer into regions of star formation and the centers of galaxies. NASA's Spitzer Space Telescope was the largest infrared space telescope, before the launch of the James Webb Space Telescope. During its mission, Spitzer obtained images and spectra by detecting the thermal radiation emitted by objects in space between wavelengths of 3 and 180 micrometres. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground. Findings from the Spitzer have revitalized the studies of cosmic dust. One report showed some evidence that cosmic dust is formed near a supermassive black hole.
Another detection mechanism is polarimetry. Dust grains are not spherical and tend to align to interstellar magnetic fields, preferentially polarizing starlight that passes through dust clouds. In nearby interstellar space, where interstellar reddening is not intense enough to be detected, high precision optical polarimetry has been used to glean the structure of dust within the Local Bubble.
In 2019, researchers found interstellar dust in Antarctica which they relate to the Local Interstellar Cloud. The detection of interstellar dust in Antarctica was done by the measurement of the radionuclides iron-60 and manganese-53 by highly sensitive accelerator mass spectrometry.
Radiation properties
A dust particle interacts with electromagnetic radiation in a way that depends on its cross section, the wavelength of the electromagnetic radiation, and on the nature of the grain: its refractive index, size, etc. The radiation process for an individual grain is called its emissivity, dependent on the grain's efficiency factor. Further specifications regarding the emissivity process include extinction, scattering, absorption, or polarisation. In the radiation emission curves, several important signatures identify the composition of the emitting or absorbing dust particles.
Dust particles can scatter light nonuniformly. Forward scattered light is light that is redirected slightly off its path by diffraction, and back-scattered light is reflected light.
The scattering and extinction ("dimming") of the radiation give useful information about the dust grain sizes. For example, if the dust in one's data is many times brighter in forward-scattered visible light than in back-scattered visible light, then it is understood that a significant fraction of the particles are about a micrometer in diameter.
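The forward-scattering inference rests on the dimensionless size parameter x = 2πa/λ: grains with x of order unity or larger scatter preferentially forward (the Mie regime), while much smaller grains scatter nearly symmetrically (the Rayleigh regime). A minimal sketch, where the grain radii are illustrative values rather than measurements:

    # Size parameter x = 2*pi*a / lambda for visible light and assumed grain radii.
    import math

    wavelength = 550e-9  # visible light, m
    for a in (0.01e-6, 0.1e-6, 0.5e-6):  # grain radii in m (illustrative)
        x = 2 * math.pi * a / wavelength
        regime = "Rayleigh (x << 1)" if x < 0.3 else "Mie, strongly forward-scattering"
        print(f"a = {a*1e6:.2f} um -> x = {x:.2f} ({regime})")

A micrometer-diameter grain (a of about 0.5 μm) gives x of about 5.7 at 550 nm, which is why strong forward scattering points to particles of roughly that size.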
The scattering of light from dust grains in long exposure visible photographs is quite noticeable in reflection nebulae, and gives clues about the individual particle's light-scattering properties. In X-ray wavelengths, many scientists are investigating the scattering of X-rays by interstellar dust, and some have suggested that astronomical X-ray sources would possess diffuse haloes, due to the dust.
Presolar grains
Presolar grains are contained within meteorites, from which they are extracted in terrestrial laboratories. The term "stardust" or "presolar stardust" is sometimes used to distinguish grains from a single star in comparison to aggregated interstellar dust particles, though this distinction is not universally applied. Presolar material was a component of the dust in the interstellar medium before its incorporation into meteorites. The meteorites have stored those presolar grains ever since the meteorites first assembled within the planetary accretion disk more than four billion years ago. Carbonaceous chondrites are especially fertile reservoirs of presolar material. Presolar grains definitionally existed before the Earth was formed. Presolar grain (and, less frequently, "stardust" or "presolar stardust") is the scientific term referring to refractory dust grains that condensed from cooling ejected gases from individual presolar stars and incorporated into the cloud from which the Solar System condensed.
Many different types of presolar grains have been identified by laboratory measurements of the highly unusual isotopic composition of the chemical elements that comprise each presolar grain. These refractory mineral grains may earlier have been coated with volatile compounds, but those are lost in the dissolving of meteorite matter in acids, leaving only insoluble refractory minerals. Finding the grain cores without dissolving most of the meteorite has been possible, but difficult and labor-intensive.
Many new aspects of nucleosynthesis have been discovered from the isotopic ratios within the presolar grains. An important property of presolar grains is their hard, refractory, high-temperature nature. Prominent are silicon carbide, graphite, aluminium oxide, aluminium spinel, and other such solids that would condense at high temperature from a cooling gas, such as in stellar winds or in the decompression of the inside of a supernova. They differ greatly from the solids formed at low temperature within the interstellar medium.
Also important are their extreme isotopic compositions, which are expected to exist nowhere in the interstellar medium. This also suggests that the presolar grains condensed from the gases of individual stars before the isotopes could be diluted by mixing with the interstellar medium. These allow the source stars to be identified. For example, the heavy elements within the silicon carbide (SiC) grains are almost pure s-process isotopes, fitting their condensation within the red-giant winds of AGB stars, inasmuch as the AGB stars are the main source of s-process nucleosynthesis and have atmospheres observed by astronomers to be highly enriched in dredged-up s-process elements.
Another dramatic example is given by supernova condensates, usually shortened by acronym to SUNOCON (from SUperNOva CONdensate) to distinguish them from other grains condensed within stellar atmospheres. SUNOCONs contain in their calcium an excessively large abundance of 44Ca, demonstrating that they condensed containing abundant radioactive 44Ti, which has a 65-year half-life. The outflowing 44Ti nuclei were thus still "alive" (radioactive) when the SUNOCON condensed near one year within the expanding supernova interior, but would have become an extinct radionuclide (specifically 44Ca) after the time required for mixing with the interstellar gas. Its discovery proved the prediction from 1975 that it might be possible to identify SUNOCONs in this way. The SiC SUNOCONs (from supernovae) are only about 1% as numerous as are SiC stardust from AGB stars.
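The "extinct radionuclide" argument follows directly from the decay arithmetic. A short sketch, using only the roughly 65-year half-life cited above:

    # Fraction of the original 44Ti remaining after t years (half-life ~65 yr).
    half_life = 65.0  # years

    for t in (65, 650, 6500):
        remaining = 0.5 ** (t / half_life)
        print(f"after {t:5d} years: {remaining:.3g} of the original 44Ti remains")

After 100 half-lives (about 6,500 years) essentially no 44Ti survives, so any excess 44Ca found in a grain must have been incorporated as live 44Ti when the grain condensed, within roughly a year of the explosion.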
Stardust itself (SUNOCONs and AGB grains that come from specific stars) is but a modest fraction of the condensed cosmic dust, forming less than 0.1% of the mass of total interstellar solids. The high interest in presolar grains derives from the new information they have brought to the sciences of stellar evolution and nucleosynthesis.
Laboratories have studied solids that existed before the Earth was formed. This was once thought impossible, especially in the 1970s when cosmochemists were confident that the Solar System began as a hot gas virtually devoid of any remaining solids, which would have been vaporized by high temperature. The existence of presolar grains proved this historic picture incorrect.
Some bulk properties
Cosmic dust is made of dust grains and aggregates into dust particles. These particles are irregularly shaped, with porosity ranging from fluffy to compact. The composition, size, and other properties depend on where the dust is found, and conversely, a compositional analysis of a dust particle can reveal much about the dust particle's origin. General diffuse interstellar medium dust, dust grains in dense clouds, planetary rings dust, and circumstellar dust, are each different in their characteristics. For example, grains in dense clouds have acquired a mantle of ice and on average are larger than dust particles in the diffuse interstellar medium. Interplanetary dust particles (IDPs) are generally larger still.
Most of the influx of extraterrestrial matter that falls onto the Earth is dominated by meteoroids with diameters in the range 50 to 500 micrometers, of average density 2.0 g/cm3 (with porosity about 40%). The densities of most IDPs captured in the Earth's stratosphere range between 1 and 3 g/cm3, with an average density of about 2.0 g/cm3.
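The quoted porosity and density are mutually consistent, since the bulk density of a porous grain is rho_bulk = (1 - porosity) * rho_solid. A one-line check, where the solid density is an assumed value typical of compact chondritic silicate rather than a figure from the source:

    # Bulk density of a porous grain: rho_bulk = (1 - porosity) * rho_solid
    rho_solid = 3.3  # g/cm^3, assumed for compact chondritic silicate
    porosity = 0.40  # from the text

    print(f"{(1 - porosity) * rho_solid:.1f} g/cm^3")  # ~2.0, matching the text

This reproduces the average density of about 2.0 g/cm3 given above.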
Other specific dust properties: in circumstellar dust, astronomers have found molecular signatures of CO, silicon carbide, amorphous silicate, polycyclic aromatic hydrocarbons, water ice, and polyformaldehyde, among others (in the diffuse interstellar medium, there is evidence for silicate and carbon grains). Cometary dust is generally different (with overlap) from asteroidal dust. Asteroidal dust resembles carbonaceous chondritic meteorites. Cometary dust resembles interstellar grains which can include silicates, polycyclic aromatic hydrocarbons, and water ice.
In September 2020, evidence was presented of solid-state water in the interstellar medium, and particularly, of water ice mixed with silicate grains in cosmic dust grains.
Dust grain formation
The large grains in interstellar space are probably complex, with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds. That cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass. Those cores mostly start with silicate particles condensing in the atmospheres of cool, oxygen-rich red giants and carbon grains condensing in the atmospheres of cool carbon stars. Red giants have evolved off the main sequence and entered the giant phase of their evolution, and are the major source of refractory dust grain cores in galaxies. Those refractory cores are also called stardust (section above), which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars. Several percent of refractory grain cores have condensed within expanding interiors of supernovae, a type of cosmic decompression chamber. Meteoriticists who study refractory stardust (extracted from meteorites) often call it presolar grains, but that within meteorites is only a small fraction of all presolar dust. Stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust, which accretes cold onto preexisting dust in dark molecular clouds of the galaxy. Those molecular clouds are very cold, typically less than 50 K, so that ices of many kinds may accrete onto grains, in some cases only to be destroyed or split apart by radiation and sublimation into a gas component. Finally, as the Solar System formed, many interstellar dust grains were further modified by coalescence and chemical reactions in the planetary accretion disk. The history of the various types of grains in the early Solar System is complicated and only partially understood.
Astronomers know that the dust is formed in the envelopes of late-evolved stars from specific observational signatures. In infrared light, emission at 9.7 micrometres is a signature of silicate dust in cool evolved oxygen-rich giant stars. Emission at 11.5 micrometres indicates the presence of silicon carbide dust in cool evolved carbon-rich giant stars. These help provide evidence that the small silicate particles in space came from the ejected outer envelopes of these stars.
Conditions in interstellar space are generally not suitable for the formation of silicate cores. This would take excessive time to accomplish, even if it might be possible. The argument is that, given a typical observed grain diameter a and the temperature of the interstellar gas, the time for a grain to grow to diameter a would be considerably longer than the age of the Universe. On the other hand, grains are seen to have recently formed in the vicinity of nearby stars, in nova and supernova ejecta, and in R Coronae Borealis variable stars, which seem to eject discrete clouds containing both gas and dust. So mass loss from stars is unquestionably where the refractory cores of grains formed.
Most dust in the Solar System is highly processed dust, recycled from the material out of which the Solar System formed and subsequently collected in the planetesimals, and leftover solid material such as comets and asteroids, and reformed in each of those bodies' collisional lifetimes. During the Solar System's formation history, the most abundant element was (and still is) H2. The metallic elements magnesium, silicon, and iron, which are the principal ingredients of rocky planets, condensed into solids at the highest temperatures of the planetary disk. Some molecules such as CO, N2, NH3, and free oxygen existed in a gas phase. Some molecules, for example graphite (C) and SiC, would condense into solid grains in the planetary disk; but carbon and SiC grains found in meteorites are presolar based on their isotopic compositions, rather than from the planetary disk formation. Some molecules also formed complex organic compounds and some molecules formed frozen ice mantles, either of which could coat the "refractory" (Mg, Si, Fe) grain cores. Stardust once more provides an exception to the general trend, as it appears to be totally unprocessed since its thermal condensation within stars as refractory crystalline minerals. The condensation of graphite occurs within supernova interiors as they expand and cool, and does so even in gas containing more oxygen than carbon, a surprising carbon chemistry made possible by the intense radioactive environment of supernovae. This special example of dust formation has merited specific review.
Planetary disk formation of precursor molecules was determined, in large part, by the temperature of the solar nebula. Since the temperature of the solar nebula decreased with heliocentric distance, scientists can infer where a dust grain formed from knowledge of the grain's materials. Some materials could only have been formed at high temperatures, while other grain materials could only have been formed at much lower temperatures. The materials in a single interplanetary dust particle often show that the grain elements formed in different locations and at different times in the solar nebula. Most of the matter present in the original solar nebula has since disappeared; drawn into the Sun, expelled into interstellar space, or reprocessed, for example, as part of the planets, asteroids or comets.
Due to their highly processed nature, IDPs (interplanetary dust particles) are fine-grained mixtures of thousands to millions of mineral grains and amorphous components. We can picture an IDP as a "matrix" of material with embedded elements which were formed at different times and places in the solar nebula and before the solar nebula's formation. Examples of embedded elements in cosmic dust are GEMS, chondrules, and CAIs.
From the solar nebula to Earth
The arrows in the adjacent diagram show one possible path from a collected interplanetary dust particle back to the early stages of the solar nebula.
We can follow the trail to the right in the diagram to the IDPs that contain the most volatile and primitive elements. The trail takes us first from interplanetary dust particles to chondritic interplanetary dust particles. Planetary scientists classify chondritic IDPs in terms of their diminishing degree of oxidation so that they fall into three major groups: the carbonaceous, the ordinary, and the enstatite chondrites. As the name implies, the carbonaceous chondrites are rich in carbon, and many have anomalies in the isotopic abundances of H, C, N, and O. From the carbonaceous chondrites, we follow the trail to the most primitive materials. They are almost completely oxidized and contain the lowest condensation temperature elements ("volatile" elements) and the largest amount of organic compounds. Therefore, dust particles with these elements are thought to have been formed in the early life of the Solar System. The volatile elements have never seen temperatures above about 500 K, therefore, the IDP grain "matrix" consists of some very primitive Solar System material. Such a scenario is true in the case of comet dust. The provenance of the small fraction that is stardust (see above) is quite different; these refractory interstellar minerals thermally condense within stars, become a small component of interstellar matter, and therefore remain in the presolar planetary disk. Nuclear damage tracks are caused by the ion flux from solar flares. Solar wind ions impacting on the particle's surface produce amorphous radiation damaged rims on the particle's surface. And spallogenic nuclei are produced by galactic and solar cosmic rays. A dust particle that originates in the Kuiper Belt at 40 AU would have many more times the density of tracks, thicker amorphous rims and higher integrated doses than a dust particle originating in the main-asteroid belt.
Based on 2012 computer model studies, the complex organic molecules necessary for life (extraterrestrial organic molecules) may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of the Earth. According to the computer studies, this same process may also occur around other stars that acquire planets.
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics – "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
In February 2014, NASA announced a greatly upgraded database for detecting and monitoring polycyclic aromatic hydrocarbons (PAHs) in the universe. According to NASA scientists, over 20% of the carbon in the Universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are abundant in the Universe, and are associated with new stars and exoplanets.
In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the Universe, may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists.
Some "dusty" clouds in the universe
The Solar System has its own interplanetary dust cloud, as do extrasolar systems. There are different types of nebulae with different physical causes and processes: diffuse nebula, infrared (IR) reflection nebula, supernova remnant, molecular cloud, HII regions, photodissociation regions, and dark nebula.
Distinctions between those types of nebula are that different radiation processes are at work. For example, H II regions, like the Orion Nebula, where a lot of star-formation is taking place, are characterized as thermal emission nebulae. Supernova remnants, on the other hand, like the Crab Nebula, are characterized as nonthermal emission (synchrotron radiation).
Some of the better known dusty regions in the Universe are the diffuse nebulae in the Messier catalog, for example: M1, M8, M16, M17, M20, M42, M43.
Some larger dust catalogs are Sharpless (1959) A Catalogue of HII Regions, Lynds (1965) Catalogue of Bright Nebulae, Lynds (1962) Catalogue of Dark Nebulae, van den Bergh (1966) Catalogue of Reflection Nebulae, Green (1988) Rev. Reference Cat. of Galactic SNRs, The National Space Sciences Data Center (NSSDC), and CDS Online Catalogs.
Dust sample return
The Discovery program's Stardust mission was launched on 7 February 1999 to collect samples from the coma of comet Wild 2, as well as samples of cosmic dust. It returned samples to Earth on 15 January 2006. In 2007, the recovery of particles of interstellar dust from the samples was announced.
Dust particles on Earth
In 2017, Genge et al. published a paper about "urban collection" of dust particles on Earth. The team collected 500 micrometeorites from rooftops. Dust was collected in Oslo and in Paris, and "all particles are silicate-dominated (S type) cosmic spherules with subspherical shapes that form by melting during atmospheric entry and consist of quench crystals of magnesian olivine, relict crystals of forsterite, and iron-bearing olivine within glass". In the UK, scientists look for micrometeorites on the rooftops of cathedrals, such as Canterbury Cathedral and Rochester Cathedral. An estimated 40,000 tons of cosmic dust falls to Earth each year.
| Physical sciences | Basics_2 | Astronomy |
2179965 | https://en.wikipedia.org/wiki/Deinotheriidae | Deinotheriidae | Deinotheriidae ("terrible beasts") is a family of prehistoric elephant-like proboscideans that lived during the Cenozoic era, first appearing in Africa during the Oligocene then spreading across Europe and the lower latitudes of Asia during the Miocene epoch. Their most distinctive features were their lack of upper tusks and downward-curving tusks on the lower jaw.
Deinotheres were not very diverse; the only three known genera are Chilgatherium, Prodeinotherium, and Deinotherium. These form an evolutionary succession, with each new genus replacing the preceding one. Deinotheres were relatively conservative and showed little morphological change over their evolution, aside from a progressive increase in body size. Some species of Deinotherium are among the largest known land mammals ever, considerably exceeding modern elephants in size. The last members of Deinotherium persisted until the end of the Early Pleistocene in Africa, around 1 million years ago.
Description
The body shape and proportions of deinotheres were very much like those of modern elephants. The legs were long, like modern elephants, but the skull was rather flatter than that of true elephants. The upper jaw lacked incisor and canine teeth, but possessed five low-crowned molars on each side, with the same number in the lower jaw. Deinotheres used their front teeth for crushing their food, and the back teeth for shearing (slicing) the plant material. The front part of the lower jaw was turned downwards and bore the two tusk-like incisors. These curved downwards and backwards in a sort of huge hook and constituted the most distinct feature of the deinotheres. The tusks were used to strip vegetation rather than for digging.
While the earliest deinothere Chilgatherium probably weighed only around and was less than tall, some species of Deinotherium are among the largest known proboscideans, with shoulder heights of over and body masses around , considerably exceeding living African bush elephants in body size, making them among the largest land mammals ever.
Ecology
Deinotheres were "shearing browsers" adapted for feeding on plants above ground level. The way they chewed their food was probably similar to that of modern tapirs, with the front teeth being used to crush the food, while the second and third molars have a strong vertical shearing action, with little lateral (side-to-side) movement. This chewing action differs from both that of gomphotheres (lateral grinding) and elephants (horizontal shearing). Deinothere molars show little wear, indicating a diet of soft, non-gritty, forest vegetation, with the down-turned lower tusks being used for stripping bark or other vegetation.
Deinotherium giganteum has a more elongated lower forelimb than the early and middle Miocene Prodeinotherium, indicating a more efficient stride as an adaptation to the spread of savannas in Europe during the late Miocene. Deinotheres probably migrated from forest to forest, traversing the wide and (to them) useless grasslands.
Evolutionary history
Deinotheriids are thought to have diverged from the ancestors of Elephantiformes during the Eocene, over 40 million years ago, based on the presence of primitive Elephantiformes in Lutetian deposits.

Phylogeny of Proboscidea showing the placement of Deinotheriidae, following Hautier et al. 2021.

The oldest known deinothere is Chilgatherium harrisi from the late Oligocene, around 27–28 million years ago. Its fossil remains have been found in the district of Chilga in Ethiopia (hence the name). It is primarily known from tooth remains.
By the early Miocene, deinotheres had grown to the size of a small elephant and had migrated to Eurasia. Several species are known, all belonging to the genus Prodeinotherium.
During the late middle Miocene, these modest-sized proboscideans were replaced by much larger forms across Eurasia. In Europe, Prodeinotherium bavaricum appeared in the early Miocene mammal faunal zone MN 4, but was soon replaced by Deinotherium giganteum in the middle Miocene. Likewise in Asia, Prodeinotherium is known from early Miocene strata in the Bugti Hills, and continued into the middle Miocene Chinji Formation, where it was replaced by D. indicum.
While these Miocene deinotheres were dispersed widely and evolved to huge elephant sizes, they were not as common as the contemporary (but smaller) Elephantoidea. Fossil remains of this age are known from France, Germany, Greece, Malta, and northern India and Pakistan. These consist chiefly of teeth and the bones of the skull.
After the extinction of the paraceratheres at the Oligocene-Miocene transition, the deinotheres were (and remained) the largest animals walking the Earth.
The late Miocene was the heyday of the giant deinotheres. D. giganteum was common at Vallesian and Turolian localities in Europe. Prodeinotherium, which was reasonably well represented in the early Miocene of Africa, was succeeded by D. bozasi at the beginning of the late Miocene. In Asia, D. indicum was most common in the late-Miocene Dhok Pathan Formation.
Fossil teeth of D. giganteum, from the late-Miocene Sinap Formation at the Turkish site of Kayadibi are larger than those from older localities, such as Eppelsheim, Wissberg, and Montredon, indicating a tendency for increasing size of members of the species over time. These were the biggest animals of their day, protected from both predators and rival herbivores by virtue of their huge bulk. The largest mammoths did not approach them in size until the Pleistocene.
With the end of the Miocene, deinothere fortunes declined. D. indicum died out about 7 million years ago, possibly driven to extinction by the same process of climate change that had previously eliminated the even more enormous Paraceratherium. In Europe, D. giganteum continued, albeit in dwindling numbers, until the middle Pliocene; the most recent specimen is from Romania.
In its original African homeland, Deinotherium continued to flourish throughout the Pliocene, and fossils have been uncovered at several of the African sites where remains of hominids have also been found.
The last deinothere species to become extinct was D. bozasi. The youngest known specimens are from the Kanjera Formation, Kenya, about 1 million years ago (early Pleistocene). The causes of the extinction of such a successful and long-lived animal are not known, although a small number of other species of African megafauna also died out at this time.
| Biology and health sciences | Proboscidea | Animals |
9328562 | https://en.wikipedia.org/wiki/Boltzmann%27s%20entropy%20formula | Boltzmann's entropy formula | In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy $S$, also written as $S_\mathrm{B}$, of an ideal gas to the multiplicity (commonly denoted as $\Omega$ or $W$), the number of real microstates corresponding to the gas's macrostate:

$$S = k_\mathrm{B} \ln W \tag{1}$$

where $k_\mathrm{B}$ is the Boltzmann constant (also written as simply $k$), equal to $1.380649 \times 10^{-23}$ J/K, and $\ln$ is the natural logarithm function (log base $e$).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
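As an illustrative sketch (not part of the original article; the toy system and function names are our own), equation (1) can be evaluated for a small two-state system, working with $\ln W$ directly because $W$ itself quickly exceeds floating-point range:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact by the 2019 SI definition)

def boltzmann_entropy(ln_W: float) -> float:
    """Entropy S = k_B * ln(W), taking ln(W) rather than W because the
    multiplicity overflows floating point for macroscopic systems."""
    return K_B * ln_W

# Toy system: 100 two-state particles (coins); macrostate "50 heads".
W = math.comb(100, 50)                  # multiplicity of the 50/50 macrostate
S = boltzmann_entropy(math.log(W))
print(f"W = {W:.3e}, S = {S:.3e} J/K")  # W ~ 1.01e29, S ~ 9.2e-22 J/K
```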
History
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such trajectories are scarcely observable. The present account concerns instantaneous microstates.
The value of $W$ was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of $N$ identical particles, of which $N_i$ are in the $i$-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. $W$ was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. $W$ can be counted using the formula for permutations

$$W = \frac{N!}{\prod_i N_i!}$$

where $i$ ranges over all possible molecular conditions and $!$ denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. $W$ is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
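A minimal sketch of this counting in exact integer arithmetic (the function name and the four-particle example are illustrative, not from the source):

```python
from functools import reduce
from math import factorial

def multiplicity(occupations: list[int]) -> int:
    """W = N! / prod(N_i!) for occupation numbers N_i of the molecular
    conditions; Python integers are exact, so nothing overflows."""
    n_total = sum(occupations)
    denom = reduce(lambda acc, n_i: acc * factorial(n_i), occupations, 1)
    return factorial(n_total) // denom

# Four particles: all in one condition vs. spread over four conditions.
print(multiplicity([4, 0, 0, 0]))  # 1  (a single arrangement)
print(multiplicity([1, 1, 1, 1]))  # 24 (4! distinguishable assignments)
```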
Introduction of the natural logarithm
In his 1877 paper, Boltzmann clarifies how molecular states are counted to determine the state distribution number, introducing the logarithm to simplify the equation.
Boltzmann writes:
“The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. …

“The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or, since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …”
Therefore, by making the denominator small, he maximizes the number of states. To simplify the product of the factorials, he takes their natural logarithm, which turns the product into a sum. This is the reason for the natural logarithm in Boltzmann’s entropy formula.
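Boltzmann's log trick has a direct numerical analogue: the factorials themselves overflow any floating-point format for macroscopic particle numbers, while their logarithms remain ordinary numbers. A hedged sketch using the log-gamma function (ln n! = lgamma(n + 1)):

```python
import math

def ln_factorial(n: float) -> float:
    # ln(n!) via the log-gamma function: ln n! = ln Gamma(n + 1)
    return math.lgamma(n + 1)

def ln_W(occupations: list[float]) -> float:
    """ln W = ln N! - sum_i ln N_i!  -- the product of factorials in the
    denominator becomes a manageable sum of logarithms."""
    n_total = sum(occupations)
    return ln_factorial(n_total) - sum(ln_factorial(n) for n in occupations)

# 10^22 particles split evenly over two conditions: W is hopelessly large,
# but ln W is a perfectly ordinary float (~N ln 2).
print(ln_W([5e21, 5e21]))  # ~6.93e21
```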
Generalization
Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable.
But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are not equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath.
For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy, is:

$$S = -k_\mathrm{B} \sum_i p_i \ln p_i \tag{2}$$

This reduces to equation (1) if the probabilities $p_i$ are all equal.
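A quick numerical check of this reduction (an illustrative sketch, not from the source):

```python
import math

K_B = 1.380649e-23  # J/K

def gibbs_entropy(probs: list[float]) -> float:
    """S = -k_B * sum_i p_i ln p_i; terms with p_i = 0 contribute nothing."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

W = 1000
uniform = [1.0 / W] * W
print(gibbs_entropy(uniform))  # equals ...
print(K_B * math.log(W))       # ... the Boltzmann form of equation (1)
```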
Boltzmann used a $\rho \ln \rho$ formula as early as 1866. He interpreted $\rho$ as a density in phase space—without mentioning probability—but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878.
Boltzmann himself used an expression equivalent to (2) in his later work and recognized it as more general than equation (1). That is, equation (1) is a corollary of equation (2)—and not vice versa. In every situation where equation (1) is valid, equation (2) is valid also—and not vice versa.
Boltzmann entropy excludes statistical dependencies
The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems.
The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent. The probability distribution of the system as a whole then factorises into the product of N separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a single particle (rather than the 6N-dimensional phase space of the system as a whole), the Gibbs entropy

$$S = -k_\mathrm{B} \sum_i p_i \ln p_i$$

simplifies to the Boltzmann entropy

$$S_\mathrm{B} = -N k_\mathrm{B} \sum_i p_i \ln p_i$$

where the sum now runs over the states of a single particle.
This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy.
For anything but the most dilute of real gases, $S_\mathrm{B}$ leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a holode, rather than single-particle states. Gibbs considered several such kinds of ensembles; relevant here is the canonical one.
| Physical sciences | Thermodynamics | Physics |
9331066 | https://en.wikipedia.org/wiki/Induction%20generator | Induction generator | An induction generator or asynchronous generator is a type of alternating current (AC) electrical generator that uses the principles of induction motors to produce electric power. Induction generators operate by mechanically turning their rotors faster than synchronous speed. A regular AC induction motor can usually be used as a generator without any internal modifications. Because they can recover energy with relatively simple controls, induction generators are useful in applications such as mini hydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure.
An induction generator draws reactive excitation current from an external source. Induction generators have an AC rotor and cannot bootstrap using residual magnetization to black start a de-energized distribution system as synchronous machines do. Power factor correcting capacitors can be added externally to neutralize a constant amount of the variable reactive excitation current. After starting, an induction generator can use a capacitor bank to produce reactive excitation current, but the isolated power system's voltage and frequency are not self-regulating and destabilize readily.
Principle of Operation
An induction generator produces electrical power when its rotor is turned faster than the synchronous speed. For a four-pole motor (two pairs of poles on the stator), the synchronous speed is 1800 rotations per minute (rpm) when powered from a 60 Hz source, and 1500 rpm at 50 Hz. As a motor, the machine always turns slightly slower than the synchronous speed. The difference between synchronous and operating speed is called "slip" and is often expressed as a percentage of the synchronous speed. For example, a motor operating at 1450 rpm with a synchronous speed of 1500 rpm is running at a slip of +3.3%.
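The synchronous speed itself follows from the supply frequency and the pole count as Ns = 120 f / p. A minimal sketch of these two relations (function names are illustrative):

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Ns = 120 * f / p, with p the total number of poles."""
    return 120.0 * freq_hz / poles

def slip(sync_rpm: float, rotor_rpm: float) -> float:
    """Slip as a fraction: positive when motoring (rotor slower than the
    field), negative when generating (rotor driven above synchronous speed)."""
    return (sync_rpm - rotor_rpm) / sync_rpm

ns = synchronous_speed_rpm(50, 4)   # 1500 rpm for a 4-pole machine at 50 Hz
print(f"{slip(ns, 1450):+.1%}")     # +3.3%  (motoring)
print(f"{slip(ns, 1550):+.1%}")     # -3.3%  (generating)
```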
In operation as a motor, the stator flux rotates at the synchronous speed, which is faster than the rotor speed. This causes the stator flux, as seen from the rotor, to cycle at the slip frequency, inducing rotor current through the mutual inductance between the stator and rotor. The induced currents create a rotor flux with magnetic polarity opposite to the stator. In this way, the rotor is dragged along behind the stator flux, with the currents in the rotor induced at the slip frequency. The motor runs at the speed where the induced rotor current gives rise to a torque equal to the shaft load.
In generator operation, a prime mover (turbine or engine) drives the rotor above the synchronous speed (negative slip). The stator flux still induces current in the rotor, but since the opposing rotor flux is now cutting the stator coils, a current is induced in the stator coils 270° behind the magnetizing current, that is, in phase with the magnetizing voltage. The machine delivers real (in-phase) power to the power system.
Excitation
An induction motor requires an externally supplied current to the stator windings in order to induce a current in the rotor. Because the current in an inductor is the integral of the voltage with respect to time, for a sinusoidal voltage waveform the current lags the voltage by 90°, and the induction machine always consumes reactive power, regardless of whether it is consuming electrical power and delivering mechanical power as a motor, or consuming mechanical power and delivering electrical power to the system.
A source of excitation current for magnetizing flux (reactive power) for the stator is still required in order to induce rotor current. This can be supplied from the electrical grid or, once it starts producing power, from a capacitive reactance. The generating mode for induction machines is complicated by the need to excite the rotor, which, being induced by an alternating current, is demagnetized at shutdown, with no residual magnetization to bootstrap a cold start. It is therefore necessary to connect an external source of magnetizing current to initiate production. The power frequency and voltage are not self-regulating, and the generator supplies current out of phase with the voltage, so additional external equipment is required to build a functional isolated power system. Operation is similar to that of an induction machine run in parallel with a synchronous machine serving as a power factor compensator. A feature of generator mode in parallel with the grid is that the rotor speed is higher than in motoring mode, and active energy is then delivered to the grid. Another disadvantage of the induction generator is that it consumes a significant magnetizing current, I0 = 20–35% of rated current.
Active Power
Active power delivered to the line is proportional to the slip above the synchronous speed. Full rated power of the generator is reached at very small slip values (machine dependent, typically 3%). At the synchronous speed of 1800 rpm, the generator produces no power. When the driving speed is increased to 1860 rpm (a typical example), full output power is produced. If the prime mover is unable to produce enough power to fully drive the generator, the speed settles somewhere in the 1800 to 1860 rpm range.
Required Capacitance
A capacitor bank must supply reactive power to the machine when it is used in stand-alone mode. The reactive power supplied should be equal to or greater than the reactive power that the machine normally draws when operating as a motor.
Torque vs. Slip
The fundamental principle of induction generators is the conversion of mechanical energy to electrical energy. This requires an external torque applied to the rotor to turn it faster than the synchronous speed. However, indefinitely increasing the applied torque does not lead to an indefinite increase in power generation: the torque from the rotating magnetic field excited by the armature opposes the motion of the rotor and prevents overspeed, because motion is induced in the opposite direction. As rotor speed increases, this counter-torque grows until it reaches a maximum value (the breakdown torque), beyond which operating conditions become unstable. Ideally, induction generators work best in the stable region between the no-load condition and the maximum-torque region.
Rating Current
The maximum power that can be produced by an induction motor operated as a generator is limited by the rated current of the generator's windings.
Grid and stand-alone connections
In induction generators, the reactive power required to establish the air-gap magnetic flux is provided by a capacitor bank connected to the machine in a stand-alone system; in a grid-connected system, the machine draws reactive power from the grid to maintain its air-gap flux. For a grid-connected system, frequency and voltage at the machine are dictated by the electric grid, since the machine is very small compared to the whole system. For stand-alone systems, frequency and voltage are a complex function of the machine parameters, the capacitance used for excitation, and the load value and type.
Uses
Induction generators are often used in wind turbines and some micro hydro installations due to their ability to produce useful power at varying rotor speeds. Induction generators are mechanically and electrically simpler than other generator types. They are also more rugged, requiring no brushes or commutators.
Limitations
An induction generator connected to a capacitor system can generate sufficient reactive power to operate independently. However, when the load current exceeds the capability of the generator to supply both magnetizing reactive power and load power, the generator immediately ceases to produce power. The load must then be removed and the induction generator restarted, either from an external DC source or, if present, from residual magnetism in the core.
Induction generators are particularly suitable for wind generating stations, since there the rotor speed is always a variable factor. Unlike synchronous machines, induction generators are load-dependent and cannot be used alone for grid frequency control.
Example application
As an example, consider the use of a 10 hp, 1760 r/min, 440 V, three-phase induction motor (that is, an induction machine operated in an asynchronous generator regime) as an asynchronous generator. The full-load current of the motor is 10 A and the full-load power factor is 0.8.
Required capacitance per phase if capacitors are connected in delta:
Apparent power: S = √3 × E × I = 1.73 × 440 × 10 = 7612 VA
Active power: P = S × cos θ = 7612 × 0.8 = 6090 W
Reactive power: Q = √(S² − P²) = √(7612² − 6090²) = 4567 VAR
For the machine to run as an asynchronous generator, the capacitor bank must supply a minimum of 4567 / 3 = 1523 VAR per phase. The voltage per capacitor is 440 V because the capacitors are connected in delta.
Capacitive current Ic = Q/E = 1523/440 = 3.46 A
Capacitive reactance per phase Xc = E/Ic = 127 Ω
Minimum capacitance per phase:
C = 1 / (2*π*f*Xc) = 1 / (2 * 3.141 * 60 * 127) = 21 μF.
If the load also absorbs reactive power, the capacitor bank must be increased in size to compensate.
The prime mover speed should be chosen so that the machine generates at a frequency of 60 Hz.
Typically, the slip should be similar in magnitude to the full-load value when the machine runs as a motor, but negative (generator operation):
if Ns = 1800 rpm, one can choose N = Ns + 40 rpm.
Required prime mover speed: N = 1800 + 40 = 1840 rpm.
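The arithmetic of this worked example can be reproduced directly; a sketch (the text above rounds √3 to 1.73, giving 7612 VA and 4567 VAR, while the exact value gives figures a fraction of a percent higher; the resulting capacitance rounds to the same 21 μF):

```python
import math

V_LINE = 440.0  # line voltage, V (delta-connected capacitors see the full 440 V)
I_FULL = 10.0   # full-load current, A
PF     = 0.8    # full-load power factor
FREQ   = 60.0   # Hz

S = math.sqrt(3) * V_LINE * I_FULL   # apparent power, ~7621 VA
P = S * PF                           # active power, ~6097 W
Q = math.sqrt(S**2 - P**2)           # reactive power, ~4573 VAR

q_phase = Q / 3                      # ~1524 VAR per phase
i_c = q_phase / V_LINE               # capacitive current, ~3.46 A
x_c = V_LINE / i_c                   # capacitive reactance, ~127 ohm
c = 1 / (2 * math.pi * FREQ * x_c)   # minimum capacitance per phase
print(f"C_min per phase = {c * 1e6:.0f} uF")  # 21 uF
```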
| Technology | Power generation | null |
9332179 | https://en.wikipedia.org/wiki/A/B%20testing | A/B testing | A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can be also extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective.
Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena.
Definition
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared.
A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
The following example illustrates an A/B test with a single variable:
Suppose a company has a customer database of 2,000 people and decides to create an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email, each with a different call to action (the part of the copy which encourages customers to do something; in the case of a sales campaign, to make a purchase) and a different identifying promotional code.
To 1,000 people it sends the email with the call to action stating, "Offer ends this Saturday! Use code A1",
To the remaining 1,000 people, it sends the email with the call to action stating, "Offer ends soon! Use code B1".
All other elements of the emails' copy and layout are identical.
The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first Call To Action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine if the differences in response rates between A1 and B1 were statistically significant (that is, highly likely that the differences are real, repeatable, and not due to random chance).
In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click-rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different.

For example, even though more of the customers receiving the code B1 accessed the website, because the call to action did not state the end-date of the promotion, many of them may have felt no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome such as the number of sales made, click-rate conversion, or number of people signing up/registering.
Common test statistics
Two-sample hypothesis tests are appropriate for comparing the two samples where the samples are divided by the two control cases in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions when less is assumed. Welch's t-test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
For a comparison of two binomial distributions, such as a click-through rate, one would use Fisher's exact test.
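Applied to the email example above (50 of 1,000 versus 30 of 1,000 conversions), a minimal sketch using SciPy's implementation of Fisher's exact test:

```python
from scipy.stats import fisher_exact

# Contingency table: rows are variants A1 and B1, columns are
# (converted, did not convert) out of 1,000 recipients each.
table = [[50, 950],
         [30, 970]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (commonly 0.05) suggests
# the difference in response rates is unlikely to be chance alone.
```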
Segmentation and targeting
A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. That is, while a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.
For instance, in the above example, the breakdown of the response rates by gender could have been:
In this case, we can see that while variant A had a higher response rate overall, variant B actually had a higher response rate with men.
As a result, the company might select a segmented strategy as a result of the A/B test, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield an increase in expected response rates from to – constituting a 30% increase.
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men vs. women, and (b) assign men and women randomly to each “variant” (variant A vs. variant B). Failure to do so could lead to experiment bias and inaccurate conclusions to be drawn from the test.
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attributefor example, customers' age and genderto identify more nuanced patterns that may exist in the test results.
Tradeoffs
Positives
The results of A/B tests are simple to interpret and give a clear idea of what users prefer, since each test directly pits one option against another. Because the tests are based on real user behavior, the data can be very helpful when determining which of two options works better.
A/B tests can also provide answers to highly specific design questions. One example of this is Google's A/B testing with hyperlink colors: in order to optimize revenue, it tested dozens of different hyperlink hues to see which color users tend to click on more.
Negatives
A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result. In applications where active users are abundant, such as popular online social media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled-experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result.
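The essence of CUPED is a covariate adjustment: the in-experiment metric Y is replaced by Y − θ(X − mean(X)), where X is the same metric measured before the experiment and θ = cov(X, Y) / var(X). A hedged sketch with synthetic data (not Microsoft's implementation):

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x_pre: np.ndarray) -> np.ndarray:
    """Return the CUPED-adjusted metric y - theta * (x_pre - mean(x_pre)).

    theta = cov(x_pre, y) / var(x_pre); the adjustment preserves the mean
    but removes the variance explained by pre-experiment behavior."""
    theta = np.cov(x_pre, y)[0, 1] / np.var(x_pre, ddof=1)
    return y - theta * (x_pre - x_pre.mean())

rng = np.random.default_rng(0)
x_pre = rng.normal(10, 2, size=5_000)     # metric before the experiment
y = x_pre + rng.normal(0, 1, size=5_000)  # correlated in-experiment metric
print(np.var(y), np.var(cuped_adjust(y, x_pre)))  # variance drops sharply
```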
Due to its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted results, such as negative or no impact to business metrics.
In December 2018, representatives with experience in large-scale A/B testing from thirteen different organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, and Stanford University) summarized the top challenges in a SIGKDD Explorations paper.
The challenges can be grouped into four areas: Analysis, Engineering and Culture, Deviations from Traditional A/B tests, and Data quality.
History
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial, to assess the effectiveness of a homeopathic drug, occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early twentieth century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in the year 2000 in an attempt to determine what the optimum number of results to display on its search engine results page would be. The first test was unsuccessful due to glitches that resulted from slow loading times. Later A/B testing research would be more advanced, but the foundation and underlying principles generally remain the same, and in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests.
In 2012, a Microsoft employee working on the search engine Microsoft Bing created an experiment to test different ways of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually.
A/B testing has been claimed by some to be a change in philosophy and business-strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results. It is an increasingly common practice as the tools and expertise grow in this area.
Applications
A/B testing in online social media
A/B tests have been used by large social media sites like LinkedIn, Facebook, and Instagram to understand user engagement and satisfaction of online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another.
A/B testing for e-commerce
On an e-commerce website, the purchase funnel is typically a good candidate for A/B testing, since even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images and colors, but not always. In these tests, users only see one of two versions, since the goal is to discover which of the two versions is preferable.
A/B testing for product pricing
A/B testing can be used to determine the right price for a product, as this is perhaps one of the most difficult tasks when a new product or service is launched. A/B testing (especially valid for digital goods) is an excellent way to find out which price point and offering maximize total revenue.
Political A/B testing
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing as a way to garner online attraction and understand what voters wanted to see from the presidential candidate. For example, Obama's team tested four distinct buttons on their website that led users to sign up for newsletters. Additionally, the team used six different accompanying images to draw in users. Through A/B testing, staffers were able to determine how to effectively draw in voters and garner additional interest.
HTTP Routing and API feature testing
A/B testing is very common when deploying a newer version of an API. For real-time user experience testing, an HTTP Layer-7 reverse proxy is configured so that N% of the HTTP traffic goes to the newer version of the backend instance, while the remaining (100 − N)% hits the (stable) older version of the backend HTTP application service. This is usually done to limit customers' exposure to the newer backend instance, so that if there is a bug in the newer version, only N% of user agents or clients are affected while the others are routed to the stable backend; it is a common ingress control mechanism.
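In practice, the N% split is usually made sticky by hashing a stable client identifier, so a given client consistently reaches the same backend. A minimal sketch of such bucketing (the backend names and hashing scheme are illustrative assumptions, not any particular proxy's API):

```python
import hashlib

def route_backend(client_id: str, percent_new: int) -> str:
    """Deterministically send percent_new % of clients to the new backend;
    hashing a stable identifier keeps each client's assignment sticky."""
    digest = hashlib.sha256(client_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100
    return "backend-v2" if bucket < percent_new else "backend-v1"

# Roughly 10% of clients reach the new version, and the same client
# is always routed the same way across requests.
print(route_backend("user-42", percent_new=10))
```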
| Mathematics | Statistics | null |
11817652 | https://en.wikipedia.org/wiki/Loupe | Loupe | A loupe is a simple, small magnification device used to see small details more closely. Loupes generally have higher magnification than a magnifying glass, and are designed to be held or worn close to the eye. A loupe does not have an attached handle, and its focusing lens(es) are contained in an opaque cylinder or cone. On some loupes this cylinder folds into an enclosing housing that protects the lenses when not in use.
Optics
Three basic types of loupes exist:
Simple lenses, generally used for low-magnification designs because of high optical aberration.
Compound lenses, generally used for higher magnifications to control optical aberration.
Prismatic, multiple lenses with prisms.
Uses
Loupes are used in many professions where magnification enables precision work to be done with greater efficiency and ease. Examples include surgery, dentistry, ophthalmology, the jewelry trade, gemology, questioned document examination and watchmaking. Loupes are also sometimes used in photography and printing.
Jewellers and gemologists
Jewellers typically use a monocular, handheld loupe to magnify gemstones and other jewelry that they wish to inspect. A 10× magnification is good to use for inspecting jewelry and hallmarks and is the Gemological Institute of America's standard for grading diamond clarity. Stones will sometimes be inspected at higher magnifications than 10×, although the depth of field and field of view become too small to be instructive. The accepted standard for grading diamonds is therefore that inclusions and blemishes visible at 10× impact the clarity grade. The inclusions in VVS diamonds are hard to find even at 10×.
Watchmaking
Loupes are employed to assist watchmakers in assembling mechanical watches. Many aspects require the use of the loupe, in particular the assembly of the watch mechanism itself, the assembly and details of the watch dial, as well as the formation of the watch strap and installation of precious stones onto the watch face.
Photography
Analog (film) photographers use loupes to review, edit or analyze negatives and slides on a light table. Typical magnifications for viewing slides full-frame depend on image format; 35 mm frames (24×36 mm slides to 38×38 mm superslides) are best viewed at ca. 5×, while ca. 3× is optimal for viewing medium format slides (6×4.5 cm / 6×6 cm / 6×7 cm). Often, a 10× loupe is used to examine critical sharpness. Photographers using large format cameras may use a loupe to view the ground glass image to aid in focusing. Users of digital single-lens reflex cameras use loupes to help to identify dust and other particles on the sensor, in preparation for sensor cleaning.
Dentistry
Dentists, hygienists, and dental therapists typically use binocular loupe glasses since they need both hands free when performing dental procedures. The magnification helps with accurate diagnoses of oral conditions and enhances surgical precision when completing treatment. Additionally, loupes can improve dentists' posture which can decrease occupational strain.
Some dental loupes are flip-type, which take the form of two small cylinders, one in front of each lens of the glasses. Other types are inset within the lens of the glasses.
Dental caries, also known as cavities, are most accurately identified by visual and tactile examination of a clean, dry tooth. Magnification enables dentists to better differentiate between a stain and a cavity. Cavities are rated and scored based on their visual presentation. If magnification is too high, diagnosis becomes difficult due to the small field of view; ideal magnification for diagnostic purposes is up to 2×. Treatment of dental caries, periodontal disease, and pulpal disease is aided by magnification.
The dental specialty of endodontics has performed the vast majority of research regarding magnification in dentistry. Because the identification of accessory canals in addition to the primary pulp canals is essential to complete nonsurgical root canal therapy, magnification provides dentists enhanced visualization to locate and treat more obscured canals.
Treatment of periodontal disease is achieved by removing calculus deposits, plaque and therefore bacteria which causes inflammation and subsequently bone destruction. In severe cases, surgery to reduce pocket depth is indicated. Periodontists and hygienists must visualize plaque and calculus to remove it. Magnification can assist dentists and hygienists with identification and removal of plaque and calculus in addition to improving visualization for periodontal surgery.
Ergonomics have long been a pain point for doctors who need to physically strain, bending over and looking down, to treat their patients. Over time this posture results in discomfort, pain, and even neuromuscular disease. Some modern loupes address this by incorporating refractive prisms which alter the course of the light through the telescopes, so that the dentist can maintain a neutral, upright position with eyes relaxed and looking straight ahead.
A typical magnification for use in dentistry is 2.5×, but dental loupes can be anywhere in the range from 2× to 8×. Optimal magnification is a function of the type of work the doctor does - namely, how much detail he or she needs to see, taking into consideration that when magnification increases, the field of view decreases. As a tool that sits on the face and is used for hours at a time, weight is also a significant factor in considering the type of loupes to use.
Together with proper access to the oral cavity, light is an important part of performing precision dentistry. Because a dentist's head often eclipses the overhead dental lamp, loupes may be fitted with a light source. Loupe-mounted lights used to be fed by fiber optic cables that connected to either a wall-mounted or table-top light source. Newer models feature a more convenient LED lamp within the loupe-mounted light and an electric cord coming from either the conventional wall-mounted or table-top light source or a belt clip rechargeable battery pack. Options for loupe-mounted cameras and video recorders are also available.
Surgery
Surgeons in many specialties commonly use loupes when doing surgery on delicate structures. The loupes used by surgeons are mounted in the lenses of glasses and are custom made for the individual surgeon, taking into account their corrected vision, interpupillary distance and desired focal distance. Multiple magnification powers are available. They are most commonly used in otolaryngology, neurosurgery, ophthalmology, plastic surgery, cardiac surgery, orthopedic surgery, and vascular surgery.
Geology
The loupe is a vital geological field tool used to identify small mineral crystals and structures in rocks.
Collectables
Loupes are an essential tool in both numismatics, the study of currency, and the related practice of coin collection. Coin collectors frequently employ loupes for better evaluation of the quality of their coins, since identifying surface wear is vital when attempting to classify the grade of a coin. Uncirculated coins (coins without wear) can command a substantial premium over coins with slight wear. This wear cannot always be seen with the naked eye. Numismatists can also employ loupes to identify some counterfeit coins that would pass a naked-eye visual inspection. Loupes are similarly used for evaluating other collectable objects, such as trading cards and antiques.
Archival conservation
Conservators often use hand held loupes or head-mounted binocular magnifiers such as the Optivisor to examine artifacts and documents requiring cleaning or repair.
| Technology | Optical instruments | null |
7181923 | https://en.wikipedia.org/wiki/Space-filling%20model | Space-filling model | In chemistry, a space-filling model, also known as a calotte model, is a type of three-dimensional (3D) molecular model where the atoms are represented by spheres whose radii are proportional to the radii of the atoms and whose center-to-center distances are proportional to the distances between the atomic nuclei, all in the same scale. Atoms of different chemical elements are usually represented by spheres of different colors.
Space-filling calotte models are also referred to as CPK models after the chemists Robert Corey, Linus Pauling, and Walter Koltun, who over a span of time developed the modeling concept into a useful form. They are distinguished from other 3D representations, such as the ball-and-stick and skeletal models, by the use of "full size" space-filling spheres for the atoms. The models are tactile and manually rotatable. They are useful for visualizing the effective shape and relative dimensions of a molecule, and (because of the rotatability) the shapes of the surface of the various conformers. On the other hand, these models mask the chemical bonds between the atoms, and make it difficult to see the structure of the molecule that is obscured by the atoms nearest to the viewer in a particular pose. For this reason, such models are of greater utility if they can be used dynamically, especially when used with complex molecules.
History
Space-filling models arise out of a desire to represent molecules in ways that reflect the electronic surfaces that molecules present, which dictate how they interact with one another (or with surfaces, or with macromolecules such as enzymes). Crystallographic data are the starting point for understanding static molecular structure, and these data contain the information rigorously required to generate space-filling representations. Most often, however, crystallographers present the locations of atoms derived from crystallography via "thermal ellipsoids" whose cut-off parameters are set for convenience both to show the atom locations (with anisotropies) and to allow representation of the covalent bonds or other interactions between atoms as lines. In short, for reasons of utility, crystallographic data historically have appeared in presentations closer to ball-and-stick models. Hence, while crystallographic data contain the information to create space-filling models, it remained for individuals interested in modeling an effective static shape of a molecule, the space it occupied, and the ways in which it might present a surface to another molecule, to develop the formalism shown above.
In 1952, Robert Corey and Linus Pauling described accurate scale models of molecules which they had built at Caltech. In their models, they envisioned the surface of the molecule as being determined by the van der Waals radius of each atom of the molecule, and crafted atoms as hardwood spheres of diameter proportional to each atom's van der Waals radius, in the scale 1 inch = 1 Å. To allow bonds between atoms a portion of each sphere was cut away to create a pair of matching flat faces, with the cuts dimensioned so that the distance between sphere centers was proportional to the lengths of standard types of chemical bonds. A connector was designed—a metal bushing that threaded into each sphere at the center of each flat face. The two spheres were then firmly held together by a metal rod inserted into the pair of opposing bushing (with fastening by screws). The models also had special features to allow representation of hydrogen bonds.
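The geometry of those cuts can be sketched numerically: at the 1 inch = 1 Å scale, a sphere of van der Waals radius r is truncated by a plane whose distance from the sphere's center is set by the bond; for a homonuclear bond, that distance is half the bond length. A hedged sketch (the radius and bond length are illustrative textbook values, not figures from Corey and Pauling's paper):

```python
def cut_depth(r_vdw: float, dist_to_plane: float) -> float:
    """Depth of the flat-face cut into a sphere of van der Waals radius
    r_vdw when the cut plane lies dist_to_plane from the sphere center."""
    return r_vdw - dist_to_plane

# Homonuclear C-C single bond: each cut plane sits half a bond length
# from its sphere's center, so the two centers end up one bond apart.
R_C = 1.70      # van der Waals radius of carbon, angstroms (= model inches)
CC_BOND = 1.54  # C-C single bond length, angstroms
depth = cut_depth(R_C, CC_BOND / 2)
print(f"cut depth per carbon sphere: {depth:.2f} model inches")  # ~0.93
```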
In 1965, Walter L. Koltun designed and patented a simplified system with molded plastic atoms of various colours, which were joined by specially designed snap connectors; this simpler system accomplished essentially the same ends as the Corey-Pauling system, and allowed for the development of the models as a popular way of working with molecules in training and research environments. Such colour-coded, bond length-defined, van der Waals-type space-filling models are now commonly known as CPK models, after these three developers of the specific concept.
In modern research efforts, attention returned to the use of data-rich crystallographic models in combination with traditional and new computational methods to provide space-filling models of molecules, both simple and complex, with added information such as which portions of the surface of the molecule are readily accessible to solvent, or how the electrostatic characteristics of a space-filling representation—which in the CPK case are almost fully left to the imagination—can be added to the visual models created.
| Physical sciences | Substance | Chemistry |