id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
2072849 | https://en.wikipedia.org/wiki/Lath | Lath | A lath or slat is a thin, narrow strip of straight-grained wood used under roof shingles or tiles, on lath and plaster walls and ceilings to hold plaster, and in lattice and trellis work.
Lath has expanded to mean any type of backing material for plaster. This includes metal wire mesh or expanded metal that is applied to a wood or metal framework as a matrix over which stucco or plaster is applied, as well as drywall products called gypsum or rock lath. Historically, reed mat was also used as a lath material.
One of the key elements of lath, whether wooden slats or wire mesh, is the openings or gaps that allow plaster or stucco to ooze behind and form a mechanical bond to the lath. This is not necessary for gypsum lath, which relies on a chemical bond.
Etymology
The word is recorded from the late 13th century and is likely derived from the Old English *læþþe, a variant of lætt. This in turn would seem to stem from a Proto-Germanic word *laþþo, from which have sprung words in many Germanic languages, e.g. Dutch lat, German Latte. The root has also found its way into Romance languages, cf. Italian latta, French latte. The related German word Laden denotes a board, plank, sash, shutter, counter, and hence also a shop.
Types of lath
Wooden
Wooden-slat laths are still used today in building construction to form a base or groundwork for plaster, but modern lath and plaster applications are mostly limited to conservation projects.
Tiles, slates, and other coverings on roofs and walls are often fastened to laths, sometimes also called battens or slats. Such strips of wood are also employed to form lattice-work, or are used as the bars of Venetian blinds and window shutters. Lath is also used on many tobacco farms in the Connecticut Valley as a means to carry and hang the plant in barns. This is achieved by using one of two methods: hooking or spearing. A "spear" lath is a regular lath that is held in an upright position. A worker then mounts a spear on top and "spears" the tobacco onto the lath. The other form of lath is called "hook" lath, which has small hooks attached that allow workers to hook the stems of tobacco plants onto the lath, often between two lengths of twine attached to the lath and twisted mechanically.
A lathhouse is an open structure roofed with laths in order to grow plants which need shelter from the sun.
Laths were also used to fix reeds to a timber structure before plastering.
On Cape Cod, laths were used in the early 1880s for building wooden lobster traps.
Historically there were three ways of making wood lath for plaster: riven lath, accordion lath, and circular sawn lath. Riven lath was traditionally split with the grain from chestnut, oak, and similar hardwoods, or from softwoods like eastern white pine. Individual laths were riven and nailed in place. Because they are split with the grain, riven laths are stronger than sawn laths. Accordion laths are thin, sawn boards that are partially split with a hatchet or axe. The splits are then spread apart to form gaps for the plaster to key into. The name derives from the spreading action, which is like pulling an accordion open. After the circular saw came into use in the early 19th century, lath for plastering was sawn in sawmills and delivered to the building site. In the 1930s and 1940s, lath and plaster was largely replaced by plasterboard, which was cheaper and easier to use.
Counter-lath
Counter-lath is a term used in roofing and plastering for a piece of wood placed perpendicular to the lath. In roofing, a counter-lath is a slight piece of timber parallel with and between common rafters to give the lath extra support, or a lath placed by eye between every two gauged ones. When plastering, sometimes a counter-lath is placed perpendicular to the lath as a fillet (a thin, narrow strip of material) to space the lath off the surface, allowing the plaster to pass through the lath and create a key.
Metal lath
Metal lath dates from the late 19th century and is used extensively today with plaster and stucco in home and commercial construction. In addition to providing a matrix to which the stucco can adhere, metal lath adds strength and rigidity. Metal lath can be stapled directly to studs, and is capable of bending to easily form corners and curves. Three coats of plaster are required when using metal lath.
Several types of metal lath have been developed for a variety of applications:
Expanded metal lath is made by slitting and pulling apart a thin sheet of metal, which produces diamond-shaped holes through which the plaster can form keys.
Ribbed lath is made from slit and expanded metal with V-shaped ribs which give it more stiffness, and is designed to span larger distances between framing supports.
Self-furring lath is an expanded metal lath which is dimpled to hold itself off a solid surface.
Wire lath is made from welded or woven wires and is similar to hardware cloth.
Paper-backed wire lath is wire lath with building paper attached.
Strip lath is metal lath several inches wide, often used to reinforce joints and corners.
Corner lath is pre-bent for use in making corners.
Wire mesh used on inside corners to prevent cracking is called Cornerite.
Gypsum lath
Gypsum lath (rock lath) consists of gypsum plaster sandwiched between two sheets of absorbent paper. The finish side (to which plaster is troweled) is treated with gypsum crystals for the plaster to chemically bond to and is sometimes perforated to allow mechanical bonding. It is commonly used in place of wood lath since it is noncombustible, easy to use, and can give better results. Gypsum board can be purchased in sheets of various sizes and screwed or nailed directly onto a building's studs. Due to its rigidity, it is most suited for use on straight walls.
Gypsum board was improved in 1910 by wrapping the edges with paper, and multiple variations were developed in the 1930s. Gypsum lath is available with a foil facing, which acts as a vapor barrier and heat reflector, and as a veneer base for plaster veneer.
Keys
Keys are formed by plaster that oozes through the spaces or gaps between wooden lath, or the holes in metal lath, and around to the lath's back side. This secures the plaster to the lath by creating a sort of hook. Wooden and metal lath depend on the mechanical bond created by keys to adhere the plaster to the lath.
Framing
Lath can be attached directly to the frame of a building, such as the studs of a timber structure. Alternatively, lath can be attached to a timber or metal frame called a furring, which is then attached to the building structure. Furrings are often used in masonry construction. Frames are also used when using lath and plaster to create decorative, curved, or ornamental work.
Lath failure
There are several reasons that a plaster and lath wall may fail. First, the lath itself can sometimes pull away from the frame on which it is mounted. This is generally due to the use of non-galvanized nails. The lath can also fail because of decay from moisture or insect damage. Moisture can also cause wooden lath to expand and contract, causing the plaster around it to crack.
Additionally, failure can also occur in the plaster keys. Over time, the keys can deteriorate and crack, weakening their ability to hold the plaster onto the lath. The addition of hair in plaster helps to prevent this by adding strength. Problems can also occur in the keys if they were not properly formed to begin with, which can happen when laths are set too close together for plaster to travel through. Key failure often manifests as looseness and sagging in walls or ceilings, and in the worst cases can lead to plaster breakage and collapse.
Repair methods
Repairing damaged lath and plaster walls is generally more economical than replacing them. Often it is the plaster that needs repair, not the lath itself. As long as the lath and the first coats (brown coat) of plaster do not have any significant damage, minor cracks can simply be patched. For larger cracks, and when the brown coat is damaged, several base coats need to be applied prior to the patch coat. Metal lath can also be added to wooden lath prior to coating to add strength and increase keying.
If the back side of the lath is accessible, it is possible to create new keys where they have failed. This is most often done by conservators when one wants to maintain the original finished surface of the wall or ceiling. After bracing the failing plaster in place, a bonding agent and a plaster are applied to the back of the laths and forced through the gaps to the back side of the original plaster.
Benefits of lath
Lath and plaster walls have several benefits, including fire and mold resistance, soundproofing, and heat insulation. Though wooden lath can be susceptible to mold growth and decay, metal lath covered in plaster creates an environment that is inhospitable to toxic molds. Metal lath and plaster walls can be twice as resistant to fire as drywall, and are capable of achieving a two-hour fire rating with an appropriate assembly. Two inches of plaster and lath can also match the decibel rating of a thicker drywall assembly.
| Technology | Building materials | null |
2073309 | https://en.wikipedia.org/wiki/Reef%20lobster | Reef lobster | Reef lobsters, Enoplometopus, are a genus of small lobsters that live on reefs in the Indo-Pacific, Caribbean and warmer parts of the Atlantic Ocean.
Description
Species of Enoplometopus occur from shallow coral reefs to deeper rocky reefs. They are brightly coloured, with stripes, rings, or spots, and are typically mainly red, orange, purplish or white. Reef lobsters are small (the maximum size depends on the species), nocturnal (spending the day in caves or crevices), and very timid. The species can be distinguished by their colouration and morphology.
As a result of their bright colours, they are popular in the aquarium trade, and unregulated collection combined with destruction of coral reefs may threaten some species. Due to uncertainty over the impact of these potential threats, the majority are considered data deficient by the International Union for Conservation of Nature.
Reef lobsters are distinguished from clawed lobsters (family Nephropidae) by having full chelae (claws) only on the first pair of pereiopods, the second and third pairs being only subchelate (where the last segment of the appendage can press against a short projection from the penultimate one). Clawed lobsters have full claws on the first three pereiopods. Males, unlike those of nephropoid lobsters, have an extra lobe on the second pleopod, which is assumed to have some function in reproduction. Reef lobsters have a shallow cervical groove while clawed lobsters have a deep cervical groove.
Although there is no fossil record of reef lobsters, there is some evidence that they may be related to the extinct genus Eryma, which lived from the Permo-Triassic to the late Cretaceous. The genus was later found to be the sister taxon of the Jurassic lobster Uncina posidoniae, with the clade Enoplometopoidea including both the enoplometopid and the enigmatic uncinid lobsters.
Species
The genus contains the following species:
| Biology and health sciences | Crayfishes and lobsters | Animals |
2073470 | https://en.wikipedia.org/wiki/Manifold | Manifold | In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an n-dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of n-dimensional Euclidean space.
One-dimensional manifolds include lines and circles, but not self-crossing curves such as a figure 8. Two-dimensional manifolds are also called surfaces. Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane.
The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described in terms of well-understood topological properties of simpler spaces. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. The concept has applications in computer graphics, given the need to associate pictures with coordinates (e.g. CT scans).
Manifolds can be equipped with additional structure. One important class of manifolds are differentiable manifolds; their differentiable structure allows calculus to be done. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity.
The study of manifolds requires working knowledge of calculus and topology.
Motivating examples
Circle
After a line, a circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive (indicated by the yellow arc in Figure 1). Any point of this arc can be uniquely described by its x-coordinate. So, projection onto the first coordinate is a continuous and invertible mapping from the upper arc to the open interval (−1, 1): χtop(x, y) = x.
Such functions along with the open regions they map are called charts. Similarly, there are charts for the bottom (red), left (blue), and right (green) parts of the circle: χbottom(x, y) = x, χleft(x, y) = y, χright(x, y) = y.
Together, these parts cover the whole circle, and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domain: their intersection lies in the quarter of the circle where both the x- and y-coordinates are positive. Both map this part into the interval (0, 1), though differently. Thus a function T can be constructed, which takes values from the co-domain of χtop back to the circle using the inverse, followed by χright back to the interval. If a is any number in (0, 1), then: T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²).
Such a function is called a transition map.
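As a quick concreteness check, the following sketch (illustrative Python, not from the article; the names chart_top, chart_right, and transition are ours) composes the inverse of the top chart with the right chart and confirms T(a) = √(1 − a²):

```python
import math

def chart_top(x, y):
    # Chart for the upper arc (y > 0): project a circle point to its x-coordinate.
    return x

def chart_top_inv(a):
    # Inverse of chart_top: recover the upper-arc point with x-coordinate a.
    return (a, math.sqrt(1 - a * a))

def chart_right(x, y):
    # Chart for the right arc (x > 0): project a circle point to its y-coordinate.
    return y

def transition(a):
    # T = chart_right ∘ chart_top⁻¹, defined on the overlap 0 < a < 1.
    return chart_right(*chart_top_inv(a))

a = 0.6
assert abs(transition(a) - math.sqrt(1 - a * a)) < 1e-12  # T(a) = √(1 − a²)
```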
The top, bottom, left, and right charts do not form the only possible atlas. Charts need not be geometric projections, and the number of charts is a matter of choice. Consider the charts
χminus(x, y) = s = y / (1 + x)
and
χplus(x, y) = t = y / (1 − x).
Here s is the slope of the line through the point at coordinates (x, y) and the fixed pivot point (−1, 0); similarly, t is the opposite of the slope of the line through the points at coordinates (x, y) and (+1, 0). The inverse mapping from s to (x, y) is given by
x = (1 − s²) / (1 + s²), y = 2s / (1 + s²).
It can be confirmed that x² + y² = 1 for all values of s and t. These two charts provide a second atlas for the circle, with the transition map
t = 1⁄s
(that is, one has this relation between s and t for every point where s and t are both nonzero).
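Using the chart formulas reconstructed above together with the circle equation x² + y² = 1, a one-line computation confirms this transition map:

```latex
st = \frac{y}{1+x}\cdot\frac{y}{1-x} = \frac{y^2}{1-x^2} = \frac{y^2}{y^2} = 1,
\qquad \text{hence } t = \frac{1}{s}.
```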
Each chart omits a single point, either (−1, 0) for s or (+1, 0) for t, so neither chart alone is sufficient to cover the whole circle. It can be proved that it is not possible to cover the full circle with a single chart. For example, although it is possible to construct a circle from a single line interval by overlapping and "gluing" the ends, this does not produce a chart; a portion of the circle will be mapped to both ends at once, losing invertibility.
Sphere
The sphere is an example of a surface. The unit sphere of implicit equation
x² + y² + z² = 1
may be covered by an atlas of six charts: the plane z = 0 divides the sphere into two half spheres (z > 0 and z < 0), which may both be mapped onto the disc x² + y² < 1 by the projection onto the (x, y) plane of coordinates. This provides two charts; the four other charts are provided by a similar construction with the two other coordinate planes.
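A minimal sketch (hypothetical Python, not part of the article) of how such a six-chart atlas assigns a chart to a point: the coordinate of largest absolute value determines a half-sphere containing the point, and the chart projects onto the other two coordinates.

```python
def sphere_chart(p):
    """Assign one of six charts to a point p = (x, y, z) on the unit sphere.

    Each chart projects one open half-sphere (x > 0, x < 0, y > 0, y < 0,
    z > 0, or z < 0) onto the open unit disc in the other two coordinates.
    """
    x, y, z = p
    coords = {'x': x, 'y': y, 'z': z}
    # The coordinate of largest absolute value determines a half-sphere containing p.
    axis = max(coords, key=lambda k: abs(coords[k]))
    sign = '+' if coords[axis] > 0 else '-'
    image = tuple(v for k, v in coords.items() if k != axis)
    return sign + axis, image

print(sphere_chart((0.0, 0.6, 0.8)))  # ('+z', (0.0, 0.6)): the northern-hemisphere chart
```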
As with the circle, one may define one chart that covers the whole sphere excluding one point. Thus two charts are sufficient, but the sphere cannot be covered by a single chart.
This example is historically significant, as it has motivated the terminology; it became apparent that the whole surface of the Earth cannot have a plane representation consisting of a single map (also called "chart", see nautical chart), and therefore one needs atlases for covering the whole Earth surface.
Other curves
Manifolds need not be connected (all in "one piece"); an example is a pair of separate circles.
Manifolds need not be closed; thus a line segment without its end points is a manifold. They are never countable, unless the dimension of the manifold is 0. Putting these freedoms together, other examples of manifolds are a parabola, a hyperbola, and the locus of points on a cubic curve (a closed loop piece and an open, infinite piece).
However, excluded are examples like two touching circles that share a point to form a figure-8; at the shared point, a satisfactory chart cannot be created. Even with the bending allowed by topology, the vicinity of the shared point looks like a "+", not a line. A "+" is not homeomorphic to a line segment, since deleting the center point from the "+" gives a space with four components (i.e. pieces), whereas deleting a point from a line segment gives a space with at most two pieces; topological operations always preserve the number of pieces.
Mathematical definition
Informally, a manifold is a space that is "modeled on" Euclidean space.
There are many different kinds of manifolds. In geometry and topology, all manifolds are topological manifolds, possibly with additional structure. A manifold can be constructed by giving a collection of coordinate charts, that is, a covering by open sets with homeomorphisms to a Euclidean space, and patching functions: homeomorphisms from one region of Euclidean space to another region if they correspond to the same part of the manifold in two different coordinate charts. A manifold can be given additional structure if the patching functions satisfy axioms beyond continuity. For instance, in a differentiable manifold the patching functions on overlapping neighborhoods are diffeomorphisms, so that the manifold has a well-defined set of functions which are differentiable in each neighborhood, and thus differentiable on the manifold as a whole.
Formally, a (topological) manifold is a second countable Hausdorff space that is locally homeomorphic to a Euclidean space.
Second countable and Hausdorff are point-set conditions; second countable excludes spaces which are in some sense 'too large' such as the long line, while Hausdorff excludes spaces such as "the line with two origins" (these generalizations of manifolds are discussed in non-Hausdorff manifolds).
Locally homeomorphic to a Euclidean space means that every point has a neighborhood homeomorphic to an open subset of the Euclidean space ℝⁿ for some nonnegative integer n.
This implies that either the point is an isolated point (if n = 0), or it has a neighborhood homeomorphic to the open ball Bⁿ = {(x₁, x₂, …, xₙ) ∈ ℝⁿ : x₁² + x₂² + ⋯ + xₙ² < 1}.
This implies also that every point has a neighborhood homeomorphic to ℝⁿ, since ℝⁿ is homeomorphic, and even diffeomorphic, to any open ball in it (for n > 0).
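One explicit formula witnessing this (a standard construction, supplied here as an illustration rather than taken from the article) is the smooth bijection

```latex
f : \mathbb{R}^n \to B^n, \qquad
f(x) = \frac{x}{\sqrt{1 + \lVert x \rVert^2}}, \qquad
f^{-1}(y) = \frac{y}{\sqrt{1 - \lVert y \rVert^2}}.
```

Both maps are smooth, so f is in fact a diffeomorphism of ℝⁿ onto the open unit ball.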
The n that appears in the preceding definition is called the local dimension of the manifold. Generally manifolds are taken to have a constant local dimension, and the local dimension is then called the dimension of the manifold. This is, in particular, the case when manifolds are connected. However, some authors admit manifolds that are not connected, and where different points can have different dimensions. If a manifold has a fixed dimension, this can be emphasized by calling it a pure manifold. For example, the (surface of a) sphere has a constant dimension of 2 and is therefore a pure manifold whereas the disjoint union of a sphere and a line in three-dimensional space is not a pure manifold. Since dimension is a local invariant (i.e. the map sending each point to the dimension of its neighbourhood over which a chart is defined is locally constant), each connected component has a fixed dimension.
Sheaf-theoretically, a manifold is a locally ringed space, whose structure sheaf is locally isomorphic to the sheaf of continuous (or differentiable, or complex-analytic, etc.) functions on Euclidean space. This definition is mostly used when discussing analytic manifolds in algebraic geometry.
Charts, atlases, and transition maps
The spherical Earth is navigated using flat maps or charts, collected in an atlas. Similarly, a manifold can be described using mathematical maps, called coordinate charts, collected in a mathematical atlas. It is not generally possible to describe a manifold with just one chart, because the global structure of the manifold is different from the simple structure of the charts. For example, no single flat map can represent the entire Earth without separation of adjacent features across the map's boundaries or duplication of coverage. When a manifold is constructed from multiple overlapping charts, the regions where they overlap carry information essential to understanding the global structure.
Charts
A coordinate map, a coordinate chart, or simply a chart, of a manifold is an invertible map between a subset of the manifold and a simple space such that both the map and its inverse preserve the desired structure. For a topological manifold, the simple space is a subset of some Euclidean space and interest focuses on the topological structure. This structure is preserved by homeomorphisms, invertible maps that are continuous in both directions.
In the case of a differentiable manifold, a set of charts called an atlas, whose transition functions (see below) are all differentiable, allows us to do calculus on it. Polar coordinates, for example, form a chart for the plane minus the positive x-axis and the origin. Another example of a chart is the map χtop mentioned above, a chart for the circle.
Atlases
The description of most manifolds requires more than one chart. A specific collection of charts which covers a manifold is called an atlas. An atlas is not unique as all manifolds can be covered in multiple ways using different combinations of charts. Two atlases are said to be equivalent if their union is also an atlas.
The atlas containing all possible charts consistent with a given atlas is called the maximal atlas (i.e. an equivalence class containing that given atlas). Unlike an ordinary atlas, the maximal atlas of a given manifold is unique. Though useful for definitions, it is an abstract object and not used directly (e.g. in calculations).
Transition maps
Charts in an atlas may overlap and a single point of a manifold may be represented in several charts. If two charts overlap, parts of them represent the same region of the manifold, just as a map of Europe and a map of Russia may both contain Moscow. Given two overlapping charts, a transition function can be defined which goes from an open ball in ℝⁿ to the manifold and then back to another (or perhaps the same) open ball in ℝⁿ. The resultant map, like the map T in the circle example above, is called a change of coordinates, a coordinate transformation, a transition function, or a transition map.
Additional structure
An atlas can also be used to define additional structure on the manifold. The structure is first defined on each chart separately. If all transition maps are compatible with this structure, the structure transfers to the manifold.
This is the standard way differentiable manifolds are defined. If the transition functions of an atlas for a topological manifold preserve the natural differential structure of ℝⁿ (that is, if they are diffeomorphisms), the differential structure transfers to the manifold and turns it into a differentiable manifold. Complex manifolds are introduced in an analogous way by requiring that the transition functions of an atlas are holomorphic functions. For symplectic manifolds, the transition functions must be symplectomorphisms.
The structure on the manifold depends on the atlas, but sometimes different atlases can be said to give rise to the same structure. Such atlases are called compatible.
These notions are made precise in general through the use of pseudogroups.
Manifold with boundary
A manifold with boundary is a manifold with an edge. For example, a sheet of paper is a 2-manifold with a 1-dimensional boundary. The boundary of an n-manifold with boundary is an (n − 1)-manifold. A disk (circle plus interior) is a 2-manifold with boundary. Its boundary is a circle, a 1-manifold. A square with interior is also a 2-manifold with boundary. A ball (sphere plus interior) is a 3-manifold with boundary. Its boundary is a sphere, a 2-manifold.
In technical language, a manifold with boundary is a space containing both interior points and boundary points. Every interior point has a neighborhood homeomorphic to the open n-ball {(x₁, x₂, …, xₙ) : x₁² + ⋯ + xₙ² < 1}. Every boundary point has a neighborhood homeomorphic to the "half" n-ball {(x₁, x₂, …, xₙ) : x₁² + ⋯ + xₙ² < 1 and x₁ ≥ 0}. Any homeomorphism between half-balls must send points with x₁ = 0 to points with x₁ = 0. This invariance allows one to "define" boundary points; see the next paragraph.
Boundary and interior
Let M be a manifold with boundary. The interior of M, denoted Int M, is the set of points in M which have neighborhoods homeomorphic to an open subset of ℝⁿ. The boundary of M, denoted ∂M, is the complement of Int M in M. The boundary points can be characterized as those points which land on the boundary hyperplane {x₁ = 0} of the half-ball under some coordinate chart.
If M is a manifold with boundary of dimension n, then Int M is a manifold (without boundary) of dimension n and ∂M is a manifold (without boundary) of dimension n − 1.
Construction
A single manifold can be constructed in different ways, each stressing a different aspect of the manifold, thereby leading to a slightly different viewpoint.
Charts
Perhaps the simplest way to construct a manifold is the one used in the example above of the circle. First, a subset of the Euclidean plane ℝ² is identified, and then an atlas covering this subset is constructed. The concept of manifold grew historically from constructions like this. Here is another example, applying this method to the construction of a sphere:
Sphere with charts
A sphere can be treated in almost the same way as the circle. In mathematics a sphere is just the surface (not the solid interior), which can be defined as a subset of ℝ³: S² = {(x, y, z) ∈ ℝ³ : x² + y² + z² = 1}.
The sphere is two-dimensional, so each chart will map part of the sphere to an open subset of ℝ². Consider the northern hemisphere, which is the part with positive z coordinate (coloured red in the picture on the right). The function χ defined by χ(x, y, z) = (x, y)
maps the northern hemisphere to the open unit disc by projecting it on the (x, y) plane. A similar chart exists for the southern hemisphere. Together with two charts projecting on the (x, z) plane and two charts projecting on the (y, z) plane, an atlas of six charts is obtained which covers the entire sphere.
This can be easily generalized to higher-dimensional spheres.
Patchwork
A manifold can be constructed by gluing together pieces in a consistent manner, making them into overlapping charts. This construction is possible for any manifold and hence it is often used as a characterisation, especially for differentiable and Riemannian manifolds. It focuses on an atlas, as the patches naturally provide charts, and since there is no exterior space involved it leads to an intrinsic view of the manifold.
The manifold is constructed by specifying an atlas, which is itself defined by transition maps. A point of the manifold is therefore an equivalence class of points which are mapped to each other by transition maps. Charts map equivalence classes to points of a single patch. There are usually strong demands on the consistency of the transition maps. For topological manifolds they are required to be homeomorphisms; if they are also diffeomorphisms, the resulting manifold is a differentiable manifold.
This can be illustrated with the transition map t = 1⁄s from the second half of the circle example. Start with two copies of the line. Use the coordinate s for the first copy, and t for the second copy. Now, glue both copies together by identifying the point t on the second copy with the point s = 1⁄t on the first copy (the points t = 0 and s = 0 are not identified with any point on the first and second copy, respectively). This gives a circle.
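A minimal numeric check (illustrative Python, using the inverse-chart formulas reconstructed in the circle section; the helper names are ours) that the identification t = 1⁄s sends corresponding points of the two copies to the same circle point:

```python
def from_s(s):
    # Embed the first copy of the line into the circle (slope chart pivoted at (-1, 0)).
    return ((1 - s * s) / (1 + s * s), 2 * s / (1 + s * s))

def from_t(t):
    # Embed the second copy of the line into the circle (slope chart pivoted at (+1, 0)).
    return ((t * t - 1) / (t * t + 1), 2 * t / (t * t + 1))

s = 2.5
t = 1 / s  # the gluing identification t = 1/s
xs, ys = from_s(s)
xt, yt = from_t(t)
assert abs(xs - xt) < 1e-12 and abs(ys - yt) < 1e-12  # both coordinates agree
```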
Intrinsic and extrinsic view
The first construction and this construction are very similar, but represent rather different points of view. In the first construction, the manifold is seen as embedded in some Euclidean space. This is the extrinsic view. When a manifold is viewed in this way, it is easy to use intuition from Euclidean spaces to define additional structure. For example, in a Euclidean space, it is always clear whether a vector at some point is tangential or normal to some surface through that point.
The patchwork construction does not use any embedding, but simply views the manifold as a topological space by itself. This abstract point of view is called the intrinsic view. It can make it harder to imagine what a tangent vector might be, and there is no intrinsic notion of a normal bundle, but instead there is an intrinsic stable normal bundle.
n-Sphere as a patchwork
The n-sphere Sn is a generalisation of the idea of a circle (1-sphere) and sphere (2-sphere) to higher dimensions. An n-sphere Sn can be constructed by gluing together two copies of ℝⁿ. The transition map between them is inversion in a sphere, defined as x ↦ x / ‖x‖².
This function is its own inverse and thus can be used in both directions. As the transition map is a smooth function, this atlas defines a smooth manifold.
In the case n = 1, the example simplifies to the circle example given earlier.
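As a quick sanity check (illustrative code, not from the article), the inversion map above is indeed its own inverse:

```python
def invert(x):
    # Inversion in the unit sphere: x ↦ x / ‖x‖², defined for x ≠ 0.
    norm_sq = sum(c * c for c in x)
    return tuple(c / norm_sq for c in x)

p = (0.3, -1.2, 2.0)
q = invert(invert(p))
assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))  # the map is its own inverse
```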
Identifying points of a manifold
It is possible to define different points of a manifold to be the same point. This can be visualized as gluing these points together in a single point, forming a quotient space. There is, however, no reason to expect such quotient spaces to be manifolds. Among the possible quotient spaces that are not necessarily manifolds, orbifolds and CW complexes are considered to be relatively well-behaved. An example of a quotient space of a manifold that is also a manifold is the real projective space, identified as a quotient space of the corresponding sphere.
One method of identifying points (gluing them together) is through a right (or left) action of a group, which acts on the manifold. Two points are identified if one is moved onto the other by some group element. If M is the manifold and G is the group, the resulting quotient space is denoted by M / G (or G \ M).
Manifolds which can be constructed by identifying points include tori and real projective spaces (starting with a plane and a sphere, respectively).
Gluing along boundaries
Two manifolds with boundaries can be glued together along a boundary. If this is done the right way, the result is also a manifold. Similarly, two boundaries of a single manifold can be glued together.
Formally, the gluing is defined by a bijection between the two boundaries. Two points are identified when they are mapped onto each other. For a topological manifold, this bijection should be a homeomorphism, otherwise the result will not be a topological manifold. Similarly, for a differentiable manifold, it has to be a diffeomorphism. For other manifolds, other structures should be preserved.
A finite cylinder may be constructed as a manifold by starting with a strip [0,1] × [0,1] and gluing a pair of opposite edges on the boundary by a suitable diffeomorphism. A projective plane may be obtained by gluing a sphere with a hole in it to a Möbius strip along their respective circular boundaries.
Cartesian products
The Cartesian product of manifolds is also a manifold.
The dimension of the product manifold is the sum of the dimensions of its factors. Its topology is the product topology, and a Cartesian product of charts is a chart for the product manifold. Thus, an atlas for the product manifold can be constructed using atlases for its factors. If these atlases define a differential structure on the factors, the corresponding atlas defines a differential structure on the product manifold. The same is true for any other structure defined on the factors. If one of the factors has a boundary, the product manifold also has a boundary. Cartesian products may be used to construct tori and finite cylinders, for example, as S1 × S1 and S1 × [0,1], respectively.
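As an illustration of how product charts arise from factor charts, here is a small sketch (assumed Python helpers, not part of the article):

```python
def product_chart(chart_a, chart_b):
    # Build a chart for M × N from charts on the factors: the image is the
    # Cartesian product of the factor images, so the dimensions add.
    def chart(point):
        p, q = point  # a point of M × N is a pair (p, q)
        return chart_a(p) + chart_b(q)  # tuple concatenation lands in R^(m+n)
    return chart

# Example: a chart for the torus S1 × S1 from two copies of a circle chart
# (each factor chart returns a 1-tuple, so the product chart returns a 2-tuple).
upper_arc = lambda pt: (pt[0],)  # project a point (x, y) on the upper arc to (x,)
torus_chart = product_chart(upper_arc, upper_arc)
print(torus_chart(((0.6, 0.8), (0.0, 1.0))))  # (0.6, 0.0)
```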
History
The study of manifolds combines many important areas of mathematics: it generalizes concepts such as curves and surfaces as well as ideas from linear algebra and topology.
Early development
Before the modern concept of a manifold there were several important results.
Non-Euclidean geometry considers spaces where Euclid's parallel postulate fails. Saccheri first studied such geometries in 1733, but sought only to disprove them. Gauss, Bolyai and Lobachevsky independently discovered them 100 years later. Their research uncovered two types of spaces whose geometric structures differ from that of classical Euclidean space; these gave rise to hyperbolic geometry and elliptic geometry. In the modern theory of manifolds, these notions correspond to Riemannian manifolds with constant negative and positive curvature, respectively.
Carl Friedrich Gauss may have been the first to consider abstract spaces as mathematical objects in their own right. His theorema egregium gives a method for computing the curvature of a surface without considering the ambient space in which the surface lies. Such a surface would, in modern terminology, be called a manifold; and in modern terms, the theorem proved that the curvature of the surface is an intrinsic property. Manifold theory has come to focus exclusively on these intrinsic properties (or invariants), while largely ignoring the extrinsic properties of the ambient space.
Another, more topological example of an intrinsic property of a manifold is its Euler characteristic. Leonhard Euler showed that for a convex polytope in the three-dimensional Euclidean space with V vertices (or corners), E edges, and F faces, V − E + F = 2. The same formula will hold if we project the vertices and edges of the polytope onto a sphere, creating a topological map with V vertices, E edges, and F faces, and in fact, will remain true for any spherical map, even if it does not arise from any convex polytope. Thus 2 is a topological invariant of the sphere, called its Euler characteristic. On the other hand, a torus can be sliced open by its 'parallel' and 'meridian' circles, creating a map with V = 1 vertex, E = 2 edges, and F = 1 face. Thus the Euler characteristic of the torus is 1 − 2 + 1 = 0. The Euler characteristic of other surfaces is a useful topological invariant, which can be extended to higher dimensions using Betti numbers. In the mid-nineteenth century, the Gauss–Bonnet theorem linked the Euler characteristic to the Gaussian curvature.
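As a concrete instance of Euler's formula, a cube has V = 8 vertices, E = 12 edges, and F = 6 faces, so

```latex
V - E + F = 8 - 12 + 6 = 2,
```

matching the Euler characteristic of the sphere onto which it can be projected.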
Synthesis
Investigations of Niels Henrik Abel and Carl Gustav Jacobi on inversion of elliptic integrals in the first half of 19th century led them to consider special types of complex manifolds, now known as Jacobians. Bernhard Riemann further contributed to their theory, clarifying the geometric meaning of the process of analytic continuation of functions of complex variables.
Another important source of manifolds in 19th century mathematics was analytical mechanics, as developed by Siméon Poisson, Jacobi, and William Rowan Hamilton. The possible states of a mechanical system are thought to be points of an abstract space, phase space in Lagrangian and Hamiltonian formalisms of classical mechanics. This space is, in fact, a high-dimensional manifold, whose dimension corresponds to the degrees of freedom of the system and where the points are specified by their generalized coordinates. For an unconstrained movement of free particles the manifold is equivalent to the Euclidean space, but various conservation laws constrain it to more complicated formations, e.g. Liouville tori. The theory of a rotating solid body, developed in the 18th century by Leonhard Euler and Joseph-Louis Lagrange, gives another example where the manifold is nontrivial. Geometrical and topological aspects of classical mechanics were emphasized by Henri Poincaré, one of the founders of topology.
Riemann was the first one to do extensive work generalizing the idea of a surface to higher dimensions. The name manifold comes from Riemann's original German term, Mannigfaltigkeit, which William Kingdon Clifford translated as "manifoldness". In his Göttingen inaugural lecture, Riemann described the set of all possible values of a variable with certain constraints as a Mannigfaltigkeit, because the variable can have many values. He distinguishes between stetige Mannigfaltigkeit and diskrete Mannigfaltigkeit (continuous manifoldness and discontinuous manifoldness), depending on whether the value changes continuously or not. As continuous examples, Riemann refers to not only colors and the locations of objects in space, but also the possible shapes of a spatial figure. Using induction, Riemann constructs an n-fach ausgedehnte Mannigfaltigkeit (n times extended manifoldness or n-dimensional manifoldness) as a continuous stack of (n−1) dimensional manifoldnesses. Riemann's intuitive notion of a Mannigfaltigkeit evolved into what is today formalized as a manifold. Riemannian manifolds and Riemann surfaces are named after Riemann.
Poincaré's definition
In his very influential paper, Analysis Situs, Henri Poincaré gave a definition of a differentiable manifold (variété) which served as a precursor to the modern concept of a manifold.
In the first section of Analysis Situs, Poincaré defines a manifold as the level set of a continuously differentiable function between Euclidean spaces that satisfies the nondegeneracy hypothesis of the implicit function theorem. In the third section, he begins by remarking that the graph of a continuously differentiable function is a manifold in the latter sense. He then proposes a new, more general, definition of manifold based on a 'chain of manifolds' (une chaîne des variétés).
Poincaré's notion of a chain of manifolds is a precursor to the modern notion of atlas. In particular, he considers two manifolds defined respectively as graphs of two functions. If these manifolds overlap (a une partie commune), then he requires that the coordinates of one depend continuously differentiably on the coordinates of the other, and vice versa. In this way he introduces a precursor to the notion of a chart and of a transition map.
For example, the unit circle x² + y² = 1 in the plane can be thought of as the graph of the function y = √(1 − x²) or else the function y = −√(1 − x²) in a neighborhood of every point except the points (1, 0) and (−1, 0); and in a neighborhood of those points, it can be thought of as the graph of, respectively, x = √(1 − y²) and x = −√(1 − y²). The circle can be represented by a graph in the neighborhood of every point because the left hand side of its defining equation x² + y² − 1 = 0 has nonzero gradient at every point of the circle. By the implicit function theorem, every submanifold of Euclidean space is locally the graph of a function.
Hermann Weyl gave an intrinsic definition for differentiable manifolds in his lecture course on Riemann surfaces in 1911–1912, opening the road to the general concept of a topological space that followed shortly. During the 1930s Hassler Whitney and others clarified the foundational aspects of the subject, and thus intuitions dating back to the latter half of the 19th century became precise, and developed through differential geometry and Lie group theory. Notably, the Whitney embedding theorem showed that the intrinsic definition in terms of charts was equivalent to Poincaré's definition in terms of subsets of Euclidean space.
Topology of manifolds: highlights
Two-dimensional manifolds, also known as surfaces when embedded in our common 3D space, were considered by Riemann under the guise of Riemann surfaces, and rigorously classified in the beginning of the 20th century by Poul Heegaard and Max Dehn. Poincaré pioneered the study of three-dimensional manifolds and raised a fundamental question about them, today known as the Poincaré conjecture. After nearly a century, Grigori Perelman proved the Poincaré conjecture (see the Solution of the Poincaré conjecture). William Thurston's geometrization program, formulated in the 1970s, provided a far-reaching extension of the Poincaré conjecture to the general three-dimensional manifolds. Four-dimensional manifolds were brought to the forefront of mathematical research in the 1980s by Michael Freedman and, in a different setting, by Simon Donaldson, who was motivated by the then recent progress in theoretical physics (Yang–Mills theory), where they serve as a substitute for ordinary 'flat' spacetime. Andrey Markov Jr. showed in 1960 that no algorithm exists for classifying four-dimensional manifolds. Important work on higher-dimensional manifolds, including analogues of the Poincaré conjecture, had been done earlier by René Thom, John Milnor, Stephen Smale and Sergei Novikov. A very pervasive and flexible technique underlying much work on the topology of manifolds is Morse theory.
Additional structure
Topological manifolds
The simplest kind of manifold to define is the topological manifold, which looks locally like some "ordinary" Euclidean space ℝⁿ. By definition, all manifolds are topological manifolds, so the phrase "topological manifold" is usually used to emphasize that a manifold lacks additional structure, or that only its topological properties are being considered. Formally, a topological manifold is a topological space locally homeomorphic to a Euclidean space. This means that every point has a neighbourhood for which there exists a homeomorphism (a bijective continuous function whose inverse is also continuous) mapping that neighbourhood to ℝⁿ. These homeomorphisms are the charts of the manifold.
A topological manifold looks locally like a Euclidean space in a rather weak manner: while for each individual chart it is possible to distinguish differentiable functions or measure distances and angles, merely by virtue of being a topological manifold a space does not have any particular and consistent choice of such concepts. In order to discuss such properties for a manifold, one needs to specify further structure and consider differentiable manifolds and Riemannian manifolds discussed below. In particular, the same underlying topological manifold can have several mutually incompatible classes of differentiable functions and an infinite number of ways to specify distances and angles.
Usually additional technical assumptions on the topological space are made to exclude pathological cases. It is customary to require that the space be Hausdorff and second countable.
The dimension of the manifold at a certain point is the dimension of the Euclidean space that the charts at that point map to (number n in the definition). All points in a connected manifold have the same dimension. Some authors require that all charts of a topological manifold map to Euclidean spaces of same dimension. In that case every topological manifold has a topological invariant, its dimension.
Differentiable manifolds
For most applications, a special kind of topological manifold, namely, a differentiable manifold, is used. If the local charts on a manifold are compatible in a certain sense, one can define directions, tangent spaces, and differentiable functions on that manifold. In particular it is possible to use calculus on a differentiable manifold. Each point of an n-dimensional differentiable manifold has a tangent space. This is an n-dimensional Euclidean space consisting of the tangent vectors of the curves through the point.
Two important classes of differentiable manifolds are smooth and analytic manifolds. For smooth manifolds the transition maps are smooth, that is, infinitely differentiable. Analytic manifolds are smooth manifolds with the additional condition that the transition maps are analytic (they can be expressed as power series). The sphere can be given analytic structure, as can most familiar curves and surfaces.
A rectifiable set generalizes the idea of a piecewise smooth or rectifiable curve to higher dimensions; however, rectifiable sets are not in general manifolds.
Riemannian manifolds
To measure distances and angles on manifolds, the manifold must be Riemannian. A Riemannian manifold is a differentiable manifold in which each tangent space is equipped with an inner product in a manner which varies smoothly from point to point. Given two tangent vectors u and v, the inner product gives a real number. The dot (or scalar) product is a typical example of an inner product. This allows one to define various notions such as length, angles, areas (or volumes), curvature and divergence of vector fields.
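In symbols, with the standard definitions (stated here for concreteness; g_p denotes the inner product at the point p), the metric recovers lengths of curves and angles between tangent vectors:

```latex
L(\gamma) = \int_a^b \sqrt{g_{\gamma(t)}\big(\gamma'(t), \gamma'(t)\big)}\, dt,
\qquad
\cos\theta = \frac{g_p(u, v)}{\sqrt{g_p(u, u)}\,\sqrt{g_p(v, v)}}.
```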
All differentiable manifolds (of constant dimension) can be given the structure of a Riemannian manifold. The Euclidean space itself carries a natural structure of Riemannian manifold (the tangent spaces are naturally identified with the Euclidean space itself and carry the standard scalar product of the space). Many familiar curves and surfaces, including for example all n-spheres, are specified as subspaces of a Euclidean space and inherit a metric from their embedding in it.
Finsler manifolds
A Finsler manifold allows the definition of distance but does not require the concept of angle; it is an analytic manifold in which each tangent space is equipped with a norm, ‖·‖, in a manner which varies smoothly from point to point. This norm can be extended to a metric, defining the length of a curve; but it cannot in general be used to define an inner product.
Any Riemannian manifold is a Finsler manifold.
Lie groups
Lie groups, named after Sophus Lie, are differentiable manifolds that carry also the structure of a group which is such that the group operations are defined by smooth maps.
A Euclidean vector space with the group operation of vector addition is an example of a non-compact Lie group. A simple example of a compact Lie group is the circle: the group operation is simply rotation. This group, known as U(1), can also be characterised as the group of complex numbers of modulus 1 with multiplication as the group operation.
Other examples of Lie groups include special groups of matrices, which are all subgroups of the general linear group, the group of n × n matrices with non-zero determinant. If the matrix entries are real numbers, this will be an n²-dimensional disconnected manifold. The orthogonal groups, the symmetry groups of the sphere and hyperspheres, are n(n − 1)/2-dimensional manifolds, where n − 1 is the dimension of the sphere. Further examples can be found in the table of Lie groups.
Other types of manifolds
A complex manifold is a manifold whose charts take values in ℂⁿ and whose transition functions are holomorphic on the overlaps. These manifolds are the basic objects of study in complex geometry. A one-complex-dimensional manifold is called a Riemann surface. An n-dimensional complex manifold has dimension 2n as a real differentiable manifold.
A CR manifold is a manifold modeled on boundaries of domains in ℂⁿ.
'Infinite dimensional manifolds': to allow for infinite dimensions, one may consider Banach manifolds which are locally homeomorphic to Banach spaces. Similarly, Fréchet manifolds are locally homeomorphic to Fréchet spaces.
A symplectic manifold is a kind of manifold which is used to represent the phase spaces in classical mechanics. They are endowed with a 2-form that defines the Poisson bracket. A closely related type of manifold is a contact manifold.
A combinatorial manifold is a kind of manifold which is a discretization of a manifold. It usually means a piecewise linear manifold made from simplicial complexes.
A digital manifold is a special kind of combinatorial manifold which is defined in digital space. See digital topology.
Classification and invariants
Different notions of manifolds have different notions of classification and invariant; in this section we focus on smooth closed manifolds.
The classification of smooth closed manifolds is well understood in principle, except in dimension 4: in low dimensions (2 and 3) it is geometric, via the uniformization theorem and the solution of the Poincaré conjecture, and in high dimension (5 and above) it is algebraic, via surgery theory. This is a classification in principle: the general question of whether two smooth manifolds are diffeomorphic is not computable in general. Further, specific computations remain difficult, and there are many open questions.
Orientable surfaces can be visualized, and their diffeomorphism classes enumerated, by genus. Given two orientable surfaces, one can determine if they are diffeomorphic by computing their respective genera and comparing: they are diffeomorphic if and only if the genera are equal, so the genus forms a complete set of invariants.
This is much harder in higher dimensions: higher-dimensional manifolds cannot be directly visualized (though visual intuition is useful in understanding them), nor can their diffeomorphism classes be enumerated, nor can one in general determine if two different descriptions of a higher-dimensional manifold refer to the same object.
However, one can determine if two manifolds are different if there is some intrinsic characteristic that differentiates them. Such criteria are commonly referred to as invariants, because, while they may be defined in terms of some presentation (such as the genus in terms of a triangulation), they are the same relative to all possible descriptions of a particular manifold: they are invariant under different descriptions.
One could hope to develop an arsenal of invariant criteria that would definitively classify all manifolds up to isomorphism. It is known that for manifolds of dimension 4 and higher, no program exists that can decide whether two manifolds are diffeomorphic.
Smooth manifolds have a rich set of invariants, coming from point-set topology, classic algebraic topology, and geometric topology. The most familiar invariants, which are visible for surfaces, are orientability (a normal invariant, also detected by homology) and genus (a homological invariant).
Smooth closed manifolds have no local invariants (other than dimension), though geometric manifolds have local invariants, notably the curvature of a Riemannian manifold and the torsion of a manifold equipped with an affine connection. This distinction between local invariants and no local invariants is a common way to distinguish between geometry and topology. All invariants of a smooth closed manifold are thus global.
Algebraic topology is a source of a number of important global invariant properties. Some key criteria include the simply connected property and orientability (see below). Indeed, several branches of mathematics, such as homology and homotopy theory, and the theory of characteristic classes were founded in order to study invariant properties of manifolds.
Surfaces
Orientability
In dimensions two and higher, a simple but important invariant criterion is the question of whether a manifold admits a meaningful orientation. Consider a topological manifold with charts mapping to ℝⁿ. Given an ordered basis for ℝⁿ, a chart causes its piece of the manifold to itself acquire a sense of ordering, which in three dimensions can be viewed as either right-handed or left-handed. Overlapping charts are not required to agree in their sense of ordering, which gives manifolds an important freedom. For some manifolds, like the sphere, charts can be chosen so that overlapping regions agree on their "handedness"; these are orientable manifolds. For others, this is impossible. The latter possibility is easy to overlook, because any closed surface embedded (without self-intersection) in three-dimensional space is orientable.
Some illustrative examples of non-orientable manifolds include: (1) the Möbius strip, which is a manifold with boundary, (2) the Klein bottle, which must intersect itself in its 3-space representation, and (3) the real projective plane, which arises naturally in geometry.
Möbius strip
Begin with an infinite circular cylinder standing vertically, a manifold without boundary. Slice across it high and low to produce two circular boundaries, and the cylindrical strip between them. This is an orientable manifold with boundary, upon which "surgery" will be performed. Slice the strip open, so that it could unroll to become a rectangle, but keep a grasp on the cut ends. Twist one end 180°, making the inner surface face out, and glue the ends back together seamlessly. This results in a strip with a permanent half-twist: the Möbius strip. Its boundary is no longer a pair of circles, but (topologically) a single circle; and what was once its "inside" has merged with its "outside", so that it now has only a single side. Similarly to the Klein bottle below, this two-dimensional surface would need to intersect itself in two dimensions, but it can easily be constructed in three or more dimensions.
Klein bottle
Take two Möbius strips; each has a single loop as a boundary. Straighten out those loops into circles, and let the strips distort into cross-caps. Gluing the circles together will produce a new, closed manifold without boundary, the Klein bottle. Closing the surface does nothing to improve the lack of orientability; it merely removes the boundary. Thus, the Klein bottle is a closed surface with no distinction between inside and outside. In three-dimensional space, a Klein bottle's surface must pass through itself. Building a Klein bottle which is not self-intersecting requires four or more dimensions of space.
Real projective plane
Begin with a sphere centered on the origin. Every line through the origin pierces the sphere in two opposite points called antipodes. Although there is no way to do so physically, it is possible (by considering a quotient space) to mathematically merge each antipode pair into a single point. The closed surface so produced is the real projective plane, yet another non-orientable surface. It has a number of equivalent descriptions and constructions, but this route explains its name: all the points on any given line through the origin project to the same "point" on this "plane".
Genus and the Euler characteristic
For two-dimensional manifolds a key invariant property is the genus, or "number of handles" present in a surface. A torus is a sphere with one handle, a double torus is a sphere with two handles, and so on. Indeed, it is possible to fully characterize compact, two-dimensional manifolds on the basis of genus and orientability. In higher-dimensional manifolds genus is replaced by the notion of Euler characteristic, and more generally Betti numbers and homology and cohomology.
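For closed orientable surfaces the two invariants determine one another (a standard relation, stated here for concreteness): a genus-g surface has Euler characteristic

```latex
\chi = 2 - 2g,
```

so the sphere (g = 0) has χ = 2 and the torus (g = 1) has χ = 0, matching the computations in the history section above.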
Maps of manifolds
Just as there are various types of manifolds, there are various types of maps of manifolds. In addition to continuous functions and smooth functions generally, there are maps with special properties. In geometric topology a basic type are embeddings, of which knot theory is a central example, and generalizations such as immersions, submersions, covering spaces, and ramified covering spaces.
Basic results include the Whitney embedding theorem and Whitney immersion theorem.
In Riemannian geometry, one may ask for maps to preserve the Riemannian metric, leading to notions of isometric embeddings, isometric immersions, and Riemannian submersions; a basic result is the Nash embedding theorem.
Scalar-valued functions
Basic examples of maps between manifolds are scalar-valued functions on a manifold,
f : M → ℝ or f : M → ℂ,
sometimes called regular functions or functionals, by analogy with algebraic geometry or linear algebra. These are of interest both in their own right, and to study the underlying manifold.
In geometric topology, most commonly studied are Morse functions, which yield handlebody decompositions, while in mathematical analysis, one often studies solutions to partial differential equations, an important example of which is harmonic analysis, where one studies harmonic functions: the kernel of the Laplace operator. This leads to such functions as the spherical harmonics, and to heat kernel methods of studying manifolds, such as hearing the shape of a drum and some proofs of the Atiyah–Singer index theorem.
Generalizations of manifolds
Infinite dimensional manifolds The definition of a manifold can be generalized by dropping the requirement of finite dimensionality. Thus an infinite dimensional manifold is a topological space locally homeomorphic to a topological vector space over the reals. This omits the point-set axioms, allowing higher cardinalities and non-Hausdorff manifolds; and it omits finite dimension, allowing structures such as Hilbert manifolds to be modeled on Hilbert spaces, Banach manifolds to be modeled on Banach spaces, and Fréchet manifolds to be modeled on Fréchet spaces. Usually one relaxes one or the other condition: manifolds with the point-set axioms are studied in general topology, while infinite-dimensional manifolds are studied in functional analysis.
Orbifolds An orbifold is a generalization of manifold allowing for certain kinds of "singularities" in the topology. Roughly speaking, it is a space which locally looks like the quotients of some simple space (e.g. Euclidean space) by the actions of various finite groups. The singularities correspond to fixed points of the group actions, and the actions must be compatible in a certain sense.
Algebraic varieties and schemes Non-singular algebraic varieties over the real or complex numbers are manifolds. One generalizes this first by allowing singularities, secondly by allowing different fields, and thirdly by emulating the patching construction of manifolds: just as a manifold is glued together from open subsets of Euclidean space, an algebraic variety is glued together from affine algebraic varieties, which are zero sets of polynomials over algebraically closed fields. Schemes are likewise glued together from affine schemes, which are a generalization of algebraic varieties. Both are related to manifolds, but are constructed algebraically using sheaves instead of atlases.
Because of singular points, a variety is in general not a manifold, though linguistically the French variété, German Mannigfaltigkeit and English manifold are largely synonymous. In French an algebraic variety is called une variété algébrique (an algebraic variety), while a smooth manifold is called une variété différentielle (a differential variety).
Stratified space A "stratified space" is a space that can be divided into pieces ("strata"), with each stratum a manifold, with the strata fitting together in prescribed ways (formally, a filtration by closed subsets). There are various technical definitions, notably a Whitney stratified space (see Whitney conditions) for smooth manifolds and a topologically stratified space for topological manifolds. Basic examples include manifold with boundary (top dimensional manifold and codimension 1 boundary) and manifolds with corners (top dimensional manifold, codimension 1 boundary, codimension 2 corners). Whitney stratified spaces are a broad class of spaces, including algebraic varieties, analytic varieties, semialgebraic sets, and subanalytic sets.
CW-complexes A CW complex is a topological space formed by gluing disks of different dimensionality together. In general the resulting space is singular, hence not a manifold. However, they are of central interest in algebraic topology, especially in homotopy theory.
Homology manifolds A homology manifold is a space that behaves like a manifold from the point of view of homology theory. These are not all manifolds, but (in high dimension) can be analyzed by surgery theory similarly to manifolds, and failure to be a manifold is a local obstruction, as in surgery theory.
Differential spaces Let M be a nonempty set and suppose that some family C of real functions on M has been chosen. C is an algebra with respect to pointwise addition and multiplication. Let M be equipped with the topology induced by C. Suppose also that the following conditions hold. First: for every n ∈ ℕ, every smooth function ω : ℝⁿ → ℝ, and arbitrary f₁, …, fₙ ∈ C, the composition ω ∘ (f₁, …, fₙ) belongs to C. Second: every function which at every point of M locally coincides with some function from C also belongs to C. A pair (M, C) for which the above conditions hold is called a Sikorski differential space.
| Mathematics | Geometry | null |
2076709 | https://en.wikipedia.org/wiki/Concrete%20slab | Concrete slab | A concrete slab is a common structural element of modern buildings, consisting of a flat, horizontal surface made of cast concrete. Steel-reinforced slabs, typically between 100 and 500 mm thick, are most often used to construct floors and ceilings, while thinner mud slabs may be used for exterior paving.
In many domestic and industrial buildings, a thick concrete slab, supported on foundations or directly on the subsoil, is used to construct the ground floor. These slabs are generally classified as ground-bearing or suspended. A slab is ground-bearing if it rests directly on the foundation; otherwise the slab is suspended.
For multi-story buildings, there are several common slab designs:
Beam and block, also referred to as rib and block, is mostly used in residential and industrial applications. This slab type is made up of pre-stressed beams and hollow blocks, which are temporarily propped until set, typically after 21 days.
A hollow core slab which is precast and installed on site with a crane
In high rise buildings and skyscrapers, thinner, pre-cast concrete slabs are slung between the steel frames to form the floors and ceilings on each level. Cast in-situ slabs are used in high rise buildings and large shopping complexes as well as houses. These in-situ slabs are cast on site using shutters and reinforced steel.
On technical drawings, reinforced concrete slabs are often abbreviated to "r.c.c. slab" or simply "r.c.". Calculations and drawings are often done by structural engineers in CAD software.
Thermal performance
Energy efficiency has become a primary concern for the construction of new buildings, and the prevalence of concrete slabs calls for careful consideration of its thermal properties in order to minimise wasted energy. Concrete has similar thermal properties to masonry products, in that it has a relatively high thermal mass and is a good conductor of heat.
In some special cases, the thermal properties of concrete have been employed, for example as a heatsink in nuclear power plants or a thermal buffer in industrial freezers.
Thermal conductivity
Thermal conductivity of a concrete slab indicates the rate of heat transfer through the solid mass by conduction, usually in regard to heat transfer to or from the ground. The coefficient of thermal conductivity, k, is proportional to the density of the concrete, among other factors. The primary influences on conductivity are moisture content, type of aggregate, type of cement, constituent proportions, and temperature. These various factors complicate the theoretical evaluation of a k-value, since each component has a different conductivity when isolated, and the position and proportion of each component affect the overall conductivity. To simplify this, particles of aggregate may be considered to be suspended in the homogeneous cement. Campbell-Allen and Thorne (1963) derived a formula for the theoretical thermal conductivity of concrete. In practice this formula is rarely applied, but it remains relevant for theoretical use. Subsequently, Valore (1980) developed another formula in terms of overall density. However, this study concerned hollow concrete blocks, and its results are unverified for concrete slabs.
The actual value of k varies significantly in practice, and is usually between 0.8 and 2.0 W m−1 K−1. This is relatively high when compared to other materials; for example, the conductivity of wood may be as low as 0.04 W m−1 K−1. One way of mitigating the effects of thermal conduction is to introduce insulation.
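As an illustration of how the quoted k-values translate into heat flow, the sketch below applies Fourier's law of steady-state conduction to a hypothetical slab; the geometry and temperature difference are assumptions chosen for the example, and a real slab would also see resistance from insulation and the ground itself.

```python
# Steady-state conductive heat flow through a slab, Q = k * A * dT / d (Fourier's law).

def heat_flow_watts(k, area_m2, temp_diff_k, thickness_m):
    """Rate of heat transfer (W) through a slab by conduction."""
    return k * area_m2 * temp_diff_k / thickness_m

# Hypothetical 10 m x 10 m ground-bearing slab, 150 mm thick,
# kept 10 K warmer than the ground below.
for k in (0.8, 2.0):  # the typical range quoted above for concrete
    q = heat_flow_watts(k, area_m2=100.0, temp_diff_k=10.0, thickness_m=0.15)
    print(f"k = {k} W/(m K): about {q:.0f} W conducted into the ground")
```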
Thermal mass
The second consideration is the high thermal mass of concrete slabs, which applies similarly to walls and floors, or wherever concrete is used within the thermal envelope. Concrete has a relatively high thermal mass, meaning that it takes a long time to respond to changes in ambient temperature. This is a disadvantage when rooms are heated intermittently and require a quick response, as it takes longer to warm the entire building, including the slab. However, the high thermal mass is an advantage in climates with large daily temperature swings, where the slab acts as a regulator, keeping the building cool by day and warm by night.
Typically concrete slabs perform better than implied by their R-value. The R-value does not consider thermal mass, since it is tested under constant temperature conditions. Thus, when a concrete slab is subjected to fluctuating temperatures, it will respond more slowly to these changes and in many cases increase the efficiency of a building. In reality, there are many factors which contribute to the effect of thermal mass, including the depth and composition of the slab, as well as other properties of the building such as orientation and windows.
Thermal mass is also related to thermal diffusivity, heat capacity and insulation. Concrete has low thermal diffusivity, high heat capacity, and its thermal mass is negatively affected by insulation (e.g. carpet).
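As a rough illustration of the slow response described above, the lumped-capacitance sketch below shows a slab tracking a step change in air temperature; the material constants are typical textbook values for concrete, and the surface heat-transfer coefficient is an assumed placeholder, so the numbers are indicative only.

```python
import math

# Lumped-capacitance response to a step change in air temperature:
# T(t) = T_air + (T0 - T_air) * exp(-t / tau), with tau = rho * c * (V/A) / h.
rho = 2400.0       # density of concrete, kg/m^3 (typical value)
c = 880.0          # specific heat of concrete, J/(kg K) (typical value)
thickness = 0.15   # slab thickness in m; per unit area, V/A = thickness
h = 8.0            # surface heat-transfer coefficient, W/(m^2 K) -- assumed

tau = rho * c * thickness / h  # time constant in seconds
print(f"time constant: {tau / 3600:.1f} hours")

T0, T_air = 18.0, 28.0         # slab at 18 C; air steps to 28 C
for hours in (1, 6, 24):
    T = T_air + (T0 - T_air) * math.exp(-hours * 3600 / tau)
    print(f"after {hours:>2} h: slab at {T:.1f} C")
```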
Insulation
Without insulation, concrete slabs cast directly on the ground can cause a significant amount of extraneous energy transfer by conduction, resulting in either lost heat or unwanted heat. In modern construction, concrete slabs are usually cast above a layer of insulation such as expanded polystyrene, and the slab may contain underfloor heating pipes. However, there are still uses for a slab that is not insulated, for example in outbuildings which are not heated or cooled to room temperature. In these cases, casting the slab directly onto a substrate of aggregate will maintain the slab near the temperature of the substrate throughout the year, and can prevent both freezing and overheating.
A common type of insulated slab is the beam and block system (mentioned above) modified by replacing the concrete blocks with expanded polystyrene blocks. This not only allows for better insulation but also decreases the weight of the slab, which has a positive effect on load-bearing walls and foundations.
Design
Ground-bearing slabs
Ground-bearing slabs, also known as "on-ground" or "slab-on-grade", are commonly used for ground floors in domestic and some commercial applications. This is an economical and quick construction method for sites that have non-reactive soil and little slope.
For ground-bearing slabs, it is important to design the slab around the type of soil, since some soils such as clay are too dynamic to support a slab consistently across its entire area. This results in cracking and deformation, potentially leading to structural failure of any members attached to the floor, such as wall studs.
Levelling the site before pouring concrete is an important step, as sloping ground will cause the concrete to cure unevenly and will result in differential expansion. In some cases, a naturally sloping site may be levelled simply by removing soil from the uphill side. If a site has a more significant grade, it may be a candidate for the "cut and fill" method, where soil from the higher ground is removed, and the lower ground is built up with fill.
In addition to filling the downhill side, this area of the slab may be supported on concrete piers which extend into the ground. In this case, the fill material is less important structurally as the dead weight of the slab is supported by the piers. However, the fill material is still necessary to support the curing concrete and its reinforcement.
There are two common methods of filling - controlled fill and rolled fill.
Controlled fill: Fill material is compacted in several layers by a vibrating plate or roller. Sand fills areas up to around 800 mm deep, and clay may be used to fill areas up to 400 mm deep. However, clay is much more reactive than sand, so it should be used sparingly and carefully. Clay must be moist during compaction to homogenise it.
Rolled fill: Fill is repeatedly compacted by an excavator, but this method of compaction is less effective than a vibrator or roller. Thus, the regulations on maximum depth are typically stricter.
Proper curing of ground-bearing concrete is necessary to obtain adequate strength. Since these slabs are inevitably poured on-site (rather than precast as some suspended slabs are), it can be difficult to control conditions to optimize the curing process. This is usually aided by a membrane, either plastic (temporary) or a liquid compound (permanent).
Ground-bearing slabs are usually supplemented with some form of reinforcement, often steel rebar. However, in some cases such as concrete roads, it is acceptable to use an unreinforced slab if it is adequately engineered.
Suspended slabs
For a suspended slab, there are a number of designs to improve the strength-to-weight ratio. In all cases the top surface remains flat, and the underside is modulated:
A corrugated slab is formed when the concrete is poured into a corrugated steel tray, more commonly called decking. The steel tray improves the strength of the slab and prevents it from bending under its own weight. The corrugations run in one direction only.
A ribbed slab gives considerably more strength in one direction. This is achieved with concrete beams bearing load between piers or columns, and thinner, integral ribs in the perpendicular direction. An analogy in carpentry would be a subfloor of bearers and joists. Ribbed slabs have higher load ratings than corrugated or flat slabs, but are inferior to waffle slabs.
A waffle slab gives added strength in both directions using a matrix of recessed segments beneath the slab. This is the same principle used in the ground-bearing version, the waffle slab foundation. Waffle slabs are usually deeper than ribbed slabs of equivalent strength, and are heavier, hence requiring stronger foundations. However, they provide increased mechanical strength in two dimensions, a characteristic important for vibration resistance and soil movement.
Unreinforced slabs
Unreinforced or "plain" slabs are becoming rare and have limited practical applications, with one exception being the mud slab . They were once common in the US, but the economic value of reinforced ground-bearing slabs has become more appealing for many engineers. Without reinforcement, the entire load on these slabs is supported by the strength of the concrete, which becomes a vital factor. As a result, any stress induced by a load, static or dynamic, must be within the limit of the concrete's flexural strength to prevent cracking. Since unreinforced concrete is relatively very weak in tension, it is important to consider the effects of tensile stress caused by reactive soil, wind uplift, thermal expansion, and cracking. One of the most common applications for unreinforced slabs is in concrete roads.
Mud slabs
Mud slabs, also known as rat slabs, are thinner than the more common suspended or ground-bearing slabs (usually 50 to 150 mm), and usually contain no reinforcement. This makes them economical and easy to install for temporary or low-usage purposes such as subfloors, crawlspaces, pathways, paving, and levelling surfaces. In general, they may be used for any application which requires a flat, clean surface. This includes use as a base or "sub-slab" for a larger structural slab. On uneven or steep surfaces, this preparatory measure is necessary to provide a flat surface on which to install rebar and waterproofing membranes. In this application, a mud slab also prevents the plastic bar chairs from sinking into soft topsoil which can cause spalling due to incomplete coverage of the steel. Sometimes a mud slab may be a substitute for coarse aggregate. Mud slabs typically have a moderately rough surface, finished with a float.
Axes of support
One-way slabs
A one-way slab has moment-resisting reinforcement only in its short axis, and is used when the moment in the long axis is negligible. Such designs include corrugated slabs and ribbed slabs. Non-reinforced slabs may also be considered one-way if they are supported on only two opposite sides (i.e. they are supported in one axis). A one-way reinforced slab may be stronger than a two-way non-reinforced slab, depending on the type of load.
The calculation of reinforcement requirements for a one-way slab can be extremely tedious and time-consuming, and one can never be completely certain of the best design. Even minor changes to the project can necessitate recalculation of the reinforcement requirements. There are many factors to consider during the structural design of one-way slabs, including:
Load calculations
Bending moment calculation
Acceptable depth of flexure and deflection
Type and distribution of reinforcing steel
Two-way slabs
A two-way slab has moment-resisting reinforcement in both directions. This may be implemented due to application requirements such as heavy loading, vibration resistance, clearance below the slab, or other factors. However, an important characteristic governing the requirement of a two-way slab is the ratio of the two horizontal spans. If l_y / l_x < 2, where l_x is the short dimension and l_y is the long dimension, then moment in both directions should be considered in design. In other words, if the ratio of the long to the short span is less than two, a two-way slab is required.
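A minimal sketch of this span-ratio rule of thumb, assuming the conventional criterion described above (function and variable names are illustrative):

```python
def slab_design_type(short_span_m, long_span_m):
    """Classify a slab as one-way or two-way by the conventional span-ratio rule:
    if the long span divided by the short span is less than 2, bending must be
    considered in both directions (two-way); otherwise a one-way design suffices."""
    ratio = long_span_m / short_span_m
    return "two-way" if ratio < 2 else "one-way"

print(slab_design_type(4.0, 6.0))  # ratio 1.5 -> two-way
print(slab_design_type(3.0, 9.0))  # ratio 3.0 -> one-way
```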
A non-reinforced slab is two-way if it is supported in both horizontal axes.
Construction
A concrete slab may be prefabricated (precast), or constructed on site.
Prefabricated
Prefabricated concrete slabs are built in a factory and transported to the site, ready to be lowered into place between steel or concrete beams. They may be pre-stressed (in the factory), post-stressed (on site), or unstressed. It is vital that the wall supporting structure is built to the correct dimensions, or the slabs may not fit.
On-site
On-site concrete slabs are built on the building site using formwork - a type of boxing into which the wet concrete is poured. If the slab is to be reinforced, the rebars, or metal bars, are positioned within the formwork before the concrete is poured in. Plastic-tipped metal or plastic bar chairs are used to hold the rebar away from the bottom and sides of the formwork, so that when the concrete sets it completely envelops the reinforcement. This concept is known as concrete cover. For a ground-bearing slab, the formwork may consist only of side walls pushed into the ground. For a suspended slab, the formwork is shaped like a tray, often supported by a temporary scaffold until the concrete sets.
The formwork is commonly built from wooden planks and boards, plastic, or steel. On commercial building sites, plastic and steel are gaining popularity as they save labour. On low-budget or small-scale jobs, for instance when laying a concrete garden path, wooden planks are very common. After the concrete has set the wood may be removed.
Formwork can also be permanent, remaining in situ after the concrete pour. For large slabs or paths that are poured in sections, this permanent formwork can then also act as isolation joints within concrete slabs to reduce the potential for cracking due to concrete expansion or movement.
In some cases formwork is not necessary - for instance, a ground slab surrounded by dense soil, brick or block foundation walls, where the walls act as the sides of the tray and hardcore (rubble) acts as the base.
| Technology | Building materials | null |
30552217 | https://en.wikipedia.org/wiki/Stellar%20mass | Stellar mass | Stellar mass is a phrase used by astronomers to describe the mass of a star. It is usually expressed as a proportion of the Sun's mass, the solar mass (M☉). Hence, the bright star Sirius has around twice the mass of the Sun. A star's mass will vary over its lifetime as mass is lost with the stellar wind or ejected via pulsational behavior, or if additional mass is accreted, such as from a companion star.
Properties
Stars are sometimes grouped by mass based upon their evolutionary behavior as they approach the end of their nuclear fusion lifetimes.
Very-low-mass stars with masses below 0.5 M☉ do not enter the asymptotic giant branch (AGB) but evolve directly into white dwarfs. (At least in theory; the lifetimes of such stars are long enough—longer than the age of the universe to date—that none has yet had time to evolve to this point and be observed.)
Low-mass stars with a mass below about 1.8–2.2 M☉ (depending on composition) do enter the AGB, where they develop a degenerate helium core.
Intermediate-mass stars undergo helium fusion and develop a degenerate carbon–oxygen core.
Massive stars have a minimum mass of 5–10 M☉. These stars undergo carbon fusion, with their lives ending in a core-collapse supernova explosion. Black holes created as a result of a stellar collapse are termed stellar-mass black holes.
The combination of the radius and the mass of a star determines the surface gravity. Giant stars have a much lower surface gravity than main sequence stars, while the opposite is the case for degenerate, compact stars such as white dwarfs. The surface gravity can influence the appearance of a star's spectrum, with higher gravity causing a broadening of the absorption lines.
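To make the mass–radius dependence concrete, the sketch below computes Newtonian surface gravity, g = GM/R²; the white dwarf and giant figures are rough illustrative assumptions, not values from the text.

```python
# Surface gravity g = G * M / R^2: compact stars have enormously higher
# surface gravity than giants of comparable mass.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def surface_gravity(mass_kg, radius_m):
    return G * mass_kg / radius_m ** 2

print(f"Sun:         {surface_gravity(M_SUN, R_SUN):.0f} m/s^2")        # ~274
# A rough white dwarf: ~0.6 solar masses in an Earth-sized radius (assumed).
print(f"white dwarf: {surface_gravity(0.6 * M_SUN, 6.4e6):.2e} m/s^2")
# A giant: one solar mass spread over ~100 solar radii (assumed).
print(f"giant:       {surface_gravity(M_SUN, 100 * R_SUN):.3f} m/s^2")
```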
Range
One of the most massive stars known is Eta Carinae, with an estimated mass of well over 100 M☉; its lifespan is very short—only several million years at most. A study of the Arches Cluster suggests that 150 M☉ is the upper limit for stars in the current era of the universe. The reason for this limit is not precisely known, but it is partially due to the Eddington luminosity, which defines the maximum amount of luminosity that can pass through the atmosphere of a star without ejecting the gases into space. However, a star named R136a1 in the RMC 136a star cluster has been measured at 215 M☉, putting this limit into question. A study has determined that stars larger than 150 M☉ in R136 were created through the collision and merger of massive stars in close binary systems, providing a way to sidestep the 150 M☉ limit.
The first stars to form after the Big Bang may have been larger, up to 300 M☉ or more, due to the complete absence of elements heavier than lithium in their composition. This generation of supermassive, population III stars is long extinct, however, and currently only theoretical.
With a mass only 93 times that of Jupiter (MJ), or 0.09 M☉, AB Doradus C, a companion to AB Doradus A, is the smallest known star undergoing nuclear fusion in its core. For stars with similar metallicity to the Sun, the theoretical minimum mass the star can have, and still undergo fusion at the core, is estimated to be about 75 MJ. When the metallicity is very low, however, a recent study of the faintest stars found that the minimum star size seems to be about 8.3% of the solar mass, or about 87 MJ. Smaller bodies are called brown dwarfs, which occupy a poorly defined grey area between stars and gas giants.
Change
The Sun is losing mass from the emission of electromagnetic energy and by the ejection of matter with the solar wind, expelling on the order of 10−14 M☉ per year. The mass-loss rate will increase when the Sun enters the red giant stage, climbing further as it reaches the tip of the red-giant branch and ascends the asymptotic giant branch, before peaking at a rate of 10−5 to 10−4 M☉ y−1 as the Sun generates a planetary nebula. By the time the Sun becomes a degenerate white dwarf, it will have lost 46% of its starting mass.
| Physical sciences | Stellar astronomy | Astronomy |
22965409 | https://en.wikipedia.org/wiki/Mormyrinae | Mormyrinae | The subfamily Mormyrinae contains all but one of the genera of the African freshwater fish family Mormyridae in the order Osteoglossiformes. They are often called elephantfish due to a long protrusion below their mouths used to detect buried invertebrates that is suggestive of a tusk or trunk (some such as Marcusenius senegalensis gracilis are sometimes called trunkfish though this term is usually associated with an unrelated group of fish). They can also be called tapirfish.
Fish in this subfamily have a high brain-to-body mass ratio due to an expanded cerebellum (called a gigantocerebellum) used in their electroperception. Linked to this, they are notable for holding the zoological record for relative brain energy consumption: at around 60%, their brains consume the largest fraction of the body's metabolic rate of any animal. Before this discovery, it was the human brain that "had been thought to hold the record in this respect" (p. 605). The human brain, in comparison, uses only 20%.
Mormyrinae is the largest subfamily in the Osteoglossiformes order with around 170 species.
Unique brain percentage of body energy consumption
Across all animals, regardless of body size, the adult brain typically consumes roughly 2% to 8% of the body's energy. The only exceptions of animal brains using more than 10% (in terms of O2 intake) are a few primates (11–13%) and humans. However, research published in 1996 in The Journal of Experimental Biology by Göran Nilsson at Uppsala University found that mormyrinae brains utilize roughly 60% of the body's O2 consumption. This is due to the combination of large brain size (3.1% of body mass, compared to 2% in humans) and the fact that they are ectothermic.
The body energy expenditure of ectothermic animals is about 1/13 that of endotherms, but the energy expenditure of the brain is similar in both ectothermic and endothermic animals. Other animals with a high brain percentage of body mass (2.6–3.7%) exist, such as bats, swallows, crows and sparrows, but these, due to their endothermy, also have a high body energy metabolism. The unusually high brain energy consumption percentage of mormyrinae fish is thus due to their having the unusual combination of a large brain in a low-energy-consuming body. The actual energy consumption per unit mass of the brain is not in fact particularly high, and is indeed lower (2.02 mg g−1 h−1) than that in some other fish such as Salmonidae (2.20 mg g−1 h−1). In comparison, that of rats is 6.02 mg g−1 h−1 and that of humans 2.61 mg g−1 h−1 (Table 1).
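To see how a modest per-gram brain rate can still dominate whole-body consumption, the back-of-the-envelope sketch below combines the figures quoted above with an assumed rate for the rest of the body; the body rate is a placeholder back-solved to reproduce the reported ~60% share, not a measured value.

```python
# Back-of-the-envelope: brain share of whole-body O2 consumption.
brain_mass_frac = 0.031  # brain is ~3.1% of body mass (figure quoted above)
brain_rate = 2.02        # mg O2 per g of brain per hour (figure quoted above)
body_rate = 0.043        # mg O2 per g of non-brain tissue per hour -- ASSUMED
                         # placeholder for a low ectothermic metabolism

brain_o2 = brain_mass_frac * brain_rate
rest_o2 = (1 - brain_mass_frac) * body_rate
share = brain_o2 / (brain_o2 + rest_o2)
print(f"brain share of O2 consumption: {share:.0%}")  # ~60%
```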
The oxygen for this in low oxygen conditions comes from gulping air at the water surface.
Large brains
Unlike in mammals, the part of the brain enlarged in mormyrinae fish is the cerebellum, not the cerebrum, and reflecting this it is called a gigantocerebellum. This enlarged cerebellum is linked to their electroreception. They generate weak electrical fields from specialized electric organ muscles. To distinguish these fields from those created by other mormyrinae fish and by their prey animals, and to sense how the nearby environment distorts them, their skin contains three types of electroreceptors. The electroperception they enable is used in hunting prey, electrolocation, and communication (Knollenorgans are the specialized electrical detection organs for this function). This electroperception, however, requires complex information processing in special neurocircuitry, since it depends upon the ability to distinguish between self-generated and externally generated electric fields, and between the self-generated field itself and its modification by the environment. To enable this specialized information processing, with each self-generated electrical discharge an efference copy is made for comparison with the detected electric field it creates. The cerebellum plays a key role in processing such efference-copy-dependent perception. The muddy waters where they live have made this electroperception key to their survival, and this has resulted in their gigantocerebellum.
Classification
The classification by osteology-based traits of Mormyridae into the two subfamilies of Mormyrinae and Petrocephalinae has been confirmed using molecular phylogeny methods. The classification below comes from FishBase.
Subfamily Mormyrinae
Boulengeromyrus Taverne & Géry, 1968
Brevimyrus Taverne, 1971
Brienomyrus Taverne, 1971
Campylomormyrus Bleeker, 1874
Cyphomyrus Pappenheim, 1906
Genyomyrus Boulenger, 1898
Gnathonemus Gill, 1863
Heteromormyrus Steindachner, 1866
Hippopotamyrus Pappenheim, 1906
Hyperopisus Gill, 1862
Isichthys Gill, 1863
Ivindomyrus Taverne & Géry, 1975
Marcusenius Gill, 1862
Mormyrops J. P. Müller, 1843
Mormyrus Linnaeus, 1758
Myomyrus Boulenger, 1898
Oxymormyrus Bleeker, 1874
Paramormyrops Taverne, Thys van den Audenaerde & Heymer, 1977
Pollimyrus Taverne, 1971
Stomatorhinus Boulenger, 1898
| Biology and health sciences | Osteoglossiformes | Animals |
29029594 | https://en.wikipedia.org/wiki/Human%20head | Human head | In human anatomy, the head is at the top of the human body. It supports the face and is maintained by the skull, which itself encloses the brain.
Structure
The human head consists of a fleshy outer portion, which surrounds the bony skull. The brain is enclosed within the skull. There are 22 bones in the human head. The head rests on the neck, and the seven cervical vertebrae support it. The human head typically weighs between 2.3 and 5 kg; over 98% of humans fit into this range. There have been rare instances of human beings with abnormally small or large heads. The Zika virus, for example, was responsible for underdeveloped (microcephalic) heads in infants born during the epidemic of the mid-2010s.
The face is the anterior part of the head, containing the eyes, nose, and mouth. On either side of the mouth, the cheeks provide a fleshy border to the oral cavity. The ears sit to either side of the head.
Blood supply
The head receives blood supply through the internal and external carotid arteries. These supply the area outside of the skull (external carotid artery) and inside of the skull (internal carotid artery). The area inside the skull also receives blood supply from the vertebral arteries, which travel up through the cervical vertebrae.
Nerve supply
The twelve pairs of cranial nerves provide the majority of nervous control to the head. The sensation to the face is provided by the branches of the trigeminal nerve, the fifth cranial nerve. Sensation to other portions of the head is provided by the cervical nerves.
Modern texts are in agreement about which areas of the skin are served by which nerves, but there are minor variations in some of the details. The borders designated by diagrams in the 1918 edition of Gray's Anatomy are similar but not identical to those generally accepted today.
The cutaneous innervation of the head is as follows:
Ophthalmic nerve (green)
Maxillary nerve (pink)
Mandibular nerve (yellow)
Cervical plexus (purple)
Dorsal rami of cervical nerves (blue)
Function
The head contains sensory organs: two eyes, two ears, a nose and tongue inside of the mouth. It also houses the brain. Together, these organs function as a processing center for the body by relaying sensory information to the brain. Humans can process information faster by having this central nerve cluster.
Society and culture
For humans, the front of the head (the face) is the main distinguishing feature between different people due to its easily discernible features, such as eye and hair colors, shapes of the sensory organs, and the wrinkles. Humans easily differentiate between faces because of the brain's predisposition toward facial recognition. When observing a relatively unfamiliar species, all faces seem nearly identical. Human infants are biologically programmed to recognize subtle differences in anthropomorphic facial features.
People who have greater than average intelligence are sometimes depicted in cartoons as having bigger heads, as a way of notionally indicating that they have a "larger brain". Additionally, in science fiction, an extraterrestrial having a big head is often symbolic of high intelligence. Despite this depiction, advances in neurobiology have shown that the functional diversity of the brain means that a difference in overall brain size is only slightly to moderately correlated to differences in overall intelligence between two humans.
The head is a source for many metaphors and metonymies in human language, including referring to things typically near the human head ( "the head of the bed"), things physically similar to the way a head is arranged spatially to a body ("the head of the table"), metaphorically ("the head of the class"), and things that represent some characteristics associated with the head, such as intelligence ("there are a lot of good heads in this company").
Ancient Greeks had a method for evaluating sexual attractiveness based on the Golden ratio, part of which included measurements of the head.
Headhunting is the practice of taking and preserving a person's head after killing the person. Headhunting has been practiced across the Americas, Europe, Asia, and Oceania for millennia.
Clothing
Headpieces can signify status, origin, religious/spiritual beliefs, social grouping, team affiliation, occupation, or fashion choices.
In many cultures, covering the head is seen as a sign of respect. Often, some or all of the head must be covered and veiled when entering holy places or places of prayer. For many centuries, women in Europe, the Middle East, and South Asia have covered their head hair as a sign of modesty. This trend has changed drastically in Europe in the 20th century, although is still observed in other parts of the world. In addition, a number of religions require men to wear specific head clothing—such as the Islamic taqiyah, Jewish yarmulke, or the Sikh turban. The same goes for women with the Muslim hijab or Christian nun's habit.
A hat is a head covering that can serve a variety of purposes. Hats may be worn as part of a uniform or used as a protective device, such as a hard hat, a covering for warmth, a covering that meets sensory needs in some neurodivergent people, or a fashion accessory. Hats can also be indicative of social status in some areas of the world.
Anthropometry
While numerous charts detailing head sizes in infants and children exist, most do not measure average head circumference past the age of 21. Reference charts for adult head circumference also generally feature homogeneous samples and fail to take height and weight into account.
One study in the United States produced estimates of average head circumference for adult males and females. A British study by Newcastle University showed an average size of 57.2 cm for males and 55.2 cm for females, with average size varying proportionally with height.
Macrocephaly can be an indicator of increased risk for some types of cancer in individuals who carry the genetic mutation that causes Cowden syndrome. For adults, this refers to head sizes greater than 58 centimeters in men or greater than 57 centimeters in women.
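A minimal sketch of the sex-specific screening cut-offs just described (58 cm for men, 57 cm for women); the function name is illustrative:

```python
def is_macrocephalic(head_circumference_cm, sex):
    """Adult macrocephaly screen using the cut-offs quoted above."""
    threshold = 58.0 if sex == "male" else 57.0
    return head_circumference_cm > threshold

print(is_macrocephalic(59.0, "male"))    # True
print(is_macrocephalic(56.5, "female"))  # False
```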
| Biology and health sciences | Human anatomy | Health |
2887043 | https://en.wikipedia.org/wiki/Plumb%20bob | Plumb bob | A plumb bob, plumb bob level, or plummet, is a weight, usually with a pointed tip on the bottom, suspended from a string and used to provide a vertical reference line, or plumb-line. It is a precursor to the spirit level and is used to establish a vertical datum. It is typically made of stone, wood, or lead, but can also be made of other metals. If it is used for decoration, it may be made of bone or ivory.
The instrument has been used since at least the time of ancient Egypt to ensure that constructions are "plumb", or vertical. It is also used in surveying, to establish the nadir (opposite of zenith) with respect to gravity of a point in space. It is used with a variety of instruments (including levels, theodolites, and steel tapes) to set the instrument exactly over a fixed survey marker or to transcribe positions onto the ground for placing a marker.
Etymology
The plumb in plumb bob derives from Latin plumbum ('lead'), the material once used for the weighted bob at the end. The adjective plumb developed by extension, as did the noun aplomb, from the notion of "standing upright".
Use
Until the modern age, plumb bobs were used on most tall structures to provide vertical datum lines for the building measurements. A section of the scaffolding would hold a plumb line, which was centered over a datum mark on the floor. As the building proceeded upward, the plumb line would also be taken higher, still centered on the datum. Many cathedral spires, domes and towers still have brass datum marks inlaid into their floors, which signify the center of the structure above.
A plumb bob and line alone can determine only a vertical reference. However, if it is mounted on a suitable scale, the instrument may also be used as an inclinometer to measure angles from the vertical.
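As an illustration of the inclinometer idea, suppose the line hangs from a pivot and a straight scale is fixed to the instrument a known drop below the pivot; the bob's offset along that scale then gives the tilt. A hedged sketch, with illustrative names and an assumed geometry:

```python
import math

def tilt_angle_deg(scale_offset, drop):
    """Tilt of the instrument from vertical, given the plumb line's offset
    read on a straight scale mounted a fixed drop below the pivot
    (both measurements in the same units): theta = atan(offset / drop)."""
    return math.degrees(math.atan(scale_offset / drop))

print(f"{tilt_angle_deg(0.05, 1.0):.1f} degrees")  # 5 cm offset on a 1 m drop -> ~2.9
```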
Ancient Egyptians used a plumb line attached to the top outer part of a tool resembling a letter E; when placed against a wall, the plumb line would indicate a vertical line. An A-frame level with a plumb line hung from the vertex was also used to find horizontal; these were used in Europe until the mid–19th century. A variation of this tool has the plumb line hung from the top of an inverted T shape.
The early skyscrapers used heavy plumb bobs, hung on wire in their elevator shafts.
A plumb bob may be suspended in a container of water (when conditions are above freezing), molasses, very viscous oils, or other liquids to dampen any swinging movement, functioning as a shock absorber.
Determining center of gravity of an irregular shape
Students of figure drawing will also make use of a plumb line to find the vertical axis through the center of gravity of their subject and lay it down on paper as a point of reference. The device used may be purpose-made plumb lines, or simply makeshift devices made from a piece of string and a weighted object, such as a metal washer. This plumb line is important for lining up anatomical geometries and visualizing the subject's center of balance.
| Technology | Surveying tools | null |
35783123 | https://en.wikipedia.org/wiki/Grizzly%20bear | Grizzly bear | The grizzly bear (Ursus arctos horribilis), also known as the North American brown bear or simply grizzly, is a population or subspecies of the brown bear inhabiting North America.
In addition to the mainland grizzly (Ursus arctos horribilis), other morphological forms of brown bear in North America are sometimes identified as grizzly bears. These include three living populations—the Kodiak bear (U. a. middendorffi), the Kamchatka bear (U. a. beringianus), and the peninsular grizzly (U. a. gyas)—as well as the extinct California grizzly (U. a. californicus†) and Mexican grizzly (formerly U. a. nelsoni†). On average, grizzly bears near the coast tend to be larger while inland grizzlies tend to be smaller.
The Ussuri brown bear (U. a. lasiotus), inhabiting the Ussuri Krai, Sakhalin, the Amur Oblast, the Shantar Islands, Iturup Island, and Kunashir Island in Siberia, northeastern China, North Korea, and Hokkaidō in Japan, is sometimes referred to as the "black grizzly", although it is no more closely related to North American brown bears than other subspecies of the brown bear around the world.
Classification
Meaning of "grizzly"
Meriwether Lewis and William Clark first described it as grisley, which could be interpreted as either "grizzly" (i.e., "grizzled"—that is, with grey-tipped hair) or "grisly" ("fear-inspiring", now usually "gruesome"). The modern spelling supposes the former meaning; even so, naturalist George Ord formally classified it in 1815 as U. horribilis for its character.
Evolution and genetics
Phylogenetics
Several studies have been conducted on the genetic history of the grizzly bear, and classification has been revised along genetic lines. There are two morphological forms of Ursus arctos, the grizzly and the coastal brown bears, but these morphological forms do not have distinct mtDNA lineages. The genome of the grizzly bear was sequenced in 2018 and found to be 2,328.64 Mb (mega-base pairs) in length and to contain 30,387 genes.
Ursus arctos
Brown bears originated in Eurasia and first migrated to North America between about 177,000 and 111,000 years BP. Most grizzly bears belong to this initial population of North American brown bear (clade 4), which continues to be the dominant mitochondrial grouping south of subarctic North America. Genetic divergences suggest brown bears first migrated south during MIS 5 (~92,000–83,000 BP) upon the opening of the ice-free corridor, with the first fossils appearing near Edmonton (26,000 BP). Other mitochondrial lineages appear later: the Alexander and Haida Gwaii archipelagoes have an endemic lineage, which first appears around 20,000 BP. After a local extinction in Beringia ~33,000 BP, two closely related lineages repopulated Alaska and northern Canada from Eurasia after the Last Glacial Maximum (>25,000 BP).
In the 19th century, the grizzly was classified as 86 distinct species. By 1928 only seven grizzly species remained, and by 1953, only one species remained globally. However, modern genetic testing reveals the grizzly to be a subspecies of the brown bear (Ursus arctos). Biologist R.L. Rausch found that North America has but one species of grizzly. Therefore, everywhere it is the "brown bear"; in North America, it is the "grizzly", but these are all the same species, Ursus arctos.
Subspecies in North America
In 1963, Rausch reduced the number of North American subspecies to one, Ursus arctos middendorffi. Further testing of Y-chromosomes is required to yield an accurate new taxonomy with different subspecies. Coastal grizzlies, often referred to by the popular but geographically redundant synonym of "brown bear" or "Alaskan brown bear", are larger and darker than inland grizzlies, which is why they, too, were considered a different species from grizzlies. Kodiak grizzly bears were also at one time considered distinct. Therefore, at one time it was thought that there were five different "species" of brown bear, including three in North America.
It remains an open question how many subspecies of Ursus arctos are present in North America. Traditionally, the following have been recognized alongside U. a. horribilis proper: Alaskan brown bear (U. a. alascensis), California grizzly bear (U. a. californicus), Dall Island brown bear (U. a. dalli), the Alaska Peninsula brown bear (U. a. gyas), Kodiak bear (U. a. middendorffi), Mexican grizzly bear (U. a. nelsoni), ABC Islands bear (U. a. sitkensis), and Stickeen brown bear (U. a. stikeenensis).
One study based on mitochondrial DNA recovered no distinct genetic groupings of North American brown bears, implying that previous grizzly bear subspecies designations are unwarranted and these bears should all be considered populations of U. a. horribilis. The only genetically anomalous grouping was the ABC Islands bear, which bears genetic introgression from the polar bear. A formal taxonomic revision was not performed, however, and the implied synonymy has not been accepted by taxonomic authorities. Furthermore, a recent whole-genome study suggests that certain Alaskan brown bears, including the Kodiak and Alaskan Peninsula grizzly bears, are members of a Eurasian brown bear lineage, more closely related to the Kamchatka brown bear than to other North American brown bears. Until the systematics of North American brown bears is studied in more depth, other North American subspecies have been provisionally considered separate from U. a. horribilis.
Appearance
Size
Grizzly bears are among the largest subspecies of brown bear, exceeded in size only by the Kamchatka brown bears and the Kodiak bears. Grizzly bears vary in size depending on the population and the time of year.
The largest populations are the coastal grizzlies of the Alaska Peninsula, whose males are considerably heavier than the females.
The populations in northern interior Canada are much smaller; their body weights are actually similar to those of the American black bears of the area.
Average total length, shoulder height, and hindfoot length in this subspecies vary accordingly. Newborn bears may weigh less than a kilogram.
Characteristics
Although variable in color from blond to nearly black, grizzly bear fur is typically brown with darker legs and commonly white or blond tipped fur on the flank and back.
Grizzly bears overlap with black bears in range, but there are numerous features that can differentiate the two:
A pronounced muscular hump appears on adult grizzlies' shoulders; black bears do not have this hump.
Aside from the distinguishing hump a grizzly bear can be identified by a "dished in" profile of their face with short, rounded ears, whereas a black bear has a straight face profile and longer ears.
A grizzly bear can also be identified by its rump, which is lower than its shoulders; a black bear's rump is higher than its shoulders.
A grizzly bear's front claws are markedly longer than a black bear's claws.
Range
In North America, grizzly bears previously ranged from Alaska down to Mexico and as far east as the western shores of Hudson Bay; the species is now found in Alaska, south through much of western Canada, and into portions of the northwestern United States (including Washington, Idaho, Montana, and Wyoming), extending as far south as Yellowstone and Grand Teton National Parks. In Canada, there are approximately 25,000 grizzly bears occupying British Columbia, Alberta, the Yukon, the Northwest Territories, Nunavut, and the northern part of Manitoba.
An article published in 1954 suggested they may be present in the tundra areas of the Ungava Peninsula and the northern tip of Labrador-Quebec. In British Columbia, grizzly bears inhabit approximately 90% of their original territory. There were approximately 25,000 grizzly bears in British Columbia when the European settlers arrived. However, population size has since significantly decreased due to hunting and habitat loss. In 2008, it was estimated there were 16,000 grizzly bears. A revised grizzly bear count in 2012 for British Columbia was 15,075. Population estimates for British Columbia are based on hair-snagging, DNA-based inventories, mark-and-recapture, and a refined multiple regression model. In 2003, researchers from the University of Alberta spotted a grizzly on Melville Island in the high Arctic, which is the most northerly sighting ever documented.
Populations
Around 60,000 wild grizzly bears are located throughout North America, of which about 30,000 are found in Alaska and up to 29,000 in Canada. The Alaskan population of 30,000 individuals is the largest of any state or province in North America. Populations in Alaska are densest along the coast, where food supplies such as salmon are more abundant. The Admiralty Island National Monument protects the densest population: 1,600 bears on a 1,600-square-mile island. The majority of Canada's grizzlies live in British Columbia.
In the lower 48 United States, around 1,000 are found in the Northern Continental Divide in northwestern Montana. About 1,000 more live in the Greater Yellowstone Ecosystem in the tri-state area of Wyoming, Idaho and Montana. There are an estimated 70–100 grizzly bears living in northern and eastern Idaho. In September 2007, a hunter produced evidence of one bear in the Selway-Bitterroot Wilderness ecosystem, by killing a male grizzly bear there.
In the North Cascades ecosystem of northern Washington, grizzly bear populations are estimated at fewer than 20 bears, but there is a long-term management plan to reintroduce the bears to North Cascades National Park.
Extirpated populations and recovery
The grizzly bear's original range included much of the Great Plains and the southwestern states, but it has been extirpated in most of those areas. Combining Canada and the United States, grizzly bears inhabit approximately half the area of their historical range.
Although the once-abundant California grizzly bear appears prominently on the state flag of California and was the symbol of the Bear Flag Republic before the state of California's admission to the Union in 1850, the subspecies or population is currently extinct. The last known grizzlies in California were killed in the Sierra foothills east of Fresno in the early 1920s.
The killing of the last grizzly bear in Arizona in 1936 at Escudilla Mountain is included in Aldo Leopold's Sand County Almanac. There has been no confirmed sighting of a grizzly in Colorado since 1979.
Other provinces and the United States may use a combination of methods for population estimates. Therefore, it is difficult to say precisely what methods were used to produce total population estimates for Canada and North America, as they were likely developed from a variety of studies. The grizzly bear currently has legal protection in Mexico, European countries, some areas of Canada, and in all of the United States. However, it is expected that repopulating its former range will be a slow process, due to various reasons, including the bear's slow reproductive habits and the effects of reintroducing such a large animal to areas prized for agriculture and livestock.
Biology
Hibernation
Grizzly bears hibernate for five to seven months each year (except where the climate is warm—the California grizzly did not hibernate). During this time, female grizzly bears give birth to their offspring, who then consume milk from their mother and gain strength for the remainder of the hibernation period. To prepare for hibernation, grizzlies must prepare a den and consume an immense amount of food because they do not eat during hibernation. Grizzly bears also do not defecate or urinate throughout the entire hibernation period. The male grizzly bear's hibernation ends in early to mid-March, while females emerge in April or early May.
In preparation for winter, bears can gain hundreds of pounds during a period of hyperphagia before going into hibernation. The bear often waits for a substantial snowstorm before it enters its den: such behavior lessens the chances that predators will find the den. The dens are typically dug at high elevations on north-facing slopes. There is some debate among professionals as to whether grizzly bears technically hibernate: much of this debate revolves around body temperature and the ability of the bears to move around during hibernation on occasion. Grizzly bears can "partially" recycle their body wastes during this period. Although inland or Rocky Mountain grizzlies spend nearly half of their life in dens, coastal grizzlies with better access to food sources spend less time in dens. In some areas where food is very plentiful year round, grizzly bears skip hibernation altogether.
Reproduction
Except for females with cubs, grizzlies are normally solitary, active animals, but in coastal areas, grizzlies gather around streams, lakes, rivers, and ponds during the salmon spawn. Females (sows) produce one to four young (usually two) that are small, weighing only about half a kilogram at birth. A sow is protective of her offspring and will attack if she thinks she or her cubs are threatened.
Grizzly bears have one of the lowest reproductive rates of all terrestrial mammals in North America. This is due to numerous ecological factors. Grizzly bears do not reach sexual maturity until they are at least five years old. Once mated with a male in the summer, the female delays embryo implantation until hibernation, during which miscarriage can occur if the female does not receive the proper nutrients and caloric intake. On average, females produce two cubs in a litter and the mother cares for the cubs for up to two years, during which the mother will not mate.
Once the young leave or are killed, females may not produce another litter for three or more years, depending on environmental conditions. Male grizzly bears have very large territories, making it difficult to find a female's scent at such low population densities. Population fragmentation of grizzlies may destabilize the population through inbreeding depression. The gestation period for grizzly bears is approximately 180–250 days.
Litter size varies between one and four cubs, typically comprising twins or triplets. Cubs are always born in the mother's winter den while she is in hibernation. Female grizzlies are fiercely protective of their cubs, being able to fend off predators including larger male bears. Cubs feed entirely on their mother's milk until summer comes, after which they still drink milk but begin to eat solid foods. Cubs gain weight rapidly during their time with the mother, multiplying their birth weight many times over in the two years spent with her. Mothers may see their cubs in later years, but both avoid each other.
Lifespan
The average lifespan for a male is estimated at 22 years, with that of a female being slightly longer at 26. Females live longer than males due to their less dangerous life; they do not engage in seasonal breeding fights as males do. The oldest known wild inland grizzly was about 34 years old in Alaska; the oldest known coastal bear was 39, but most grizzlies die in their first year of life. Captive grizzlies have lived as long as 44 years.
Movement
They have a tendency to chase fleeing animals, and although anecdotal accounts credit grizzly bears (Ursus arctos horribilis) with very high sprint speeds, the maximum speed reliably recorded at Yellowstone is lower. In addition, they can climb trees.
Ecology
Diet
Although grizzlies are of the order Carnivora and have the digestive system of carnivores, they are normally omnivores: their diets consist of both plants and animals. They have been known to prey on large mammals, when available, such as moose, elk, caribou, white-tailed deer, mule deer, bighorn sheep, bison, and even black bears, though they are more likely to take calves and injured individuals rather than healthy adults. Grizzly bears feed on fish such as salmon, trout, and bass, and those with access to a more protein-enriched diet in coastal areas potentially grow larger than inland individuals. Grizzly bears also readily scavenge food or carrion left behind by other animals. Grizzly bears will also eat birds and their eggs, and gather in large numbers at fishing sites to feed on spawning salmon. They frequently prey on baby deer left in the grass, and occasionally they raid the nests of raptors such as bald eagles.
Coastal Canadian and Alaskan grizzlies are larger than those that reside in the Rocky Mountains. This is due, in part, to the richness of their diets. In Yellowstone National Park in the United States, the grizzly bear's diet consists mostly of whitebark pine nuts, tubers, grasses, various rodents, army cutworm moths, and scavenged carcasses. None of these, however, match the fat content of the salmon available in Alaska and British Columbia. With the high fat content of salmon, it is not uncommon to encounter exceptionally heavy grizzlies in Alaska. Grizzlies in Alaska supplement their diet of salmon and clams with sedge grass and berries. In areas where salmon are forced to leap waterfalls, grizzlies gather at the base of the falls to feed on and catch the fish. Salmon are at a disadvantage when they leap waterfalls because they cluster together at their bases and are therefore easier targets for the grizzlies. Grizzly bears are well-documented catching leaping salmon in their mouths at Brooks Falls in Katmai National Park and Preserve in Alaska. They are also very experienced in chasing the fish around and pinning them with their claws. At sites such as Brooks Falls and McNeil Falls in Alaska, big male grizzlies fight regularly for the best fishing spots. Grizzly bears along the coast also forage for razor clams, and frequently dig into the sand to seek them. During the spring and fall, directly before and after the salmon runs, berries and grass make up the mainstay of the diets of coastal grizzlies.
Inland grizzlies may eat fish too, most notably Yellowstone grizzlies eating Yellowstone cutthroat trout. The relationship between cutthroat trout and grizzlies is unique because it is the only example where Rocky Mountain grizzlies feed on spawning salmonid fish. However, grizzly bears themselves and invasive lake trout threaten the survival of the trout population, and there is a slight chance that the trout will be eliminated.
Grizzly bears occasionally prey on small mammals, such as marmots, ground squirrels, lemmings, and voles. The most famous example of such predation is in Denali National Park and Preserve, where grizzlies chase, pounce on, and dig up Arctic ground squirrels to eat. In some areas, grizzly bears prey on hoary marmots, overturning rocks to reach them, and in some cases preying on them when they are in hibernation. Larger prey includes bison and moose, which are sometimes taken by bears in Yellowstone National Park. Because bison and moose are dangerous prey, grizzlies usually use cover to stalk them and/or pick off weak individuals or calves. Grizzlies in Alaska also regularly prey on moose calves, which in Denali National Park may be their main source of meat. In fact, grizzly bears are such important predators of moose and elk calves in Alaska and Yellowstone that they may kill as many as 51 percent of elk or moose calves born that year. Grizzly bears have also been blamed in the decline of elk in Yellowstone National Park when the actual predators were thought to be gray wolves. In northern Alaska, grizzlies are a significant predator of caribou, mostly taking sick or old individuals or calves. Several studies show that grizzly bears may follow the caribou herds year-round in order to maintain their food supply. In northern Alaska, grizzly bears often encounter muskox. Despite the fact that muskox do not usually occur in grizzly habitat and that they are bigger and more powerful than caribou, predation on muskox by grizzlies has been recorded.
Grizzlies along the Alaskan coast also scavenge on dead or washed up whales. Usually such incidents involve only one or two grizzlies at a carcass, but up to ten large males have been seen at a time eating a dead humpback whale. Dead seals and sea lions are also consumed.
Although the diets of grizzly bears vary extensively based on seasonal and regional changes, plants make up a large portion of them, with some estimates as high as 80–90%. Various berries constitute an important food source when they are available. These can include blueberries, blackberries (Rubus fruticosus), salmon berries (Rubus spectabilis), cranberries (Vaccinium oxycoccos), buffalo berries (Shepherdia argentea), soapberries (Shepherdia canadensis), and huckleberries (Vaccinium parvifolium), depending on the environment. Insects such as ladybugs, ants, and bees are eaten if they are available in large quantities. In Yellowstone National Park, grizzly bears may obtain half of their yearly caloric needs by feeding on miller moths that congregate on mountain slopes. When food is abundant, grizzly bears will feed in groups. For example, many grizzly bears will visit meadows right after an avalanche or glacier slide. This is due to an influx of legumes, such as Hedysarum, which the grizzlies consume in massive amounts. When food sources become scarcer, however, they separate once again.
Interspecific competition
The relationship between grizzly bears and other predators is mostly one-sided; grizzly bears will approach feeding predators to steal their kill. In general, the other species will leave the carcasses for the bear to avoid competition or predation. Any parts of the carcass left uneaten are scavenged by smaller animals.
Wolves
With the reintroduction of gray wolves to Yellowstone, many visitors have witnessed a once common struggle between a keystone species, the grizzly bear, and its historic rival, the gray wolf. The interactions of grizzly bears with the wolves of Yellowstone have been under considerable study. Typically, the conflict will be in the defence of young or over a carcass, which is commonly an elk killed by wolves.
The grizzly bear uses its keen sense of smell to locate the kill. As the wolves and grizzly compete for the kill, one wolf may try to distract the bear while the others feed. The bear then may retaliate by chasing the wolves. If the wolves become aggressive with the bear, it is normally in the form of quick nips at its hind legs; the bear may then sit down, using its reach to defend itself in a full circle. Rarely do interactions such as these end in death or serious injury to either animal. One carcass is simply not usually worth the risk to the wolves (if the bear has the upper hand due to strength and size) or to the bear (if the wolves are too numerous or persistent).
While wolves usually dominate grizzly bears during interactions at wolf dens, both grizzly and black bears have been reported killing wolves and their cubs at wolf dens even when the wolves were acting in defence.
Big cats
Cougars generally give the bears a wide berth. Grizzlies have less competition with cougars than with other predators, such as coyotes, wolves, and other bears. When a grizzly descends on a cougar feeding on its kill, the cougar usually gives way to the bear. When a cougar does stand its ground, it will use its superior agility and its claws to harass the bear, yet stay out of its reach until one of them gives up. Grizzly bears occasionally kill cougars in disputes over kills. There have been several anecdotes, primarily from the late 19th and early 20th centuries, of cougars and grizzly bears killing each other in fights to the death.
The other big cat present in the United States which might pose a threat to bears is the jaguar; however, both species have been extirpated in the regions of the Southwest where their former habitats overlapped, and grizzlies remain so far absent from the regions along the U.S.-Mexico border, where jaguars appear to be returning.
Other bears
Black bears generally stay out of grizzly territory, but grizzlies may occasionally enter black bear terrain to obtain food sources both bears enjoy, such as pine nuts, acorns, mushrooms, and berries. When a black bear sees a grizzly coming, it either turns tail and runs or climbs a tree.
Black bears are not strong competition for prey because they have a more herbivorous diet. Confrontations are rare because of the differences in size, habitats, and diets of the bear species; when they do occur, it is usually the grizzly that is the aggressor. A black bear will fight back only when the grizzly is a smaller individual such as a yearling, or when it has no other choice but to defend itself. There is at least one confirmed observation of a grizzly bear digging out, killing, and eating a black bear while the latter was in hibernation.
The segregation of black bear and grizzly bear populations is possibly due to competitive exclusion. In certain areas, grizzly bears outcompete black bears for the same resources. For example, many Pacific coastal islands off British Columbia and Alaska support either the black bear or the grizzly, but rarely both.
In regions where both species coexist, they are divided by landscape gradients such as forest age, elevation, and openness of the land. Grizzly bears tend to favor old forests with high productivity, higher elevations, and more open habitats compared with black bears. A bear shot in autumn 1986 in Michigan was thought by some to be a grizzly–black bear hybrid, due to its unusually large size and its proportionately larger braincase and skull, but DNA testing was unable to determine whether it was a large American black bear or a grizzly bear.
Encounters between grizzly bears and polar bears have increased in recent times due to global warming. In such encounters the grizzly is usually the more aggressive animal and often dominates the fight, although a healthy adult polar bear appears able to dominate a grizzly.
However, conflict is not the only result of the two bears meeting; in some instances grizzly–polar bear hybrids (called grolar bears or pizzly bears depending on the sex of the parents) are produced.
Various small predators
Coyotes, foxes, and wolverines are generally regarded merely as pests to grizzlies rather than competition, though they may compete for smaller prey, such as ground squirrels and rabbits. All three will try to scavenge whatever they can from the bears. Wolverines are aggressive enough to occasionally persist until the bear finishes eating, leaving more scraps than normal for the smaller animal. Packs of coyotes have also displaced grizzly bears in disputes over kills. However, the removal of wolves and grizzlies in California may have greatly reduced the abundance of the endangered San Joaquin kit fox, presumably by allowing coyote numbers to increase unchecked.
Ecological role
The grizzly bear has several relationships with its ecosystem. One such relationship is a mutualistic relationship with fleshy-fruit bearing plants. After the grizzly consumes the fruit, the seeds are excreted and thereby dispersed in a germinable condition. Some studies have shown germination success is indeed increased as a result of seeds being deposited along with nutrients in feces. This makes grizzly bears important seed distributors in their habitats.
While foraging for tree roots, plant bulbs, or ground squirrels, bears stir up the soil. This process not only helps grizzlies access their food, but also increases species richness in alpine ecosystems. An area that contains both bear digs and undisturbed land has greater plant diversity than an area that contains just undisturbed land. Along with increasing species richness, soil disturbance causes nitrogen to be dug up from lower soil layers, and makes nitrogen more readily available in the environment. An area that has been dug by the grizzly bear has significantly more nitrogen than an undisturbed area.
Nitrogen cycling is not only facilitated by grizzlies digging for food, it is also accomplished via their habit of carrying salmon carcasses into surrounding forests. It has been found that spruce tree (Picea glauca) foliage within of the stream where the salmon have been obtained contains nitrogen originating from salmon on which the bears preyed. These nitrogen influxes to the forest are directly related to the presence of grizzly bears and salmon.
Grizzlies directly regulate prey populations and also help prevent overgrazing in forests by controlling the populations of other species in the food chain. An experiment in Grand Teton National Park in Wyoming in the United States showed removal of wolves and grizzly bears caused populations of their herbivorous prey to increase. This, in turn, changed the structure and density of plants in the area, which decreased the population sizes of migratory birds. This provides evidence grizzly bears represent a keystone predator, having a major influence on the entire ecosystem they inhabit.
When grizzly bears fish for salmon along the coasts of Alaska and British Columbia, they often only eat the skin, brain and roe of the fish. In doing so, they provide a food source for gulls, ravens, and foxes, all of which eat salmon as well; this benefits both the bear and the smaller predators.
Interaction with humans
Relationship with Native Americans
Native American tribes living among brown bears often view them with a mixture of awe and fear. North American brown bears have at times been so feared by the Natives that they were rarely hunted, especially when alone. Traditional grizzly hunts in some western tribes, such as the Gwichʼin, were conducted with the same preparation and ceremony as intertribal warfare and were never undertaken except by a company of four to ten warriors. The tribe members who dealt the killing blow were highly esteemed among their compatriots. Californian Natives actively avoided prime bear habitat and would not allow their young men to hunt alone for fear of bear attacks. During the Spanish colonial period, some tribes would seek aid from European colonists to deal with problem bears instead of hunting grizzlies themselves. Many authors in the American West wrote of Natives or voyageurs with lacerated faces and missing noses or eyes, due to attacks from grizzlies.
Many Native American tribes both respect and fear the brown bear. In Kwakiutl mythology, American black and brown bears became enemies when Grizzly Bear Woman killed Black Bear Woman for being lazy. Black Bear Woman's children, in turn, killed Grizzly Bear Woman's own cubs. Sleeping Bear Dunes is named after an Ojibwe legend, where a female bear and her cubs swam across Lake Michigan. According to the legend, the two cubs drowned and became the Manitou islands. The mother bear eventually got to shore and slept, waiting patiently for her cubs to arrive. Over the years, the sand covered the mother bear up, creating a huge sand dune.
Conflicts with humans
Grizzlies are considered more aggressive than black bears when defending themselves and their offspring. Unlike the smaller black bears, adult grizzlies do not climb trees well, and respond to danger by standing their ground and warding off their attackers. Mothers defending cubs are the most prone to attacking, and are responsible for 70% of humans killed by grizzlies.
Grizzly bears normally avoid contact with people. In spite of their obvious physical advantage, they rarely actively hunt humans. Most grizzly bear attacks result from a bear that has been surprised at very close range, especially if it has a supply of food to protect, or from female grizzlies protecting their offspring.
Increased human–bear interaction has created "problem bears": bears adapted to human activities or habitat. Exacerbating this is the fact that intensive human use of grizzly habitat coincides with the seasonal movement of grizzly bears. Aversive conditioning using rubber bullets, foul-tasting chemicals, or acoustic deterrent devices attempts to teach bears to associate humans with unpleasantness, but it is ineffective when the bears have already learned to positively associate humans with food. Such bears are translocated or killed because they pose a threat to humans. The B.C. government kills approximately 50 problem bears each year and overall spends more than one million dollars annually to address bear complaints, relocate bears or kill them. A bear that kills a human in a national park may be killed to prevent it from attacking again.
Bear awareness programs have been developed by communities in grizzly bear territory to help prevent conflicts with both black and grizzly bears. The main premise of these programs is to teach humans to manage foods that attract bears. Keeping garbage securely stored, harvesting fruit when ripe, securing livestock behind electric fences, and storing pet food indoors are all measures promoted by bear awareness programs. Revelstoke, British Columbia, is a community that demonstrates the success of this approach. In the ten years preceding the development of a community education program in Revelstoke, 16 grizzlies were destroyed and a further 107 were relocated away from the town. An education program run by Revelstoke Bear Aware was put in place in 1996. Since the program began, just four grizzlies have been eliminated and five have been relocated.
For back-country campers, hanging food between trees at a height unreachable to bears is a common procedure, although some grizzlies can climb and reach hanging food in other ways. An alternative to hanging food is to use a bear canister.
Traveling in groups of six or more can significantly reduce the chance of bear-related injuries while hiking in bear country. Grizzly bears are especially dangerous because of the force of their bite, which has been measured at over 8 megapascals (1160 psi). It has been estimated that a bite from a grizzly can crush a bowling ball.
Bear-watching
In the past 20 years in Alaska, ecotourism has boomed. While many people come to Alaska to bear-hunt, the majority come to watch the bears and observe their habits. Some of the best bear viewing in the world occurs on coastal areas of the Alaska Peninsula, including in Lake Clark National Park and Preserve, Katmai National Park and Preserve, and the McNeil River State Game Sanctuary and Refuge. Here bears gather in large numbers to feast on concentrated food sources, including sedges in the salt marshes, clams in the nearby tidal flats, salmon in the estuary streams, and berries on the neighboring hillsides.
Katmai National Park and Preserve is one of the best spots to view brown bears. The bear population in Katmai is estimated at 2,100. The park is located on the Alaska Peninsula about southwest of the city of Anchorage. At Brooks Camp, grizzlies can be watched catching salmon from atop a viewing platform; the site can even be viewed online via a webcam. In coastal areas of the park, such as Hallo Bay, Geographic Harbor, Swikshak Lagoon, American Creek, Big River, Kamishak River, Savonoski River, Moraine Creek, Funnel Creek, Battle Creek, Nantuk Creek, Kukak Bay, and Kaflia Bay, bears can be seen fishing alongside wolves, eagles, and river otters. Coastal areas host the highest population densities year round because a larger variety of food sources is available there, but Brooks Camp hosts the highest concentration of bears (100).
The McNeil River State Game Sanctuary and Refuge, on the McNeil River, is home to the greatest concentration of brown bears in the world. An estimated 144 individual bears have been identified at the falls in a single summer with as many as 74 at one time; 60 or more bears at the falls is a frequent sight, and it is not uncommon to see 100 bears at the falls throughout a single day. The McNeil River State Game Refuge, containing Chenik Lake and a smaller number of grizzly bears, has been closed to grizzly hunting since 1995. All of the Katmai-McNeil area is closed to hunting except for Katmai National Preserve, where regulated legal hunting takes place. In all, the Katmai-McNeil area has an estimated 2,500 grizzly bears.
Admiralty Island, in southeast Alaska, was known to early natives as Xootsnoowú, meaning "fortress of bears," and is home to the densest grizzly population in North America, with an estimated 1,600 grizzlies living on the island. One place to view grizzly bears on the island is Pack Creek, in the Stan Price State Wildlife Sanctuary, where 20 to 30 grizzlies can be observed at one time and, as at Brooks Camp, visitors can watch the bears from a raised platform. Kodiak Island, from which the Kodiak bear takes its name, is another place to view bears. An estimated 3,500 Kodiak grizzly bears inhabit the island, 2,300 of these in the Kodiak National Wildlife Refuge. The O'Malley River is considered the best place on Kodiak Island to view grizzly bears.
Protection
The grizzly bear is listed as threatened in the contiguous United States and endangered in parts of Canada. In May 2002, the Canadian Species at Risk Act listed the Prairie population (Alberta, Saskatchewan and Manitoba range) of grizzly bears as extirpated in Canada. As of 2002, grizzly bears were listed as of special concern under the COSEWIC registry and considered threatened by the U.S. Fish and Wildlife Service.
Within the United States, the U.S. Fish and Wildlife Service concentrates its effort to restore grizzly bears in six recovery areas. These are Northern Continental Divide (Montana), Yellowstone (Montana, Wyoming, and Idaho), Cabinet-Yaak (Montana and Idaho), Selway-Bitterroot (Montana and Idaho), Selkirk (Idaho and Washington), and North Cascades (Washington). The grizzly population in these areas is estimated at 1,000 in the Northern Continental Divide, 1,000 in Yellowstone, 40 in the Yaak portion of the Cabinet-Yaak, and 15 in the Cabinet portion (in northwestern Montana), 105 in Selkirk region of Idaho, 10–20 in the North Cascades, and none currently in Selway-Bitterroots, although there have been sightings. These are estimates because bears move in and out of these areas. In the recovery areas that adjoin Canada, bears also move back and forth across the international boundary.
The U.S. Fish and Wildlife Service claims the Cabinet-Yaak and Selkirk areas are linked through British Columbia, a claim that is disputed. U.S. and Canadian national parks, such as Banff National Park, Yellowstone and Grand Teton, and Theodore Roosevelt National Park are subject to laws and regulations designed to protect the bears.
On 9 January 2006, the US Fish and Wildlife Service proposed to remove Yellowstone grizzlies from the list of threatened and protected species. In March 2007, the U.S. Fish and Wildlife Service "de-listed" the population, effectively removing Endangered Species Act protections for grizzlies in the Yellowstone National Park area. Several environmental organizations, including the NRDC, brought a lawsuit against the federal government to relist the grizzly bear. On 22 September 2009, U.S. District Judge Donald W. Molloy reinstated protection due to the decline of the whitebark pine tree, whose nuts are an important source of food for the bears. In early March 2016, the U.S. Fish and Wildlife Service proposed to withdraw Endangered Species Act protections from grizzly bears in and around Yellowstone National Park. The population has risen from 136 bears in 1975 to an estimated 700 in 2017, and was "delisted" in June 2017. It was argued that the population had sufficiently recovered from the threat of extinction; however, numerous conservation and tribal organizations argued that the grizzly population remained genetically vulnerable. They successfully sued the administration (Crow Tribe et al v. Zinke), and on 30 July 2019 the Yellowstone grizzly was officially returned to federal protection.
In Alberta, Canada, intense DNA hair-snagging studies in 2000 showed the grizzly population to be increasing faster than formerly believed, and Alberta Sustainable Resource Development calculated a population of 841 bears. In 2002, the Endangered Species Conservation Committee recommended that the Alberta grizzly bear population be designated as threatened due to recent estimates of grizzly bear mortality rates that indicated the population was in decline. A recovery plan released by the provincial government in March 2008 indicated the grizzly population was lower than previously believed. In 2010, the provincial government formally listed its population of about 700 grizzlies as "Threatened".
Environment Canada considers the grizzly bear to be a "special concern" species, as it is particularly sensitive to human activities and natural threats. In Alberta and British Columbia, the species is considered to be at risk. In 2008, it was estimated there were 16,014 grizzly bears in the British Columbia population, which was lower than previously estimated due to refinements in the population model.
Conservation efforts
Conservation efforts have become an increasingly vital investment over recent decades, as population numbers have dramatically declined. The establishment of parks and protected areas is one of the main approaches being taken to help reestablish the low grizzly bear population in British Columbia. One example of these efforts is the Khutzeymateen Grizzly Bear Sanctuary, located along the north coast of British Columbia, which is composed of key habitat for this threatened species. Regulations such as limited public access, as well as a strict no-hunting policy, have enabled this location to be a safe haven for local grizzlies. When choosing the location of a park focused on grizzly bear conservation, factors such as habitat quality and connectivity to other habitat patches are considered.
The Refuge for Endangered Wildlife, located on Grouse Mountain in Vancouver, is an example of a different type of conservation effort for the diminishing grizzly bear population. The refuge is a five-acre site that has functioned as a home for two orphaned grizzly bears since 2001. Its purpose is to raise public awareness of and provide education about grizzly bears, as well as to provide an area for research and observation of this secluded species.
Another factor currently being taken into consideration when designing conservation plans for future generations is anthropogenic barriers in the form of urban development and roads. These elements act as obstacles, causing fragmentation of the remaining grizzly bear habitat and preventing gene flow between subpopulations (for example, around Banff National Park). This, in turn, creates a decline in genetic diversity, lowering the overall fitness of the general population. In light of these issues, conservation plans often include migration corridors by way of long strips of "park forest" to connect less developed areas, or by way of tunnels and overpasses over busy roads. Using GPS collar tracking, scientists can study whether or not these efforts are actually making a positive contribution towards resolving the problem. To date, most corridors have been found to be infrequently used, and thus genetic isolation is currently occurring, which can result in inbreeding and an increased frequency of deleterious genes through genetic drift. Current data suggest female grizzly bears are disproportionately less likely than males to use these corridors, which can prevent mate access and decrease the number of offspring.
In the United States, national efforts have been made since 1982 for the recovery plan of grizzly bears. The Interagency Grizzly Bear Recovery Committee is one of many organizations committed to the recovery of grizzly bears in the lower 48 states. There are five recovery zones for grizzly bears in the lower 48 states including the North Cascades ecosystem in Washington state. The National Park Service and U.S. Fish and Wildlife initiated the process of an environmental impact statement in the fall of 2014 to begin the recovery process of grizzly bears to the North Cascades region. A final plan and environmental impact statement was released in the spring of 2017 with a record of decision to follow.
In 2017, the Trump administration stripped parklands of previous regulations that protected wildlife living on the land, putting species such as the grizzly bear at risk. Specifically, federal protections on the grizzly bear in Yellowstone National Park were removed. Regulations that protected the bears against certain hunting methods under Park Service rules (specifically on park lands in Alaska) were revisited by the Department of the Interior. The National Parks Conservation Association (NPCA) "supports common sense opportunities for hunting in national preserves", but the state of Alaska's wildlife management allows for the killing of more bears in order to increase the populations of moose and caribou. The rise in moose and caribou works in favor of sport hunters. Theresa Pierno, President and CEO of the National Parks Conservation Association, stated, "The State of Alaska's lawsuit against the Park Service and Fish and Wildlife Service seeks to overturn common sense regulations, which underwent a thorough and transparent public process. More than 70,000 Americans said 'no' to baiting bears with grease-soaked donuts in Denali National Park and Preserve. The public was right to want to stop sport hunters from crawling into bears' dens and using flashlights to wake and kill mother bears and their cubs. The state's attempt to dismantle the results of this public process jeopardizes the stewardship of federal public lands, which belong to all Americans."
A press release on 3 October 2022 stated that a federal district court based in Alaska would revisit a National Park Service rule relating to hunting practices, including the baiting of bears. The Interior Department and Park Service's decision permits the rule to remain in place while revisions are conducted.
| Biology and health sciences | Bears | Animals |
24479046 | https://en.wikipedia.org/wiki/Planetary%20mass | Planetary mass | In astronomy, planetary mass is a measure of the mass of a planet-like astronomical object. Within the Solar System, planets are usually measured in the astronomical system of units, where the unit of mass is the solar mass (), the mass of the Sun. In the study of extrasolar planets, the unit of measure is typically the mass of Jupiter () for large gas giant planets, and the mass of Earth () for smaller rocky terrestrial planets.
The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides. There are three main ways in which planetary mass can be calculated:
If the planet has natural satellites, its mass can be calculated using Newton's law of universal gravitation to derive a generalization of Kepler's third law that includes the mass of the planet and its moon. This permitted an early measurement of Jupiter's mass in units of the solar mass (a worked sketch of this method follows the list).
The mass of a planet can be inferred from its effect on the orbits of other planets. Between 1931 and 1948, flawed applications of this method led to incorrect calculations of the mass of Pluto.
Data on a planet's gravitational influence on the orbits of space probes can be used. Examples include the Voyager probes to the outer planets and the MESSENGER spacecraft to Mercury.
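As a rough illustration of the first method, Newton's generalization of Kepler's third law gives M + m ≈ 4π²a³/(GT²) for a satellite with orbital semi-major axis a and period T. The short Python sketch below applies this to Jupiter using textbook values for Callisto's orbit; the numerical values are assumptions of this sketch, not figures quoted in this article.

    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
    a = 1.8827e9           # semi-major axis of Callisto's orbit, m (assumed value)
    T = 16.689 * 86400     # Callisto's orbital period in seconds (assumed value)

    # Newtonian form of Kepler's third law; Callisto's own mass is negligible here.
    M_jupiter = 4 * math.pi**2 * a**3 / (G * T**2)

    M_sun = 1.989e30       # solar mass, kg (assumed value)
    print(M_jupiter)           # roughly 1.9e27 kg
    print(M_sun / M_jupiter)   # roughly 1048, i.e. Jupiter is about 1/1048 solar mass

Note that the mass ratio can also be formed from the products GM of the two bodies, in which case the poorly known constant G cancels; this is one reason masses are preferred in solar units.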
Also, numerous other methods can give reasonable approximations. For instance, Varuna, a potential dwarf planet, rotates very quickly upon its axis, as does the dwarf planet Haumea. Haumea must have a very high density in order not to be ripped apart by centrifugal forces. Through such calculations, one can place a lower limit on the object's density; if the object's size is also known, a limit on the mass can be determined. See the links in the aforementioned articles for more details on this.
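To make the density argument concrete: a self-gravitating sphere rotating with period T sheds material at its equator unless its mean density exceeds roughly 3π/(GT²), the density at which equatorial material would reach orbital speed. A minimal Python sketch, assuming Haumea's commonly quoted rotation period of about 3.9 hours (an assumption of this sketch):

    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
    T = 3.9155 * 3600      # Haumea's rotation period in seconds (assumed value)

    # Minimum mean density for the body not to shed mass at the equator:
    # rho_min = 3 * pi / (G * T^2)
    rho_min = 3 * math.pi / (G * T**2)
    print(rho_min)         # roughly 700 kg/m^3

Combined with a size estimate, such a density bound translates directly into a bound on the mass.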
Choice of units
The choice of solar mass, , as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in terms of solar masses to twelve significant figures: the same mass, in terms of kilograms or other Earth-based units, is only known to five significant figures, which is less than a millionth as precise.
The difference comes from the way in which planetary masses are calculated. It is impossible to "weigh" a planet, much less the Sun, against the sort of mass standards which are used in the laboratory. On the other hand, the orbits of the planets give a great range of observational data as to the relative positions of each body, and these positions can be compared to their relative masses using Newton's law of universal gravitation (with small corrections for general relativity where necessary). To convert these relative masses to Earth-based units such as the kilogram, it is necessary to know the value of the Newtonian constant of gravitation, G. This constant is remarkably difficult to measure in practice, and its value is known to a relative precision of only a few parts in 100,000.
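The practical consequence is that the product GM for a body can be known to many digits while M in kilograms cannot, because dividing by G reintroduces G's comparatively large uncertainty. A hedged sketch in Python; the GM value and the uncertainty figures below are standard published values assumed for illustration, not quoted from this article:

    G  = 6.67430e-11            # CODATA value of G, m^3 kg^-1 s^-2 (assumed)
    dG = 0.00015e-11            # its standard uncertainty (assumed)

    GM_sun = 1.32712440018e20   # heliocentric gravitational constant, m^3 s^-2,
                                # known to some ten significant figures (assumed)

    M_sun = GM_sun / G          # about 1.9885e30 kg
    rel_err = dG / G            # about 2.2e-5: only ~5 significant figures survive
    print(M_sun, rel_err)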
The solar mass is quite a large unit on the scale of the Solar System. The largest planet, Jupiter, is 0.09% the mass of the Sun, while the Earth is about three millionths (0.0003%) of the mass of the Sun.
When comparing the planets among themselves, it is often convenient to use the mass of the Earth (ME or ) as a standard, particularly for the terrestrial planets. For the mass of gas giants, and also for most extrasolar planets and brown dwarfs, the mass of Jupiter () is a convenient comparison.
Planetary mass and planet formation
The mass of a planet has consequences for its structure, especially while it is in the process of formation. A body with enough mass can overcome its compressive strength and achieve a rounded shape (roughly hydrostatic equilibrium). Since 2006, such objects have been classified as dwarf planets if they orbit the Sun directly (that is, if they are not satellites of other planets). The threshold depends on a number of factors, such as composition, temperature, and the presence of tidal heating. The smallest body known to be rounded is Saturn's moon Mimas; on the other hand, bodies as large as the Kuiper belt object Salacia may not have overcome their compressive strengths. Smaller bodies like asteroids are classified as "small Solar System bodies".
A dwarf planet, by definition, is not massive enough to have gravitationally cleared its neighbouring region of planetesimals. The mass needed to do so depends on location: Mars clears its orbit in its current location, but would not do so if it orbited in the Oort cloud.
The smaller planets retain only silicates and metals, and are terrestrial planets like Earth or Mars. The interior structure of rocky planets is mass-dependent: for example, plate tectonics may require a minimum mass to generate sufficient temperatures and pressures for it to occur. Geophysical definitions would also include the dwarf planets and moons in the outer Solar System, which are like terrestrial planets except that they are composed of ice and rock rather than rock and metal: the largest such bodies are Ganymede, Titan, Callisto, Triton, and Pluto.
If the protoplanet grows by accretion to more than about twice the mass of Earth, its gravity becomes large enough to retain hydrogen in its atmosphere. In this case, it will grow into an ice giant or gas giant. As such, Earth and Venus are close to the maximum size a planet can usually grow to while still remaining rocky. If the planet then begins migration, it may move well within its system's frost line, and become a hot Jupiter orbiting very close to its star, then gradually losing small amounts of mass as the star's radiation strips its atmosphere.
The theoretical minimum mass a star can have, and still undergo hydrogen fusion at the core, is estimated to be about , though fusion of deuterium can occur at masses as low as 13 Jupiter masses.
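For scale, the deuterium-burning threshold can be converted to solar units with the standard Jupiter-to-Sun mass ratio (the ratio below is an assumed reference value, not a figure from this article):

    M_jup_in_suns = 9.546e-4        # Jupiter's mass in solar masses (assumed)
    print(13 * M_jup_in_suns)       # about 0.0124: deuterium fusion begins
                                    # near 1.2% of a solar mass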
Values from the DE405 ephemeris
The DE405/LE405 ephemeris from the Jet Propulsion Laboratory is a widely used ephemeris dating from 1998 and covering the whole Solar System. As such, the planetary masses form a self-consistent set, which is not always the case for more recent data (see below).
Earth mass and lunar mass
Where a planet has natural satellites, its mass is usually quoted for the whole system (planet + satellites), as it is the mass of the whole system which acts as a perturbation on the orbits of other planets. The distinction is very slight, as natural satellites are much smaller than their parent planets (as can be seen in the table above, where only the largest satellites are even listed).
The Earth and the Moon form a case in point, partly because the Moon is unusually large (just over 1% of the mass of the Earth) in relation to its parent planet compared with other natural satellites. There are also very precise data available for the Earth–Moon system, particularly from the Lunar Laser Ranging experiment (LLR).
The geocentric gravitational constant – the product of the mass of the Earth times the Newtonian constant of gravitation – can be measured to high precision from the orbits of the Moon and of artificial satellites. The ratio of the two masses can be determined from the slight wobble in the Earth's orbit caused by the gravitational attraction of the Moon.
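A minimal sketch of that two-step procedure, assuming round published values for the geocentric gravitational constant and the Earth–Moon mass ratio (both numbers are assumptions of this sketch, not taken from the ephemeris tables):

    G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2 (assumed)

    GM_earth = 3.986004418e14    # geocentric gravitational constant, m^3 s^-2 (assumed)
    ratio = 81.3005              # Earth mass / Moon mass, from lunar ranging (assumed)

    M_earth = GM_earth / G       # about 5.97e24 kg
    M_moon = M_earth / ratio     # about 7.35e22 kg
    print(M_earth, M_moon)

As in the solar case, the kilogram values inherit the large uncertainty of G, while the ratio itself is known far more precisely.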
More recent values
The construction of a full, high-precision Solar System ephemeris is an onerous task. It is possible (and somewhat simpler) to construct partial ephemerides which only concern the planets (or dwarf planets, satellites, asteroids) of interest by "fixing" the motion of the other planets in the model. The two methods are not strictly equivalent, especially when it comes to assigning uncertainties to the results: however, the "best" estimates – at least in terms of quoted uncertainties in the result – for the masses of minor planets and asteroids usually come from partial ephemerides.
Nevertheless, new complete ephemerides continue to be prepared, most notably the EPM2004 ephemeris from the Institute of Applied Astronomy of the Russian Academy of Sciences. EPM2004 is based on separate observations between 1913 and 2003, more than seven times as many as DE405, and gave more precise masses for Ceres and five asteroids.
IAU best estimates (2009)
A new set of "current best estimates" for various astronomical constants was approved by the 27th General Assembly of the International Astronomical Union (IAU) in August 2009.
IAU current best estimates (2012)
The 2009 set of "current best estimates" was updated in 2012 by resolution B2 of the IAU XXVIII General Assembly.
Improved values were given for Mercury and Uranus (and also for the Pluto system and Vesta).
| Physical sciences | Planetary science | Astronomy |
21495014 | https://en.wikipedia.org/wiki/Rook%20%28bird%29 | Rook (bird) | The rook (Corvus frugilegus) is a member of the family Corvidae in the passerine order of birds. It is found in the Palearctic, its range extending from Scandinavia and western Europe to eastern Siberia. It is a large, gregarious, black-feathered bird, distinguished from similar species by the whitish featherless area on the face. Rooks nest collectively in the tops of tall trees, often close to farms or villages; the groups of nests are known as rookeries.
Rooks are mainly resident birds, but the northernmost populations may migrate southwards to avoid the harshest winter conditions. The birds form flocks in winter, often in the company of other Corvus species or jackdaws. They return to their rookeries, and breeding takes place in spring. They forage on arable land and pasture, probing the ground with their strong bills and feeding largely on grubs and soil-based invertebrates, but they also consume cereals and other plant material. Historically, farmers have accused the birds of damaging their crops and have made efforts to drive them away or kill them. Like other corvids, they are intelligent birds with complex behavioural traits and an ability to solve simple problems.
Taxonomy and etymology
The rook was given its binomial name by the Swedish naturalist Carl Linnaeus in 1758 in his Systema Naturae. The binomial is from Latin; Corvus means "raven", and frugilegus means "fruit-gathering". It is derived from frux (oblique frug-), meaning "fruit", and legere, meaning "to pick". The English-language common name rook is ultimately derived from the bird's harsh call. Two subspecies are recognised; the western rook (C. f. frugilegus) ranges from western Europe to southern Russia and extreme northwestern China, while the eastern rook (C. f. pastinator) ranges from central Siberia and northern Mongolia eastwards across the rest of Asia. Collective nouns for rooks include building, parliament, clamour and storytelling. Their colonial nesting behaviour gave rise to the term rookery.
Description
The rook is a fairly large bird. It has black feathers that often show a blue or bluish-purple sheen in bright sunlight. The feathers on the head, neck and shoulders are particularly dense and silky. The legs and feet are generally black, the bill grey-black and the iris dark brown. In adults, a bare area of whitish skin in front of the eye and around the base of the bill is distinctive, and enables the rook to be distinguished from other members of the crow family. This bare patch gives the false impression that the bill is longer than it is and the head more domed. The feathering around the legs also appears shaggier and laxer than that of the similarly sized carrion crow, the only other member of its genus with which the rook is likely to be confused. Additionally, when seen in flight, the wings of a rook are proportionally longer and narrower than those of the carrion crow. The average lifespan is six years.
The juvenile plumage is black with a slight greenish gloss, except for the hind neck, back and underparts, which are brownish-black. The juvenile is superficially similar to a young crow because it lacks the bare patch at the base of the bill, but it has a thinner beak and loses the facial feathers after about six months.
Distribution and habitat
Western rooks are resident in Ireland, Britain and much of north and central Europe but vagrant to Iceland and parts of Scandinavia, where they typically live south of 60° latitude. They are found in habitats that common ravens dislike, choosing open agricultural areas with pasture or arable land, as long as there are suitable tall trees for breeding. They generally avoid forests, swamps, marshes, heaths and moorland. They are in general lowland birds, but where suitable feeding habitat exists they may breed at higher elevations. Rooks are often associated with human settlements, nesting near farms, villages and open towns, but not in large, heavily built-up areas. The eastern subspecies in Asia differs in being slightly smaller on average, and having a somewhat more fully feathered face. In the north of its range the species has a tendency to move south during autumn, and more southern populations are apt to range sporadically.
The species has been introduced into New Zealand, with several hundred birds being released there from 1862 to 1874. Although their range is very localized, the species is now regarded as an invasive pest and is the subject of active control by many local councils. This has wiped out the larger breeding colonies in New Zealand, and the remaining small groups have become more wary.
Behaviour and ecology
Rooks are highly gregarious birds and are generally seen in flocks of various sizes. Males and females pair-bond for life and pairs stay together within flocks. In the evening, the birds often congregate at their rookery before moving off to their chosen communal roosting site. Flocks increase in size in autumn, with different groups amalgamating and birds congregating at dusk before roosting, often in very large numbers and in the company of jackdaws. Roosting usually takes place in woodland or plantations, but a small minority of birds may continue to roost at their rookery all winter, and adult males may roost collectively somewhere nearby. The birds move off promptly in the morning, dispersing widely to their feeding grounds.
Large groups of rooks (in breeding colonies or night roost sites) can contribute to changes in soil properties. The amount of ornithogenic material in these soils is very high.
Foraging mostly takes place on the ground, with the birds striding about, or occasionally hopping, and probing the soil with their powerful beaks. Flight is direct, with regular wingbeats and little gliding while in purposeful flight; in contrast, the birds may glide more extensively when wheeling about in leisure flight near the rookery. In the autumn, flocks sometimes perform spectacular aerial group flights, including synchronised movements and individual antics such as dives, tumbles and rolls.
Diet and feeding
Examination of stomach contents shows that about 60% of the diet is vegetable matter and the rest is of animal origin. Vegetable foods include cereals, potatoes, roots, fruit, acorns, berries and seeds, while the animal part is predominantly earthworms and insect larvae, which the bird finds by probing the ground with its strong bill. It also eats beetles, spiders, millipedes, slugs, snails, small mammals, small birds, their eggs and young, and occasionally carrion.
In urban sites, human food scraps are taken from rubbish dumps and streets, usually in the early hours or at dusk when it is relatively quiet. Like other corvids, rooks will sometimes favour sites with a high level of human interaction, and can often be found scavenging for food in tourist areas or pecking open garbage sacks. Rooks have even been trained to pick up litter in a theme park in France.
Courtship
The male usually initiates courtship, on the ground or in a tree, by bowing several times to the female with drooping wings, at the same time cawing and fanning his tail. The female may respond by crouching down, arching her back and quivering her wings slightly, or she may take the initiative by lowering her head and wings and erecting her partially spread tail over her back. Further similar displays are often followed by begging behaviour by the female and by the male presenting her with food, before coition takes place on the nest. At this stage, nearby male rooks often mob or attack the mating pair, and in the ensuing struggle, any male that finds himself on top of the female will attempt to copulate with her. She terminates these unwanted advances by exiting the nest and perching nearby. A mated pair of rooks will often fondle each other's bills, and this behaviour is also sometimes seen in autumn.
Breeding
Nesting in a rookery is always colonial, usually in the very tops of large trees, often on the remnants of the previous year's nest. In hilly regions, rooks may nest in smaller trees or bushes, and exceptionally on chimneys or church spires. Both sexes participate in nest-building, with the male finding most of the materials and the female putting them in place. The nest is cup-shaped and composed of sticks, consolidated with earth and lined with grasses, moss, roots, dead leaves and straw. Small branches and twigs are broken off trees, though as many are likely to be stolen from nearby nests as are collected direct, and the lining material is also often taken from other nests.
Eggs are usually three to five in number (sometimes six and occasionally seven) and may be laid by the end of March or early April in Ireland and Britain, but in the harsher conditions of eastern Europe and Russia, it may be early May before the clutch is completed. The background colour is bluish-green to greyish-green, but this is almost completely obscured by heavy blotching of ashy grey and brown. The eggs are incubated for 16–18 days, almost entirely by the female, who is fed by the male. After hatching, the male brings food to the nest while the female broods the young. After ten days, she joins the male in bringing food, which is carried in a throat pouch. The young are fledged by the 32nd or 33rd day but continue to be fed by the parents for some time thereafter. There is normally a single clutch each year, but there are records of birds attempting to breed in the autumn.
In autumn, the young birds of the summer collect into large flocks together with unpaired birds of previous seasons, often in company with jackdaws. It is during this time of year that spectacular aerial displays are performed by the birds. The species is monogamous, with the adults forming long-term pair bonds. Partners often support each other in agonistic encounters and a bird may return to its partner after a quarrel where bill twining, an affiliative behaviour, may take place.
Voice
The call is usually described as caw or kaah, and is somewhat similar to that of the carrion crow, but less raucous. It is variable in pitch and has several variants, used in different situations. The call is given both in flight and while perched, at which time the bird fans its tail and bows while making each caw. Calls in flight are usually given singly, in contrast to the carrion crow's, which are in groups of three or four. Other sounds are made around the rookery; a high-pitched squawk, a "burring" sound and a semi-chirruping call. Solitary birds occasionally "sing", apparently to themselves, uttering strange clicks, wheezes and human-like notes; the song has been described as a "base or guttural reproduction of the varied and spluttering song" of starlings.
Intelligence
Although rooks have not been observed making probable use of tools outside of captivity, captive rooks have shown the ability to use and understand puzzles. One of the most commonly tested puzzles is the trap-tube problem, in which rooks learned how to pull their reward out of the tube while avoiding a trap on one side.
In captivity, when confronted with problems, rooks have been documented as one of several species of birds capable of using tools, as well as modifying tools to meet their needs. Rooks learned that if they pushed a stone off a ledge into a tube, they would get food. They then discovered they could find a stone and carry it to the tube if no stone was there already. They also used sticks and wire, and figured out how to bend a wire into a hook to reach an item. Rooks also grasp the notion of water levels: when given stones and a tube of water with a reward floating on the surface, they not only understood that they needed to drop in the stones, but also which stone was best to use.
In one set of experiments, rooks managed to knock a reward off a platform by rolling a stone down a tube toward the base of the platform. Rooks also seemed to understand the idea that a heavier stone will be more likely to knock the platform over. In this same test, rooks showed they understood that they needed to pick a stone of a shape that would roll easily.
Rooks also show the ability to work together to receive a reward: to reach it, multiple rooks had to pull strings along the lid of a box in order for it to move. Rooks seem to have no preference between working as a group and working singly.
They also seem to have a notion of gravity, comparable to a six-month-old baby and exceeding the abilities of chimpanzees. Although they do not use tools in the wild, research studies have demonstrated that rooks can do so in cognition tests where tools are required, and can rival, and in some circumstances outperform, chimpanzees.
Relationship with humans
Farmers have observed rooks in their fields and thought of them as vermin. After a series of poor harvests in the early 1500s, Henry VIII introduced a Vermin Act in 1532 "ordeyned to dystroye Choughes (i.e. jackdaws), Crowes and Rokes" to protect grain crops from their predations. This act was only enforced in piecemeal fashion, but Elizabeth I passed the Act for the Preservation of Grayne in 1566, which was taken up with more vigour, and large numbers of birds were culled.
Francis Willughby mentions rooks in his Ornithology (1678): "These birds are noisome to corn and grain: so that the husbandmen are forced to employ children, with hooting and crackers, and rattles of metal, and, finally by throwing of stones, to scare them away." He also mentions scarecrows "placed up and down the fields, and dressed up in a country habit, which the birds taking for countrymen dare not come near the grounds where they stand". It was some time before more observant naturalists like John Jenner Weir and Thomas Pennant appreciated that in consuming ground-based pests, the rooks were doing more good than harm.
Rookeries were often perceived as nuisances in rural Britain, and it was previously the practice to hold rook shoots where the juvenile birds, known as "branchers", were shot before they were able to fly. These events were both a social occasion and a source of food (the rook becomes inedible once mature) as rook and rabbit pie was considered a delicacy.
Rooks have a wide distribution and large total population. The main threats they face are from changes in agricultural land use, the application of seed dressings and pesticides, and persecution through shooting. Although total numbers of birds may be declining slightly across the range, this is not at so rapid a rate as to cause concern, and the International Union for Conservation of Nature has assessed the bird's conservation status as being of "least concern".
| Biology and health sciences | Corvoidea | Animals |
21496038 | https://en.wikipedia.org/wiki/Lactation | Lactation | Lactation describes the secretion of milk from the mammary glands and the period of time that a mother lactates to feed her young. The process naturally occurs with all sexually mature female mammals, although it may predate the origin of mammals. The process of feeding milk to the young is called nursing and, in humans, breastfeeding. Newborn infants often produce some milk from their own breast tissue, known colloquially as witch's milk.
In most species, lactation is a sign that the female has been pregnant at some point in her life, although in humans and goats it can happen without pregnancy. Nearly every species of mammal has teats, except for the monotremes, egg-laying mammals, which instead release milk through ducts in the abdomen. In only a handful of mammals, certain bat species among them, is milk production a normal male function.
Galactopoiesis is the maintenance of milk production. This stage requires prolactin. Oxytocin is critical for the milk let-down reflex in response to suckling. Galactorrhea is milk production unrelated to nursing. It can occur in males and females of many mammal species as a result of hormonal imbalances such as hyperprolactinaemia.
Purpose
The chief function of lactation is to provide nutrition and immune protection to the young after birth. Due to lactation, the mother-young pair can survive even if food is scarce or too hard for the young to attain, expanding the range of environmental conditions the species can withstand. The costly investment of energy and resources into milk is outweighed by the benefit to offspring survival. In almost all mammals, lactation induces a period of infertility (in humans, lactational amenorrhea), which serves to provide the optimal birth spacing for survival of the offspring.
Human
Hormonal influences
From the eighteenth week of pregnancy (the second and third trimesters), a woman's body produces hormones that stimulate the growth of the milk duct system in the breasts:
Progesterone influences the growth in size of alveoli and lobes; high levels of progesterone inhibit lactation before birth. Progesterone levels drop after birth; this triggers the onset of copious milk production.
Estrogen stimulates the milk duct system to grow and differentiate. Like progesterone, high levels of estrogen also inhibit lactation. Estrogen levels also drop at delivery and remain low for the first several months of breastfeeding. Breastfeeding mothers should avoid estrogen-based birth control methods, as a spike in estrogen levels may reduce a mother's milk supply.
Prolactin contributes to the increased growth and differentiation of the alveoli, and also influences differentiation of ductal structures. High levels of prolactin during pregnancy and breastfeeding also increase insulin resistance, increase growth factor levels (IGF-1) and modify lipid metabolism in preparation for breastfeeding. During lactation, prolactin is the main factor maintaining tight junctions of the ductal epithelium and regulating milk production through osmotic balance.
Human placental lactogen (HPL) – from the second month of pregnancy, the placenta releases large amounts of HPL. This hormone is closely associated with prolactin and appears to be instrumental in breast, nipple, and areola growth before birth.
Follicle stimulating hormone (FSH), luteinizing hormone (LH), and human chorionic gonadotropin (hCG), through control of estrogen and progesterone production, and also, by extension, prolactin and growth hormone production, are essential.
Growth hormone (GH) is structurally very similar to prolactin and contributes independently to galactopoiesis.
Adrenocorticotropic hormone (ACTH) and glucocorticoids such as cortisol have an important lactation inducing function in several animal species, including humans. Glucocorticoids play a complex regulating role in the maintenance of tight junctions.
Thyroid-stimulating hormone (TSH) and thyrotropin-releasing hormone (TRH) are very important galactopoietic hormones whose levels are naturally increased during pregnancy.
Oxytocin contracts the smooth muscle of the uterus during and after birth, and during orgasm(s). After birth, oxytocin contracts the smooth muscle layer of band-like cells surrounding the alveoli to squeeze the newly produced milk into the duct system. Oxytocin is necessary for the milk ejection reflex, or let-down, in response to suckling, to occur.
It is also possible to induce lactation without pregnancy through combinations of birth control pills, galactagogues, and milk expression using a breast pump.
Secretory differentiation
During the latter part of pregnancy, the woman's breasts enter into the Secretory Differentiation stage. This is when the breasts make colostrum (see below), a thick, sometimes yellowish fluid. At this stage, high levels of progesterone inhibit most milk production. It is not a medical concern if a pregnant woman leaks any colostrum before her baby's birth, nor is it an indication of future milk production.
Secretory activation
At birth, prolactin levels remain high, while the delivery of the placenta results in a sudden drop in progesterone, estrogen, and HPL levels. This abrupt withdrawal of progesterone in the presence of high prolactin levels stimulates the copious milk production of Secretory Activation.
When the breast is stimulated, prolactin levels in the blood rise, peak in about 45 minutes, and return to the pre-breastfeeding state about three hours later. The release of prolactin triggers the cells in the alveoli to make milk. Prolactin also transfers to the breast milk. Some research indicates that prolactin in milk is greater at times of higher milk production, and lower when breasts are fuller, and that the highest levels tend to occur between 2 a.m. and 6 a.m.
Other hormones—notably insulin, thyroxine, and cortisol—are also involved, but their roles are not yet well understood. Although biochemical markers indicate that Secretory Activation begins about 30–40 hours after birth, mothers do not typically begin feeling increased breast fullness (the sensation of milk "coming in the breast") until 50–73 hours (2–3 days) after birth.
Colostrum is the first milk a breastfed baby receives. It contains higher amounts of white blood cells and antibodies than mature milk, and is especially high in immunoglobulin A (IgA), which coats the lining of the baby's immature intestines, and helps to prevent pathogens from invading the baby's system. Secretory IgA also helps prevent food allergies. Over the first two weeks after the birth, colostrum production slowly gives way to mature breast milk.
Autocrine control - Galactopoiesis
The hormonal endocrine control system drives milk production during pregnancy and the first few days after the birth. When the milk supply is more firmly established, the autocrine (or local) control system takes over.
During this stage, the more that milk is removed from the breasts, the more the breast will produce milk. Research also suggests that draining the breasts more fully also increases the rate of milk production. Thus the milk supply is strongly influenced by how often the baby feeds and how well it is able to transfer milk from the breast. Low supply can often be traced to:
not feeding or pumping often enough
inability of the infant to transfer milk effectively caused by, among other things:
jaw or mouth structure deficits
poor latching technique
premature birth
drowsiness in the baby, due to illness, medication or recovery from medical procedures
rare maternal endocrine disorders
hypoplastic breast tissue
inadequate calorie intake or malnutrition of the mother
Milk ejection reflex
This is the mechanism by which milk is transported from the breast alveoli to the nipple. Suckling by the baby stimulates the paraventricular nuclei and supraoptic nucleus in the hypothalamus, which signal to the posterior pituitary gland to release oxytocin. Oxytocin stimulates contraction of the myoepithelial cells surrounding the alveoli, which already hold milk. The increased pressure causes milk to flow through the duct system and be released through the nipple. This response can be conditioned, e.g. to the cry of the baby.
Milk ejection is initiated in the mother's breast by the act of suckling by the baby. The milk ejection reflex (also called the let-down reflex) is not always consistent, especially at first. Once a woman is conditioned to nursing, let-down can be triggered by a variety of stimuli, including the sound of any baby. Even thinking about breastfeeding can stimulate this reflex, causing unwanted leakage, or both breasts may release milk when an infant is feeding from one breast. However, this and other problems often settle after two weeks of feeding. Stress or anxiety can cause difficulties with breastfeeding. The release of the hormone oxytocin leads to the milk ejection or let-down reflex; oxytocin stimulates the muscles surrounding the breast to squeeze out the milk. Breastfeeding mothers describe the sensation differently: some feel a slight tingling, others feel immense amounts of pressure or slight pain or discomfort, and still others do not feel anything different. A minority of mothers experience a dysphoric milk ejection reflex immediately before let-down, causing anxiety, anger or nausea, amongst other negative sensations, for up to a few minutes per feed.
A poor milk ejection reflex can be due to sore or cracked nipples, separation from the infant, a history of breast surgery, or tissue damage from prior breast trauma. If a mother has trouble breastfeeding, different methods of assisting the milk ejection reflex may help. These include feeding in a familiar and comfortable location, massage of the breast or back, or warming the breast with a cloth or shower.
Milk ejection reflex mechanism
This is the mechanism by which milk is transported from the breast alveoli to the nipple. Suckling by the baby stimulates the slowly adapting and rapidly adapting mechanoreceptors that are densely packed around the areolar region. The resulting electrical impulse enters via the fourth intercostal nerves and follows the spinothalamic tract: it ascends the posterolateral tract for one or two vertebral levels and synapses with second-order neurons, called tract cells, in the posterior dorsal horn. The tract cells then decussate via the anterior white commissure to the anterolateral corner and ascend to the supraoptic nucleus and paraventricular nucleus in the hypothalamus, where they synapse with oxytocinergic third-order neurons. The somas of these neurons are located in the hypothalamus, but their axons and axon terminals are located in the infundibulum and pars nervosa of the posterior pituitary, respectively. Oxytocin is produced in the neuron's soma in the supraoptic and paraventricular nuclei and is transported down the infundibulum via the hypothalamo-neurohypophyseal tract, with the help of the carrier protein neurophysin I, to the pars nervosa of the posterior pituitary, where it is stored in Herring bodies until the incoming impulse triggers its release.
Following the electrical impulse, oxytocin is released into the bloodstream. Through the bloodstream, oxytocin makes its way to the myoepithelial cells, which lie between the extracellular matrix and the luminal epithelial cells that make up the alveoli in breast tissue. When oxytocin binds to the myoepithelial cells, the cells contract. The increased intra-alveolar pressure forces milk into the lactiferous sinuses and then into the lactiferous ducts (one study suggested that lactiferous sinuses may not exist, in which case milk passes directly into the lactiferous ducts), and then out the nipple.
Afterpains
A surge of oxytocin also causes the uterus to contract. During breastfeeding, mothers may feel these contractions as afterpains. These may range from period-like cramps to strong labour-like contractions and can be more severe with second and subsequent babies.
Without pregnancy, induced lactation, relactation
In humans, induced lactation and relactation have been observed frequently in some cultures, and demonstrated with varying success in adoptive mothers and wet nurses. It appears plausible that the possibility of lactation in women (or females of other species) who are not biological mothers confers an evolutionary advantage, especially in groups with high maternal mortality and tight social bonds. The phenomenon has also been observed in most primates, in some lemurs, and in dwarf mongooses.
Lactation can be induced in humans by a combination of physical and psychological stimulation, by drugs, or by a combination of those methods. Several protocols for inducing lactation were developed by Jack Newman and Lenore Goldfarb and are commonly called the Newman-Goldfarb protocols. The "regular protocol" involves the use of birth control pills to mimic the hormone levels of pregnancy, with domperidone to stimulate milk production, followed by discontinuing the birth control pills and introducing the use of a double electric breast pump to induce milk production. Additional protocols exist to support an accelerated timeline and to support induced lactation in menopausal parents.
Some couples may stimulate lactation outside of pregnancy for sexual purposes.
Rare accounts of male lactation (as distinct from galactorrhea) exist in historical medical and anthropological literature. More recently, in the context of transgender health care, multiple case reports have described transgender women successfully inducing lactation. Research has indicated that such breast milk is nutritionally comparable to the milk of both naturally lactating and induced-lactating cisgender women.
Domperidone is a drug that can induce lactation.
Evolution
Charles Darwin recognized that mammary glands seemed to have developed specifically from cutaneous glands, and hypothesized that they evolved from glands in brood pouches of fish, where they would provide nourishment for eggs. The latter aspect of his hypothesis has not been confirmed; however, more recently the same mechanism has been postulated for early synapsids.
As all mammals lactate, lactation must have evolved before the last common ancestor of all mammals, which places it at a minimum in the Middle or Late Triassic, when monotremes diverged from therians. O. T. Oftedal has argued that therapsids evolved a proto-lacteal fluid in order to keep eggs moist, an adaptation necessitated by synapsids' parchment-shelled eggs, which are more vulnerable to evaporation and dehydration than the mineralized eggs produced by some sauropsids. This protolacteal fluid became a complex, nutrient-rich milk which then allowed a decline in egg size by reducing the dependence on a large yolk in the egg. The evolution of lactation is also believed to have resulted in the more complex dentition seen in mammals, as lactation would have allowed the prolonged development of the jaw before the eruption of teeth.
Oftedal also proposed that the protolacteal fluid was initially secreted through pilosebaceous glands on mammary patches, analogous to the areola, and that hairs on this patch transported the fluid to the hatchlings, as is seen in monotremes; the mammary glands of monotremes are said to have evolved from apocrine sweat glands. This would have occurred in the mammal lineages that diverged after monotremes, the metatherians and eutherians. In this scenario, some genes and signaling pathways involved in lactation evolved from ancient precursors which facilitated secretions from spiny structures, which themselves evolved from odontodes.
Occurrence outside Mammalia
Recent research, documented in the journal Science, has shed light on the behavior of certain species of caecilians. These studies reveal that some caecilians provide their hatchlings with a nutrient-rich substance akin to milk, delivered through a maternal vent. Among the species investigated, the oviparous nonmammalian caecilian amphibian Siphonops annulatus stood out, indicating that lactation may be more widespread among these creatures than previously thought. As detailed in a 2024 study, researchers collected 16 mothers of the Siphonops annulatus species from cacao plantations in Brazil's Atlantic Forest and filmed them with their altricial hatchlings in the lab. The mothers remained with their offspring, which suckled a white, viscous liquid from their cloaca and experienced rapid growth in their first week. This milk-like substance, rich in fats and carbohydrates, is produced in hypertrophied glands of the mother's oviduct epithelium, similar to mammal milk. The substance was released seemingly in response to tactile and acoustic stimulation by the babies. The researchers observed the hatchlings emitting high-pitched clicking sounds as they approached their mothers for milk, a behavior unique among amphibians. This milk-feeding behavior may contribute to the development of the hatchlings' microbiome and immune system, similar to mammalian young. The presence of milk production in caecilians that lay eggs suggests an evolutionary transition between egg-laying and live birth.
Another well-known example of nourishing young with secretions of glands is the crop milk of certain birds such as columbiform birds (pigeons and doves), among others. As in mammals, this also appears to be directed by prolactin. Other birds, such as flamingos and penguins, use similar feeding techniques.
The discus fish (Symphysodon) is known for feeding its offspring biparentally by epidermal mucus secretion. Closer examination reveals that, as in mammals and birds, the secretion of this nourishing fluid may be controlled by prolactin. Similar behavior is seen in at least 30 species of cichlids.
Lactation is also the hallmark of adenotrophic viviparity – a breeding mechanism developed by some insects, most notably tsetse flies. The single egg of the tsetse develops into a larva inside the uterus, where it is fed by a milky substance secreted by a milk gland inside the uterus. The cockroach species Diploptera punctata is also known to feed its offspring with milky secretions.
Toxeus magnus, an ant-mimicking jumping spider species of Southeast Asia, also lactates. It nurses its offspring for about 38 days, although they are able to forage on their own after 21 days. Blocking nursing immediately after birth resulted in complete mortality of the offspring, whereas blocking it 20 days after birth resulted in increased foraging and reduced survival. This form of lactation may have evolved from production of trophic eggs.
| Biology and health sciences | Animal reproduction | Biology |
5219699 | https://en.wikipedia.org/wiki/Human%20Genome%20Project | Human Genome Project | The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It was the world's largest collaborative biological project. Planning for the project began in 1984 under the US government, and it officially launched in 1990. It was declared complete on 14 April 2003, and included about 92% of the genome. The "complete genome" level was achieved in May 2021, with only 0.3% of the bases covered by potential issues. The final gapless assembly was finished in January 2022.
Funding came from the US government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, working in the International Human Genome Sequencing Consortium (IHGSC).
The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome, of which there are more than three billion. The genome of any given individual is unique; mapping the human genome involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of the 23 human chromosome pairs (22 pairs of autosomes and a pair of sex chromosomes, known as allosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans.
History
The Human Genome Project was a 13-year-long publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome. In 1977, Frederick Sanger and Walter Gilbert independently developed the first widely used methods of sequencing DNA, the key enabling technology for such a project.
In May 1985, Robert Sinsheimer organized a workshop at the University of California, Santa Cruz, to discuss the feasibility of building a systematic reference genome using gene sequencing technologies. Gilbert wrote the first plan for what he called The Human Genome Institute on the plane ride home from the workshop. In March 1986, the Santa Fe Workshop was organized by Charles DeLisi and David Smith of the Department of Energy's Office of Health and Environmental Research (OHER). At the same time Renato Dulbecco, President of the Salk Institute for Biological Studies, first proposed the concept of whole genome sequencing in an essay in Science. The published work, titled "A Turning Point in Cancer Research: Sequencing the Human Genome", was shortened from the original proposal of using the sequence to understand the genetic basis of breast cancer. James Watson, one of the discoverers of the double helix shape of DNA in the 1950s, followed two months later with a workshop held at the Cold Spring Harbor Laboratory. Thus the idea for obtaining a reference sequence had three independent origins: Sinsheimer, Dulbecco and DeLisi. Ultimately it was the actions by DeLisi that launched the project.
The fact that the Santa Fe Workshop was motivated and supported by a federal agency opened a path, albeit a difficult and tortuous one, for converting the idea into public policy in the United States. In a memo to the Assistant Secretary for Energy Research Alvin Trivelpiece, then-Director of the OHER Charles DeLisi outlined a broad plan for the project. This started a long and complex chain of events that led to the approved reprogramming of funds that enabled the OHER to launch the project in 1986, and to recommend the first line item for the HGP, which was in President Reagan's 1988 budget submission, and ultimately approved by Congress. Of particular importance in congressional approval was the advocacy of New Mexico Senator Pete Domenici, whom DeLisi had befriended. Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.
Trivelpiece sought and obtained the approval of DeLisi's proposal from Deputy Secretary William Flynn Martin. In the spring of 1986, Trivelpiece briefed Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of Secretary John S. Herrington. This reprogramming was followed by a line item budget of $13 million in the Reagan administration's 1987 budget submission to Congress. It subsequently passed both Houses. The project was planned to be completed within 15 years.
In 1990 the two major funding agencies, DOE and the National Institutes of Health, developed a memorandum of understanding to coordinate plans and set the clock for the initiation of the Project to 1990. At that time, David J. Galas was Director of the renamed "Office of Biological and Environmental Research" in the US Department of Energy's Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and Francis Collins succeeded Watson, assuming the role of overall Project Head as Director of the NIH National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade after that.
The $3 billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China, and myriad other countries. The project ended up costing less than expected, at about $2.7 billion (equivalent to about $5 billion in 2021).
Two technologies enabled the project: gene mapping and DNA sequencing. The gene mapping technique of restriction fragment length polymorphism (RFLP) arose from the search for the location of the breast cancer gene by Mark Skolnick of the University of Utah, which began in 1974. After seeing a linkage marker for the gene, Skolnick, in collaboration with David Botstein, Ray White, and Ron Davis, conceived of a way to construct a genetic linkage map of the human genome. This enabled scientists to launch the larger human genome effort.
Because of widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as parallel advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by US President Bill Clinton and British Prime Minister Tony Blair on 26 June 2000). This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then-graduate student Jim Kent and his advisor David Haussler. Ongoing sequencing led to the announcement of the essentially complete genome on 14 April 2003, two years earlier than planned. In May 2006, another milestone was passed on the way to completion of the project when the sequence of the very last chromosome was published in Nature.
According to the NIH, a wide range of institutions, companies, and laboratories participated in the Human Genome Project.
State of completion
Notably, the project was not able to sequence all of the DNA found in human cells; rather, the aim was to sequence only the euchromatic regions of the nuclear genome, which make up 92.1% of the human genome. The remaining 7.9% exists in scattered heterochromatic regions such as those found in centromeres and telomeres. These regions are by nature generally more difficult to sequence and so were not included in the project's original plans.
The Human Genome Project (HGP) was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published, followed by the final sequence mapping of the human genome on 14 April 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on 27 May 2004, indicated that over 92% of sampling exceeded 99.99% accuracy, which was within the intended goal.
In March 2009, the Genome Reference Consortium (GRC) released a more accurate version of the human genome, but that still left more than 300 gaps, while 160 such gaps remained in 2015.
In May 2020, the GRC reported 79 "unresolved" gaps, accounting for as much as 5% of the human genome. Months later, the application of new long-range sequencing techniques and a hydatidiform mole-derived cell line, in which both copies of each chromosome are identical, led to the first telomere-to-telomere, truly complete sequence of a human chromosome, the X chromosome. An end-to-end complete sequence of human autosomal chromosome 8 followed several months later.
In 2021, it was reported that the Telomere-to-Telomere (T2T) consortium had filled in all of the gaps except five in repetitive regions of ribosomal DNA. Months later, those gaps had also been closed. The full sequence did not contain the Y chromosome (which causes the embryo to develop as male), as it was absent from the cell line that served as the source for the DNA analysis. About 0.3% of the full sequence proved difficult to check for quality and thus might have contained errors, which were targeted for confirmation. In April 2022, the complete non-Y chromosome sequence was formally published, providing a view of much of the 8% of the genome left out by the HGP. In December 2022, a preprint article claimed that the sequencing of the remaining missing regions of the Y chromosome had been performed, thus completing the sequencing of all 24 human chromosomes. In August 2023 this preprint was finally published.
Applications and proposed benefits
The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can help researchers understand diseases including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medication and more accurate prediction of their effects; advancement in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry, bioprocessing; risk assessment; bioarcheology, anthropology and evolution.
The sequence of the DNA is stored in databases available to anyone on the Internet. The US National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz, and Ensembl, present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's law, the observation from computer engineering that the number of transistors on an integrated circuit doubles approximately every two years. This means that the speed at which whole genomes can be sequenced has increased at a similar exponential rate, as was seen during the development of the Human Genome Project.
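As a rough, hedged illustration of that exponential-growth analogy, the short Python sketch below computes throughput relative to an arbitrary baseline under a fixed doubling period; the two-year period and the baseline value are illustrative assumptions, not measured sequencing data.

    # Illustrative only: exponential growth under an assumed fixed doubling period.
    def capacity(years_elapsed, baseline=1.0, doubling_period_years=2.0):
        """Throughput after years_elapsed, if capacity doubles every period."""
        return baseline * 2 ** (years_elapsed / doubling_period_years)

    # A two-year doubling period implies roughly 32-fold growth per decade.
    for years in (0, 2, 4, 10, 20):
        print(f"after {years:2d} years: {capacity(years):8.1f}x baseline")

The point of the analogy is only the shape of the curve: halving the doubling period squares the growth factor over any fixed interval, which is why small changes in technology cadence produced such dramatic differences in sequencing cost.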
Techniques and analysis
The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on the inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce 2 or more gene products from the same locus.
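To make the splice-variant claim concrete, here is a minimal Python sketch; the exon names and sequences are invented for illustration (they do not come from any real gene), and real splicing is regulated rather than a free combinatorial choice.

    from itertools import combinations

    # Hypothetical exons of one locus, in genomic order (invented sequences).
    exons = {"E1": "ATGGCT", "E2": "GGTTAC", "E3": "CCAGAA", "E4": "TGGTAA"}

    def transcript(kept):
        """Splice a transcript by concatenating the kept exons in genomic order."""
        return "".join(exons[name] for name in sorted(kept))

    # Keep the first and last exons, plus any subset of the internal ones.
    internal = ("E2", "E3")
    variants = [transcript(("E1",) + subset + ("E4",))
                for r in range(len(internal) + 1)
                for subset in combinations(internal, r)]

    print(f"{len(variants)} splice variants from a single locus:")
    for v in variants:
        print(" ", v)

Skipping or keeping just the two internal exons already yields four distinct products from one locus, which is the sense in which RNA-seq reads can reveal several variants per gene.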
Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of 2019 there is still only one "reference genome".
Findings
Key findings of the draft (2001) and complete (2004) genome sequences include:
There are approximately 22,300 protein-coding genes in human beings, the same range as in other mammals.
The human genome has significantly more segmental duplications (nearly identical, repeated sections of DNA) than had been previously suspected.
At the time when the draft sequence was published, fewer than 7% of protein families appeared to be vertebrate specific.
Accomplishments
The human genome has approximately 3.1 billion base pairs. The Human Genome Project was started in 1990 with the goal of sequencing and identifying all base pairs in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a megaproject.
The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were then ligated into a type of vector known as "bacterial artificial chromosomes", or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project, and the pieces were then assembled. The assembled 150,000-base-pair pieces were in turn put together to recreate the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.
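The following toy Python sketch, offered only as a hedged illustration and not as the project's actual software, mimics the hierarchical shotgun idea: a "genome" is split into large mapped chunks (standing in for the BAC clones), each chunk is shotgunned into short random reads, and each chunk is reassembled independently by greedy overlap merging. Real assemblers are far more sophisticated, and the sizes here are tiny stand-ins for the ~150,000-base-pair inserts.

    import random

    def shotgun(chunk, read_len=12, coverage=8):
        """Sample random overlapping reads from one chunk."""
        n_reads = max(1, coverage * len(chunk) // read_len)
        last = max(1, len(chunk) - read_len + 1)
        return [chunk[s:s + read_len]
                for s in (random.randrange(last) for _ in range(n_reads))]

    def merge(a, b, min_overlap=4):
        """Join b onto the end of a if a sufficiently long overlap exists."""
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a.endswith(b[:k]):
                return a + b[k:]
        return None

    def assemble(reads):
        """Greedy overlap assembly; ideally returns one contig per chunk."""
        contigs = list(dict.fromkeys(reads))  # deduplicate, keep order
        merged = True
        while merged and len(contigs) > 1:
            merged = False
            for i in range(len(contigs)):
                for j in range(len(contigs)):
                    if i != j and (m := merge(contigs[i], contigs[j])):
                        contigs[i], merged = m, True
                        del contigs[j]
                        break
                if merged:
                    break
        return contigs

Because each chunk was mapped to a chromosome position before sequencing, every small assembly problem is independent of the others, which is exactly the advantage the "hierarchical" strategy offers over assembling billions of reads at once.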
Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Wellcome Sanger Institute (then called The Sanger Centre) based at the Wellcome Genome Campus, Washington University in St. Louis, and Baylor College of Medicine.
The UN Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.
Public versus private approaches
In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300 million Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. While the Celera project focused its efforts on production sequencing and assembly of the human genome, the public HGP also funded mapping and sequencing of the worm, fly, and yeast genomes, databases, development of new technologies, bioinformatics and ethics programs, and polishing and assessment of the genome assembly. Both the Celera and public approaches spent roughly $250 million on the production sequencing effort. For sequence assembly, Celera made use of publicly available maps in GenBank, which Celera was capable of generating itself but whose public availability was "beneficial" to the privately funded project.
Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing, which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes.
Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. For this reason, the publicly funded competitors were compelled to release the first draft of the human genome before Celera. On 7 July 2000, the UCSC Genome Bioinformatics Group released the first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.
In March 2000 President Clinton, along with Prime Minister Tony Blair in a dual statement, urged that all researchers who wished to research the sequence should have "unencumbered access" to the genome sequence. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.
Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, bringing the sequence to approximately 92% completion.
Genome donors
In the International Human Genome Sequencing Consortium (IHGSC) public-sector HGP, researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones taken from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York, (code name RP11; the "RP" refers to Roswell Park Comprehensive Cancer Center).
HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each) – each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, because of quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.
Although the main sequencing phase of the HGP has been completed, studies of DNA variation continued in the International HapMap Project, whose goal was to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or "haps"). The DNA samples for the HapMap came from a total of 270 individuals; Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.
In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.
Developments
With the sequence in hand the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.
It is anticipated that detailed knowledge of the human genome will offer new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases, and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.
There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the internet, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its functions, its evolutionary relationships to other human genes or to genes in mice, yeast, or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, and diseases associated with this gene, among other datatypes. Further, a deeper understanding of the disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it.
Analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and their closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.
The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has impacted the evolution of the plant. It is being investigated which loci are most susceptible to manipulation, and how this plays out in evolutionary terms. Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future which could yield healthier and disease-resistant wheat crops, among other things.
Ethical, legal, and social issues
At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised in regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns of most individuals was the fear that both employers and health insurance companies would refuse to hire individuals or refuse to provide insurance to people because of a health concern indicated by someone's genes. In 1996, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.
Along with identifying all of the approximately 20,000–25,000 genes in the human genome (estimated at between 80,000 and 140,000 at the start of the project), the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project. For that, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project. This budget started at approximately $1.57 million in the year 1990, but increased to approximately $18 million in the year 2014.
While the project may offer significant benefits to medicine and scientific research, some authors have emphasized the need to address the potential social consequences of mapping the human genome. Historian of science Hans-Jörg Rheinberger wrote that "the prospect of 'molecularizing' diseases and their possible cure will have a profound impact on what patients expect from medical help, and on a new generation of doctors' perception of illness."
In July 2024, an investigation by Undark Magazine, co-published with STAT News, revealed for the first time several ethical lapses by the scientists spearheading the Human Genome Project. Chief among these was that roughly 75 percent of the reference genome was constructed from a single donor's DNA, despite informed consent forms, provided to each of the 20 anonymous donors participating, indicating that no more than 10 percent of any one donor's DNA would be used. About 10 percent of the reference genome belonged to one of the project's lead scientists, Pieter J. de Jong.
| Biology and health sciences | Genetics | Biology |
5225951 | https://en.wikipedia.org/wiki/Yinlong | Yinlong | Yinlong (Chinese: 隱龍, meaning "hidden dragon") is a genus of basal ceratopsian dinosaur from the Late Jurassic Period of China. By far the earliest known ceratopsian, it was a small, primarily bipedal herbivore.
Discovery and species
A coalition of American and Chinese paleontologists, including Xu Xing, Catherine Forster, Jim Clark, and Mo Jinyou, described and named Yinlong in 2006. The generic name is derived from the Mandarin Chinese words 隱 (yǐn: "hidden") and 龍 (lóng: "dragon"), a reference to the movie Crouching Tiger, Hidden Dragon, large portions of which were filmed in the western Chinese province of Xinjiang, near the locality where this animal's fossil remains were discovered. Long is the word most often used in the Chinese media when referring to dinosaurs. The type species, Yinlong downsi, was named after the American vertebrate paleontologist William Randall Downs III, a frequent participant in paleontological expeditions to China who died the year before Yinlong was discovered.
The known fossil material of Yinlong consists of many skeletons and skulls. The first specimen discovered was a single exceptionally well-preserved skeleton, complete with skull, of a nearly adult animal, found in 2004 in the Middle-Late Jurassic strata of the Shishugou Formation located in Xinjiang Province, China. Yinlong was discovered in an upper section of this formation which dates to the Oxfordian stage of the Late Jurassic, or 161.2 to 155.7 million years ago. Most other described ceratopsians are known from the later Cretaceous Period.
Description
Yinlong was a relatively small dinosaur. Despite a virtually frill-less and totally hornless skull, Yinlong is a ceratopsian. Its skull is deep and wide, relatively large compared to those of most ornithischians but proportionately smaller than those of most other ceratopsians. Long robust hindlimbs and shorter slender forelimbs with three-fingered hands suggest a bipedal lifestyle like that of many small ornithopods.
Classification
A small rostral bone on the end of the upper jaw clearly identifies Yinlong as a ceratopsian, although the skull displays several features, especially the ornamentation of the squamosal bone of the skull roof, that were previously thought to be unique to pachycephalosaurians. The presence of these features in Yinlong indicates that they are actually synapomorphies (shared derived features) of the larger group Marginocephalia, which contains both the pachycephalosaurs and the ceratopsians, although they have been lost in all known ceratopsians more derived than Yinlong. The addition of these characters further strengthens the support for Marginocephalia. Yinlong also preserves skull features reminiscent of the family Heterodontosauridae, providing support for the hypothesis that heterodontosaurids are closely related to marginocephalians. The group containing Marginocephalia and Heterodontosauridae has been named Heterodontosauriformes. However, this hypothesis was not supported by a subsequent analysis of basal ornithischians, carried out as part of a study on the postcranial anatomy of Yinlong, which resolved a different phylogeny of Ceratopsia.
Paleobiology
Diet
Yinlong was discovered with seven gastroliths preserved in the abdominal cavity. Gastroliths, stones stored in the digestive tract and used to grind plant material, are also found in other ceratopsians such as Psittacosaurus, and are also widely distributed in most other dinosaur groups, including birds.
Growth
In 2024, bone histology based on specimens of various ontogenetic stages (1 early juvenile, 2 late juveniles, 4 subadults and 3 adults) suggested that Yinlong reached sexual maturity at 6 years old, much younger than the age of sexual maturity for Psittacosaurus but older than that for ceratopsids. The study also found evidence of growth rates higher than those of extant squamates and crocodilians but lower than those of large-sized dinosaurs and extant mammals and birds.
| Biology and health sciences | Ornitischians | Animals |
31591547 | https://en.wikipedia.org/wiki/Instagram | Instagram | Instagram is an American photo and video sharing social networking service owned by Meta Platforms. It allows users to upload media that can be edited with filters, be organized by hashtags, and be associated with a location via geographical tagging. Posts can be shared publicly or with preapproved followers. Users can browse other users' content by tags and locations, view trending content, like photos, and follow other users to add their content to a personal feed. A Meta-operated image-centric social media platform, it is available on iOS, Android, Windows 10, and the web. Users can take photos and edit them using built-in filters and other tools, then share them on other social media platforms like Facebook. It supports 32 languages including English, Hindi, Spanish, French, Korean, and Japanese.
Instagram was originally distinguished by allowing content to be framed only in a square (1:1) aspect ratio of 640 pixels to match the display width of the iPhone at the time. In 2015, this restriction was eased with an increase to 1080 pixels. It also added messaging features, the ability to include multiple images or videos in a single post, and a Stories feature—similar to its main competitor, Snapchat, which allowed users to post their content to a sequential feed, with each post accessible to others for 24 hours. As of January 2019, Stories is used by 500 million people daily.
Instagram was launched for iOS in October 2010 by Kevin Systrom and Mike Krieger. It rapidly gained popularity, reaching 1 million registered users in two months, 10 million in a year, and 1 billion in June 2018. In April 2012, Facebook acquired the service for approximately US$1 billion in cash and stock. The Android version of Instagram was released in April 2012, followed by a feature-limited desktop interface in November 2012, a Fire OS app in June 2014, and an app for Windows 10 in October 2016. Although often admired for its success and influence, Instagram has also been criticized for negatively affecting teens' mental health, its policy and interface changes, its alleged censorship, and illegal and inappropriate content uploaded by users.
History
Instagram began development in San Francisco as Burbn, a mobile check-in app created by Kevin Systrom and Mike Krieger. On March 5, 2010, Systrom closed a $500,000 seed funding round with Baseline Ventures and Andreessen Horowitz while working on Burbn. Realizing that it was too similar to Foursquare, they refocused their app on photo-sharing, which had become a popular feature among its users. They renamed it Instagram, a portmanteau of instant camera and telegram.
2010–2011: Beginnings and major funding
Josh Riedel joined the company in October as Community Manager, Shayne Sweeney joined in November as an engineer, and Jessica Zollman joined as a Community Evangelist in August 2011.
The first Instagram post was a photo of South Beach Harbor at Pier 38, posted by Mike Krieger at 5:26 p.m. on July 16, 2010. On October 6, 2010, the Instagram iOS app was officially released through the App Store. In February 2011, it was reported that Instagram had raised $7 million in Series A funding from a variety of investors, including Benchmark Capital, Jack Dorsey, Chris Sacca (through his Lowercase Capital fund), and Adam D'Angelo. The deal valued Instagram at around $20 million. In April 2012, Instagram raised $50 million from venture capitalists with a valuation of $500 million. Joshua Kushner was the second largest investor in Instagram's Series B fundraising round, leading his investment firm, Thrive Capital, to double its money after the sale to Facebook.
2012–2014: Additional platforms and acquisition by Facebook
On April 3, 2012, Instagram released a version of its app for Android phones, and it was downloaded more than one million times in less than one day. The Android app has since received two significant updates: first, in March 2014, which cut the file size of the app by half and added performance improvements; then in April 2017, to add an offline mode that allows users to view and interact with content without an Internet connection. At the time of the announcement, it was reported that 80% of Instagram's 600 million users were located outside the U.S., and while the aforementioned functionality was live at its announcement, Instagram also announced its intention to make more features available offline, and that they were "exploring an iOS version". On April 9, 2012, Facebook, Inc. (now Meta Platforms) bought Instagram for $1 billion in cash and stock, with a plan to keep the company independently managed. Britain's Office of Fair Trading approved the deal on August 14, 2012, and on August 22, 2012, the Federal Trade Commission in the U.S. closed its investigation, allowing the deal to proceed. On September 6, 2012, the deal between Instagram and Facebook officially closed with a purchase price of $300 million in cash and 23 million shares of stock.
The deal closed just before Facebook's scheduled initial public offering according to CNN. The deal price was compared to the $35 million Yahoo! paid for Flickr in 2005. Mark Zuckerberg said Facebook was "committed to building and growing Instagram independently." According to Wired, the deal netted Systrom $400 million.
In November 2012, Instagram launched website profiles, allowing anyone to see user feeds from a web browser with limited functionality, as well as a selection of badges, and web widget buttons to link to profiles. Since the app's launch it had used the Foursquare API technology to provide named location tagging. In March 2014, Instagram started to test and switch the technology to use Facebook Places.
2015–2017: Redesign and Windows app
In June 2015, the desktop website user interface was redesigned to be flatter and more minimalistic, with more screen space for each photo and a layout resembling that of Instagram's mobile website. Furthermore, one row of pictures has only three instead of five photos, to match the mobile layout. The slideshow banner at the top of profile pages, which simultaneously slide-showed seven tiles of pictures posted by the user, alternating at different times in a random order, was removed. In addition, the formerly angular profile pictures became circular.
In April 2016, Instagram released a Windows 10 Mobile app, after years of demand from Microsoft and the public to release an app for the platform. The platform previously had a beta version of Instagram, first released on November 21, 2013, for Windows Phone 8. The new app added support for videos (viewing and creating posts or stories, and viewing live streams), album posts and direct messages. Similarly, an app for Windows 10 personal computers and tablets was released in October 2016. In May 2017, Instagram updated its mobile website to allow users to upload photos, and added a "lightweight" version of the Explore tab.
On May 11, 2016, Instagram revamped its design, adding a black-and-white flat design theme for the app's user interface, and a less skeuomorphic, more abstract, "modern" and colorful icon. Rumors of a redesign first started circulating in April, when The Verge received a screenshot from a tipster, but at the time, an Instagram spokesperson simply told the publication that it was only a concept. On December 6, 2016, Instagram introduced comment liking. However, unlike post likes, the user who posted a comment does not receive notifications about comment likes in their notification inbox. Uploaders can optionally decide to deactivate comments on a post.
The mobile website has allowed picture uploads since May 4, 2017; image filters and the ability to upload videos were not introduced at that time. On April 30, 2019, the Windows 10 Mobile app was discontinued, though the mobile website remains available as a progressive web application (PWA) with limited functionality. The app remains available on Windows 10 computers and tablets, also updated to a PWA in 2020.
2018–2019: IGTV, removal of the like counter, management changes
To comply with the GDPR regulations regarding data portability, Instagram introduced the ability for users to download an archive of their user data in April 2018. IGTV launched on June 20, 2018, as a standalone video application. The application was shut down and removed from app stores in March 2022, citing low usage and a shift to short-form video content. On September 24, 2018, Krieger and Systrom announced in a statement they would be stepping down from Instagram. On October 1, 2018, it was announced that Adam Mosseri would be the new head of Instagram.
During Facebook F8 in April 2019, it was announced that Instagram would, beginning in Canada, pilot the removal of publicly displayed "like" counts for content posted by other users. Like counts would only be visible to the user who originally posted the content. Mosseri stated that this was intended to have users "worry a little bit less about how many likes they're getting on Instagram and spend a bit more time connecting with the people that they care about." It has been argued that low numbers of likes relative to others could contribute to lower self-esteem in users. The pilot began in May 2019, and was extended to 6 other markets in July. The pilot was expanded worldwide in November 2019. Also in July 2019, Instagram announced that it would implement new features designed to reduce harassment and negative comments on the service.
In August 2019, Instagram also began to pilot the removal of the "Following" tab from the app, which had allowed users to view a feed of the likes and comments made by users they follow. The change was made official in October, with head of product Vishal Shah stating that the feature was underused and that some users were "surprised" when they realized their activity was being surfaced in this manner. In October 2019, Instagram introduced a limit on the number of posts visible in page scrolling mode unless logged in. Until this point, public profiles had been available to all users, even when not logged in. Following the change, after viewing a number of posts a pop-up requires the user to log in to continue viewing content.
In the same month, Instagram launched a separate app known as Threads. Similar to Snapchat, the app allowed users to communicate through messaging and video chats. It was integrated with Instagram's "Close friends" feature, so that users could send images, photos, and texts privately to others, and also had Instagram's photo editing system embedded into the app. However, Instagram discontinued this version of Threads in December 2021, mainly due to most of its features being rolled out on Instagram itself, as well as low usage compared to other social media applications. Threads was not well-received among Instagram's user base. Since its launch, only approximately 220,000 users globally downloaded the app, which represented less than 0.1% of Instagram's monthly active users, indicating a lack of success in driving adoption.
2020–present
In March 2020, Instagram launched a new feature called "Co-Watching". The new feature allows users to share posts with each other over video calls. According to Instagram, they pushed forward the launch of Co-Watching in order to meet the demand for virtually connecting with friends and family due to social distancing as a result of the COVID-19 pandemic.
In August 2020, Instagram began a pivot to video, introducing a new feature called "Reels". The intent was to compete with the video-sharing site TikTok. Instagram also added suggested posts in August 2020. After scrolling through posts from the past 48 hours, Instagram displays posts related to users' interests from accounts they do not follow. In February 2021, Instagram began testing a new feature called Vertical Stories, said by some sources to be inspired by TikTok. The same month, it also began testing the removal of the ability to share feed posts to stories. In March 2021, Instagram launched a new feature in which four people can go live at once. Instagram also announced that adults would not be allowed to message teens who don't follow them, as part of a series of new child safety policies.
In May 2021, Instagram began allowing users in some regions to add pronouns to their profile page. On October 4, 2021, Meta services suffered their worst outage since 2008, bringing down Instagram, Facebook, and WhatsApp. Security experts identified the problem as possibly being DNS-related. On March 17, 2022, Zuckerberg confirmed plans to add non-fungible tokens (NFTs) to the platform.
In September 2022, Ireland's Data Protection Commission fined the company $402 million under privacy laws recently adopted by the European Union over how it handled the privacy data of minors. After being trialled in mid-2022, Instagram introduced | Technology | Social network and blogging | null |
25893496 | https://en.wikipedia.org/wiki/Line%20of%20purples | Line of purples | In color theory, the line of purples or purple boundary is the locus on the edge of the chromaticity diagram formed between extreme spectral red and violet. Except for these endpoints of the line, colors on the line are non-spectral (no monochromatic light source can generate them). Rather, every color on the line is a unique mixture in a ratio of fully saturated red and fully saturated violet, the two spectral color endpoints of visibility on the spectrum of pure hues. Colors on the line and spectral colors are the only ones that are fully saturated in the sense that, for any point on the line, no other possible color being a mixture of red and violet is more saturated than it.
Unlike spectral colors, which may be implemented, for example, by the nearly monochromatic light of a laser, with precision much finer than human chromaticity resolution, colors on the line are more difficult to depict. The sensitivity of each type of human cone cell to both spectral red and spectral violet, being at the opposite endpoints of the line and at the extremes of the visible spectrum, is very low. (See luminosity function.) Therefore, common purple colors are not highly bright.
The line of purples, a theoretical boundary of chromaticity, is distinct from "purples", a more general denomination of colors, which also refers to less than fully saturated colors (see shades of purple and shades of pink for examples) that form the interior of a triangle between white and the line of purples in the CIE chromaticity diagram.
In color spaces
In 3-dimensional color spaces the line, if present, becomes a 2-dimensional shape. For example, in the CIE XYZ color space it is a planar sector bounded by black–red and black–violet rays. In systems premised on pigment colors, such as the Munsell and Pantone systems, boundary purples might be absent because the maximally possible lightness of a pigment vanishes as its chromaticity approaches the line of purples, such that purple pigments near the line are indistinguishable from black.
The RGB color model, although theoretically capable of approximating the colors of the line because it is an additive system, usually fails in practice because of the limitations of the light sources used. The boundary of sRGB runs approximately parallel to the line, connecting the red and blue primaries, and thus purples near the line are absent from the gamut of sRGB. Magenta ink, one of the CMYK primaries, is also very distant from the line for the reason explained above. The wide-gamut RGB color space approximates the colors on the line better, but devices capable of displaying colors with this enhanced system are prohibitively expensive for ordinary consumers.
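The gamut geometry can be made concrete with a small numeric test. The following MATLAB (or Octave) sketch, assuming the standard sRGB primary chromaticities and a hypothetical test point near the line of purples, checks whether an (x, y) chromaticity lies inside the sRGB triangle via barycentric coordinates:
R = [0.64; 0.33]; G = [0.30; 0.60]; B = [0.15; 0.06]; % sRGB primaries (xy)
p = [0.45; 0.15]; % hypothetical purple chromaticity near the line
% Solve p = a*R + b*G + c*B with a + b + c = 1 (barycentric coordinates).
M = [R - B, G - B];
ab = M \ (p - B);
coords = [ab; 1 - sum(ab)];
insideGamut = all(coords >= 0); % false here: the point is outside sRGB
For this test point, one barycentric coordinate is negative, so the point falls on the line-of-purples side of the red–blue edge, outside the sRGB triangle.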
Table of highly saturated purples
Most of the names of the purple colors in the table below do not denominate colors on the line of purples, but instead colors that are slightly less than maximally colorful, i.e. slightly less than fully saturated.
Approximations of colors outside sRGB.
| Physical sciences | Basics | Physics |
1431559 | https://en.wikipedia.org/wiki/Core%E2%80%93mantle%20boundary | Core–mantle boundary | The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of about 2,900 km (1,800 mi) below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone, which appear to be dominated by the African and Pacific Large low-shear-velocity provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the CMB may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
D″ region
An approximately 200 km thick layer of the lower mantle directly above the CMB is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of the model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1,800 km thick, was renamed D′ (D prime) and the lower part (the bottom 200 km) was named D″. Later it was found that D″ is non-spherical. In 1993, Czechowski found that inhomogeneities in D″ form structures analogous to continents (i.e. core-continents). They move over time and determine some properties of hotspots and mantle convection. Later research supported this hypothesis.
Seismic discontinuity
A seismic discontinuity occurs within Earth's interior at a depth of about 2,900 km (1,800 mi) below the surface, where there is an abrupt change in the speed of seismic waves (generated by earthquakes or explosions) that travel through Earth. At this depth, primary seismic waves (P waves) decrease in velocity while secondary seismic waves (S waves) disappear completely. S waves shear material and cannot transmit through liquids, so it is thought that the unit above the discontinuity is solid, while the unit below is in a liquid or molten form.
The discontinuity was discovered by Beno Gutenberg, a seismologist who made several important contributions to the study and understanding of the Earth's interior. The CMB has also been referred to as the Gutenberg discontinuity, the Oldham-Gutenberg discontinuity, or the Wiechert-Gutenberg discontinuity. In modern times, however, the term Gutenberg discontinuity or the "G" is most commonly used in reference to a decrease in seismic velocity with depth that is sometimes observed at about 100 km below the Earth's oceans.
| Physical sciences | Tectonics | Earth science |
1431997 | https://en.wikipedia.org/wiki/Chartreuse%20%28color%29 | Chartreuse (color) | Chartreuse, also known as yellow-green or greenish yellow, is a color between yellow and green. It was named because of its resemblance to the French liqueur green chartreuse, introduced in 1764. Similarly, chartreuse yellow is a yellow color mixed with a small amount of green, named after the drink yellow chartreuse.
During the 2000s, yellow-green, as well as other shades of bright green like lime green, became very popular when various tech companies used it in office decor and other products, and with the popularity and success of the Shrek franchise.
Shades
History and etymology
The name Carthusian is derived from the Chartreuse Mountains in the French Prealps: Bruno of Cologne built his first hermitage in a valley of these mountains. These names were adapted to the English charterhouse, meaning a Carthusian monastery. The Carthusian monks started producing Chartreuse liqueur in 1737.
In nature
Yellow-green algae, also called Xanthophytes, are a class of algae in the Heterokontophyta division. Most live in fresh water, but some are found in marine and soil habitats. They vary from single-celled flagellates to simple colonial and filamentous forms. Unlike other heterokonts, the plastids of yellow-green algae do not contain fucoxanthin, which is why they have a lighter color.
In popular culture
Traffic safety
Chartreuse yellow is used on traffic safety vests to provide increased visibility for employees working near traffic. The chartreuse yellow background material, together with retro-reflective material, satisfies the ANSI 107-2010 standard, whose first edition was published in 1999. High-visibility clothing ANSI standards were adopted as an Occupational Safety and Health Act (United States) requirement in 2008.
Film and television
The 1960 Universal film Chartroose Caboose featured a "bright green"-colored train car.
Firefighting
Since about 1973, a sort of fluorescent chartreuse green has been adopted as the color of fire engines in parts of the United States and elsewhere. The use of chartreuse fire engines began when New York ophthalmologist Stephen Solomon produced research claiming that sparkling bright lime-green paint would boost the night-time visibility of emergency vehicles compared to those painted the traditional fire engine red. The reason for this is the Purkinje effect, i.e., the cones do not function as efficiently in dim light, so red objects appear to be black. In Australia this form of chartreuse yellow is also known as "ACT yellow" as this is the color of the fire engines in the Australian Capital Territory.
| Physical sciences | Colors | Physics |
1432127 | https://en.wikipedia.org/wiki/Mode%20%28statistics%29 | Mode (statistics) | In statistics, the mode is the value that appears most often in a set of data values. If X is a discrete random variable, the mode is the value x at which the probability mass function takes its maximum value (i.e., x = argmax P(X = x)). In other words, it is the value that is most likely to be sampled.
Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions.
The mode is not necessarily unique in a given discrete distribution since the probability mass function may take the same maximum value at several points x1, x2, etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently.
A mode of a continuous probability distribution is often considered to be any value at which its probability density function attains a local maximum. When the probability density function of a continuous distribution has multiple local maxima, it is common to refer to all of the local maxima as modes of the distribution, so any peak is a mode. Such a continuous distribution is called multimodal (as opposed to unimodal).
In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric unimodal distribution, the sample mean can be used as an estimate of the population mode.
Mode of a sample
The mode of a sample is the element that occurs most often in the collection. For example, the mode of the sample [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17] is 6. Given the list of data [1, 1, 2, 4, 4], its mode is not unique; a dataset, in such a case, is said to be bimodal, while a set with more than two modes may be described as multimodal.
For a sample from a continuous distribution, such as [0.935..., 1.211..., 2.430..., 3.668..., 3.874...], the concept is unusable in its raw form, since no two values will be exactly the same, so each value will occur precisely once. In order to estimate the mode of the underlying distribution, the usual practice is to discretize the data by assigning frequency values to intervals of equal distance, as for making a histogram, effectively replacing the values by the midpoints of the
intervals they are assigned to. The mode is then the value where the histogram reaches its peak. For small or middle-sized samples the outcome of this procedure is sensitive to the choice of interval width if chosen too narrow or too wide; typically one should have a sizable fraction of the data concentrated in a relatively small number of intervals (5 to 10), while the fraction of the data falling outside these intervals is also sizable. An alternate approach is kernel density estimation, which essentially blurs point samples to produce a continuous estimate of the probability density function which can provide an estimate of the mode.
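As a minimal illustration of the histogram approach just described (in the same MATLAB/Octave style as the snippet below; the sample values and the bin count are arbitrary choices, and the estimate is sensitive to the bin count):
x = [0.935, 1.211, 2.430, 3.668, 3.874]; % sample values from the text (truncated)
nbins = 2; % arbitrary choice of interval count
[counts, centers] = hist(x, nbins); % bin frequencies and bin midpoints
[~, i] = max(counts); % fullest bin
modeEstimate = centers(i); % midpoint of the fullest bin, here about 3.14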
The following MATLAB (or Octave) code example computes the mode of a sample:
X = sort(x); % x is a column vector dataset
indices = find(diff([X; realmax]) > 0); % indices where repeated values change
[modeL, i] = max(diff([0; indices])); % longest persistence length of repeated values
mode = X(indices(i));
The algorithm requires as a first step to sort the sample in ascending order. It then computes the discrete derivative of the sorted list and finds the indices where this derivative is positive. Next it computes the discrete derivative of this set of indices, locating the maximum of this derivative of indices, and finally evaluates the sorted sample at the point where that maximum occurs, which corresponds to the last member of the stretch of repeated values.
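A worked run on the sample from the start of this section (a hedged illustration; the result variable is named m here because the snippet above shadows the built-in mode function):
x = [1; 3; 6; 6; 6; 6; 7; 7; 12; 12; 17]; % column-vector sample from the text
X = sort(x);
indices = find(diff([X; realmax]) > 0); % last index of each run: [1; 2; 6; 8; 10; 11]
[modeL, i] = max(diff([0; indices])); % longest run has length modeL = 4
m = X(indices(i)); % m = 6, matching the sample mode given above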
Comparison of mean, median and mode
Use
Unlike mean and median, the concept of mode also makes sense for "nominal data" (i.e., not consisting of numerical values in the case of mean, or even of ordered values in the case of median). For example, taking a sample of Korean family names, one might find that "Kim" occurs more often than any other name. Then "Kim" would be the mode of the sample. In any voting system where a plurality determines victory, a single modal value determines the victor, while a multi-modal outcome would require some tie-breaking procedure to take place.
Unlike median, the concept of mean makes sense for any random variable assuming values from a vector space, including the real numbers (a one-dimensional vector space) and the integers (which can be considered embedded in the reals). For example, a distribution of points in the plane will typically have a mean and a mode, but the concept of median does not apply. The median makes sense when there is a linear order on the possible values. Generalizations of the concept of median to higher-dimensional spaces are the geometric median and the centerpoint.
Uniqueness and definedness
For some probability distributions, the expected value may be infinite or undefined, but if defined, it is unique. The mean of a (finite) sample is always defined. The median is the value such that the fractions not exceeding it and not falling below it are each at least 1/2. It is not necessarily unique, but never infinite or totally undefined. For a data sample it is the "halfway" value when the list of values is ordered in increasing value, where usually for a list of even length the numerical average is taken of the two values closest to "halfway". Finally, as said before, the mode is not necessarily unique. Certain pathological distributions (for example, the Cantor distribution) have no defined mode at all. For a finite data sample, the mode is one (or more) of the values in the sample.
Properties
Assuming definedness, and for simplicity uniqueness, the following are some of the most interesting properties.
All three measures have the following property: if the random variable X (or each value from the sample) is subjected to the linear or affine transformation that replaces X by aX + b, then so are the mean, median and mode.
Except for extremely small samples, the mode is insensitive to "outliers" (such as occasional, rare, false experimental readings). The median is also very robust in the presence of outliers, while the mean is rather sensitive.
In continuous unimodal distributions the median often lies between the mean and the mode, about one third of the way going from mean to mode. In a formula, median ≈ (2 × mean + mode)/3. This rule, due to Karl Pearson, often applies to slightly non-symmetric distributions that resemble a normal distribution, but it is not always true and in general the three statistics can appear in any order.
For unimodal distributions, the mode is within 3^(1/2) ≈ 1.732 standard deviations of the mean, and the root mean square deviation about the mode is between the standard deviation and twice the standard deviation.
Example for a skewed distribution
An example of a skewed distribution is personal wealth: Few people are very rich, but among those some are extremely rich. However, many are rather poor.
A well-known class of distributions that can be arbitrarily skewed is given by the log-normal distribution. It is obtained by transforming a random variable X having a normal distribution into the random variable Y = e^X. Then the logarithm of Y is normally distributed, hence the name.
Taking the mean μ of X to be 0, the median of Y will be 1, independent of the standard deviation σ of X. This is so because X has a symmetric distribution, so its median is also 0. The transformation from X to Y is monotonic, and so we find the median e^0 = 1 for Y.
When X has standard deviation σ = 0.25, the distribution of Y is weakly skewed. Using formulas for the log-normal distribution, we find: mean = e^(μ + σ²/2) ≈ 1.032, mode = e^(μ − σ²) ≈ 0.939, median = e^μ = 1.
Indeed, the median is about one third on the way from mean to mode.
When X has a larger standard deviation, σ = 1, the distribution of Y is strongly skewed. Now: mean = e^(1/2) ≈ 1.649, mode = e^(−1) ≈ 0.368, median = 1.
Here, Pearson's rule of thumb fails.
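These figures can be checked with the standard log-normal formulas (with μ = 0: mean = e^(σ²/2), median = 1, mode = e^(−σ²)); a minimal MATLAB (or Octave) check:
% Compare Pearson's rule-of-thumb median with the true median (= 1).
for sigma = [0.25, 1]
  m_mean = exp(sigma^2 / 2); % log-normal mean, mu = 0
  m_mode = exp(-sigma^2); % log-normal mode, mu = 0
  ruleMedian = (2 * m_mean + m_mode) / 3; % Pearson's rule of thumb
  fprintf('sigma = %.2f: mean %.3f, mode %.3f, rule-of-thumb median %.3f\n', ...
          sigma, m_mean, m_mode, ruleMedian);
end
For σ = 0.25 the rule gives about 1.001, close to the true median of 1; for σ = 1 it gives about 1.222, which is well off.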
Van Zwet condition
Van Zwet derived a sufficient condition for the following inequality to hold. The inequality
Mode ≤ Median ≤ Mean
holds if
F(Median − x) + F(Median + x) ≥ 1
for all x, where F() is the cumulative distribution function of the distribution.
Unimodal distributions
It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)^(1/2) ≈ 0.7746 standard deviations of each other. In symbols,
|ν − μ| / σ ≤ (3/5)^(1/2),
where |·| is the absolute value.
A similar relation holds between the median ν and the mode θ: they lie within 3^(1/2) ≈ 1.732 standard deviations of each other:
|ν − θ| / σ ≤ 3^(1/2).
History
The term mode originates with Karl Pearson in 1895.
Pearson uses the term mode interchangeably with maximum-ordinate. In a footnote he says, "I have found it convenient to use the term mode for the abscissa corresponding to the ordinate of maximum frequency."
| Mathematics | Statistics and probability | null |
1432191 | https://en.wikipedia.org/wiki/Japanese%20encephalitis | Japanese encephalitis | Japanese encephalitis (JE) is an infection of the brain caused by the Japanese encephalitis virus (JEV). While most infections result in little or no symptoms, occasional inflammation of the brain occurs. In these cases, symptoms may include headache, vomiting, fever, confusion and seizures. This occurs about 5 to 15 days after infection.
JEV is generally spread by mosquitoes, specifically those of the Culex type. Pigs and wild birds serve as a reservoir for the virus. The disease occurs mostly outside of cities. Diagnosis is based on blood or cerebrospinal fluid testing.
Prevention is generally achieved with the Japanese encephalitis vaccine, which is both safe and effective. Other measures include avoiding mosquito bites. Once infected, there is no specific treatment, with care being supportive. This is generally carried out in a hospital. Permanent problems occur in up to half of people who recover from JE.
The disease primarily occurs in East and Southeast Asia as well as the Western Pacific. About 3 billion people live in areas where the disease occurs. About 68,000 symptomatic cases occur a year, with about 17,000 deaths. Often, cases occur in outbreaks. The disease was first described in Japan in 1871.
Signs and symptoms
The Japanese encephalitis virus (JEV) has an incubation period of 2 to 26 days. The vast majority of infections are asymptomatic: only 1 in 250 infections develop into encephalitis.
Severe rigors may mark the onset of this disease in humans. Fever, headache, and malaise are other non-specific symptoms of this disease which may last for a period of between 1 and 6 days. Signs that develop during the acute encephalitic stage include neck rigidity, cachexia, hemiparesis, convulsions, and a raised body temperature. The mortality rate of the disease is around 25% and is generally higher in children under five, the immuno-suppressed, and the elderly. Transplacental spread has been noted. Neurological disorders develop in 40% of those who survive, with lifelong neurological defects such as deafness, emotional lability and hemiparesis occurring in those who had central nervous system involvement.
Increased microglial activation following Japanese encephalitis infection has been found to influence the outcome of viral pathogenesis. Microglia are the resident immune cells of the central nervous system (CNS) and have a critical role in host defense against invading microorganisms. Activated microglia secrete cytokines, such as interleukin-1 (IL-1) and tumor necrosis factor-alpha (TNF-α), which can cause toxic effects in the brain. Additionally, other soluble factors such as neurotoxins, excitatory neurotransmitters, prostaglandin, reactive oxygen, and nitrogen species are secreted by activated microglia.
In a murine model of JE, it was found that in the hippocampus and the striatum, the number of activated microglia was more than anywhere else in the brain, closely followed by that in the thalamus. In the cortex, the number of activated microglia was significantly less when compared to other regions of the mouse brain. An overall induction of differential expression of proinflammatory cytokines and chemokines from different brain regions during a progressive Japanese encephalitis infection was also observed.
Although the net effect of the proinflammatory mediators is to kill infectious organisms and infected cells as well as to stimulate the production of molecules that amplify the mounting response to damage, it is also evident that in a non-regenerating organ such as the brain, a dysregulated innate immune response would be deleterious. In JE the tight regulation of microglial activation appears to be disturbed, resulting in an autotoxic loop of microglial activation that possibly leads to bystander neuronal damage. In animals, key signs include infertility and abortion in pigs, neurological disease in horses, and systemic signs including fever, lethargy and anorexia.
Cause
It is a disease caused by the mosquito-borne Japanese encephalitis virus (JEV).
Virology
JEV is a virus from the family Flaviviridae, part of the Japanese encephalitis serocomplex of nine genetically and antigenically related viruses, some of which are particularly severe in horses, and four of which, including West Nile virus, are known to infect humans. The enveloped virus is closely related to the West Nile virus and the St. Louis encephalitis virus. The positive sense single-stranded RNA genome is packaged in the capsid which is formed by the capsid protein. The outer envelope is formed by envelope protein and is the protective antigen. It aids in the entry of the virus into the cell. The genome also encodes several nonstructural proteins (NS1, NS2a, NS2b, NS3, NS4a, NS4b, NS5). NS1 is also produced as a secretory form. NS3 is a putative helicase, and NS5 is the viral polymerase. It has been noted that Japanese encephalitis infects the lumen of the endoplasmic reticulum (ER) and rapidly accumulates substantial amounts of viral proteins.
Based on the envelope gene, there are five genotypes (I–V). The Muar strain, isolated from a patient in Malaya in 1952, is the prototype strain of genotype V. Genotype V is the earliest recognized ancestral strain. The first clinical reports date from 1870, but the virus appears to have evolved in the mid-16th century. Complete genomes of 372 strains of this virus have been sequenced as of 2024.
Diagnosis
Japanese encephalitis is diagnosed by commercially available tests detecting JE virus-specific IgM antibodies in serum and/or cerebrospinal fluid, for example by IgM capture ELISA.
JE virus IgM antibodies are usually detectable 3 to 8 days after onset of illness and persist for 30 to 90 days, but longer persistence has been documented. Therefore, positive IgM antibodies occasionally may reflect a past infection or vaccination. Serum collected within 10 days of illness onset may not have detectable IgM, and the test should be repeated on a convalescent sample. Patients with JE virus IgM antibodies should have confirmatory neutralizing antibody testing.
Confirmatory testing in the US is available only at the CDC and a few specialized reference laboratories. In fatal cases, nucleic acid amplification and virus culture of autopsy tissues can be useful. Viral antigens can be shown in tissues by indirect fluorescent antibody staining.
Prevention
Infection with Japanese encephalitis confers lifelong immunity. There are currently three vaccines available: SA14-14-2, IXIARO (IC51, also marketed in Australia and New Zealand as JESPECT and in India as JEEV) and ChimeriVax-JE (marketed as IMOJEV). All current vaccines are based on the genotype III virus.
A formalin-inactivated mouse-brain-derived vaccine was first produced in Japan in the 1930s and validated for use in Taiwan in the 1960s and Thailand in the 1980s. The widespread use of vaccines and urbanization has led to control of the disease in Japan and Singapore. The high cost of this vaccine, which is grown in live mice, means that poorer countries could not afford to give it as part of a routine immunization program.
The most common adverse effects are redness and pain at the injection site. Uncommonly, an urticarial reaction can develop about four days after injection. Vaccines produced from mouse brain have a risk of autoimmune neurological complications of around 1 per million vaccinations. However where the vaccine is not produced in mouse brains but in vitro using cell culture there are few adverse effects compared to placebo, the main side effects being headache and myalgia.
The neutralizing antibody persists in the circulation for at least two to three years, and perhaps longer. The total duration of protection is unknown, but because there is no firm evidence for protection beyond three years, boosters are recommended every 11 months for people who remain at risk. Some data are available regarding the interchangeability of other JE vaccines and IXIARO.
Treatment
There is no specific treatment for Japanese encephalitis and treatment is supportive, with assistance given for feeding, breathing or seizure control as required. Raised intracranial pressure may be managed with mannitol. There is no transmission from person to person and therefore patients do not need to be isolated.
A breakthrough in the field of Japanese encephalitis therapeutics is the identification of macrophage receptor involvement in the disease severity. A recent report of an Indian group demonstrates the involvement of monocyte and macrophage receptor CLEC5A in severe inflammatory response in Japanese encephalitis infection of the brain. This transcriptomic study provides a hypothesis of neuroinflammation and a new lead in development of appropriate therapies for Japanese encephalitis.
The effectiveness of intravenous immunoglobulin for Japanese encephalitis is unclear due to a paucity of evidence. Intravenous immunoglobulin for Japanese encephalitis appeared to have no benefit.
Epidemiology
Japanese encephalitis (JE) is the leading cause of viral encephalitis in Asia, with up to 70,000 cases reported annually. Case-fatality rates range from 0.3% to 60% and depend on the population and age. Rare outbreaks in U.S. territories in the Western Pacific have also occurred. Residents of rural areas in endemic locations are at highest risk; Japanese encephalitis does not usually occur in urban areas.
Countries that have had major epidemics in the past, but which have controlled the disease primarily by vaccination, include China, South Korea, Singapore, Japan, Taiwan and Thailand. Other countries that still have periodic epidemics include Vietnam, Cambodia, Myanmar, India, Nepal, and Malaysia. Japanese encephalitis has been reported in the Torres Strait Islands, and two fatal cases were reported in mainland northern Australia in 1998. There were reported cases in Kachin State, Myanmar in 2013. There were 116 deaths reported in Odisha's Malkangiri district of India in 2016.
In 2022, the notable increase in the distribution of the virus in Australia due to climate change became a concern to health officials as the population has limited immunity to the disease and the presence of large numbers of farmed and feral pigs could act as reservoirs for the virus. In February 2022, Japanese encephalitis was detected and confirmed in piggeries in Victoria, Queensland and New South Wales. On 4 March, cases were detected in South Australia. By October 2022, the outbreak in eastern mainland Australia had caused 42 symptomatic human cases of the disease, resulting in seven deaths.
Humans, cattle, and horses are dead-end hosts as the disease manifests as fatal encephalitis. Pigs act as amplifying hosts and have a vital role in the epidemiology of the disease. Infection in swine is asymptomatic, except in pregnant sows when abortion and fetal abnormalities are common sequelae. The most important vector is Culex tritaeniorhynchus, which feeds on cattle in preference to humans. The natural hosts of the Japanese encephalitis virus are birds, not humans, and many believe the virus will never be eliminated. In November 2011, the Japanese encephalitis virus was reported in Culex bitaeniorhynchus in South Korea.
Recently, whole genome microarray research of neurons infected with the Japanese encephalitis virus has shown that neurons play an important role in their own defense against Japanese encephalitis infection. Although this challenges the long-held belief that neurons are immunologically quiescent, an improved understanding of the proinflammatory effects responsible for immune-mediated control of viral infection and neuronal injury during Japanese encephalitis infection is an essential step for developing strategies for limiting the severity of CNS disease.
A number of drugs have been investigated to either reduce viral replication or provide neuroprotection in cell lines or studies in mice. None are currently advocated in treating human patients.
The use of rosmarinic acid, arctigenin, and oligosaccharides with degree of polymerization 6 from Gracilaria sp. or Monostroma nitidum is effective in a mouse model of Japanese encephalitis.
Curcumin has been shown to impart neuroprotection against Japanese encephalitis infection in an in vitro study. Curcumin possibly acts by decreasing cellular reactive oxygen species levels, restoration of cellular membrane integrity, decreasing pro-apoptotic signaling molecules, and modulating cellular levels of stress-related proteins. It has also been shown that the production of infective viral particles from previously infected neuroblastoma cells is reduced, which is achieved by inhibition of the ubiquitin-proteasome system.
Minocycline in mice resulted in marked decreases in the levels of several markers, viral titer, and the level of proinflammatory mediators and also prevented blood–brain barrier damage.
Evolution
It is theorized that the virus may have originated from an ancestral virus in the mid-1500s in the Malay Archipelago region and evolved there into five different genotypes that spread across Asia. The mean evolutionary rate has been estimated to be 4.35 × 10⁻⁴ (range: 3.49 × 10⁻⁴ to 5.30 × 10⁻⁴) nucleotide substitutions per site per year.
Outbreak history
The clinical recognition and recording of Japanese encephalitis (JE) trace back to the 19th century when recurring encephalitis outbreaks were noted during Japan's summer months. The first clinical case of JE was documented in 1871 in Japan. However, it wasn't until 1924, during a major outbreak involving over 6,000 cases, that JE's severity and potential for widespread impact became apparent. Subsequent outbreaks in Japan were recorded in 1927, 1934, and 1935, each contributing to a deeper understanding of the disease and its transmission patterns. The spread of JE extended beyond Japan over the following decades, impacting numerous countries across Asia. On the Korean Peninsula, the first JE cases were reported in 1933, and mainland China documented its initial cases in 1940. The virus reached the Philippines in the early 1950s and continued its westward spread, with Pakistan recording cases in 1983, marking JE's furthest westward extension. By 1995, JE cases had reached Papua New Guinea and northern Australia (specifically the Torres Strait), representing the virus's southernmost range. According to the World Health Organization (WHO), JE is also endemic to the Western Pacific Islands, but cases are rare, possibly due to an enzootic cycle that does not sustain continuous viral transmission. Epidemics in these islands likely occur only when the virus is introduced from other JE-endemic regions.
| Biology and health sciences | Viral diseases | Health |
1432541 | https://en.wikipedia.org/wiki/Charophyta | Charophyta | Charophyta is a group of freshwater green algae, called charophytes, sometimes treated as a division, and sometimes as a superdivision or an unranked clade. The terrestrial plants, the Embryophyta, emerged deep within Charophyta, possibly from terrestrial unicellular charophytes, with the class Zygnematophyceae as a sister group.
With the Embryophyta now cladistically placed within the Charophyta, the latter name is a synonym of Streptophyta. The sister group of the charophytes is the Chlorophyta. In some charophyte groups, such as the Zygnematophyceae or conjugating green algae, flagella are absent and sexual reproduction does not involve free-swimming flagellate sperm. Flagellate sperm, however, are found in stoneworts (Charales) and Coleochaetales, orders of parenchymatous charophytes that are the closest relatives of the land plants, where flagellate sperm are also present in all except the conifers and flowering plants. Fossil stoneworts of early Devonian age that are similar to those of the present day have been described from the Rhynie chert of Scotland. Somewhat different charophytes have also been collected from the Late Devonian (Famennian) Waterloo Farm lagerstätte of South Africa. These include two species each of Octochara and Hexachara, which are the oldest fossils of charophyte axes bearing in situ oogonia.
The name comes from the genus Chara. The finding that the Embryophyta actually emerged within the group has not resulted in the name Charophyta being restricted to a much smaller side branch; that more restricted group would correspond to the Charophyceae.
Description
The Zygnematophyceae, formerly known as the Conjugatophyceae, generally possess two fairly elaborate chloroplasts in each cell, rather than many discoid ones. They reproduce asexually by the development of a septum between the two cell-halves or semi-cells (in unicellular forms, each daughter-cell develops the other semi-cell afresh) and sexually by conjugation, or the fusion of the entire cell-contents of the two conjugating cells. The saccoderm desmids and the placoderm or true desmids, unicellular or filamentous members of the Zygnematophyceae, are dominant in non-calcareous, acid waters of oligotrophic or primitive lakes (e.g. Wastwater), or in lochans, tarns and bogs, as in the West of Scotland, Eire, parts of Wales and of the Lake District.
Klebsormidium, the type of the Klebsormidiophyceae, is a simple filamentous form with circular, plate-like chloroplasts, reproducing by fragmentation, by dorsiventral, biciliate swarmers and, according to Wille, a twentieth-century algologist, by aplanospores. Sexual reproduction is simple and isogamous (the male and female gametes are outwardly indistinguishable).
The Charales (Charophyceae), or stoneworts, are freshwater and brackish algae with slender green or grey stems; the grey colour of many species results from the deposition of lime on the walls, masking the green colour of the chlorophyll. The main stems are slender and branch occasionally. Lateral branchlets occur in whorls at regular intervals up the stem, and the plants are attached by rhizoids to the substrate. The reproductive organs consist of antheridia and oogonia, though the structures of these organs differ considerably from the corresponding organs in other algae. As a result of fertilization, a protonema is formed, from which the sexually reproducing alga develops.
A new terrestrial genus found in sandy soil in the Czech Republic, Streptofilum, may belong in its own class due to its unique phylogenetic position. A cell wall is absent; instead, the cell covering consists of many layers of specific scales. It is a short, filamentous and unbranched alga surrounded by a mucilaginous sheath, which often disintegrates into diads and unicells.
Reproduction
The cells in Charophyta algae are all haploid, except during sexual reproduction, where a diploid unicellular zygote is produced. The zygote becomes four new haploid cells through meiosis, which will develop into new algae. In multicellular forms these haploid cells will grow into a gametophyte. In embryophytes (land plants) the zygote will instead give rise to a multicellular sporophyte.
Apart from land plants, retention of the zygote is known only from some species in one group of green algae, the coleochaetes. In these species the zygote is corticated by a layer of sterile gametophytic cells. Another similarity is the presence of sporopollenin in the inner wall of the zygote. In at least one species, it receives nourishment from the gametophyte through placental transfer cells.
Classification
Charophyta are complex green algae that form a sister group to the Chlorophyta and within which the Embryophyta emerged. The chlorophyte and charophyte green algae and the embryophytes or land plants form a clade called the green plants or Viridiplantae, that is united among other things by the absence of phycobilins, the presence of chlorophyll a and chlorophyll b, cellulose in the cell wall and the use of starch, stored in the plastids, as a storage polysaccharide. The charophytes and embryophytes share several traits that distinguish them from the chlorophytes, such as the presence of certain enzymes (class I aldolase, Cu/Zn superoxide dismutase, glycolate oxidase, flagellar peroxidase), lateral flagella (when present), and, in many species, the use of phragmoplasts in mitosis. Thus Charophyta and Embryophyta together form the clade Streptophyta, excluding the Chlorophyta.
Charophytes such as Palaeonitella cranii and possibly the yet unassigned Parka decipiens are present in the fossil record of the Devonian. Palaeonitella differed little from some present-day stoneworts.
Cladogram
There is an emerging consensus on green algal relationships, mainly based on molecular data. The Mesostigmatophyceae (including Spirotaenia, and Chlorokybophyceae) are at the base of charophytes (streptophytes). The cladograms below show consensus phylogenetic relationships based on plastid genomes and a new proposal for a third phylum of green plants based on analysis of nuclear genomes.
Mesostigmatophyceae s.l. in the cladograms corresponds to a clade of a narrower circumscription, Mesostigmatophyceae s.s., and a separate class Chlorokybophyceae, as used by AlgaeBase.
The Mesostigmatophyceae are not filamentous, but the other basal charophytes (streptophytes) are.
| Biology and health sciences | Green algae | null |
1432856 | https://en.wikipedia.org/wiki/Mesoarchean | Mesoarchean | The Mesoarchean (also spelled Mesoarchaean) is a geologic era in the Archean Eon, spanning the period from 3,200 to 2,800 million years ago, which contains the first evidence of modern-style plate subduction and expansion of microbial life. The era is defined chronometrically and is not referenced to a specific level in a rock section on Earth.
Tectonics
The Mesoarchean era is thought to be the birthplace of modern-style plate subduction, based on geologic evidence from the Pilbara Craton in western Australia. A convergent margin with a modern-style oceanic arc existed at the boundary between West and East Pilbara approximately 3.12 Ga. By 2.97 Ga, the West Pilbara Terrane converged with and accreted onto the East Pilbara Terrane. A supercontinent, Vaalbara, may have existed in the Mesoarchean.
Environmental conditions
Analysis of oxygen isotopes in Mesoarchean cherts has been helpful in reconstructing Mesoarchean surface temperatures. These cherts have led researchers to estimate an ocean temperature of around 55–85 °C (131–185 °F), while other studies of weathering rates suggest average temperatures below 50 °C (122 °F).
The Mesoarchean atmosphere contained high levels of atmospheric methane and carbon dioxide, which could be an explanation for the high temperatures during this era. Atmospheric dinitrogen content in the Mesoarchean is thought to have been similar to today, suggesting that nitrogen did not play an integral role in the thermal budget of ancient Earth.
The Pongola glaciation occurred around 2.9 Ga, from which there is evidence of ice extending to a palaeolatitude (latitude based on the magnetic field recorded in the rock) of 48 degrees. This glaciation was likely not triggered by the evolution of photosynthetic cyanobacteria, which likely occurred in the interval between the Huronian glaciations and the Makganyene glaciation.
Early microbial life
Microbial life with diverse metabolisms expanded during the Mesoarchean era and produced gases that influenced early Earth's atmospheric composition. Cyanobacteria produced oxygen gas, but oxygen did not begin to accumulate in the atmosphere until later in the Archean. Small oases of relatively oxygenated water did exist in some nearshore shallow marine environments by this era, however.
| Physical sciences | Geological timescale | Earth science |
1434441 | https://en.wikipedia.org/wiki/Strategy%20%28game%20theory%29 | Strategy (game theory) | In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship.
The term strategy is typically used to mean a complete algorithm for playing a game, telling a player what to do for every possible situation. A player's strategy determines the action the player will take at any stage of the game. However, the idea of a strategy is often confused or conflated with that of a move or action, because of the correspondence between moves and pure strategies in most games: for any move X, "always play move X" is an example of a valid strategy, and as a result every move can also be considered to be a strategy. Other authors treat strategies as a different type of object from actions, and therefore as distinct.
It is helpful to think about a "strategy" as a list of directions, and a "move" as a single turn on the list of directions itself. This strategy is based on the payoff or outcome of each action. The goal of each agent is to consider their payoff based on a competitor's action. For example, competitor A can assume competitor B enters the market. From there, competitor A compares the payoffs it receives by entering and not entering. The next step is to assume competitor B does not enter and then consider which payoff is better based on whether competitor A chooses to enter or not enter. This technique can identify dominant strategies, in which one action maximizes a player's payoff no matter what the competitor does; a small sketch of this check follows.
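The following MATLAB (or Octave) lines sketch the dominance check just described, using an invented payoff matrix for competitor A (the numbers are hypothetical, chosen only for illustration):
% Rows: competitor A's choice (1 = enter, 2 = stay out).
% Columns: competitor B's choice (1 = enter, 2 = stay out).
A = [2 5; % A's payoffs if A enters (hypothetical values)
     0 0]; % A's payoffs if A stays out (hypothetical values)
% Entering is dominant for A if it is at least as good against every
% choice by B, and strictly better against at least one:
dominant = all(A(1, :) >= A(2, :)) && any(A(1, :) > A(2, :)); % true here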
A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player.
Strategy set
A player's strategy set defines what strategies are available for them to play.
A player has a finite strategy set if they have a number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player, made without knowledge of the other's move (not as a response), so each player has the finite strategy set {rock, paper, scissors}.
A strategy set is infinite otherwise. For instance the cake cutting game has a bounded continuum of strategies in the strategy set {Cut anywhere between zero percent and 100 percent of the cake}.
In a dynamic game, a game played over a series of periods, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject.
In a Bayesian game, or games in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take for any possible private information.
Choosing a strategy set
In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution.
For instance, strictly speaking in the Ultimatum game a player can have strategies such as: Reject offers of ($1, $3, $5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.
Pure and mixed strategies
A pure strategy provides a complete definition of how a player will play a game. A pure strategy can be thought of as a single concrete plan of play, contingent on the observations the player makes during the course of the game. In particular, it determines the move a player will make for any situation they could face. A player's strategy set is the set of pure strategies available to that player.
A mixed strategy is an assignment of a probability to each pure strategy. Mixed strategies are often used when the game does not admit a rational choice of any single pure strategy; they allow a player to randomly select a pure strategy. (See the following section for an illustration.) Since probabilities are continuous, there are infinitely many mixed strategies available to a player. And since probabilities are assigned to strategies for a specific player, when discussing the payoffs of certain scenarios the payoff must be referred to as the "expected payoff".
Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0.
A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. (Totally mixed strategies are important for equilibrium refinement such as trembling hand perfect equilibrium.)
Mixed strategy
Illustration
In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002). It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to kicker and -1 to goalie).
This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies—for example, (Left, Left) is not an equilibrium because the Kicker would deviate to Right and increase his payoff from 0 to 1.
The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Left Kick and Right Kick are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g= 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3).
In equilibrium, the kicker kicks to their best side only 1/3 of the time. That is because the goalie is guarding that side more. Also, in equilibrium, the kicker is indifferent which way they kick, but for it to be an equilibrium they must choose exactly 1/3 probability.
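These indifference conditions can be checked numerically. A minimal MATLAB (or Octave) sketch, with the payoff matrices taken from the description above:
% Kicker payoffs: rows Kick Left / Kick Right, columns Lean Left / Lean Right.
K = [0 2; 1 0];
% Goalie payoffs for the same action pairs.
G = [0 -2; -1 0];
% The goalie's Lean Left probability g equalizes the kicker's two row payoffs:
g = (K(2,2) - K(1,2)) / (K(1,1) - K(1,2) - K(2,1) + K(2,2)); % g = 2/3
% The kicker's Kick Left probability k equalizes the goalie's two column payoffs:
k = (G(2,2) - G(2,1)) / (G(1,1) - G(1,2) - G(2,1) + G(2,2)); % k = 1/3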
Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that they do randomize, and that kickers kick to their favored side 45% of the time and goalies lean to that side 57% of the time. Their article is well-known as an example of how people in real life use mixed strategies.
Significance
In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure strategies (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2.
Interpretations of mixed strategies
During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since they are weak Nash equilibria, and a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability.
Game theorist Ariel Rubinstein describes alternative ways of understanding the concept. The first, due to Harsanyi (1973), is called purification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors.
A second interpretation imagines the game players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen by each population. However, this does not provide any justification for the case when players are individual agents.
Later, Aumann and Brandenburger (1995), re-interpreted Nash equilibrium as an equilibrium in beliefs, rather than actions. For instance, in rock paper scissors an equilibrium in beliefs would have each player believing the other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since it is possible in such an equilibrium for each player to actually play a pure strategy of Rock in each play of the game, even though over time the probabilities are those of the mixed strategy.
Behavior strategy
While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns at each information set a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree, while a behavior strategy can be seen as a stochastic path.
The relationship between mixed and behavior strategies is the subject of Kuhn's theorem, a behavioral outlook on traditional game-theoretic hypotheses. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true.
A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997) with their Absent-Minded Driver game.
Outcome equivalence
Outcome equivalence relates the mixed and behavior strategies of Player i to the pure strategies of Player i's opponent. Outcome equivalence is defined as the situation in which, for any mixed and behavior strategy that Player i takes, in response to any pure strategy that Player i's opponent plays, the outcome distributions of the mixed and behavior strategy must be equal. This equivalence can be described by the following formula: Q^(U(i), S(-i))(z) = Q^(β(i), S(-i))(z), where U(i) describes Player i's mixed strategy, β(i) describes Player i's behavior strategy, and S(-i) is the opponent's strategy.
Strategy with perfect recall
Perfect recall is defined as the ability of every player in the game to remember and recall all past actions within the game. Perfect recall is required for equivalence because, in finite games with imperfect recall, there will be mixed strategies of Player i for which there is no equivalent behavior strategy. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, this game is based on the decision-making of a driver with imperfect recall, who needs to take the second exit off the highway to reach home but does not remember which intersection they are at when they reach it.
Without perfect information (i.e. with imperfect information), players make a choice at each decision node without knowledge of the decisions that have preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-Minded Driver game. With perfect recall and information, the driver has a single pure strategy, which is [continue, exit], as the driver is aware of which intersection (or decision node) they are at when they arrive at it. On the other hand, looking at the planning-optimal stage only, the maximum payoff is achieved by continuing at both intersections with some probability, maximized at p = 2/3, as sketched below. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal and extended form games.
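A small numeric check of that planning-stage claim, in MATLAB (or Octave), assuming the payoffs from Piccione and Rubinstein's example (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both):
% Expected planning-stage payoff when the driver continues with
% probability p at each intersection: E(p) = 4*p*(1-p) + 1*p^2.
p = linspace(0, 1, 1001); % grid over continuation probabilities
E = 4 .* p .* (1 - p) + p.^2;
[Emax, iMax] = max(E); % maximum E = 4/3, attained near p = 2/3
pStar = p(iMax); % approximately 0.667 on this grid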
| Mathematics | Game theory | null |
1435586 | https://en.wikipedia.org/wiki/Aldabra%20giant%20tortoise | Aldabra giant tortoise | The Aldabra giant tortoise (Aldabrachelys gigantea) is a species of tortoise in the family Testudinidae and genus Aldabrachelys. The species is endemic to the Seychelles, with the nominate subspecies, A. g. gigantea, native to Aldabra atoll. It is one of the largest tortoises in the world. Historically, giant tortoises were found on many of the western Indian Ocean islands, as well as Madagascar, and the fossil record indicates giant tortoises once occurred on every continent and many islands with the exception of Australia and Antarctica.
Many of the Indian Ocean species were thought to have been driven to extinction by over-exploitation by European sailors, and they were all seemingly extinct by 1840 with the exception of the Aldabran giant tortoise on the island atoll of Aldabra. Although some remnant individuals of A. g. hololissa and A. g. arnoldi may remain in captivity, in recent times these have all been reduced to subspecies of A. gigantea.
Description
The carapace of A. gigantea is brown or tan in color with a high, domed shape. The species has stocky, heavily scaled legs to support its heavy body. The neck of the Aldabra giant tortoise is very long, even for its great size, which helps the animal to exploit tree branches up to a meter from the ground as a food source. Similar in size to the famous Galápagos giant tortoise, its carapace averages in length. Males have an average weight of .
Females are generally smaller than males, with average specimens measuring in carapace length and weighing . Medium-sized specimens in captivity were reported as in body mass. Another study found body masses of up to most commonplace.
Nomenclature and systematics
This species is widely referred to as Aldabrachelys gigantea, but in recent times, attempts were made to use the name Dipsochelys as Dipsochelys dussumieri. After a debate that lasted two years with many submissions, the ICZN eventually decided to conserve the name Testudo gigantea over this recently used name (ICZN 2013). This also affected the genus name for the species, establishing Aldabrachelys gigantea as nomen protectum.
Four subspecies are currently recognized. A trinomial authority in parentheses indicates that the subspecies was originally described in a genus other than Aldabrachelys:
A. g. gigantea , Aldabra giant tortoise from the Seychelles island of Aldabra
A. g. arnoldi , Arnold's giant tortoise from the Seychelles island of Mahé
A. g. daudinii † , Daudin's giant tortoise, from the Seychelles island of Mahé (extinct 1850)
A. g. hololissa , Seychelles giant tortoise, from the Seychelles islands of Cerf, Cousine, Frégate, Mahé, Praslin, Round, and Silhouette
The subspecific name, daudinii, is in honor of French zoologist François Marie Daudin.
Genetic evidence suggests that A. gigantea is most closely related to the extinct giant tortoise Aldabrachelys abrupta from Madagascar, from which it is estimated to have diverged approximately 4.5 million years ago.
Range and distribution
The main population of the Aldabra giant tortoise resides on the islands of the Aldabra Atoll in the Seychelles. The atoll has been protected from human influence and is home to some 100,000 giant tortoises, the world's largest population of the animal. Smaller populations of A. gigantea in the Seychelles exist on Frégate Island and in the Sainte Anne Marine National Park (e.g. Moyenne Island), where they are a popular tourist attraction.
Another isolated population of the species resides on the island of Changuu, near Zanzibar. Other captive populations exist in conservation parks in Mauritius and Rodrigues. The tortoises exploit many different kinds of habitat, including grasslands, low scrub, mangrove swamps, and coastal dunes.
Ecology
Habitat
A peculiar habitat has coevolved due to the grazing pressures of the tortoises: "tortoise turf", a commingling of more than 20 species of grasses and herbs. Many of these distinct plants are naturally dwarfed and grow their seeds not from the tops of the plants, but closer to the ground to avoid the tortoises' close-cropping jaws.
As the largest animal in its environment, the Aldabra tortoise performs a role similar to that of the elephant. Their vigorous search for food fells trees and creates pathways used by other animals.
Feeding ecology
Primarily herbivores, Aldabra giant tortoises eat grasses, leaves, woody plant stems, and fruit. They occasionally indulge in small invertebrates and carrion, even eating the bodies of other dead tortoises. In captivity, Aldabra giant tortoises are known to consume fruits such as apples and bananas, as well as compressed vegetable pellets. In 2020, a female Aldabra giant tortoise on Fregate Island was observed hunting and eating a juvenile lesser noddy, indicating that the species was in the process of learning to catch birds.
Little fresh water is available for drinking in the tortoises' natural habitat, so they obtain most of their moisture from their food.
The Aldabra giant tortoise has two main varieties of shells, related to their habitat. Specimens living in habitats with food available primarily on the ground have more dome-shaped shells with the front extending downward over the neck. Those living in an environment with food available higher above the ground have more flattened top shells with the front raised to allow the neck to extend upward freely.
Tortoise turf
As the Aldabra giant tortoise is primarily herbivorous, it spends much of its time browsing for food in its well-vegetated surroundings. The Aldabra giant tortoise is known to frequent areas commonly known as "tortoise turf", the distinctive community of dwarfed grasses and herbs described above.
Behavior
Aldabra tortoises are found both individually and in herds, which tend to gather mostly on open grasslands. They are most active in the mornings, when they spend time grazing and browsing for food. They dig wallows, hide under shade trees or in small caves, as well as submerge themselves in pools to keep cool during the heat of the day.
Lifespan
Large tortoises are among the longest-lived animals. Some individual Aldabra giant tortoises are thought to be over 200 years of age, but this is difficult to verify because they tend to outlive their human observers. Adwaita was reputedly one of four tortoises brought by British seamen from the Seychelles Islands as gifts to Robert Clive of the British East India Company in the 18th century, and came to Calcutta Zoo in 1875. At his death in March 2006 at the Kolkata (formerly Calcutta) Zoo in India, Adwaita was reputed to have reached the longest ever measured lifespan of 255 years (birth year 1750).
As of 2022, Jonathan, a Seychelles giant tortoise, is thought to be the oldest living giant tortoise at the age of years. Esmeralda, an Aldabra giant tortoise, is second at the age of years, since the death of Harriet, a Galapagos giant tortoise, at 175. An Aldabra giant tortoise living on Changuu off Zanzibar is reportedly years old.
Breeding
Mating takes place between February and May, and from July to September females lay between 9 and 25 hard-shelled eggs in a 30 cm deep nest. Usually, fewer than half of the eggs are fertile. Females can produce multiple clutches of eggs in a year. After incubating for about eight months, the tiny, independent young hatch between October and December.
In captivity, oviposition dates vary. Tulsa Zoo maintains a small herd of Aldabra tortoises and they have reproduced several times since 1999. One female typically lays eggs in November and again in January, providing the weather is warm enough to go outside for laying. The zoo also incubates their eggs artificially, keeping two separate incubators at 27 °C (81 °F) and 30 °C (86 °F). On average, the eggs kept at the latter temperature hatch in 107 days.
Conservation
The Aldabra giant tortoise has an unusually long history of organized conservation. Albert Günther of the British Museum (who later moved to the Natural History Museum, London) enlisted Charles Darwin and other famous scientists to help him, and worked with the government of Mauritius to establish a preserve at the end of the 19th century. The related, but distinct, subspecies of giant tortoise from the Seychelles islands, the Seychelles giant tortoise A. g. hololissa and Arnold's giant tortoise A. g. arnoldi, were the subject of a captive-breeding and reintroduction program by the Nature Protection Trust of Seychelles.
A reference genome and low-coverage sequencing analyses have been used to reveal within- and among-island genetic differentiation within the Aldabra population, and to assign likely origins to zoo-housed individuals. These analyses differentiated individuals sampled on Malabar and Grande Terre and resolved the exact origin of zoo-housed individuals.
| Biology and health sciences | Turtles | Animals |
29043290 | https://en.wikipedia.org/wiki/Repowering | Repowering | Repowering is the process of replacing older power stations with newer ones that have either a greater nameplate capacity or higher efficiency, resulting in a net increase in power generated. Repowering can happen in several different ways: it can be as small as swapping out a boiler, or as large as replacing the entire system to create a more powerful one. There are many upsides to repowering.
Refurbishing old equipment is beneficial in itself, and it also reduces the cost of keeping the plant running. With lower costs and a higher energy output, the process can be highly beneficial.
Examples
Wind power
Repowering a wind farm means replacing older, generally smaller, wind turbines with newer, generally larger, and more efficient designs. New innovations in wind power technology have dramatically increased the power output of new turbines compared with older designs. By repowering old wind turbines with new upgrades, the increased size and efficiency of the new turbines will increase the amount of energy that can be generated from a given wind farm. In the United States in 2017, 2131 MW of wind plant repowering was completed.
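The arithmetic behind such gains is simple capacity-times-capacity-factor accounting. The sketch below uses hypothetical turbine counts, ratings, and capacity factors (none of these numbers come from this article) to show how fewer, larger machines can raise a site's annual output:

    # Rough annual energy production (AEP) before and after repowering a site.
    # All turbine counts, ratings, and capacity factors are illustrative assumptions.
    HOURS_PER_YEAR = 8760

    def annual_energy_gwh(n_turbines, rating_mw, capacity_factor):
        # AEP = installed capacity x capacity factor x hours per year.
        return n_turbines * rating_mw * capacity_factor * HOURS_PER_YEAR / 1000

    # 1980s-era site: many small 100 kW machines at a low capacity factor.
    old = annual_energy_gwh(n_turbines=200, rating_mw=0.1, capacity_factor=0.22)
    # Repowered site: far fewer 3 MW machines at a higher capacity factor.
    new = annual_energy_gwh(n_turbines=20, rating_mw=3.0, capacity_factor=0.40)

    print(f"before: {old:.0f} GWh/yr, after: {new:.0f} GWh/yr")
    # before: 39 GWh/yr, after: 210 GWh/yr

Even with a smaller turbine count, the repowered site's higher installed capacity and capacity factor multiply through to several times the annual energy.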
According to a study in California the potential benefits of repowering wind plants by replacing old turbines are:
Avian mortality may be reduced due to the installation of a smaller number of larger wind turbines.
Reduced aesthetic concerns to the extent that modern wind projects are deemed more visually appealing, even if they are taller.
“Increased renewable energy production due to the higher average capacity factors typical of new wind facilities.”
Use of existing infrastructure (for example, roads, substations), resulting in lower installation costs relative to new “greenfield” wind power projects.
“Use of newer wind turbine technology that can better support the electrical grid with better power quality.”
Increased local and state tax base, plus positive construction employment opportunities.
Countries like Germany and Denmark, which have a large number of wind turbines installed relative to their total land area, have resorted to repowering older turbines in order to increase wind power capacity and generation. Both the capacity and the use of wind farms have grown since the 1990s.
California has many aging wind turbines that would be effective to repower, but there seems to be a lack of economic incentive to repower many sites. Many smaller turbines in California were built in the 1980s with a nameplate capacity of 50–100 kW, 10–40 times smaller than that of an average modern wind turbine. Although many barriers continue to hinder rapid wind-project repowering, a primary barrier is simply that many existing, aging wind facilities are more profitable in the near term if they continue operating than if they repower with new wind turbines.
By 2007, California had repowered 365 MW of wind plants, which is only 20% of the potential 1,640 MW wind capacity that could be upgraded.
Coal-fired Power Plant to Gas
With new environmental regulation in the United States, coal-fired power plants are becoming obsolete; as many as three-fourths of coal-fired power plants are being shut down. Short-term options include retiring a plant or quickly converting its boiler to direct firing with natural gas. A longer-term option is repowering these old coal-burning power plants with gas-fired systems. It is estimated that as much as 30 gigawatts (GW) of existing U.S. power generation capacity could be lost through plant closings due to new U.S. Environmental Protection Agency (EPA) regulations. EPRI studies have found that repowering can save about 20 percent of the capital cost compared with building brand-new power plants.
The configuration of these plants involves replacing the old coal boiler with a gas turbine (GT). The gas turbine's exhaust heat feeds a heat recovery steam generator (HRSG), whose steam output drives a steam turbine, increasing electricity production and the overall efficiency of the plant.
Gas turbine (GT) and heat recovery steam generator (HRSG) technology has been in use in many repowering projects over the past 20 years in the United States alone. Increasing environmental regulation by the United States government and lower fuel prices have made GT/HRSG systems an attractive option for renewing many old coal-fired power plants. Modern gas turbines operate at higher efficiencies, and adding a heat recovery steam generator (HRSG) raises overall plant efficiency to 40 to 50 percent (HHV), above the range of most coal-fired plants, reducing fuel consumption and lowering plant emissions.
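The efficiency gain from this GT-plus-HRSG chain can be sketched with the idealized combined-cycle relation (a textbook upper bound, not a figure from this article), in which the steam cycle runs entirely on the gas turbine's rejected heat:

    \eta_{cc} = \eta_{GT} + (1 - \eta_{GT})\,\eta_{ST}

With, say, \eta_{GT} = 0.35 and \eta_{ST} = 0.30, this gives \eta_{cc} \approx 0.55; real repowered plants come in lower (the 40 to 50 percent HHV range cited above) because the HRSG recovers only part of the exhaust heat and HHV accounting charges the fuel's latent heat against the plant.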
Siemens Corporation also uses this technology, combining the gas turbine (GT) and heat recovery steam generator (HRSG) with a steam turbine (ST) in combined cycle power plants to produce highly efficient power generation facilities. Existing direct-fired plants can adopt this advanced cycle concept by adding a GT and an HRSG. This so-called repowering scheme makes the existing power generation facility about as efficient as a modern combined cycle power plant.
Siemens Corporation developed two ways of repowering these old coal plants: full repowering and parallel repowering. Full repowering is used when a plant's boiler has reached the end of its service life: the original boiler is replaced, and a gas turbine (GT) and heat recovery steam generator (HRSG) are added. Parallel repowering achieves slightly lower efficiency than the full repowering concept, but because the steam turbine is fed by two independent steam sources, it provides higher fuel flexibility and also greater flexibility with respect to load variations.
An example of a repowering project is Fluor's update of the Seward plant, a 521 MW coal-fired power plant that burns waste coal. The project took three existing pulverized coal-fired boilers out of service and installed two new clean coal technology circulating fluidized bed (CFB) boilers: two Alstom CFB boilers along with an Alstom steam turbine generator. The plant is now the largest waste coal generator in the world, with a capacity of 521 MW, and runs on 11,000 tons of waste coal per day.
| Technology | Concepts | null |
35804058 | https://en.wikipedia.org/wiki/Thick%20disk | Thick disk | The thick disk is one of the structural components of about 2/3 of all disk galaxies, including the Milky Way. It was discovered first in external edge-on galaxies. Soon after, it was proposed as a distinct galactic structure in the Milky Way, different from the thin disk and the halo, in the 1983 article by Gilmore & Reid. It is supposed to dominate the stellar number density between above the galactic plane and, in the solar neighborhood, is composed almost exclusively of older stars. Its stellar chemistry and stellar kinematics (composition and motion of its stars) are also said to set it apart from the thin disk. Compared to the thin disk, thick disk stars typically have significantly lower levels of metals, that is, of elements other than hydrogen and helium.
The thick disk is a source of early kinematic and chemical evidence for a galaxy's composition and thus is regarded as a very significant component for understanding galaxy formation.
With the availability of observations at larger distances away from the Sun, more recently it has become apparent that the Milky Way thick disk does not have the same chemical and age composition at all galactic radii. It was found instead that it is metal poor inside the solar radius, but becomes more metal rich outside it. Additionally, recent observations have revealed that the average stellar age of thick disk stars quickly decreases as one moves from the inner to the outer disk.
Origin
Studies have shown that a diversity of thick disk formation scenarios is possible. Various scenarios for the formation of this structure have been proposed, including:
Thick disks come from the heating of the thin disk
It is a result of a merger event between the Milky Way and a massive dwarf galaxy
More energetic stars migrate outwards from the inner galaxy to form a thick disk at larger radii
The disk forms thick at high redshift with the thin disk forming later
Disk flaring combined with inside-out disk formation
Scattering by massive clumps: stars born in massive gas clumps tend to be scattered to a thick disk and to be enriched in alpha-elements, while those formed out of these clumps form a thin disk and are alpha-poor
Dispute
Although the thick disk is mentioned as a bona fide galactic structure in numerous scientific studies, and it is even thought to be a common component of disk galaxies in general, its nature is still under dispute.
The view of the thick disk as a single separate component has been questioned by a series of papers that describe the galactic disk with a continuous spectrum of components with different thicknesses.
| Physical sciences | Basics_2 | Astronomy |
3871889 | https://en.wikipedia.org/wiki/Freshwater%20butterflyfish | Freshwater butterflyfish | The freshwater butterflyfish or African butterflyfish (Pantodon buchholzi) is a species of osteoglossiform fish native to freshwater habitats in the Niger and Congo basins of western and central Africa. It is the only extant species in the family Pantodontidae. It is not closely related to saltwater butterflyfishes.
Evolution
The freshwater butterflyfish is the last surviving member of a family that was diverse during the Late Cretaceous period, with many pantodontid genera known from the Cenomanian-aged Sannine Formation of Lebanon. These early pantodontids inhabited a marine environment off the coast of northern Africa and are the earliest known marine osteoglossomorphs, suggesting that the ancestors of Pantodon colonized freshwater habitats independently of other osteoglossiforms. These Cretaceous marine pantodontids appear to vary in their relation to the extant genus; of them, the closest relative and sister genus to Pantodon appears to be Palaeopantodon.
Populations of freshwater butterflyfish in the Niger vs. the Congo basins appear virtually identical in morphology, but mtDNA divergence estimates suggest an extreme level of genetic divergence between them, dating to the Late Paleocene (57 million years ago) or earlier. This is one of the most dramatic cases of morphological stasis (in which two allopatric populations remain similar in appearance despite achieving a great level of genetic divergence from one another) known in a vertebrate taxon, and may suggest some level of cryptic speciation within the genus.
Genetic studies suggest that the freshwater butterflyfish has experienced one of the greatest losses of whole Hox gene clusters in a teleost fish, with only 5 Hox clusters present after a presumed loss of 3 Hox clusters in the past. Despite this, it retains a similar overall number of Hox genes to other teleosts, due to a high proportion of duplicated genes in certain clusters. Due to its small size, widespread availability in captivity, and relatively small genome, the freshwater butterflyfish may serve as an attractive model organism, despite being studied less compared to other model fish taxa, which are clupeocephalans.
Description and habits
The freshwater butterflyfish is small, no more than in length, with very large pectoral fins. It has a large and well-vascularized swim bladder, enabling it to breathe air at the surface of the water. It is carnivorous, feeding primarily on aquatic insects and smaller fishes.
The freshwater butterflyfish is a specialized surface hunter. Its eyes are constantly trained to the surface and its upturned mouth is specifically adapted to capture small prey along the water's surface. If enough speed is built up in the water, a butterflyfish can jump and glide a small distance above the surface to avoid predation. It also wiggles its pectoral fins as it glides, with the help of specialized, enlarged pectoral muscles, an ability which earned the fish its common name.
When freshwater butterflyfish spawn, they produce a mass of large floating eggs at the surface. Fertilisation is believed to be internal. Eggs hatch in about seven days.
Distribution
Freshwater butterflyfish are found in slightly acidic, standing bodies of water in West Africa. They require a year-round temperature of . They are found in slow- to no-current areas with high amounts of surface foliage for cover. They are commonly seen in Lake Chad, the Congo Basin, the lower Niger, Cameroon, the Ogooue, and the upper Zambezi. They have also been seen in the Niger Delta, the lower Ogooue, and the lower Cross River.
In the aquarium
Freshwater butterflyfish are kept in large aquaria, although a single specimen should be kept as the only top-level fish, as they can be aggressive to their own kind and to others (such as hatchetfish) at surface level. The tops of the tanks must be tightly closed because of their jumping habits. They do better in a tank with live plants, especially ones that float near the surface, providing hiding places to reduce stress. They require a pH of 6.9–7.1, and a KH of 1–10. In aquaria, freshwater butterflyfish can grow to 5 inches long, and should be housed in 20 gallon long-style tanks (30.5 inches long) or larger. They should not be kept with fin-eating or aggressive fish, which may nip at their long, trailing fins. They eat any fish small enough to fit in their mouths, so they should be maintained with bottom-dwelling fish or top- and mid-dwelling fish too large to be bothered by them. They generally will not eat prepared food, and do best on a diet of live or possibly canned crickets and other insects, as well as live, gut-loaded feeder fish (goldfish should be avoided). They prefer still water, so filtration should not be too powerful.
| Biology and health sciences | Osteoglossiformes | Animals |
3875027 | https://en.wikipedia.org/wiki/Pelagic%20sediment | Pelagic sediment | Pelagic sediment or pelagite is a fine-grained sediment that accumulates as the result of the settling of particles to the floor of the open ocean, far from land. These particles consist primarily of either the microscopic, calcareous or siliceous shells of phytoplankton or zooplankton; clay-size siliciclastic sediment; or some mixture of these. Trace amounts of meteoric dust and variable amounts of volcanic ash also occur within pelagic sediments.
Based upon the composition of the ooze, there are three main types of pelagic sediments: siliceous oozes, calcareous oozes, and red clays.
The composition of pelagic sediments is controlled by three main factors. The first factor is the distance from major landmasses, which affects their dilution by terrigenous, or land-derived, sediment. The second factor is water depth, which affects the preservation of both siliceous and calcareous biogenic particles as they settle to the ocean bottom. The final factor is ocean fertility, which controls the amount of biogenic particles produced in surface waters.
Oozes
In the case of marine sediments, ooze does not refer to a sediment's consistency but to its composition, which directly reflects its origin. Ooze is pelagic sediment that consists of at least 30% microscopic remains of either calcareous or siliceous planktonic organisms. The remainder typically consists almost entirely of clay minerals. As a result, the grain size of oozes is often bimodal with a well-defined biogenic silt- to sand-size fraction and siliciclastic clay-size fraction. Oozes can be defined by and classified according to the predominant organisms that compose them. For example, there are diatom, coccolith, foraminifera, globigerina, pteropod, and radiolarian oozes. Oozes are also classified and named according to their mineralogy, i.e. calcareous or siliceous oozes. Whatever their composition, all oozes accumulate extremely slowly, at no more than a few centimeters per millennium.
Calcareous ooze is ooze that is composed of at least 30% of the calcareous microscopic shells—also known as tests—of foraminifera, coccolithophores, and pteropods. This is the most common pelagic sediment by area, covering 48% of the world ocean's floor. This type of ooze accumulates on the ocean floor at depths above the carbonate compensation depth. It accumulates more rapidly than any other pelagic sediment type, with a rate that varies from 0.3–5 cm/1000 yr.
Siliceous ooze is ooze that is composed of at least 30% of the siliceous microscopic "shells" of plankton, such as diatoms and radiolaria. Siliceous oozes often contain lesser proportions of either sponge spicules, silicoflagellates or both. This type of ooze accumulates on the ocean floor at depths below the carbonate compensation depth. Its distribution is also limited to areas with high biological productivity, such as the polar oceans, and upwelling zones near the equator. The least common type of sediment, it covers only 15% of the ocean floor. It accumulates at a slower rate than calcareous ooze: 0.2–1 cm/1000 yr.
Red and brown clays
Red clay, also known as either brown clay or pelagic clay, accumulates in the deepest and most remote areas of the ocean. It covers 38% of the ocean floor and accumulates more slowly than any other sediment type, at only 0.1–0.5 cm/1000 yr. Containing less than 30% biogenic material, it consists of sediment that remains after the dissolution of both calcareous and siliceous biogenic particles while they settled through the water column. These sediments consist of aeolian quartz, clay minerals, volcanic ash, subordinate residue of siliceous microfossils, and authigenic minerals such as zeolites, limonite and manganese oxides. The bulk of red clay consists of eolian dust. Accessory constituents found in red clay include meteorite dust, fish bones and teeth, whale ear bones, and manganese micro-nodules.
These pelagic sediments are typically bright red to chocolate brown in color. The color results from coatings of iron and manganese oxide on the sediment particles. In the absence of organic carbon, iron and manganese remain in their oxidized states and these clays remain brown after burial. When more deeply buried, brown clay may change into red clay due to the conversion of iron-hydroxides to hematite.
These sediments accumulate on the ocean floor within areas characterized by little planktonic production. The clays which comprise them were transported into the deep ocean in suspension, either in the air over the oceans or in surface waters. Both wind and ocean currents transported these sediments in suspension thousands of kilometers from their terrestrial source. As they were transported, the finer clays may have stayed in suspension for a hundred years or more within the water column before they settled to the ocean bottom. The settling of this clay-size sediment occurred primarily by the formation of clay aggregates by flocculation and by their incorporation into fecal pellets by pelagic organisms.
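To put the accumulation rates quoted in this section in perspective, a short conversion (a sketch: the 100 m thickness is an arbitrary choice, and each rate is one value picked from within the ranges given above):

    # Time needed to accumulate 100 m of pelagic sediment at representative rates.
    # Rates in cm per 1000 years, each chosen from the ranges quoted above.
    rates_cm_per_kyr = {
        "calcareous ooze": 1.0,   # range 0.3-5
        "siliceous ooze": 0.5,    # range 0.2-1
        "red clay": 0.3,          # range 0.1-0.5
    }

    thickness_cm = 100 * 100  # 100 m expressed in centimeters
    for sediment, rate in rates_cm_per_kyr.items():
        millions_of_years = thickness_cm / rate / 1000
        print(f"{sediment}: about {millions_of_years:.0f} million years")
    # calcareous ooze: about 10, siliceous ooze: about 20, red clay: about 33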
Distribution and average thickness of marine sediments
Classification of marine sediments by source of particles
| Physical sciences | Sedimentology | Earth science |
3875072 | https://en.wikipedia.org/wiki/Wels%20catfish | Wels catfish | The wels catfish ( or ; Silurus glanis), also called sheatfish or just wels, is a large species of catfish native to wide areas of central, southern, and eastern Europe, in the basins of the Baltic, Black and Caspian Seas. It has been introduced to Western Europe as a prized sport fish and is now found from the United Kingdom east to Kazakhstan and China and south to Greece and Turkey.
Etymology
The English common name comes from Wels, the common name of the species in German language. Wels is a variation of Old High German wal, from Proto-Germanic *hwalaz – the same source as for whale – from Proto-Indo-European *(s)kʷálos ('sheatfish').
Description
The wels catfish's mouth contains lines of numerous small teeth, two long barbels on the upper jaw and four shorter barbels on the lower jaw. It has a long anal fin that extends to the caudal fin, and a small sharp dorsal fin relatively far forward. The wels relies largely on hearing and smell for hunting prey (owing to its sensitive Weberian apparatus and chemoreceptors), although like many other catfish, the species exhibits a tapetum lucidum, providing its eyes with a degree of sensitivity at night, when the species is most active. With its sharp pectoral fins, it creates an eddy to disorient its victim, which the predator sucks into its mouth and swallows whole. The skin is very slimy. Skin colour varies with environment. Clear water will give the fish a black colour, while muddy water will often tend to produce green-brown specimens. The underside is always pale yellow to white in colour. Albinistic specimens are known to exist and are caught occasionally. With an elongated body-shape, wels are able to swim backwards like eels.
The female produces up to 30,000 eggs per kilogram of body weight. The male guards the nest until the brood hatches, which, depending on water temperature, can take from three to ten days. If the water level decreases too much or too fast the male has been observed to splash the eggs with its tail in order to keep them wet.
The wels catfish is a long-lived species; a 70-year-old specimen was captured during a recent study in Sweden.
Size
The wels is the largest freshwater fish in Europe and Western Asia, only exceeded by the anadromous Atlantic and beluga sturgeon. Most adult wels catfish are about long; fish longer than are a rarity. At they can weigh and at they can weigh .
Only under exceptionally good living circumstances can the wels catfish reach lengths of more than about . Examples include the record wels catfish of Kiebingen (near Rottenburg, Germany), which was long and weighed . Even larger specimens have been caught in Poland (2.61 m, 109 kg), the Czech Republic (2.64 m), the former Soviet states (the Dnieper River in Ukraine, the Volga River in Russia and the Ili River in Kazakhstan), France, Spain (in the Ebro), Italy (in the Po and Arno), Serbia (in Gruža Lake, where a long specimen weighing was caught on 21 June 2018, and the Danube river, where a catfish measuring and weighing was caught at Đerdap gorge in the same year), and Greece, where this fish was introduced a few decades ago. Greek wels grow well thanks to the mild climate, lack of competition, and good food supply.
The heaviest authenticated specimen, captured from the river Po by a Hungarian fisherman in 2010, weighed , although there are recent anecdotal reports of larger wels exceeding . Meanwhile, the longest wels on record was an unweighed specimen from the Po measuring , captured in 2023 .
The maximum total length may possibly exceed with a maximum weight of over . Such lengths are rare and unproven during the last century, but there is a somewhat credible report from the 19th century of a wels catfish of this size. Brehms Tierleben cites Heckel's and Kner's old reports from the Danube about specimens long and in weight, and Vogt's 1894 report of a specimen caught in Lake Biel which was long and weighed . In 1856, K. T. Kessler wrote about specimens from the Dnieper River which were over long and weighed up to . (According to the Hungarian naturalist Ottó Hermann [1835-1914], catfish of 300–400 kilograms were also caught in Hungary in past centuries from the Tisza river.)
Exceptionally large specimens are rumored to attack humans in rare instances. This claim was investigated by extreme angler Jeremy Wade in an episode of the Animal Planet television series River Monsters following his capture of three fish, two of about and one of , of which two attempted to attack him following their release. A report in the Austrian newspaper Der Standard on 5 August 2009, mentions a wels catfish dragging a fisherman near Győr, Hungary, under water by his right leg after he attempted to grab the fish in a hold. The man reported he barely escaped from the fish, which he estimated to have weighed over .
Diet
Like most freshwater bottom feeders, the wels catfish lives on annelid worms, gastropods, insects, crustaceans and fish. Larger specimens have also been observed to eat frogs, snakes, rats, voles, coypu and aquatic birds such as ducks, even cannibalising other catfish. Researchers at the University of Toulouse, France, in 2012 documented individuals of this species in an introduced environment lunging out of the water to feed on pigeons at the water's edge. 28% of the beaching behaviour observed and filmed in this study was successful in capturing a bird. Stable isotope analyses of catfish stomach contents using carbon-13 and nitrogen-15 revealed a highly variable dietary contribution of terrestrial birds. This is likely the result of the catfish adapting their behaviour to forage on novel prey in new environments following the species' introduction to the river Tarn in 1983, since this type of behaviour has not been reported within its native range. River populations can also eat red worms in the fall.
The wels catfish has also been observed taking advantage of large die-offs of Asian clams to feed on the dead clams at the surface of the water during the daytime. This opportunistic feeding highlights the adaptability of the wels catfish to new food sources, since the species is mainly a nocturnal bottom-feeder.
Distribution and ecology
The wels catfish lives in large, warm lakes and deep, slow-flowing rivers. It prefers to remain in sheltered locations such as holes in the riverbed, sunken trees, etc. It consumes its food in the open water or in the deep, where it can be recognized by its large mouth. Wels catfish are kept in fish ponds as food fish.
An unusual habitat for the species exists inside the Chernobyl exclusion zone, where a small population lives in abandoned cooling ponds and channels at a close distance to the decommissioned power plant. These catfish appear healthy, and are maintaining a position as top predators in the aquatic ecosystem of the immediate area.
As introduced species
There are concerns about the ecological impact of introducing the wels catfish to regions where it is not native. Following the introduction of wels catfish, populations of other fish species have undergone steep declines. Since its introduction in the Mequinenza Reservoir in 1974, it has spread to other parts of the Ebro basin, including its tributaries, especially the Segre River. Some endemic species of Iberian barbels, genus Barbus in the Cyprinidae that were once abundant, especially in the Ebro river, have disappeared due to competition with and predation by wels catfish. The ecology of the river has also changed, with a major growth in aquatic vegetation such as algae.
The wels catfish may have established a population in Santa Catarina, Brazil. They were imported from Hungary in 1988 and were washed into the Itajaí-Açu river after a flood caused their tanks to overflow. In 2006, a specimen weighing 86 kg (189.5 lbs) and 1.85 m (6 ft) long was captured in Blumenau, suggesting the catfish have survived and may be reproducing.
Conservation status
Although Silurus glanis is not considered globally endangered, its conservation status varies across the species' native distribution range. In the northern periphery of the distribution, the species has been declining over the last centuries; it became extinct in Denmark in the 1700s and in Finland in the 1800s. In Sweden it persists only in a few lakes and rivers, and is now considered near threatened. Recent genetic studies have furthermore found that the Swedish populations harbor low genetic diversity and are genetically isolated and differentiated from each other, highlighting the need for conservation attention.
As food
The wels is generally valued as food only when young. The flesh is more palatable when the fish weighs less than 15 kg (33 lb). Above this size, the fish is highly fatty and can additionally be loaded with toxic contaminants through bioaccumulation due to its position at the top of the food chain. Large specimens are not recommended for consumption, but are sought out as sport fish due to their combativeness.
Wels catfish can be provoked to bite a lure by the sound of a piece of wood plunging into the water, the clonk.
Attacks on people
Tabloids regularly report attacks by various catfish, which have primarily affected animals (often the catfish's role was only presumed). In April 2009, an Austrian fisherman was allegedly attacked by a catfish in one of the fishing lakes in Pér, near Győr, Hungary. However, the man reportedly managed to break free.
The Wels was the subject of an episode in the first season of the documentary television show River Monsters. Host Jeremy Wade concluded that Wels catfish in the area were not large enough to consume adult human beings, but could easily swallow a child. Wade documented instances of Wels catfish being aggressive towards humans, including a Wels he had just caught that "double[d] round" and attempted to bite his calf.
Similar stories occur in the works of older natural history writers. Alfred Brehm (1829–1884), a German naturalist, published his famous work The World of Animals in the 19th century. It was also translated into Hungarian at the beginning of the 20th century. In this, Brehm or the compiling Hungarian scientists write the following:
"Old Gesner’s (Conrad Gessner Swiss naturalist, 1516–1565) claim that catfish doesn’t spare humans either doesn’t just belong in the realm of tales, as we know of several cases that confirm that. Thus, Heckel and Kner mention that a catfish was caught at Bratislava, in the stomach of which the remains of a child corpse were found. [...] Fishermen credible to Antipa (probably Romanian zoologist Grigore Antipa, 1866–1949) told me that children bathing in the stomachs of catfish were caught in the bones of their hands and feet. - Communicates Vutskits (probably Hungarian zoologist :hu:Vutskits György, 1858–1929). - A Romanian fisherman penetrated the middle of the Danube with his boat because he wanted to take a bath. While bathing, a catfish caught his legs, which he could no longer pull out of the mouth of this big-mouthed monster, and so he got to the bottom of the water. A few days later, they came across the corpse of a dead fisherman whose legs were still in the catfish's mouth, but even the greedy robber could not release his victim's legs and drowned because of it".
Related species
Aristotle's catfish (Silurus aristotelis) from Greece, the only other native European catfish species beside Silurus glanis.
Amur catfish (Silurus asotus), introduced to European rivers
Giant Lake Biwa catfish (Silurus biwaensis) from Japan endemic to Lake Biwa.
Soldatov's catfish (Silurus soldatovi) from the Amur River, Russia
| Biology and health sciences | Siluriformes | Animals |
24484686 | https://en.wikipedia.org/wiki/Thunderbolt%20%28interface%29 | Thunderbolt (interface) | Thunderbolt is the brand name of a hardware interface for the connection of external peripherals to a computer. It was developed by Intel in collaboration with Apple. It was initially marketed under the name Light Peak, and first sold as part of an end-user product on 24 February 2011.
Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into two serial signals, and additionally provides DC power via a single cable. Up to six peripherals may be supported by one connector through various topologies. Thunderbolt 1 and 2 use the same connector as Mini DisplayPort (MDP), whereas Thunderbolt 3, 4, and 5 use the USB-C connector, and support USB devices.
Description
Thunderbolt controllers multiplex one or more individual data lanes from connected PCIe and DisplayPort devices for transmission via two duplex Thunderbolt lanes, then de-multiplex them for use by PCIe and DisplayPort devices on the other end. A single Thunderbolt port supports up to six Thunderbolt devices via hubs or daisy chains; as many of these as the host has DP sources may be Thunderbolt monitors.
A single Mini DisplayPort monitor or other device of any kind may be connected directly or at the very end of the chain. Thunderbolt is interoperable with DP-1.1a compatible devices. When connected to a DP-compatible device, the Thunderbolt port can provide a native DisplayPort signal with four lanes of output data at no more than 5.4 Gbit/s per Thunderbolt lane. When connected to a Thunderbolt device, the per-lane data rate becomes 10 Gbit/s and the four Thunderbolt lanes are configured as two duplex lanes, each 10 Gbit/s comprising one lane of input and one lane of output.
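A small sketch of the two port modes just described (lane counts and rates from this section; treating 5.4 Gbit/s as the raw HBR2 line rate, with DisplayPort's standard 8b/10b coding, is an assumption of this sketch):

    # Thunderbolt 1/2-era port in its two modes, per the description above.
    # DP mode: four DisplayPort lanes at up to 5.4 Gbit/s (HBR2) each.
    dp_raw = 4 * 5.4              # 21.6 Gbit/s on the wire
    dp_payload = dp_raw * 8 / 10  # 17.28 Gbit/s after 8b/10b coding

    # Thunderbolt mode: the four lanes become two duplex 10 Gbit/s channels.
    tb_per_direction = 2 * 10     # 20 Gbit/s in each direction

    print(round(dp_payload, 2), tb_per_direction)  # 17.28 20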
Thunderbolt can be implemented on PCIe graphics cards, which have access to DisplayPort data and PCIe connectivity, or on the motherboard of computers with onboard video, such as the MacBook Air.
The interface was originally intended to run exclusively on an optical physical layer using components and flexible optical fiber cabling developed by Intel partners and at Intel's Silicon Photonics lab. It was initially marketed under the name Light Peak, and after 2011 as Silicon Photonics Link. However, it was discovered that conventional copper wiring could furnish the desired 10 Gbit/s per channel at lower cost.
This copper-based version of the Light Peak concept was co-developed by Apple and Intel. Apple registered Thunderbolt as a trademark, but later transferred the mark to Intel, which held overriding intellectual-property rights. Thunderbolt was commercially introduced on Apple's 2011 MacBook Pro, using the same Apple-developed connector as Mini DisplayPort. Certain MacBook Air, MacBook Pro, Mac mini and iMac models downgrade Thunderbolt 4 protocol to Thunderbolt 3 due to not supporting dual 4K displays over Thunderbolt.
Sumitomo Electric Industries started selling up to optical Thunderbolt cables in Japan in January 2013, and Corning, Inc., began selling up to optical cables in the US in late September 2013.
History
Introduction
Intel introduced Light Peak at the 2009 Intel Developer Forum (IDF), using a prototype Mac Pro logic board to run two 1080p video streams plus LAN and storage devices over a single 30-meter optical cable with modified USB ends. The system was driven by a prototype PCI Express card, with two optical buses powering four ports. Jason Ziller, head of Intel's Optical I/O Program Office showed the internal components of the technology under a microscope and the sending of data through an oscilloscope. The technology was described as having an initial speed of 10 Gbit/s over plastic optical cables, and promising a final speed of 100 Gbit/s. At the show, Intel said Light Peak-equipped systems would begin to appear in 2010, and posted a YouTube video showing Light Peak-connected HD cameras, laptops, docking stations, and HD monitors.
On 4 May 2010, in Brussels, Intel demonstrated a laptop with a Light Peak connector, indicating that the technology had shrunk enough to fit inside such a device, and had the laptop send two simultaneous HD video streams down the connection, indicating that at least some fraction of the software/firmware stacks and protocols were functional. At the same demonstration, Intel officials said they expected hardware manufacturing to begin around the end of 2010.
In September 2010, some early commercial prototypes from manufacturers were demonstrated at Intel Developer Forum 2010.
Copper vs. optical
Though Thunderbolt was originally conceived as an optical technology, Intel switched to electrical connections to reduce costs and to supply up to 10 watts of power to connected devices.
In 2009, Intel officials said the company was "working on bundling the optical fiber with copper wire so Light Peak can be used to power devices plugged into the PC." In 2010, Intel said the original intent was "to have one single connector technology" that would let "electrical USB 3.0 ... and piggyback on USB 3.0 or 4.0 DC power." Light Peak aimed to make great strides in consumer-ready optical technology, by then having achieved "[connectors rated] for 7,000 insertions, which matches or exceeds other PC connections ... cables [that were tied] in multiple knots to make sure it didn't break and the loss is acceptable," and, "You can almost get two people pulling on it at once and it won't break the fibre." They predicted that "Light Peak cables will be no more expensive than HDMI."
In January 2011, Intel's David Perlmutter told Computerworld that initial Thunderbolt implementations would be based on copper wires. "The copper came out very good, surprisingly better than what we thought," he said. A major advantage of copper is the ability to carry power. The final Thunderbolt standard specifies 10 W DC on every port. See comparison section below.
Intel and industry partners are still developing optical Thunderbolt hardware and cables. The optical fiber cables would run "tens of meters" but would not supply power, at least not initially. The version from Corning contains four 80/125 μm VSDN (Very Short Distance Network) fibers to transport an infrared signal up to . The conversion of electrical signal to optical is embedded into the cable itself, so the current MDP connector is forward compatible. Eventually, Intel hopes for a purely optical transceiver assembly embedded in the PC.
The first such optical Thunderbolt cable was introduced by Sumitomo Electric Industries in January 2013. It is available in lengths of , , and . However, those cables are retailed almost exclusively in Japan, and the price is 20 to 30 times that of copper Thunderbolt cables.
German company DeLock also released optical Thunderbolt cables in lengths of , , and in 2013, priced similarly to the Sumitomo ones, and retailed only in Germany.
In September 2013, glass company Corning Inc. released the first range of optical Thunderbolt cables available in the Western marketplace, along with optical USB 3.0 cables, both under the brand name "Optical Cables". Half the diameter and a fifth the mass of comparable copper Thunderbolt cables, they work with the 10 Gbit/s Thunderbolt protocol and the 20 Gbit/s Thunderbolt 2 protocol, and thus are able to work with all self-powered Thunderbolt devices (unlike copper cables, optical cables cannot provide power). The cables extend the current maximum length offered by copper to a maximum of .
Before 2020, there were no optical Thunderbolt 3 cables on the market. However, optical Thunderbolt 1 and 2 cables could be used at the time with Apple's Thunderbolt 3 (USB-C) to Thunderbolt 2 adapters on each end of the cable. This achieves connections up to the maximum offered by previous versions of the standard.
In April 2019, Corning showed an optical Thunderbolt 3 cable at the 2019 NAB Show in Las Vegas. Just over a year later, in September 2020, Corning released their optical Thunderbolt 3 cables in lengths of , , , , and . In the meantime, Taiwanese company Areca released optical Thunderbolt 3 cables in April 2020 in lengths of , , and .
Copper versions of Thunderbolt 4 cables offer full 40 Gbit/s speed and support backward compatibility with all versions of USB (up to USB4), DisplayPort Alternate Mode (DP 1.4 HBR3), and Thunderbolt 3. Released in early 2021, they were also to be available in three specified lengths: , , and – with many companies initially offering ones.
Copper Thunderbolt 4 cables up to are passive cables, while longer cables must integrate active signal conditioning circuitry. maximum is the length of active cables available from most brands, including CalDigit, Cable Matters, et al., while Apple are currently the only company that offers a active copper cable.
Optical Thunderbolt 4 cables were targeting lengths from ~ to , although these may never materialize, with manufacturers instead jumping straight to Thunderbolt 5 optical cables sometime after the arrival of that standard in late 2024.
Compatibility
Details on compatibility are available from the Thunderbolt Technology Community Web site.
A single Thunderbolt 3 or later port provides data transfer, support for two 4K 60 Hz displays, and quick notebook charging up to 100W with a single cable. Any Thunderbolt or USB dock can connect to a Thunderbolt 3 computer. USB devices can be connected to a Thunderbolt 3 or later port. DisplayPort and Mini DisplayPort devices are supported.
Some functionality may be available if a Thunderbolt device is connected to a USB-C port; this is implementation-dependent, and not guaranteed.
Thunderbolt 4 supports Thunderbolt 3 devices, but not earlier versions. Thunderbolt 1 and 2 devices can be used with most, but not all, Thunderbolt 3 PCs with the use of an adapter.
Thunderbolt 1
CNET's Brooke Crothers said it was rumored that the early-2011 MacBook Pro update would include some sort of new data port, and he speculated it would be Light Peak (Thunderbolt). At the time, there were no details on the physical implementation, and mock-ups appeared showing a system similar to the earlier Intel demos using a combined USB/Light Peak port. Shortly before the release of the new machines, the USB Implementers Forum (USB-IF) announced they would not allow such a combination port, and that USB was not open to modification in that way.
Other implementations of the technology began in 2012, with desktop boards offering the interconnection now available.
Apple stated in February 2011 that the port was based on Mini DisplayPort, not USB. As the system was described, Intel's solution to the display connection problem became clear: Thunderbolt controllers multiplex data from existing DP systems with data from the PCIe port into a single cable. Older displays using DP 1.1a or earlier must be located at the end of a Thunderbolt device chain, but native displays can be anywhere along the line. Thunderbolt devices can go anywhere on the chain. In that respect, Thunderbolt shares a relationship with the older ACCESS.bus system, which used the display connector to support a low-speed bus.
Apple states that up to six daisy-chained peripherals are supported per Thunderbolt port, and that the display should come at the end of the chain, if it does not support daisy chaining.
In February 2011, Apple introduced MacBook Pro (13-inch, Early 2011), MacBook Pro (15-inch, Early 2011), and MacBook Pro (17-inch, Early 2011) featuring one Thunderbolt port. In May 2011, Apple introduced iMac (21.5-inch, Mid 2011) featuring one Thunderbolt port, and iMac (27-inch, Mid 2011) featuring two Thunderbolt ports. In July 2011, Apple introduced Mac mini (Mid 2011), MacBook Air (11-inch, Mid 2011), MacBook Air (13-inch, Mid 2011) and Apple Thunderbolt Display, each featuring one Thunderbolt port for daisy-chaining or connecting other devices.
In May 2011, Apple announced a new line of iMacs that include the Thunderbolt interface.
The Thunderbolt port on the new Macs is in the same location relative to other ports and maintains the same physical dimensions and pinout as the prior MDP connector. The main visible difference on Thunderbolt-equipped Macs is a Thunderbolt symbol next to the port.
The DisplayPort standard is partially compatible with Thunderbolt, as the two share Apple's physically compatible MDP connector. The Target Display mode on iMacs requires a Thunderbolt cable to accept a video-in signal from another Thunderbolt-capable computer. A DP monitor must be the last (or only) device in a chain of Thunderbolt devices.
Intel announced they would release a developer kit in the second quarter of 2011, while manufacturers of hardware-development equipment have indicated they will add support for the testing and development of Thunderbolt devices. The developer kit is being provided only on request.
In July 2011, Sony released its Vaio Z21 line of notebook computers that had a "Power Media Dock" that uses optical Thunderbolt (Light Peak) to connect to an external graphics card using a combination port that behaves like USB electrically, but that also includes the optical interconnect required for Thunderbolt.
Thunderbolt 1 ran at 10 Gbit/s, twice the speed of USB 3.0 (5 Gbit/s) at the time.
Thunderbolt 2
In June 2013, Intel announced that the next version of Thunderbolt, based on the controller code-named "Falcon Ridge" (running at 20 Gbit/s), was officially named "Thunderbolt 2"; it entered production in 2013. The data rate of 20 Gbit/s is made possible by joining the two existing 10 Gbit/s channels, which does not change the maximum bandwidth, but makes using it more flexible.
In June 2013, Apple announced Mac Pro (Late 2013) featuring six Thunderbolt 2 ports. In October 2013, Apple announced MacBook Pro (Retina, 13-inch, Late 2013), and MacBook Pro (Retina, 15-inch, Late 2013) featuring two Thunderbolt 2 ports. In October 2014, Apple announced Mac mini (Late 2014), and iMac (Retina 5K, 27-inch, Late 2014) featuring two Thunderbolt 2 ports. In March 2015, Apple announced MacBook Air (11-inch, Early 2015), and MacBook Air (13-inch, Early 2015) featuring one Thunderbolt 2 port.
At the physical level, the bandwidth of Thunderbolt 1 and Thunderbolt 2 are identical, and Thunderbolt 1 cabling is thus compatible with Thunderbolt 2 interfaces. At the logical level, Thunderbolt 2 enables channel aggregation, whereby the two previously separate 10 Gbit/s channels can be combined into a single logical 20 Gbit/s channel.
Intel says Thunderbolt 2 will be able to transfer a 4K video while simultaneously displaying it on a discrete monitor.
Thunderbolt 2 incorporates DisplayPort 1.2 support, which allows for video streaming to a single 4K video monitor or dual QHD monitors. Thunderbolt 2 is backwards compatible, which means that all Thunderbolt cables and connectors are compatible with Thunderbolt 1.
The first Thunderbolt 2 product for the consumer market was Asus's Z87-Deluxe/Quad motherboard, announced on 19 August 2013, and the first system released with Thunderbolt 2 was Apple's late 2013 Retina MacBook Pro, on 22 October 2013.
Thunderbolt 3
Thunderbolt 3 is a hardware interface developed by Intel. It shares USB-C connectors with USB, supports USB 3.1 Gen 2, and can require special "active" cables for maximum performance at cable lengths over 0.5 meters (1.5 feet). Compared to Thunderbolt 2, it doubles the bandwidth to 40 Gbit/s (5 GB/s). It allows up to 4 lanes of PCI Express 3.0 (32 Gbit/s raw, about 31.5 Gbit/s after encoding) for general-purpose data transfer, and 4 lanes of DisplayPort 1.4 HBR3 (32.40 Gbit/s before 8b/10b encoding removal, and 25.92 Gbit/s after) for video, but the maximum combined data rate cannot exceed 40 Gbit/s; video data takes whatever bandwidth it needs, limiting what is left for PCIe data. DP 1.2 support is mandatory, while DP 1.4 is optional. Further overheads apply to PCIe data (the roughly 1.5% overhead of 128b/130b encoding is also removed) and to the Thunderbolt 3 protocol itself (which can be tuned either for throughput or for latency), leaving only about 21.6 to 25 Gbit/s of usable PCIe throughput. On the wire, Thunderbolt 3 uses 64b/66b encoding, so the raw signaling rate is higher than the 40 Gbit/s payload rate: two lanes at 20.625 Gbit/s each.
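The arithmetic behind those figures, as a minimal sketch using the rates quoted above:

    # Thunderbolt 3 link arithmetic (rates from the paragraph above, in Gbit/s).
    raw_per_lane = 20.625                      # wire rate per lane, two lanes per direction
    link_payload = 2 * raw_per_lane * 64 / 66  # 64b/66b coding -> 40.0 payload

    # Tunneled DisplayPort HBR3: 4 lanes x 8.1 Gbit/s, 8b/10b coding removed.
    hbr3_payload = 4 * 8.1 * 8 / 10            # 25.92 of video data

    # Tunneled PCIe 3.0 x4: 4 lanes x 8 GT/s, 128b/130b coding removed;
    # Thunderbolt protocol overhead then leaves roughly 22-25 usable.
    pcie_payload = 4 * 8 * 128 / 130           # ~31.5 before protocol overhead

    print(round(link_payload, 2), round(hbr3_payload, 2), round(pcie_payload, 1))
    # 40.0 25.92 31.5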
Intel's Thunderbolt 3 controller (codenamed Alpine Ridge, or the new Titan Ridge) halves power consumption, and simultaneously drives two external 4K displays at 60 Hz (or a single external 4K display at 120 Hz, or a 5K display at 60 Hz when using Apple's implementation for the late-2016 MacBook Pros) instead of just the single display previous controllers can drive. The new controller supports PCIe 3.0 and other protocols, including DisplayPort 1.2 (allowing for 4K resolutions at 60 Hz). Thunderbolt 3 has up to 15 watts of power delivery on copper cables and no power delivery capability on optical cables. Using USB-C on copper cables, it can incorporate USB power delivery, allowing the ports to source or sink up to 100 watts of power. This eliminates the need for a separate power supply from some devices. Thunderbolt 3 allows backwards compatibility with the first two versions by the use of adapters or transitional cables.
Intel offers three varieties for each of the controllers:
Double Port (DP) uses a PCIe 3.0 ×4 link to provide two Thunderbolt 3 ports (DSL6540, JHL6540, JHL7540)
Single Port (SP) uses a PCIe 3.0 ×4 link to provide one Thunderbolt 3 port (DSL6340, JHL6340, JHL7340)
Low Power (LP) uses a PCIe 3.0 ×2 link to provide one Thunderbolt 3 port (JHL6240).
This follows previous practice, where higher-end devices such as the second-generation Mac Pro, iMac, Retina MacBook Pro, and Mac Mini use two-port controllers; while lower-end, lower-power devices such as the MacBook Air use the one-port version.
Support was added to Intel's Skylake architecture chipsets, shipping during late 2015 into early 2016.
Devices with Thunderbolt 3 ports began shipping at the beginning of December 2015, including notebooks running Microsoft Windows (from Acer, Asus, Clevo, HP, Dell, Dell Alienware, Lenovo, MSI, Razer, and Sony), as well as motherboards (from Gigabyte Technology), and a 0.5 m Thunderbolt 3 passive USB-C cable (from Lintes Technology).
In October 2016, Apple announced MacBook Pro (13-inch, 2016, 2 Thunderbolt 3 Ports) which, as the name indicates, features two Thunderbolt 3 ports, MacBook Pro (13-inch, 2016, 4 Thunderbolt 3 Ports), and MacBook Pro (15-inch, 2016), which features four Thunderbolt 3 ports. In June 2017, Apple announced iMac (21.5-inch, 2017), iMac (Retina 4K, 21.5-inch, 2017), iMac (Retina 5K, 27-inch, 2017) which feature two Thunderbolt 3 ports, as well as the iMac Pro, which featured four Thunderbolt 3 ports and was released in December 2017. In October 2018, Apple announced MacBook Air (Retina, 13-inch, 2018), featuring 2 Thunderbolt 3 ports and Mac mini (2018) featuring four Thunderbolt 3 ports. In June 2019, Apple unveiled Mac Pro (2019) and Mac Pro (Rack, 2019) featuring up to twelve Thunderbolt 3 ports, and Pro Display XDR which features one Thunderbolt 3 port, both released in December 2019. In March 2022, Apple released Studio Display featuring one Thunderbolt 3 port.
On 8 January 2018, Intel announced a product refresh (codenamed Titan Ridge) with "enhanced robustness" and support for DisplayPort 1.4. Intel offers a single port (JHL7340) and double port (JHL7540) version of this host controller and a peripheral controller supporting two Thunderbolt 3 ports (JHL7440). The new peripheral controller can now act as a USB sink (compatible with regular USB-C ports).
The Apple Pro Display XDR does not support Display Stream Compression (DSC); macOS instead drives it over two HBR3 connections. Two full HBR3 links would nominally require 51.84 Gbit/s, more than Thunderbolt 3 can carry, but the configuration works because the display's two 3008×3384, 10 bpc, 60 Hz (648.91 MHz pixel clock) signals only require about 38.9 Gbit/s in total, and Thunderbolt does not transmit the DisplayPort stuffing symbols used to fill the HBR3 bandwidth.
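A rough calculation (illustrative only; it approximates each stream as pixel clock times bits per pixel and ignores audio and other secondary data) shows why the two uncompressed streams fit:

```python
# Why the dual-HBR3 Pro Display XDR signal fits in Thunderbolt 3 (illustrative).
pixel_clock = 648.91e6       # Hz per stream, as quoted above (includes blanking)
bits_per_pixel = 3 * 10      # RGB, 10 bits per colour component
streams = 2

actual = pixel_clock * bits_per_pixel * streams   # ~38.9 Gbit/s of real video data
nominal = 2 * 25.92e9                             # 51.84 Gbit/s if stuffing were sent

print(f"actual video data:  {actual / 1e9:.1f} Gbit/s  (fits within 40 Gbit/s)")
print(f"nominal dual HBR3: {nominal / 1e9:.2f} Gbit/s  (would not fit)")
```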
USB4
The USB4 specification was released on 29 August 2019 by the USB Implementers Forum, based on the Thunderbolt 3 protocol specification.
It supports 40 Gbit/s (5 GB/s) throughput, is optionally compatible with Thunderbolt 3, and is backwards compatible with USB 3.2 and USB 2.0. The architecture defines a method for multiple end device types to dynamically share a single high-speed link in a way that best serves the transfer of data by type and application.
USB4 supports DisplayPort 2.0 through its Alternate Mode.
DisplayPort 2.0 can support better-than-8K resolution at 60 Hz with 8-bit color losslessly, thanks to the new UHBR 10, 13.5, and 20 signaling rates (the DSC 1.2 compression that DisplayPort 1.4 requires for such resolutions is not lossless), as well as 8K at 60 Hz with 10-bit color. It can use up to 80 Gbit/s (77.37 Gbit/s effective bandwidth), double the amount available to USB data, because (just as previously in DisplayPort 1.4) it sends almost all of the data in one direction (to the monitor) and can thus use all four data lanes at once. Resolutions up to 16K (15360×8640) at 60 Hz with 10-bit Y'CbCr 4:4:4 or RGB are possible (using DSC).
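As a rough sanity check of these claims (illustrative arithmetic only; real DisplayPort links also carry blanking intervals and protocol overhead on top of the active-pixel data counted here):

```python
# Rough check of uncompressed 8K traffic against DisplayPort 2.0's UHBR20 link.
width, height, refresh = 7680, 4320, 60
for bpc in (8, 10):                              # bits per colour component
    rate = width * height * refresh * 3 * bpc    # active-pixel data only
    print(f"8K {refresh} Hz at {bpc} bpc: {rate / 1e9:.1f} Gbit/s")
# -> ~47.8 Gbit/s (8 bpc) and ~59.7 Gbit/s (10 bpc), both under the 77.37 Gbit/s
#    effective rate, so no DSC is needed. 16K (15360x8640) at 10 bpc would need
#    ~239 Gbit/s of pixel data and therefore relies on DSC.
```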
In November 2020, Apple announced MacBook Air (M1, 2020), MacBook Pro (13-inch, M1, 2020), and Mac mini (M1, 2020) featuring USB4 ports.
USB4 PCIe Mode
USB4 makes the PCIe aspects of Thunderbolt "open source": USB4 PCIe devices can be released without Thunderbolt certification, though such devices are not allowed to use Thunderbolt branding. Thunderbolt 4 devices use the same PCIe mode but add certification and labeling, and promise backwards compatibility. This means multiple rival devices may use different brandings to accomplish the same task. USB4 PCIe devices can be backwards compatible with Thunderbolt 1–3, but this is not required. USB4 PCIe mode is not an Alternate Mode like DisplayPort Alternate Mode, and Microsoft currently requires devices with USB4 to include PCIe support in order to be WHQL/Windows certified PCs.
Thunderbolt 4
Thunderbolt 4 was announced at CES 2020 and the final specification was released in July 2020. The key differences between Thunderbolt 4 and Thunderbolt 3 are a minimum bandwidth requirement of 32 Gbit/s for PCIe link, support for dual 4K displays (DisplayPort 1.4), and Intel VT-d-based direct memory access protection to prevent physical DMA attacks.
Another major improvement is that Thunderbolt 4 supports Thunderbolt Alternate Mode USB hubs ("Multi-port Accessory Architecture"), and not just daisy chaining. Those hubs are backwards compatible with Thunderbolt 3 devices and can be backwards compatible with Thunderbolt 3 hosts (Titan Ridge only; with Alpine Ridge the additional downstream ports get downgraded to USB 3).
The maximum bandwidth remains at 40 Gbit/s, the same as Thunderbolt 3 and four times as fast as USB 3.2 Gen 2x1. Supporting products began arriving in late 2020 and included Tiger Lake mobile processors for Intel Evo notebooks and 8000-series standalone Thunderbolt controllers (codenamed Goshen Ridge for devices and Maple Ridge for hosts).
Thunderbolt 5
On September 12, 2023, Intel previewed Thunderbolt 5 (codenamed Barlow Ridge), aligned to the USB Implementers Forum's (USB-IF) USB4 2.0 specification. It provides symmetric bandwidth of 80 Gbit/s, e.g. for mass-storage devices, double that of Thunderbolt 4, and unidirectional bandwidth of 120 Gbit/s for displays (three times that of Thunderbolt 3 and 4), supporting dual 8K displays at 60 Hz. The minimum required bandwidth remains unchanged from Thunderbolt 4: 32 Gbit/s for PCIe link.
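A simplified way to see where the 80 and 120 Gbit/s figures come from: a Thunderbolt 5 link carries four high-speed lane pairs at 40 Gbit/s each (using PAM-3 signaling), normally split two per direction but re-allocatable three-to-one toward a display. The sketch below models only this allocation; it is a conceptual illustration with invented names, not the specification's own terminology.

```python
# Conceptual model of Thunderbolt 5 / USB4 2.0 bandwidth modes (illustrative).
# Assumes four high-speed lane pairs at 40 Gbit/s each via PAM-3 signaling.
PAIR_RATE = 40  # Gbit/s per lane pair

def bandwidth(tx_pairs: int, rx_pairs: int) -> tuple[int, int]:
    """Return (transmit, receive) bandwidth in Gbit/s for a pair allocation."""
    assert tx_pairs + rx_pairs == 4, "a link has four lane pairs in total"
    return tx_pairs * PAIR_RATE, rx_pairs * PAIR_RATE

print(bandwidth(2, 2))  # symmetric mode:  (80, 80)  -> 80 Gbit/s each way
print(bandwidth(3, 1))  # asymmetric mode: (120, 40) -> 120 Gbit/s toward displays
```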
The full specifications cover:
Support for the latest USB4 2.0 80 Gbit/s specification
Twice the total bandwidth of Thunderbolt 4, at 80 Gbit/s, with up to three times the bandwidth, 120 Gbit/s, for video-intensive uses
Support for DisplayPort 2.1
Double the PCI Express data throughput (64 Gbit/s), using PCI Express 4.0 ×4, for faster storage and external graphics
Up to 240 W of charging power downstream
Works with existing passive cables via PAM-3 signaling
Compatible with previous versions of Thunderbolt, USB, and DisplayPort
Supported by Intel's enabling and certification programs
Intel announced that computers and accessories compatible with Thunderbolt 5 will come out starting in 2024.
In October 2024, Apple announced the Mac Mini (M4 Pro, 2024) and the 14-inch and 16-inch MacBook Pro (M4 Pro/Max, 2024) with three Thunderbolt 5 ports.
Royalty situation
On 24 May 2017, Intel announced that Thunderbolt 3 would become a royalty-free standard to OEMs and chip manufacturers in 2018, as part of an effort to boost the adoption of the protocol. The Thunderbolt 3 specification was later released to the USB-IF on 4 March 2019, making it royalty-free, to be used to form USB4. Intel says it will retain control over certification of all Thunderbolt 3 devices. Intel also states it employs "mandatory certification for all Thunderbolt products".
Before March 2019, there were no AMD chipsets or computers with Thunderbolt support released or announced due to the certification requirements (Intel did not certify non-Intel platforms). However, the YouTuber Wendell Wilson from Level1Techs was able to get Thunderbolt 3 support on an AMD computer with a Threadripper CPU and Titan Ridge add-in card working by modifying the firmware, indicating that the lack of Thunderbolt support on non-Intel systems is not due to any hardware limitations. As of May 2019, it is possible to have Thunderbolt 3 support on AMD using add-in cards without any problems, and motherboards like ASRock X570 Creator already have Thunderbolt 3 ports.
In January 2020, Intel certified the ASRock X570 Phantom Gaming ITX/TB3, and vendors are now freely allowed to produce Thunderbolt controller silicon (although those ASRock motherboards still used Intel's Titan Ridge controller).
Asus currently supports Thunderbolt 3 on AMD with the ThunderboltEX 3-TR add-in card, which is compatible with Ryzen 3 and 5 (56xx-series) CPUs on AMD motherboards such as the ROG Strix B550-E Gaming, ROG Strix B550-F Gaming, Prime B550-PLUS, and TUF Gaming B550-Plus. The ASUS ProArt B550-Creator has two Thunderbolt 4 ports.
GIGABYTE also has a pair of certified motherboards, the B550 VISION D-P and B550 VISION D, with an Intel Thunderbolt 3 controller.
Peripheral devices
The first Thunderbolt peripheral devices did not appear in retail stores until late 2011, following Apple's release of its first Thunderbolt-equipped computer, the MacBook Pro, in early 2011. Among the first were the relatively expensive Pegasus R4 (4-drive) and Pegasus R6 (6-drive) RAID enclosures by Promise Technology, aimed at the prosumer and professional market and initially offering up to 12 TB of storage, later increased to 18 TB. Sales of these units were hurt by the 2011 floods in Thailand, which manufactures much of the world's supply of hard drives: the resulting cut in worldwide hard-drive production drove up storage costs, the retail price of the Promise units rose in response, and take-up of the devices slowed.
It also took some time for other storage manufacturers to release products: most were smaller devices aimed at the professional market, and focused on speed rather than high capacity. Many storage devices were under 1 TB in size, with some featuring SSDs for faster external-data access rather than standard hard-drives.
Other companies have offered interface products that can route multiple older, usually slower, connections through a single Thunderbolt port. In July 2011, Apple released its Apple Thunderbolt Display, whose gigabit Ethernet and other older connector types made it the first hub of its type. Later, companies such as Belkin, CalDigit, Other World Computing, Matrox, StarTech, and Elgato have all released Thunderbolt docks.
As of late 2012, few other storage devices offering double-digit TB capacity had appeared. Exceptions included Sonnet Technologies' expensive professional units and Drobo's 4- and 5-drive enclosures, the latter featuring Drobo's proprietary BeyondRAID data-handling system.
Backwards compatibility with non-Thunderbolt-equipped computers was a problem, as most storage devices featured only two Thunderbolt ports, intended for daisy-chaining up to six devices from each one. In mid-2012, LaCie, Drobo, and other device makers started to swap out one of the two Thunderbolt ports for a USB 3.0 connection on some of their low-to-mid-end products. Later models, including LaCie's 2big range, added USB 3.0 alongside the two Thunderbolt ports.
Apple devices
Apple released its first Thunderbolt-equipped computer, the MacBook Pro, in early 2011, and has continued to update its devices with newer generations of Thunderbolt as soon as they become available.
Apple devices featuring Thunderbolt ports include:
MacBook Pro (Retina, 13-inch, Late 2012 to Early 2013)
MacBook Pro (Retina, 15-inch, Mid 2012 to Early 2013)
MacBook Pro (17-inch, Early 2011 to Late 2011)
MacBook Pro (15-inch, Early 2011 to Mid 2012)
MacBook Pro (13-inch, Early 2011 to Mid 2012)
MacBook Air (13-inch, Mid 2011 to Early 2014)
MacBook Air (11-inch, Mid 2011 to Early 2014)
Mac Mini (Mid 2011 to Late 2012)
iMac (27-inch, Mid 2011 to Late 2013)
iMac (21.5-inch, Mid 2011 to Mid 2014)
The late 2013 Retina MacBook Pro was the first product to have Thunderbolt 2 ports, after which manufacturers updated their model offerings to the newer, faster 20 Gbit/s connection throughout 2014. Again among the first was Promise Technology, who released updated Pegasus 2 versions of their R4 and R6 models along with an even larger R8 (8-drive) RAID unit offering up to 32 TB of storage. Other brands later introduced high-capacity models with the newer connection type, including SanDisk Professional (with their G-RAID Studio models offering up to 24 TB) and LaCie (with their 5big and rack-mounted 8big models, offering up to 48 TB). LaCie also offered redesigned versions of their 2big mainstream consumer models, up to 12 TB, using new 6 TB hard drives.
Apple devices featuring Thunderbolt 2 ports include:
MacBook Pro (Retina, 15-inch, Late 2013 to Mid 2015)
MacBook Pro (Retina, 13-inch, Late 2013 to Early 2015)
MacBook Air (13-inch, Early 2015 to 2017)
MacBook Air (11-inch, Early 2015)
Mac Mini (Late 2014)
iMac (Retina 4K, 21.5-inch, Late 2015)
iMac (21.5-inch, Late 2015)
iMac (Retina 5K, 27-inch, Late 2014 to Late 2015)
Mac Pro (Late 2013)
Thunderbolt 3 was introduced in late 2015, with several motherboard manufacturers and OEM laptop manufacturers including Thunderbolt 3 with their products. Gigabyte and MSI, large computer component manufacturers, entered the market for the first time with Thunderbolt 3 compatible components.
Dell was the first to include Thunderbolt 3 ports in laptops with their XPS Series and their Dell Alienware range.
Apple first included Thunderbolt 3 on Mac in 2016.
Although Thunderbolt initially had poor hardware support outside of Apple devices, and had been relegated to a niche gadget port, adoption of Thunderbolt 3, which uses the USB-C connector standard, meant wider market acceptance, especially as it later became part of the USB4 standard.
Apple devices featuring Thunderbolt 3 ports include:
MacBook Pro (13-inch, Two Thunderbolt 3 ports, 2016 to 2020)
MacBook Pro (13-inch, Four Thunderbolt 3 ports, 2016 to 2020)
MacBook Pro (15-inch, 2016 to 2019)
MacBook Pro (16-inch, 2019)
MacBook Air (Retina, 13-inch, 2018 to 2020)
iMac (Retina 5K, 27-inch, 2017 to 2020)
iMac (Retina 4K, 21.5-inch, 2017 to 2019)
iMac (21.5-inch, 2017)
iMac Pro (2017)
Mac Pro (2019 + Rack, 2019)
Mac Mini (2018)
Pro Display XDR (2019)
Studio Display (2022)
Apple devices featuring Thunderbolt 3/USB4 ports include:
MacBook Pro (13-inch, M1, 2020 to M2, 2022)
MacBook Pro (14-inch, M3, 2023)
MacBook Air (13-inch, M1, 2020 to M3, 2024)
MacBook Air (15-inch, M2, 2023 to M3, 2024)
Mac Mini (M1, 2020)
iMac (24-inch, M1, 2021 to M3, 2023)
iPad Pro 11-inch (3rd generation, 2021 to 4th generation, 2022)
iPad Pro 12.9-inch (5th generation, 2021 to 6th generation, 2022)
iPad Pro 11‑inch (M4, 2024)
iPad Pro 13‑inch (M4, 2024)
Apple started to include Thunderbolt 4 on some of their devices, starting in 2021 with the MacBook Pro.
Apple devices featuring Thunderbolt 4 ports include:
MacBook Pro (14-inch, M1 Pro/Max, 2021 to M3 Pro/Max, 2023)
MacBook Pro (16-inch, M1 Pro/Max, 2021 to M3 Pro/Max, 2023)
Mac Studio (2022 to 2023)
Mac Mini (2023)
Mac Pro (2023 + Rack, 2023)
iMac (24-inch, M4, 2024)
Mac Mini (M4, 2024)
MacBook Pro (14-inch, M4, 2024)
Apple started to include Thunderbolt 5 on some of their devices, starting in 2024 with the Mac Mini (M4 Pro) and the 14-inch/16-inch MacBook Pro (M4 Pro/Max).
Apple devices featuring Thunderbolt 5 ports include:
Mac Mini (M4 Pro, 2024)
MacBook Pro (14-inch, M4 Pro/Max, 2024)
MacBook Pro (16-inch, M4 Pro/Max, 2024)
Security vulnerabilities
Vulnerability to DMA attacks
Thunderbolt 3, like many high-speed expansion buses (including PCI Express, PC Card, ExpressCard, FireWire, PCI, and PCI-X), is potentially vulnerable to a direct memory access (DMA) attack. Extending the PCI Express bus (the most common high-speed expansion bus in modern systems) over Thunderbolt allows very low-level access to the computer. An attacker could physically attach a malicious device which, through its direct and unimpeded access to system memory and other devices, would be able to bypass almost all security measures of the operating system, allowing the attacker to read and write system memory, potentially exposing encryption keys or installing malware. Such attacks have been demonstrated by modifying inexpensive commodity Thunderbolt hardware. IOMMU virtualization, if present and configured by the BIOS and the operating system, can close a computer's vulnerability to DMA attacks, but only if the IOMMU can block the DMA access of the malicious device. As of 2019, the major OS vendors had not taken into account the variety of ways in which a malicious device could take advantage of complex interactions between multiple emulated peripherals, exposing subtle bugs and vulnerabilities. Some motherboard and UEFI implementations offer Kernel DMA Protection. Intel VT-d-based direct memory access (DMA) protection is a mandatory requirement for Thunderbolt 4 host certification.
This vulnerability is not present when Thunderbolt is used as a system interconnect for IP networking (supported since OS X Mavericks), because the IP implementation runs on the underlying Thunderbolt low-latency packet-switching fabric and the PCI Express protocol is not present on the cable. This means that if IP-over-Thunderbolt (IPoTB) networking is used between a group of computers, there is no threat of such a DMA attack between them.
Vulnerability to Option ROM attacks
When a system with Thunderbolt boots, it loads and executes Option ROMs from attached devices. A malicious Option ROM can allow malware to execute before an operating system is started. It can then invade the kernel, log keystrokes, or steal encryption keys. The ease of connecting Thunderbolt devices to portable computers makes them ideal for evil-maid attacks.
Some systems load Option ROMs during firmware updates, allowing the malware in a Thunderbolt device's Option ROM to potentially overwrite the SPI flash ROM containing the system's boot firmware. In February 2015, Apple issued a Security Update to Mac OS X to eliminate the vulnerability of loading Option ROMs during firmware updates, although the system is still vulnerable to Option ROM attacks during normal boots.
Firmware-enforced boot security measures, such as UEFI Secure Boot (which specifies the enforcement of signatures or hash allowlists of Option ROMs), are designed to mitigate this kind of attack.
Vulnerability to data exposure attacks (Thunderspy)
In May 2020, seven major security flaws were discovered in the Thunderbolt protocol, collectively named Thunderspy. They allow a malicious party to access all data stored in a computer, even if the device is locked, password-protected, and has an encrypted hard drive. These vulnerabilities affect all Thunderbolt 1, 2, and 3 ports. The attack requires the computer to be in sleep mode and have a Thunderbolt controller with a writable firmware chip. A well-trained attacker with physical access to the computer ("evil maid") can perform the required steps in 5 minutes. With malicious firmware, the attacker can covertly disable Thunderbolt security, clone device identities, and proceed to use DMA to extract data. Thunderspy vulnerabilities can largely be mitigated using Kernel DMA Protection, along with traditional anti-intrusion hardware features.
Cables
In June 2011, Apple introduced the first Thunderbolt cable: a full-duplex, active cable costing US$49.
In June 2012, Apple began selling a Thunderbolt-to-Gigabit-Ethernet adapter. In the third quarter of 2012, other manufacturers started shipping Thunderbolt cables, including cables reaching the length limit, while some storage-enclosure builders began bundling Thunderbolt cables with their devices rather than making customers buy them separately, as had been standard practice.
In January 2013, Apple reduced the price of their full-length cable and added a half-meter cable.
At Thunderbolt 3's introduction, Intel announced that passive USB-C cables would connect Thunderbolt devices at speeds greater than USB 3.1 (though less than active Thunderbolt cables), thereby eliminating the adoption barrier of active Thunderbolt cable costs.
In mid-2016, copper Thunderbolt 3 cables became available in longer lengths. However, full 40 Gbit/s operation on copper required either active cables or short passive cables; longer passive copper cables are limited to 20 Gbit/s. Despite that limit, passive cables provide USB 3 backward compatibility, while active cables support only USB 2.0. In April 2020, optical Thunderbolt 3 cables debuted (see Copper vs. optical).
Copper versions of Thunderbolt 4 cables offer full speed and backward compatibility with all versions of USB (up to USB4), DisplayPort Alternate Mode (DP 1.4 HBR3), and Thunderbolt 3. Released in early 2021, they are available in three specified lengths, with many companies initially offering only the shorter lengths. Shorter copper Thunderbolt 4 cables are passive, while longer cables must integrate active signal-conditioning circuitry. Apple is currently the only company offering the longest copper cable, while other companies' copper cables top out at shorter lengths. Optical Thunderbolt 4 cables were targeting longer runs, although these may never ship, with vendors instead jumping to Thunderbolt 5 optical cables sometime after the arrival of that standard in late 2024.
Controllers
| Technology | User interface | null |
2078218 | https://en.wikipedia.org/wiki/Interchange%20%28road%29 | Interchange (road) | In the field of road transport, an interchange (American English) or a grade-separated junction (British English) is a road junction that uses grade separations to allow for the movement of traffic between two or more roadways or highways, using a system of interconnecting roadways to permit traffic on at least one of the routes to pass through the junction without interruption from crossing traffic streams. It differs from a standard intersection, where roads cross at grade. Interchanges are almost always used when at least one road is a controlled-access highway (freeway) or a limited-access highway (expressway), though they are sometimes used at junctions between surface streets.
Terminology
Note: The descriptions of interchanges apply to countries where vehicles drive on the right side of the road. For left-side driving, the layout of junctions is mirrored. Both North American (NA) and British (UK) terminology is included.
Freeway junction, highway interchange (NA), or motorway junction (UK)
A type of road junction linking one controlled-access highway (freeway or motorway) facility to another, to other roads, or to a rest area or motorway service area. Junctions and interchanges are often (but not always) numbered either sequentially, or by distance from one terminus of the route (the "beginning" of the route).
The American Association of State Highway and Transportation Officials (AASHTO) defines an interchange as "a system of interconnecting roadways in conjunction with one or more grade separations that provides for the movement of traffic between two or more roadways or highways on different levels."
System interchange
A junction that connects multiple controlled-access highways.
Service interchange
A junction that connects a controlled-access facility to a lower-order facility, such as an arterial or collector road.
The mainline is the controlled-access highway in a service interchange, while the crossroad is the lower-order facility that often includes at-grade intersections or roundabouts, which may pass over or under the mainline.
Complete interchange
A junction where all possible movements between highways can be made from any direction.
Incomplete interchange
A junction that is missing at least one movement between highways.
Ramp (NA), or slip road (UK/Ireland)
A short section of road that allows vehicles to enter or exit a controlled-access highway.
Ingressing traffic enters the highway via an on-ramp or entrance ramp, while egressing traffic exits the highway via an off-ramp or exit ramp.
Directional ramp
A ramp that curves toward the desired direction of travel; i.e., a ramp that makes a left turn exits from the left side of the roadway (a left exit).
Semi-directional ramp
A ramp that exits in a direction opposite from the desired direction of travel, then turns toward the desired direction. Most left turn movements are provided by a semi-directional ramp that exits to the right, rather than exiting from the left.
Weaving
An undesirable situation where traffic entering and exiting a highway must cross paths within a limited distance.
History
The concept of the controlled-access highway developed in the 1920s and 1930s in Italy, Germany, the United States, and Canada. Initially, these roads featured at-grade intersections along their length. Interchanges were developed to provide access between these new highways and heavily travelled surface streets. The Bronx River Parkway and Long Island Motor Parkway were the first roads to feature grade separations.
Maryland engineer Arthur Hale filed a patent for the design of a cloverleaf interchange on May 24, 1915,
though the concept was not realised in roadwork until a cloverleaf opened on December 15, 1929, in Woodbridge, New Jersey, connecting New Jersey Route 25 and Route 4 (now U.S. Route 1/9 and New Jersey Route 35). It was designed by Philadelphia engineering firm Rudolph and Delano, based on a design seen in an Argentinian magazine.
System interchange
A system interchange connects multiple controlled-access highways, involving no at-grade signalised intersections.
Four-legged interchanges
Cloverleaf interchange
A cloverleaf interchange is a four-legged junction where left turns across opposing traffic are handled by non-directional loop ramps.
It is named for its appearance from above, which resembles a four-leaf clover.
A cloverleaf is the minimum interchange required for a four-legged system interchange. Although they were commonplace until the 1970s, most highway departments and ministries have sought to rebuild them into more efficient and safer designs.
The cloverleaf interchange was invented by Maryland engineer Arthur Hale, who filed a patent for its design on May 24, 1915.
The first one in North America opened on December 15, 1929, in Woodbridge, New Jersey, connecting New Jersey Route 25 and Route 4 (now U.S. Route 1/9 and New Jersey Route 35). It was designed by Philadelphia engineering firm Rudolph and Delano based on a design seen in an Argentinian magazine.
The first cloverleaf in Canada opened in 1938
at the junction of Highway 10 and what would become the Queen Elizabeth Way.
The first cloverleaf outside of North America opened in Stockholm on October 15, 1935. Known as Slussen, it was referred to as a "traffic carousel" and was considered a revolutionary design at the time of its construction.
A cloverleaf offers uninterrupted connections between two roads but suffers from weaving issues: along the mainline, one loop ramp introduces traffic just before a second loop ramp takes traffic off to the crossroad, and ingress and egress traffic must mix in between. For this reason, the cloverleaf interchange has fallen out of favour, replaced by combination interchanges. Some are built as half cloverleafs containing ghost ramps, which can be upgraded to full cloverleafs if the road is extended; the junction of US 70 and US 17 west of New Bern, North Carolina, is an example.
Stack interchange
A stack interchange is a four-way interchange whereby a semi-directional left turn and a directional right turn are both available. Usually, access to both turns is provided simultaneously by a single off-ramp. Assuming right-handed driving, to cross over incoming traffic and go left, vehicles first exit onto an off-ramp from the rightmost lane. After demerging from right-turning traffic, they complete their left turn by crossing both highways on a flyover ramp or underpass. The penultimate step is a merge with the right-turn on-ramp traffic from the opposite quadrant of the interchange. Finally, an on-ramp merges both streams of incoming traffic into the left-bound highway. As there is only one off-ramp and one on-ramp (in that respective order), stacks do not suffer from the problem of weaving, and due to the semi-directional flyover ramps and directional ramps, they are generally safe and efficient at handling high traffic volumes in all directions.
A standard stack interchange includes roads on four levels, also known as a 4-level stack, including the two perpendicular highways, and one more additional level for each pair of left-turn ramps. These ramps can be stacked (cross) in various configurations above, below, or between the two interchanging highways. This makes them distinct from turbine interchanges, where pairs of left-turn ramps are separated but at the same level. There are some stacks that could be considered 5-level; however, these remain four-way interchanges, since the fifth level actually consists of dedicated ramps for HOV/bus lanes or frontage roads running through the interchange. The stack interchange between I-10 and I-405 in Los Angeles is a 3-level stack, since the semi-directional ramps are spaced out far enough, so they do not need to cross each other at a single point as in a conventional 4-level stack.
Stacks are significantly more expensive than other four-way interchanges due to their four-level design, and they may draw objections from local residents because of their height and high visual impact. Large stacks with multiple levels have a complex appearance and are often colloquially described as Mixing Bowls, Mixmasters (after a Sunbeam Products brand of electric kitchen mixer), or as Spaghetti Bowls or Spaghetti Junctions (being compared to boiled spaghetti). However, they consume a significantly smaller area of land than a cloverleaf interchange.
Combination interchange
A combination interchange (sometimes referred to by the portmanteau cloverstack) is a hybrid of other interchange designs. It uses loop ramps to serve slower or lighter traffic flows, and flyover ramps to serve faster and heavier traffic flows.
Where local and express roadways serve the same directions and each roadway has its own right-hand connection to the interchange, extra ramps are installed. The combination interchange design is commonly used to upgrade cloverleaf interchanges, increasing their capacity and eliminating weaving.
Turbine interchange
The turbine interchange is an alternative four-way directional interchange. The turbine interchange requires fewer levels (usually two or three) while retaining directional ramps throughout. It features right-exit, left-turning ramps that sweep around the center of the interchange in a clockwise spiral. A full turbine interchange features a minimum of 18 overpasses, and requires more land to construct than a four-level stack interchange; however, the bridges are generally short in length. Coupled with reduced maintenance costs, a turbine interchange is a less costly alternative to a stack.
Windmill interchange
A windmill interchange is similar to a turbine interchange, but it has much sharper turns, reducing its size and capacity. The interchange is named for its similar overhead appearance to the blades of a windmill.
A variation of the windmill, called the diverging windmill, increases capacity by altering the direction of traffic flow of the interchanging highways, making the connecting ramps much more direct. There also is a hybrid interchange somewhat like the diverging windmill in which left turn exits merge on the left, but it differs in that the left turn exits use left directional ramps.
Braided interchange
A braided or diverging interchange is a two-level, four-way interchange. An interchange is braided when at least one of the roadways reverses sides. It seeks to make left and right turns equally easy. In a pure braided interchange, each roadway has one right exit, one left exit, one right on-ramp, and one left on-ramp, and both roadways are flipped.
The first pure braided interchange was built in Baltimore at Interstate 95 at Interstate 695; however, the interchange was reconfigured in 2008 to a traditional stack interchange.
Examples
Interstate 65 and Interstate 20/Interstate 59 in Birmingham, Alabama ()
Interstate 196 and U.S. Route 131 in Grand Rapids, Michigan ()
Interstate 77 and Interstate 85 in Charlotte, North Carolina ()
Eastern Ring Road and Southern Ring Branch Road, Riyadh ()
Three-level roundabout
A three-level roundabout interchange features a grade-separated roundabout which handles traffic exchanging between highways.
The ramps of the interchanging highways meet at a roundabout, or rotary, on a separated level above, below, or in the middle of the two highways.
Three-legged interchanges
These interchanges can also be used to provide a "linking road" to a destination as a service interchange, or to start a new basic road as a service interchange.
Trumpet interchange
Trumpet interchanges may be used where one highway terminates at another highway, and are named for their resemblance to trumpets. They are sometimes called jug handles.
These interchanges are very common on toll roads, as they concentrate all entering and exiting traffic into a single stretch of roadway, where toll plazas can be installed once to handle all traffic, especially on ticket-based tollways. A double-trumpet interchange can be found where a toll road meets another toll road or a free highway. They are also useful when most traffic on the terminating highway is going in the same direction. The turn that is used less often would contain the slower loop ramp.
Trumpet interchanges are often used instead of directional or semi-directional T or Y interchanges because they require less bridge construction but still eliminate weaving.
T and Y interchanges
A full Y interchange (also known as a directional T interchange) is typically used when a three-way interchange is required for two or three highways interchanging in semi-parallel or perpendicular directions, but it can be used in right-angle cases as well. The connecting ramps can spur from either the right or left side of the highway, depending on the direction of travel and the angle.
Directional T interchanges use flyover/underpass ramps for both connecting and mainline segments, and they require a moderate amount of land and moderate costs since only two levels of roadway are typically used. Their name derives from their resemblance to the capital letter T, depending upon the angle from which the interchange is seen and the alignment of the roads that are interchanging. It is sometimes known as the "New England Y", as this design is often seen in the northeastern United States, particularly in Connecticut.
This type of interchange features directional ramps (no loops, and no weaving right to turn left) and can use multilane ramps in comparatively little space. Some designs have two ramps and the "inside" through road (on the same side as the freeway that ends) crossing each other at a three-level bridge. The directional T interchange is preferred to a trumpet interchange because a trumpet requires a loop ramp, on which speeds must be reduced, whereas flyover ramps can handle much faster speeds. The disadvantage of the directional T is that traffic from the terminating road enters and leaves on the passing lane, so the semi-directional T interchange (see below) is preferred.
The interchange of Highway 416 and Highway 417 in Ontario, constructed in the early 1990s, is one of the few directional T interchanges, as most transportation departments had switched to the semi-directional T design.
As with a directional T interchange, a semi-directional T interchange uses flyover (overpass) or underpass ramps in all directions at a three-way interchange. However, in a semi-directional T, some of the splits and merges are switched to avoid ramps to and from the passing lane, eliminating the major disadvantage of the directional T. Semi-directional T interchanges are generally safe and efficient, though they do require more land and are costlier than trumpet interchanges.
Semi-directional T interchanges are built as two- or three-level junctions, with three-level interchanges typically used in urban or suburban areas where land is more expensive. In a three-level semi-directional T, the two semi-directional ramps from the terminating highway cross the surviving highway at or near a single point, which requires both an overpass and underpass. In a two-level semi-directional T, the two semi-directional ramps from the terminating highway cross each other at a different point than the surviving highway, necessitating longer ramps and often one ramp having two overpasses. Highway 412 has a three-level semi-directional T at Highway 407 and a two-level semi-directional T at Highway 401.
Service interchange
Service interchanges are used between a controlled-access route and a crossroad that is not controlled-access. A full cloverleaf may be used as a system or a service interchange.
Diamond interchange
A diamond interchange is an interchange involving four ramps where they enter and leave the freeway at a small angle and meet the non-freeway at almost right angles. These ramps at the non-freeway can be controlled through stop signs, traffic signals, or turn ramps.
Diamond interchanges are much more economical in use of materials and land than other interchange designs, as the junction does not normally require more than one bridge to be constructed. However, their capacity is lower than other interchanges and when traffic volumes are high they can easily become congested.
Double roundabout diamond
A double roundabout diamond interchange, also known as a dumbbell interchange or a dogbone interchange, is similar to the diamond interchange, but uses a pair of roundabouts in place of intersections to join the highway ramps with the crossroad. This typically increases the efficiency of the interchange when compared to a diamond, but is only ideal in light traffic conditions. In the dogbone variation, the roundabouts do not form a complete circle, instead having a teardrop shape, with the points facing towards the center of the interchange. Longer ramps are often required due to line-of-sight requirements at roundabouts.
Partial cloverleaf interchange
A partial cloverleaf interchange (often shortened to the portmanteau parclo) is an interchange with loop ramps in one to three quadrants, and diamond-interchange ramps in the remaining quadrants. The various configurations are generally a safer modification of the cloverleaf design, owing to a partial or complete reduction in weaving, but may require traffic lights on the lesser-travelled crossroad. Depending on the number of ramps used, they take up a moderate to large amount of land and have varying capacity and efficiency.
Parclo configurations are named based on the location and number of quadrants with ramps. The letter A denotes that, for traffic on the controlled-access highway, the loop ramps are located in advance of (approaching) the crossroad, and thus provide an on-ramp to the highway. The letter B indicates that the loop ramps are beyond the crossroad, and thus provide an off-ramp from the highway. These letters can be used together when the opposite directions of travel on the controlled-access highway are not symmetrical; thus a parclo AB features a loop ramp approaching the crossroad in one direction and beyond the crossroad in the opposing direction.
Diverging diamond interchange
A diverging diamond interchange (DDI) or double crossover diamond interchange (DCD) is similar to a traditional diamond interchange, except the opposing lanes on the crossroad cross each other twice, once on each side of the highway. This allows all highway entrances and exits to avoid crossing the opposite direction of travel and saves one signal phase of traffic lights each.
The first DDIs were constructed in the French communities of Versailles (A13 at D182), Le Perreux-sur-Marne (A4 at N486), and Seclin (A1 at D549) in the 1970s. Although such interchanges already existed, the idea for the DDI was "reinvented" around 2000, inspired by the freeway-to-freeway interchange between Interstate 95 and I-695 north of Baltimore. The first DDI in the United States opened on July 7, 2009, in Springfield, Missouri, at the junction of Interstate 44 and Missouri Route 13.
Single-point urban interchange
A single-point urban interchange (SPUI) or single-point diamond interchange (SPDI) is a modification of a diamond interchange in which all four ramps to and from a controlled-access highway converge at a single, three-phase traffic light in the middle of an overpass or underpass. While the compact design is safer, more efficient, and offers increased capacity—with three light phases as opposed to four in a traditional diamond, and two left turn queues on the arterial road instead of four—the significantly wider overpass or underpass structure makes them more costly than most service interchanges.
Since single-point urban interchanges can exist in rural areas, such as the interchange of U.S. Route 23 with M-59 in Michigan, the term single-point diamond interchange is considered the more correct phrasing.
Single-point interchanges were first built in the early 1970s along U.S. Route 19 in the Tampa Bay area of Florida, including the SR 694 interchange in St. Petersburg and SR 60 in Clearwater.
| Technology | Road infrastructure | null |
2079927 | https://en.wikipedia.org/wiki/Irrigation%20sprinkler | Irrigation sprinkler | An irrigation sprinkler (also known as a water sprinkler or simply a sprinkler) is a device used to irrigate (water) agricultural crops, lawns, landscapes, golf courses, and other areas. They are also used for cooling and for the control of airborne dust. Sprinkler irrigation is a method of applying water in a controlled manner, similar to rainfall. The water is distributed through a network that may consist of pumps, valves, pipes, and sprinklers.
Irrigation sprinklers are used in residential, industrial, and agricultural settings. Sprinkler irrigation is useful on uneven land where sufficient water is not otherwise available, as well as on sandy soil. Perpendicular pipes with rotating nozzles on top are joined to the main pipeline at regular intervals.
When water is pressurized through the main pipe, it escapes from the rotating nozzles and is sprinkled onto the crop. In sprinkler or overhead irrigation, water is piped to one or more central locations within the field and distributed by overhead high-pressure sprinklers or guns.
Types
Industrial
Rotating sprinkler-heads for higher pressures are driven by a ball drive, gear drive, or impact mechanisms. They can be designed to rotate in a full or partial circle.
Rainguns are similar to impact sprinklers, except that they generally operate at very high pressures and flow rates, with large nozzle diameters. In addition to irrigation, guns are used for industrial applications such as dust suppression and logging.
Many irrigation sprinklers are buried in the ground along with their supporting plumbing, although above ground and moving sprinklers are also common. Most irrigation sprinklers operate through electric and hydraulic technology and are grouped together in zones that can be collectively turned on and off by actuating a solenoid valve.
Residential
Home lawn sprinklers vary widely in their size, cost, and complexity. They include impact sprinklers, oscillating sprinklers, drip sprinklers, underground sprinkler systems, and portable sprinklers. Permanently installed systems may often operate on timers or other automated processes. They are occasionally installed with retractable heads for aesthetic and practical reasons, reducing damage during lawn mowing. These types of systems usually can be programmed to start automatically on a set time and day each week.
Small portable sprinklers can be placed temporarily on lawns if additional watering is needed or if no permanent system is in place. These are often attached to an outdoor water faucet and are placed for a short period of time. Other systems may be professionally installed permanently in the ground and are attached permanently to a home's plumbing system.
An antique sprinkler developed by Nomad called a 'set-and-forget tractor sprinkler' was used in Australia in the 1950s. Water pressure ensured that the sprinkler moved slowly across a lawn.
Agricultural science
The first sprinklers farmers used were adaptations of home and golf course sprinklers. These ad hoc systems, while doing the job of the buried pipes and fixed sprinkler heads, interfered with cultivation and were expensive to maintain.
Center-pivot irrigation was invented in 1940 by farmer Frank Zybach, who lived in Strasburg, Colorado.
In the 1950s, Stout-Wyss Irrigation System, a firm based in Portland, Oregon, developed a rolling-pipe irrigation system for farms that has become the most popular type for farmers irrigating large fields. With this system, large wheels attached to the sprinkler-equipped pipes move the line slowly across the field.
Underground
Underground sprinklers function by means of basic electronic and hydraulic technology. Pressurized water is held behind an electrically operated valve; this valve, together with all of the sprinklers it activates, is known as a zone. Upon activation, the solenoid that sits on top of the valve is magnetized, lifting a small stainless-steel plunger in its center. The raised plunger allows water to escape from the top of a rubber diaphragm located in the center of the valve. The charged water waiting on the bottom of this same diaphragm now has the higher pressure and lifts the diaphragm, and the pressurized water then escapes downstream of the valve through a series of pipes, usually made of PVC (for higher-pressure commercial systems) or polyethylene (for typically lower-pressure residential systems). At the ends of these pipes, and typically flush with ground level, are pre-measured and spaced-out sprinklers. These sprinklers can be fixed spray heads with a set pattern, full rotating sprinklers that spray a broken stream of water over a longer throw, or small drip emitters that release a slow, steady drip of water onto more delicate plants such as flowers and shrubs. The use of locally available materials is also recommended.
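The zone concept maps naturally onto a simple controller program. The sketch below is purely hypothetical: the zone names, run times, and valve interface are invented for illustration, and real irrigation controllers are dedicated hardware rather than Python scripts.

```python
# Hypothetical sketch of a zone-based sprinkler controller, illustrating the
# "zone" concept above: each solenoid valve gates a group of sprinklers,
# and a timer opens one zone at a time.
import time

ZONES = {                       # invented example zones: name -> run time (minutes)
    "front lawn (fixed spray heads)": 10,
    "back lawn (rotating heads)": 15,
    "flower beds (drip emitters)": 30,
}

def open_valve(zone: str):      # stand-in for energizing the solenoid
    print(f"solenoid ON  -> {zone}")

def close_valve(zone: str):     # stand-in for de-energizing the solenoid
    print(f"solenoid OFF -> {zone}")

def run_cycle(zones: dict[str, int]):
    """Water each zone in turn; only one valve is open at a time so the
    supply line keeps enough pressure for the active sprinklers."""
    for zone, minutes in zones.items():
        open_valve(zone)
        time.sleep(minutes * 60)   # a real controller uses a hardware timer
        close_valve(zone)

if __name__ == "__main__":
    run_cycle(ZONES)
```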
Health risks
In 2017, it was reported that the use of common garden hoses in combination with spray nozzles may generate aerosols containing droplets small enough to be inhaled by nearby people. Water stagnating in a hose between uses, especially when warmed by the sun, can host the growth and interaction of Legionella and free-living amoebae (FLA) as biofilms on the inner surface of the hose. Clinical cases of Legionnaires' disease or Pontiac fever have been found to be associated with inhalation of garden-hose aerosols containing Legionella bacteria. The report provides measured microbial densities resulting from controlled hose conditions in order to quantify the human health risks. The densities of Legionella spp. identified in two types of hoses were found to be similar to those reported during legionellosis outbreaks from other causes. It has been proposed to mitigate the risk by draining hoses after use.
Gallery
| Technology | Farm and garden machinery | null |
2081868 | https://en.wikipedia.org/wiki/Meteoric%20iron | Meteoric iron | Meteoric iron, sometimes meteoritic iron, is a native metal and a remnant of the early Solar System's protoplanetary disk, found in meteorites and made from the elements iron and nickel, mainly in the form of the mineral phases kamacite and taenite. Meteoric iron makes up the bulk of iron meteorites but is also found in other meteorites. Apart from minor amounts of telluric iron, meteoric iron is the only naturally occurring native metal of the element iron (in metallic form rather than in an ore) on the Earth's surface.
Mineralogy
The bulk of meteoric iron consists of taenite and kamacite. Taenite is a face-centered cubic and kamacite a body-centered cubic iron-nickel alloy.
Meteoric iron can be distinguished from telluric iron by its microstructure, and perhaps also by its chemical composition, since meteoric iron contains more nickel and less carbon.
Trace amounts of gallium and germanium in meteoric iron can be used to distinguish different meteorite types. The meteoric iron in stony iron meteorites is identical to the "gallium-germanium group" of the iron meteorites.
Structures
Meteoric iron forms a few different structures that can be seen by etching or in thin sections of meteorites. The Widmanstätten pattern forms when meteoric iron cools and kamacite is exsolved from taenite in the form of lamellas. Plessite is a more fine-grained intergrowth of the two minerals in between the lamella of the Widmanstätten pattern. Neumann lines are fine lines running through kamacite crystals that form through impact-related deformation.
Cultural and historical usage
Before the advent of iron smelting, meteoric iron was the only source of iron metal apart from minor amounts of telluric iron. Meteoric iron was already used before the beginning of the Iron Age to make cultural objects, tools and weapons.
Bronze Age
Many examples of iron working from the Bronze Age have been confirmed to be meteoritic in origin.
In ancient Egypt an iron metal bead was found in a graveyard near Gerzeh that contained 7.5% Ni. Dated to around 3200 BC, geochemical analysis of the Gerzeh iron beads, based on the ratio of nickel to iron and cobalt, confirms that the iron was meteoritic in origin.
Dated to around 2500 BC, an iron dagger from Alaca Höyük was confirmed to be meteoritic in origin through geochemical analysis.
Dated to around 2300 BC, an iron pendant from Umm el-Marra in Syria was confirmed to be meteoritic in origin through geochemical analysis.
Dated to around 1400 BC, an iron axe from Ugarit in Syria was found to be meteoritic in origin.
Dated to around 1400 BC, several iron axes from Shang dynasty China were confirmed to be meteoritic in origin.
Dated to around 1350 BC, an iron dagger, bracelet and headrest from the tomb of Tutankhamun were confirmed to be meteoritic in origin. The Tutankhamun dagger consists of similar proportions of metals (iron, nickel and cobalt) to a meteorite discovered in the area, deposited by an ancient meteor shower.
Dated to around 900 BC, an iron arrowhead from Mörigen in Switzerland was confirmed to be meteoritic in origin.
The Americas
The Inuit used parts of the Cape York meteorite to make lance heads.
Africa
Fragments from the Gibeon meteorite were used for centuries by the Nama people of Namibia.
Asia
There are reports of the use of meteorites for manufacture of various items in Tibet (see Thokcha).
The Iron Man, a purported Tibetan Buddhist statue of Vaiśravaṇa, was likely carved from an ataxite meteorite. It has been speculated that it may be made from a fragment of the Chinga meteorite.
Even after the invention of smelting, meteoric iron was sometimes used where this technology was not available or metal was scarce. A piece of the Cranbourne meteorite was made into a horseshoe around 1854.
Today meteoritic iron is used in niche jewellery and knife production, but most of it is used for research, educational or collecting purposes.
Atmospheric phenomena
Meteoric iron also has an effect on the Earth's atmosphere. When meteorites descend through the atmosphere, outer parts are ablated. Meteoric ablation is the source of many elements in the upper atmosphere. When meteoric iron is ablated, it forms a free iron atom that can react with ozone (O3) to form FeO. This FeO may be the source of the orange spectrographic bands in the spectrum of the upper atmosphere.
| Physical sciences | Minerals | Earth science |
2888531 | https://en.wikipedia.org/wiki/Tree%20squirrel | Tree squirrel | Tree squirrels are the members of the squirrel family (Sciuridae) commonly just referred to as "squirrels". They include more than 100 arboreal species native to all continents except Antarctica and Oceania.
They do not form a single natural, or monophyletic, group; they are variously related to others in the squirrel family, including ground squirrels, flying squirrels, marmots, and chipmunks. The defining characteristic used to determine which species of Sciuridae are tree squirrels is their habitat rather than their physiology: tree squirrels live mostly among trees, as opposed to those that live in burrows in the ground or among rocks. An exception is the flying squirrel, which also makes its home in trees but has a physiological distinction separating it from its tree squirrel cousins: special flaps of skin called patagia, acting as glider wings, which allow gliding flight.
The best-known genus of tree squirrels is Sciurus, which includes the eastern gray squirrel of North America (introduced to Great Britain in the 1870s), the red squirrel of Eurasia, and the North American fox squirrel, among many others. Many tree squirrel species have adapted to human-altered environments such as rural farms, suburban backyards and urban parks.
Classification
Current taxonomy, based on genetic data, splits the tree squirrels into several subfamilies. The following genera of the squirrel family are classified as tree squirrels.
Subfamily Ratufinae
Genus Ratufa (Asian giant squirrels)
Subfamily Sciurillinae
Genus Sciurillus (South American pygmy squirrel)
Subfamily Sciurinae
Tribe Sciurini (mostly American tree squirrels)
Genus Microsciurus (American dwarf squirrels)
Genus Rheithrosciurus (Borneo tufted ground squirrel)
Genus Sciurus (Eurasian and American tree squirrels)
Genus Syntheosciurus (Central American mountain squirrel)
Genus Tamiasciurus (American pine squirrels)
Subfamily Callosciurinae (Asian tree squirrels)
Genus Callosciurus (Oriental tree squirrels, introduced into Europe and South America)
Genus Exilisciurus (Asian pygmy squirrels)
Genus Funambulus (Asian palm squirrels, introduced into Australia in the 1920s)
Genus Glyphotes (sculptor squirrel)
Genus Nannosciurus (Asian dwarf squirrel)
Genus Prosciurillus (Sulawesi dwarf squirrels)
Genus Rubrisciurus (Sulawesi giant squirrel)
Genus Sundasciurus (Sunda squirrels)
Genus Tamiops (Asian striped squirrels)
Subfamily Xerinae
Tribe Protoxerini (African tree squirrels)
Genus Epixerus (African palm squirrels)
Genus Funisciurus (rope squirrels)
Genus Heliosciurus (sun squirrels)
Genus Myosciurus (African pygmy squirrel)
Genus Paraxerus (bush squirrels)
Genus Protoxerus (African giant squirrels)
Relationship with humans
Squirrels are generally inquisitive and persistent animals. In residential neighborhoods, they are notorious for circumventing obstacles in order to eat from bird feeders. Although they are expert climbers, and primarily arboreal, some species of squirrels also thrive in urban environments, where they have adapted to humans.
As pets
Squirrels have been kept as pets in Western society at least until the 19th century. Because of their small size and tame nature, they were especially popular with women and the clergy.
As pests
Squirrels are sometimes considered pests because of their propensity to chew on various edible and inedible objects and their stubborn persistence in trying to get what they want. Their characteristic gnawing also keeps their teeth sharp and, because their teeth grow continuously, prevents overgrowth. On occasion, squirrels will chew through plastic and even metal to get to food.
Tree squirrels may bury food in the ground for later retrieval. Squirrels use their keen sense of smell to search for buried food, but can dig numerous holes in the process. This may become an annoyance to gardeners with strict landscape requirements, especially when the garden contains edibles.
Homeowners in areas with a heavy squirrel population must be vigilant in keeping attics, basements, and sheds carefully sealed to prevent property damage caused by nesting squirrels. A squirrel nest is called a "drey".
Squirrels are a serious fire hazard when they break into buildings. They often treat exposed power cables as tree branches, and gnaw on the electrical insulation. The resulting exposed conductors can short out, causing a fire. For this reason alone, squirrel nests inside buildings cannot be safely ignored. A squirrel nest will also cause problems with noise, excreta, unpleasant odors, and eventual structural damage.
Some homeowners resort to novel ways of dealing with this problem, such as collecting fur from pets such as domestic cats and dogs and placing it in attics. The hope is that the scent of this fur will indicate to nesting squirrels that a potential predator roams nearby and encourage them to evacuate. Odoriferous repellents, including mothballs and ammonia, are generally ineffective in expelling squirrels from buildings.
Once established in a nest, squirrels ignore fake owls and scarecrows, along with bright flashing lights, loud noises, and ultrasonic or electromagnetic devices. However, squirrels must leave the nest to obtain food and water (usually daily, except in bad weather), affording an opportunity to trap them or exclude them from re-entering.
To discourage chewing on an object, it can be coated or covered with something to make it distasteful: for instance a soft cloth doused with chili pepper paste or powder. Capsaicin and Ro-pel are other forms of repellent. To remain effective, the coating must be reapplied regularly, especially if it is exposed to the weather. Poisoning squirrels can be problematic because of the risks to other animals or children in the building, and because the odor of a dead squirrel in an attic or wall cavity is very unpleasant and persistent.
Trapping is often used to remove squirrels from residential structures. Effective baits include fruit, peanut butter, nuts, seeds and vanilla extract.
An alternative method is to wait until squirrels have left in search of food, and then close up all their access openings, or to install one-way trap doors or a carefully angled pipe. Attempting to get rid of all squirrels in a neighborhood is generally a futile goal; the focus instead should be on physically excluding them from places where they can do damage. There are other humane techniques to remove squirrels from buildings, but removal is ineffective unless steps are taken to prevent them from immediately breaking in again.
Squirrels are often the cause of power outages. They can readily climb a power pole and crawl or run along a power cable. The animals will climb onto power transformers or capacitors looking for food, or a place to cache acorns. If they touch a high voltage conductor and a grounded portion of the enclosure at the same time, they are electrocuted, and often cause a short circuit that shuts down equipment. Squirrels have brought down the high-tech NASDAQ stock market twice and were responsible for a spate of power outages at the University of Alabama. To sharpen their teeth, squirrels will often chew on tree branches or even the occasional live power line. Rubber or plastic plates, or freely rotating sleeves ("squirrel guards") are sometimes used to discourage access to these facilities.
Squirrels otherwise pose little danger to humans and present almost zero risk of transmitting rabies.
Squirrels cause economic losses to homeowners, nut growers, and forest managers in addition to damage to electric transmission lines. These losses include direct damage to property, repairs, lost revenue and public relations. While dollar costs of these losses are sometimes calculated for isolated incidents, there is no tracking system to determine the total extent of the losses.
As roadkill and traffic hazards
In regions where squirrels are plentiful, tire-flattened roadkill is a common sight on roadways, especially in the spring and fall, when there is a fresh crop of young rodents. Motorists have caused serious accidents by attempting to swerve or stop to avoid a squirrel in the road. Evasive maneuvers are difficult since squirrels are much more agile and have much quicker reaction times than motorists in heavy vehicles; the majority of vehicular encounters end with no harm to either party.
An effort to mitigate these hazards to both squirrels and humans is the Nutty Narrows Bridge in Longview, Washington, listed on the National Register of Historic Places. It provides a way for squirrels to cross a busy street safely.
As urban wildlife
Tree squirrels are a common type of urban wildlife. They can be trained to be hand-fed, and will take as much food as is available because they cache the surplus. Squirrels living in parks and campuses in cities have learned that humans are typically a ready source of food, either deliberately or from careless disposal of surplus. Some people do "squirrel fishing" as a way of simultaneously playing with and feeding squirrels.
Humans commonly offer various nuts and seeds; however, wildlife rehabilitators in the field have noted that neither raw nor roasted peanuts nor sunflower seeds are healthy for squirrels, because they are deficient in several essential nutrients. This type of deficiency has been found to cause metabolic bone disease, a somewhat common ailment found in malnourished squirrels.
As game
Squirrels are sometimes hunted as game animals, whether for their fur or as food. In the Middle Ages the red squirrel was hunted for its blue-gray winter coat, traditionally called vair, which now lends its name to a heraldic fur. The hairs from squirrel tails are prized in fly fishing when tying fishing flies.
In the US
In many areas of the US, squirrels are still hunted for food, as they were historically. Recipes calling for squirrel even appear in cookbooks, including James Beard's American Cookery and pre-1997 copies of The Joy of Cooking. Squirrel meat can be substituted for rabbit or chicken in many recipes and was an ingredient in the original recipe for Brunswick stew, a popular dish in various parts of the Southern US. Other similar stews were also based on squirrel meat, including burgoo and Southern Illinois chowder.
Although squirrel meat is low in fat, the American Heart Association has found it to be high in cholesterol, unlike most game meat.
Squirrels Unlimited hosts a World Championship Squirrel Cook-Off each year in Bentonville, Arkansas.
In the UK
For most of the history of the United Kingdom, squirrel meat was not commonly eaten and was even scorned by many. In the early 21st century, however, wild squirrel has become a more popular meat to cook with, appearing more often in British restaurants and shops as a fashionable alternative. Specifically, Britons are cooking with the invasive gray squirrel, which is praised for its low fat content and its free-range origins. The novelty of a meat considered unusual or special has also contributed to the spread of squirrel consumption. Because of the difficulty of a clean kill and other factors, the majority of squirrel eaten in the UK is acquired from professional hunters, trappers and gamekeepers.
Some Britons eat gray squirrel as a direct attempt to help the native red squirrel, whose numbers have dwindled since the 19th-century introduction of the gray squirrel and the dramatic habitat loss that followed. This motivation was promoted by a national "Save Our Squirrels" campaign that used the slogan, "Save a red, eat a grey!"
Risks of eating
As with other wild game and fish species, the consumption of squirrels that have been exposed to high levels of pollution or toxic waste poses a health risk to humans. In 2007 in the northern New Jersey community of Ringwood, the New Jersey Department of Health and Senior Services issued a warning to anyone who eats squirrel (especially children and those who are pregnant) to limit their consumption after a lead-contaminated squirrel was found near the Ringwood Mines Landfill. Toxic waste had been illegally dumped at this location for many years, before authorities cracked down on this practice in the 1980s.
In 1997, doctors in Kentucky published a paper in the Lancet that considered a possible association between the local tradition of consuming squirrel brains and five cases of Creutzfeldt–Jakob disease, a rare but serious prion-based disorder. The authors posed this as a mere possibility, unconfirmed by either post-mortem analysis of the patients' brain tissue, or identification of a contagious prion agent in squirrels. Nonetheless, the Lancet article generated substantial media coverage, including articles in the New Yorker and New York Times. A 2015 case of CJD in a Pittsburgh man who had eaten squirrel brains played out similarly: the media seized on the patient's unconventional food choice, positing squirrel brains as the source of his disease. The doctor who made the initial report later clarified that he had not meant to assert the squirrel meat was the cause. Analysis of the patient's brain tissue ruled out the possibility of CJD acquired from food. As of 2018, Creutzfeldt–Jakob disease had never been identified in squirrels, and the association between squirrel consumption and CJD remained speculative.
Relationship with trees
The biggest source of food for tree squirrels is tree nuts.
Red squirrels store nuts in a single stash (a midden) that tends to dry out, so the seeds don't take root.
Fox squirrels and gray squirrels bury nuts over a widespread area (scatterhoarding), and often forget them, resulting in new trees (mutualism).
In culture
In the Ramayana, an ancient Sanskrit epic poem, a squirrel assists in constructing a bridge from India to Sri Lanka to help Rama rescue his wife Sita. Rama rewards the squirrel by stroking his back with his three middle fingers, thus giving the Indian palm squirrel the three white stripes that appear on its back. In Norse mythology, the squirrel Ratatoskr is a messenger who scurries up and down the trunk of the world-tree Yggdrasil, carrying malicious gossip and insults back and forth between the dragon Níðhöggr, who sits at the bottom of the tree gnawing on its roots, and the hawk Veðrfölnir, who sits at the top of the tree keeping watch. According to Richard W. Thorington, Jr. and Katie E. Ferrell, this legend may have originated from the red squirrel's habit of giving a "scolding alarm call in response to danger", which some Norsemen may have imagined as insults.
In Irish mythology, the goddess Medb is said to always have a bird perched on one shoulder and a squirrel on the other, serving as her messengers to the sky and the earth respectively. In Europe during the Middle Ages, squirrels were sometimes used in bestiaries as symbols of greed and avarice on account of their storing of nuts, but, in the nineteenth century, British natural history books often praised them as thrifty for this same reason. A myth told by the Ainu people of Japan holds that squirrels are the discarded sandals of the ancestral deity Aioina, possibly because squirrels move in spurts like footsteps. The Kalevala, a Finnish epic poem collected in the nineteenth century but rooted in much older oral tradition, contains references to squirrels, including mention of a white squirrel being born of a virgin.
Literary references to squirrels include the works of Beatrix Potter, Brian Jacques' Redwall series (including Jess Squirrel and numerous other squirrels), Pattertwig in C. S. Lewis' Prince Caspian, Michael Tod's Woodstock Saga of novels featuring squirrel communities in the style of Watership Down, and the Starwife and her subjects from Robin Jarvis's Deptford novels. The title character in Miriam Young's 1964 children's book Miss Suzy is a squirrel.
Anthropomorphic red squirrels were used in British road safety campaigns between the 1950s and 1980s.
An episode of the radio program This American Life called "Squirrel Cop" describes the unintentionally humorous misadventure of a newly hired policeman in trying to remove a frantic squirrel from a homeowner's living room, which results in personal injury and a small fire. First aired in 1998, this episode turned out to be one of the most popular ones of the series, prompting rebroadcasts and a lead position on the two-CD compilation Crimebusters + Crossed Wires: Stories from This American Life.
Albino and white squirrels
Squirrels also affect human society through the fascination that local populations of white squirrels (often misidentified as albino) inspire in people. This manifests in community groups formed around a shared interest in these rare animals, in organizations that seek to protect them from human predation, and in the use of the white squirrel image as a cultural icon.
Although these squirrels are commonly referred to as "albinos", most of them are likely non-albino squirrels exhibiting a rare white fur coloration known as leucism, which results from a recessive gene found within certain eastern gray squirrel (Sciurus carolinensis) populations; technically, they ought therefore to be referred to as white squirrels rather than albinos.
A project run by Untamed Science seeks to report and document the occurrence of white squirrels, albinos, and other piebald morphs. Users are encouraged to submit their sightings.
Local pride
Olney, Illinois, known as the "White Squirrel Capital of the World", is home of the world's largest known white squirrel colony. These squirrels have the right of way on all streets in the town, with a $500 fine for hitting one. The Olney Police Department features the image of a white squirrel on its officers' uniform patches.
Along with Olney, four other towns in North America avidly compete to be the official "Home of the White Squirrel": Marionville, Missouri; Brevard, North Carolina; Exeter, Ontario; and Kenton, Tennessee. Each holds an annual white squirrel festival, among other events designed to promote its claim to the title of "White Squirrel Capital".
A list of white squirrel sightings around the world is maintained by the White Squirrel Research Institute, a group based in Brevard, North Carolina.
Other towns that have reported white squirrel populations in North America (although not necessarily competing to be the "official" white squirrel capital) include Bowling Green, Kentucky; Columbia, Mississippi; DeForest, Wisconsin; Stratford, Connecticut; and some of the snowbelt cities in the Western, Central and Finger Lakes regions of New York State (Buffalo, Rochester, Ithaca and Syracuse). The Trinity Bellwoods neighborhood of Toronto, Ontario is locally known for white squirrel sightings.
Campus populations
In addition to the various towns that boast of their white squirrel populations, a number of university campuses in North America have white squirrels. The University of Texas at Austin is home to a white squirrel population that has spurred the myth of the albino squirrel as a good luck charm. There are many versions of the tale; one of the more popular holds that a student who spots the albino squirrel before an exam will ace it. The University of North Texas founded the Albino Squirrel Preservation Society in 2001, which has since acquired several "worldwide" chapters. In 2006, the University of North Texas held a student referendum to name its white squirrel as the university's secondary mascot, but the measure was narrowly defeated by the student body. The University of Wisconsin–Eau Claire has a significant white squirrel population both on the campus and in other areas of the city of Eau Claire. Michigan Technological University in Houghton, Michigan is home to frequently sighted white squirrels that live on and around the campus. A Facebook group dedicated to these squirrels, called I've Seen the Albino Squirrel of Michigan Tech, was created for people to post photographs and anecdotes of their encounters, and includes stories from alumni who recall seeing white squirrels in Houghton as far back as the 1930s.
In Kentucky, the University of Louisville has established its own chapter of the Albino Squirrel Preservation Society, which maintains contact with its members and interested parties through a Facebook group of the same name. The university has a standing offer of a free t-shirt to anyone who photographs a white squirrel on campus grounds and brings the photo to the administration offices.
Other university campuses that have albino squirrel populations include Oberlin College in Ohio, Ohio State University in Columbus, Ohio, Western Kentucky University in Bowling Green, Kentucky (which has had a population of albino squirrels since the 1960s), and Youngstown State University in Youngstown, Ohio.
Michael Stokes, a biology professor at Western Kentucky University, commented that the probable cause for the abundance of white squirrels on university campuses was because they were originally introduced by someone: "We're not sure how they got here, but I'll tell you how it usually happens...When you see them, especially around a college campus or parks, somebody brought them in because they thought it would be neat to have white squirrels around."
Albert Meier, another biology professor at Western Kentucky University, added that: "... white squirrels rarely survive in the wild because they can't easily hide. But on a college campus, they are less likely to be consumed by other animals."
In folklore
A story in which a Nāga shapeshifts into a white or albino squirrel, is killed by a hunter, and is magically transformed into meat equal to 8,000 cartloads figures prominently in the folklore of rocket festival traditions and the origin of Nong Han Kumphawapi Lake in Northeast Thailand.
Red and grey squirrels in the UK
The decline of the red squirrel and the rise of the eastern gray squirrel, an introduced species from North America, have been widely remarked upon in British popular culture, where the change is mostly portrayed as the invading greys driving out the native reds. Evidence also shows that grey squirrels are vectors of the squirrel parapoxvirus, for which no vaccine is currently available; the virus is deadly to red squirrels but does not seem to affect the non-native host.
Currently, the red squirrel's range has been reduced to the coniferous forests of Scotland and, in England, to Formby, the Lake District, Brownsea Island, and the Isle of Wight. The majority of England's red squirrels are found in the county of Northumberland. Special measures are in place to contain and remove any infiltration of grey squirrels into these areas. Though the population has decreased dramatically, the species remains listed on the IUCN Red List as Least Concern.
As of 2008, the eastern gray squirrel was regarded as vermin and it was illegal to release any into the wild; any caught could be released only if one applied for and was granted a licence to do so. As of 2015, any caught in Scotland had to be humanely killed.
| Biology and health sciences | Rodents | Animals |
2888579 | https://en.wikipedia.org/wiki/Sciurus | Sciurus | The genus Sciurus contains most of the common, bushy-tailed squirrels in North America, Europe, temperate Asia, Central America and South America.
Species
The number of species in the genus is subject to change.
In 2005, Thorington & Hoffman, whose taxonomic interpretation is followed by the IUCN website, accepted 28 species in the genus:
Genus Sciurus
Subgenus Sciurus
Allen's squirrel, Sciurus alleni
Arizona gray squirrel, Sciurus arizonensis
Mexican gray squirrel, Sciurus aureogaster
Eastern gray squirrel, Sciurus carolinensis
Collie's squirrel, Sciurus colliaei
Deppe's squirrel, Sciurus deppei
Japanese squirrel, Sciurus lis
Calabrian black squirrel, Sciurus meridionalis
Mexican fox squirrel, Sciurus nayaritensis
Fox squirrel, Sciurus niger
Peters's squirrel, Sciurus oculatus
Variegated squirrel, Sciurus variegatoides
Eurasian red squirrel, Sciurus vulgaris
Yucatan squirrel, Sciurus yucatanensis
Subgenus Otosciurus
Abert's squirrel, Sciurus aberti
Subgenus Guerlinguetus
Brazilian squirrel (Guianan squirrel), Sciurus aestuans
Yellow-throated squirrel, Sciurus gilvigularis
Red-tailed squirrel, Sciurus granatensis
Bolivian squirrel, Sciurus ignitus
Ingram's squirrel, Sciurus ingrami
Andean squirrel, Sciurus pucheranii
Richmond's squirrel, Sciurus richmondi
Sanborn's squirrel, Sciurus sanborni
Guayaquil squirrel, Sciurus stramineus
Subgenus Tenes
Persian squirrel, Sciurus anomalus
Subgenus Hadrosciurus
Fiery squirrel, Sciurus flammifer
Junín red squirrel, Sciurus pyrrhinus
Subgenus Hesperosciurus
Western gray squirrel, Sciurus griseus
Subgenus Urosciurus
Northern Amazon red squirrel, Sciurus igniventris
Southern Amazon red squirrel, Sciurus spadiceus
In 2015, 15–17 species were left in the genus Sciurus after de Vivo & Carmignotto comprehensively reviewed South American Sciuridae for the first time in many decades and proposed numerous changes: synonymising some species and many subspecies, splitting another species, and naming new species. They followed Joel Asaph Allen's unsatisfying 1914 attempt at splitting the genus Sciurus by raising the South American subgenera to the rank of genus, adding Urosciurus to Hadrosciurus, and splitting the genus Guerlinguetus in three. Their taxonomic treatment might also require Sciurus deppei to be moved to Notosciurus.
A 2020 paper on the taxonomy of Sciurinae split Sciurus into multiple new genera and elevated several subgenera. The paper included genetic sampling from almost all recognized species and recommended the following species assignments:
Sciurus
Persian squirrel, S. anomalus
Eurasian red squirrel, S. vulgaris
Japanese squirrel, S. lis
Hesperosciurus
Abert's squirrel, H. aberti
Western gray squirrel, H. griseus
Parasciurus
Allen's squirrel, P. alleni
Arizona gray squirrel, P. arizonensis
Mexican fox squirrel, P. nayaritensis
Fox squirrel, P. niger
Peters's squirrel, P. oculatus
Neosciurus
Eastern gray squirrel, N. carolinensis
Echinosciurus
Mexican gray squirrel, E. aureogaster
Collie's squirrel, E. colliaei
Deppe's squirrel, E. deppei
Variegated squirrel, E. variegatoides
Yucatan squirrel, E. yucatanensis
Simosciurus
S. nebouxii
Guayaquil squirrel, S. stramineus
Guerlinguetus
Brazilian squirrel, G. aestuans
G. brasiliensis
Hadrosciurus
Bolivian squirrel, H. ignitus
Northern Amazon red squirrel, H. igniventris
Junín red squirrel, H. pyrrhinus
Southern Amazon red squirrel, H. spadiceus
Additionally, the paper suggests moving the Andean squirrel back to the subtribe Microsciurina, the dwarf squirrels, assigning it to the newly described genus Leptosciurus. The paper's findings agree with prior assessments that synonymized Richmond's squirrel into the red-tailed squirrel, and it reassigns the red-tailed squirrel to the previously monotypic genus Syntheosciurus, also in Microsciurina. The paper did not include genetic sampling or taxonomic suggestions for gilvigularis, meridionalis, sanborni, or flammifer.
| Biology and health sciences | Rodents | Animals |
2889448 | https://en.wikipedia.org/wiki/Powder%20coating | Powder coating | Powder coating is a type of coating that is applied as a free-flowing, dry powder. Unlike conventional liquid paint, which is delivered via an evaporating solvent, powder coating is typically applied electrostatically and then cured under heat or with ultraviolet light. The powder may be a thermoplastic or a thermosetting polymer. It is usually used to create a thick, tough finish that is more durable than conventional paint. Powder coating is mainly used for coating of metal objects, particularly those subject to rough use. Advancements in powder coating technology like UV-curable powder coatings allow for other materials such as plastics, composites, carbon fiber, and medium-density fibreboard (MDF) to be powder coated, as little heat or oven dwell time is required to process them.
History, properties, and uses of powder coating
The powder coating process was invented around 1945 by Daniel Gustin, who received US Patent 2538562 for it. This process coats an object electrostatically and then cures it with heat, creating a finish harder and tougher than conventional paint. Originally used on metal products such as household appliances, aluminium extrusions, drum hardware, automobile parts, and bicycle frames, the practice of powder coating has been expanded to allow finishing of other materials.
Because powder coating does not have a liquid carrier, it can produce thicker coatings than conventional liquid coatings without running or sagging, and powder coating produces minimal appearance differences between horizontally coated surfaces and vertically coated surfaces. Further, because no carrier fluid evaporates away, the coating process emits few volatile organic compounds (VOC). Finally, several powder colors can be applied before all are cured together, allowing color blending and special bleed effects in a single layer.
While it is relatively easy to apply thick coatings that cure to a smooth, texture-free finish, it is not as easy to apply smooth thin films. As the film thickness is reduced, the film develops an increasingly pronounced orange-peel texture because of the particle size and glass transition temperature (Tg) of the powder.
Most powder coatings have a particle size in the range of 2 to 50 μm, a softening temperature Tg around 80 °C, and a melting temperature around 150 °C, and are cured at around 200 °C for 10 to 15 minutes (exact temperatures and times may depend on the thickness of the item being coated). For such powder coatings, film builds of greater than 50 μm may be required to obtain an acceptably smooth film. The surface texture that is considered desirable or acceptable depends on the end product. Many manufacturers prefer a certain degree of orange peel, since it helps to hide metal defects that occurred during manufacture, and the resulting coating is less prone to showing fingerprints.
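These typical parameters can be collected into a small data structure. Here is a minimal sketch (the PowderSpec structure and the film-build rule of thumb are illustrative assumptions; only the quoted numbers come from the paragraph above):

```python
from dataclasses import dataclass

@dataclass
class PowderSpec:
    """Typical thermoset powder parameters quoted above (illustrative)."""
    particle_size_um: tuple = (2.0, 50.0)  # particle size range, micrometres
    tg_c: float = 80.0                     # softening temperature (Tg)
    melt_c: float = 150.0                  # melting temperature
    cure_c: float = 200.0                  # cure temperature
    cure_minutes: tuple = (10, 15)         # cure time window

def min_smooth_film_build_um(spec: PowderSpec) -> float:
    # Rule of thumb from the text: builds above ~50 um are needed for an
    # acceptably smooth film at standard particle sizes.
    return max(50.0, spec.particle_size_um[1])

spec = PowderSpec()
print(f"suggested minimum film build: {min_smooth_film_build_um(spec)} um")
```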
There are very specialized operations that apply powder coatings of less than 30 μm or with a Tg below 40 °C in order to produce smooth thin films. One variation of the dry powder coating process, the Powder Slurry process, combines the advantages of powder coatings and liquid coatings by dispersing very fine powders of 1–5 μm sized particles into water, which then allows very smooth, low-film-thickness coatings to be produced.
For small-scale jobs, "rattle can" spray paint is less expensive and complex than powder coating. At the professional scale, the capital expense and time required for a powder coat gun, booth and oven are similar to those for a spray gun system. Powder coatings do have a major advantage in that the overspray can be recycled. However, if multiple colors are being sprayed in a single spray booth, this may limit the ability to recycle the overspray.
Advantages over other coating processes
Powder coatings contain no solvents and release little or no volatile organic compounds (VOCs) into the atmosphere. Thus, there is no need for finishers to buy costly pollution control equipment, and companies can comply more easily and economically with environmental regulations, such as those issued by the U.S. Environmental Protection Agency.
Powder coatings can produce much thicker coatings than conventional liquid coatings without running or sagging.
Powder coated items generally have fewer appearance differences than liquid coated items between horizontally coated surfaces and vertically coated surfaces.
A wide range of speciality effects are easily accomplished using powder coatings that would be impossible to achieve with other coating processes.
Curing time is significantly faster with powder coatings than with liquid coatings, especially when using ultraviolet-cured powder coatings or advanced low-bake thermosetting powders.
Types of powder coating
There are three main categories of powder coatings: thermosets, thermoplastics, and UV curable powder coatings. Thermoset powder coatings incorporate a cross-linker into the formulation.
The most common cross-linkers are solid epoxy resins in so-called hybrid powders, in mixing ratios of 50/50, 60/40 and 70/30 (polyester resin/epoxy resin) for indoor applications, and triglycidyl isocyanurate (TGIC) in a 93/7 ratio or β-hydroxyalkylamide (HAA) hardener in a 95/5 ratio for outdoor applications. When the powder is baked, the cross-linker reacts with other chemical groups in the powder to polymerize, improving the performance properties. The chemical cross-linking for hybrids and TGIC powders—representing the major part of the global powder coating market—is based on the reaction of organic acid groups with an epoxy functionality; this carboxy–epoxy reaction is thoroughly investigated and well understood, and by the addition of catalysts the conversion can be accelerated and the curing schedule adjusted in time and/or temperature. In the powder coating industry it is common to use catalyst masterbatches in which 10–15% of the active ingredient is introduced into a polyester carrier resin as matrix; this approach provides the most even possible dispersion of a small amount of catalyst through the bulk of the powder. For the cross-linking of the TGIC-free alternative based on HAA hardeners, no catalyst is known.
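To make the ratio arithmetic concrete, here is a minimal sketch (the function names and batch workflow are hypothetical; only the quoted mixing ratios and the 10–15% masterbatch loading come from the text above):

```python
def hybrid_binder_split(total_binder_kg: float, polyester_share: float) -> dict:
    """Split a binder batch into polyester and epoxy for a hybrid powder.

    polyester_share is the polyester fraction: 0.5, 0.6 or 0.7 for the
    50/50, 60/40 and 70/30 polyester/epoxy hybrids described above.
    """
    polyester = total_binder_kg * polyester_share
    return {"polyester_kg": polyester, "epoxy_kg": total_binder_kg - polyester}

def masterbatch_active_kg(masterbatch_kg: float, active_fraction: float = 0.10) -> float:
    # Catalyst masterbatches carry roughly 10-15% active ingredient in a
    # polyester carrier resin, per the text above.
    return masterbatch_kg * active_fraction

# Example: a 100 kg binder batch for a 60/40 hybrid powder.
print(hybrid_binder_split(100.0, 0.60))  # {'polyester_kg': 60.0, 'epoxy_kg': 40.0}
print(masterbatch_active_kg(2.0, 0.15))  # 0.3 kg of active catalyst
```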
For special applications such as coil coatings or clear coats it is common to use glycidyl esters as the hardener component; their cross-linking is likewise based on carboxy–epoxy chemistry. A different chemical reaction is used in so-called polyurethane powders, where the binder resin carries hydroxyl functional groups that react with the isocyanate groups of the hardener component. The isocyanate group is usually introduced into the powder in blocked form, with the isocyanate functionality pre-reacted with ε-caprolactam as a blocking agent or present as uretdiones; at elevated temperatures (the deblocking temperature) the free isocyanate groups are released and become available for the cross-linking reaction with the hydroxyl functionality.
In general, all thermosetting powder formulations contain, in addition to the binder resin and cross-linker, additives that support flow-out and levelling and aid degassing. The use of flow promoters is common, where the active ingredient—a polyacrylate—is absorbed onto silica as a carrier or dispersed as a masterbatch in a polyester resin matrix. The vast majority of powders contain benzoin as a degassing agent to avoid pinholes in the final powder coating film.
The thermoplastic variety does not undergo any additional reactions during the baking process; it simply flows to form the final coating. UV-curable powder coatings are photopolymerisable materials containing a chemical photoinitiator that instantly responds to UV light energy by initiating the reaction that leads to crosslinking or cure. The differentiating factor of this process from others is the separation of the melt stage from the cure stage. UV-curable powder will melt in 60 to 120 seconds upon reaching a temperature between 110 °C and 130 °C. Once the melted coating is in this temperature window, it is instantly cured when exposed to UV light.
The most common polymers used are polyester, polyurethane, polyester-epoxy (known as hybrid), straight epoxy (fusion bonded epoxy) and acrylics.
Production
The polymer granules are mixed with hardener, pigments and other powder ingredients in an industrial mixer, such as a turbomixer
The mixture is heated in an extruder
The extruded mixture is rolled flat, cooled and broken into small chips
The chips are milled and sieved to make a fine powder
Methodology
The powder coating process involves three basic steps: part preparation or the pre-treatment, the powder application, and curing.
Part preparation processes and equipment
Removal of oil, dirt, lubrication greases, metal oxides, welding scale etc. is essential prior to the powder coating process. It can be done by a variety of chemical and mechanical methods. The selection of the method depends on the size and the material of the part to be powder coated, the type of impurities to be removed and the performance requirement of the finished product. Some heat-sensitive plastics and composites have low surface tensions and plasma treating can be necessary to improve powder adhesion.
Chemical pre-treatments involve the use of phosphates or chromates in submersion or spray application. These often occur in multiple stages and consist of degreasing, etching, de-smutting, various rinses, and the final phosphating or chromating of the substrate, sometimes supplemented by newer nanotechnology-based chemical bonding treatments. The pre-treatment process both cleans the substrate and improves bonding of the powder to the metal. Additional processes have recently been developed that avoid the use of chromates, as these can be toxic to the environment; titanium, zirconium and silanes offer similar performance against corrosion and for adhesion of the powder.
In many high-end applications, the part is electrocoated following the pre-treatment process and prior to the powder coating application. This has been particularly useful in automotive and other applications requiring high-end performance characteristics.
Another method of preparing the surface prior to coating is known as abrasive blasting or sandblasting and shot blasting. Blast media and blasting abrasives are used to provide surface texturing and preparation, etching, finishing, and degreasing for products made of wood, plastic, or glass. The most important properties to consider are chemical composition and density; particle shape and size; and impact resistance.
Silicon carbide grit blast medium is brittle, sharp, and suitable for grinding metals and low-tensile strength, non-metallic materials. Plastic media blast equipment uses plastic abrasives that are sensitive to substrates such as aluminum, but still suitable for de-coating and surface finishing. Sand blast medium uses high-purity crystals that have low-metal content. Glass bead blast medium contains glass beads of various sizes.
Cast steel shot or steel grit is used to clean and prepare the surface before coating. Shot blasting recycles the media and is environmentally friendly. This method of preparation is highly efficient on steel parts such as I-beams, angles, pipes, tubes and large fabricated pieces.
Different powder coating applications can require alternative methods of preparation such as abrasive blasting prior to coating. The online consumer market typically offers media blasting services coupled with their coating services at additional costs.
A recent development in the powder coating industry is the use of plasma pre-treatment for heat-sensitive plastics and composites. These materials typically have low-energy surfaces, are hydrophobic, and have a low degree of wettability, all of which negatively impact coating adhesion. Plasma treatment physically cleans, etches, and provides chemically active bonding sites for coatings to anchor to. The result is a hydrophilic, wettable surface that is amenable to coating flow and adhesion.
Powder application processes
The most common way of applying the powder coating to metal objects is to spray the powder using an electrostatic gun, or corona gun. The gun imparts a negative charge to the powder, which is then sprayed towards the grounded object by mechanical or compressed air spraying and then accelerated toward the workpiece by the powerful electrostatic charge. There is a wide variety of spray nozzles available for use in electrostatic coating. The type of nozzle used will depend on the shape of the workpiece to be painted and the consistency of the paint. The object is then heated, and the powder melts into a uniform film, and is then cooled to form a hard coating. It is also common to heat the metal first and then spray the powder onto the hot substrate. Preheating can help to achieve a more uniform finish but can also create other problems, such as runs caused by excess powder.
Another type of gun, called a tribo gun, charges the powder by the triboelectric effect. In this case, the powder picks up a positive charge while rubbing along the wall of a Teflon tube inside the barrel of the gun. These charged powder particles then adhere to the grounded substrate. Using a tribo gun requires a different formulation of powder than the more common corona guns. Tribo guns are not subject to some of the problems associated with corona guns, however, such as back-ionization and the Faraday cage effect.
Powder can also be applied using specifically adapted electrostatic discs.
Another method of applying powder coating, known as the fluidized bed method, is to heat the substrate and then dip it into an aerated, powder-filled bed. The powder sticks to and melts on the hot object. Further heating is usually required to finish curing the coating. This method is generally used when the desired thickness of the coating exceeds 300 micrometres; it is how most dishwasher racks are coated.
Electrostatic fluidized bed coating
Electrostatic fluidized bed application uses the same fluidizing technique as the conventional fluidized bed dip process but with much more powder depth in the bed. An electrostatic charging medium is placed inside the bed so that the powder material becomes charged as the fluidizing air lifts it up. Charged particles of powder move upward and form a cloud of charged powder above the fluid bed. When a grounded part is passed through the charged cloud the particles will be attracted to its surface. The parts are not preheated as they are for the conventional fluidized bed dip process.
Electrostatic magnetic brush (EMB) coating
A coating method for flat materials that applies powder with a roller, enabling relatively high speeds and accurate layer thickness between 5 and 100 micrometres. The base for this process is conventional copier technology. It is currently in use in some coating applications and looks promising for commercial powder coating on flat substrates (steel, aluminium, MDF, paper, board) as well as in sheet to sheet and/or roll to roll processes. This process can potentially be integrated in an existing coating line.
Curing
Thermoset
When a thermosetting powder is exposed to elevated temperature, it begins to melt, flows out, and then chemically reacts to form a higher-molecular-weight polymer in a network-like structure. This cure process, called crosslinking, requires a certain temperature for a certain length of time in order to reach full cure and establish the full film properties for which the material was designed.
The architecture of the polyester resin and type of curing agent have a major impact on crosslinking.
Common powders cure at a specified object temperature for 10 minutes. In European and Asian markets, a 10-minute curing schedule at a standard object temperature has been the industrial norm for decades, but is nowadays shifting towards a lower temperature level at the same curing time. Advanced hybrid systems for indoor applications are established to cure at lower temperatures still, preferably for applications on medium-density fiberboard (MDF); outdoor-durable powders with triglycidyl isocyanurate (TGIC) as hardener can operate at a similar temperature level, whereas TGIC-free systems with β-hydroxyalkylamides as curing agents cannot cure at temperatures quite as low.
The low-temperature bake approach results in energy savings, especially where massive parts must be coated. The total oven residence time needs to be only 18–19 minutes to completely cure the reactive powder at the reduced temperature.
A major challenge for all low-temperature cures is to optimize reactivity, flow-out (the appearance of the powder film) and storage stability simultaneously. Low-temperature-cure powders tend to have less color stability than their standard-bake counterparts because they contain catalysts to accelerate the cure. HAA polyesters tend to yellow on overbaking more than TGIC polyesters do.
The curing schedule may vary according to the manufacturer's specifications. The application of energy to the product to be cured can be accomplished by convection cure ovens, infrared cure ovens, or by laser curing process. The latter demonstrates significant reduction of curing time.
UV cure
Ultraviolet (UV)-cured powder coatings have been in commercial use since the 1990s and were initially developed to finish heat-sensitive medium density fiberboard (MDF) furniture components. This coating technology requires less heat energy and cures significantly faster than thermally-cured powder coatings. Typical oven dwell times for UV curable powder coatings are 1–2 minutes with temperatures of the coating reaching 110–130 °C. The use of UV LED curing systems, which are highly energy efficient and do not generate IR energy from the lamp head, make UV-cured powder coating even more desirable for finishing a variety of heat-sensitive materials and assemblies. An additional benefit for UV-cured powder coatings is that the total process cycle, application to cure, is faster than other coating methods.
Removing powder coating
Methylene chloride and acetone are generally effective at removing powder coating, while most other organic solvents (thinners, etc.) are completely ineffective. Because methylene chloride is a suspected human carcinogen, it is increasingly being replaced by benzyl alcohol, with great success. Powder coating can also be removed by abrasive blasting, and commercial-grade 98% sulfuric acid also removes powder coating films. Certain low-grade powder coats can be removed with steel wool, though this may be a more labor-intensive process than desired.
Powder coating can also be removed by a burning off process, in which parts are put into a large high-temperature oven with temperatures typically reaching an air temperature of 300–450 °C. The process takes about four hours and requires the parts to be cleaned completely and re-powder coated. Parts made with a thinner-gauge material need to be burned off at a lower temperature to prevent the material from warping.
Market
According to a market report prepared in August 2016 by Grand View Research, Inc., the powder coating industry includes Teflon, anodizing and electro-plating. The global powder coatings market is expected to reach US$16.55 billion by 2024. Increasing use of powder coatings for aluminum extrusion used in windows, door frames, building facades, kitchen, bathroom and electrical fixtures will fuel industry expansion. Rising construction spending in various countries including China, the U.S., Mexico, Qatar, UAE, India, Vietnam, and Singapore will fuel growth over the forecast period. Increasing government support for eco-friendly and economical products will stimulate demand over the forecast period. General industries were the prominent application segment and accounted for 20.7% of the global volume in 2015. The global market is predicted to be 20 billion dollars by 2027.
Increasing demand for tractors in the U.S., Brazil, Japan, India, and China is expected to augment the use of powder coatings on account of its corrosion protection, excellent outdoor durability, and high-temperature performance. Moreover, growing usage in agricultural equipment, exercise equipment, file drawers, computer cabinets, laptop computers, cell phones, and electronic components will propel industry expansion.
| Technology | Metallurgy | null |
2889934 | https://en.wikipedia.org/wiki/Placodontia | Placodontia | Placodonts ("tablet teeth") are an extinct order of marine reptiles that lived during the Triassic period, becoming extinct at the end of the period. They were part of Sauropterygia, the group that includes plesiosaurs. Placodonts were generally between in length, with some of the largest measuring long.
The first specimen was discovered in 1830. They have been found throughout central Europe, North Africa, the Middle East and China.
Palaeobiology
The earliest forms, like Placodus, which lived in the early to middle Triassic, resembled barrel-bodied lizards superficially similar to the marine iguana of today, but larger. In contrast to the marine iguana, which feeds on algae, the placodonts ate molluscs, and so their teeth were flat and robust for crushing shells. In the earliest periods, their size was probably enough to keep away the top sea predators of the time: the sharks. However, as time passed, other kinds of carnivorous reptiles began to colonize the seas, such as ichthyosaurs and nothosaurs, and later placodonts developed bony plates on their backs to protect their bodies while feeding. By the Late Triassic, these plates had grown so much that placodonts of the time, such as Henodus and Placochelys, resembled the sea turtles of the modern day more than their ancestors without bony plates. Other placodonts, like Psephoderma, developed plates as well, but in a different articulated manner that resembled the carapace of horseshoe crabs more than that of sea turtles. All these adaptations are good examples of convergent evolution, as placodonts were not closely related to any of these animals.
Because of their dense bone and heavy armour plating, these creatures would have been too heavy to float in the ocean and would have used a lot of energy to reach the water surface. For this reason, and because of the type of sediment found accompanying their fossils, it is suggested that they lived in shallow waters and not in deep oceans.
Their diet consisted of marine bivalves, brachiopods, and other invertebrates. They were notable for their large, flat, often protruding teeth, which they used to crush the molluscs and brachiopods that they hunted on the sea bed (another respect in which they resembled walruses). The palatal teeth were adapted for this durophagous diet, being extremely thick and large enough to crush thick shells.
Henodus, however, differs from other placodonts in having developed unique baleen-like denticles, which, alongside features of the hyoid and jaw musculature, suggest that it was a filter feeder. Recent comparisons to Atopodentatus suggest that it was a herbivore as well, bearing a similarly broad jaw shape, albeit one that obtained plant matter by filter-feeding it from the substrate. The group was once believed to be restricted to the western Tethys, but the discovery of Sinocyamodus xinpuensis in China overturned this view.
Classification
Class Reptilia
Superorder Sauropterygia
Order Placodontia
Genus Atopodentatus?
Genus Pararcus
Superfamily Placodontoidea
Family Paraplacodontidae
Genus Paraplacodus
Family Placodontidae
Genus Placodus
Superfamily Cyamodontoidea
Genus Sinocyamodus
Genus Psephosaurus
Family Henodontidae
Genus Henodus
Genus Parahenodus
Family Cyamodontidae
Genus Cyamodus
Genus Protenodontosaurus
Family Placochelyidae
Genus Glyphoderma
Genus Placochelys
Genus Psephosauriscus
Genus Psephochelys
Genus Psephoderma
Additionally, the name Placodontiformes was erected for the clade that includes Palatodonta and Placodontia. Palatodonta, from the early Middle Triassic of the Netherlands, was a marine sauropterygian very similar to placodonts, but with teeth that were small and pointed instead of broad and flat.
The clade Helveticosauroidea was previously considered to be a basal superfamily of placodonts with the sole member Helveticosaurus. However, it is now thought that Helveticosaurus was not a placodont but possibly an unusual member of the Archosauromorpha.
Phylogeny
The cladogram below follows the result found by Rainer Schoch and Hans-Dieter Sues in 2015.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
2889996 | https://en.wikipedia.org/wiki/Triplet%20oxygen | Triplet oxygen | Triplet oxygen, 3O2, refers to the S = 1 electronic ground state of molecular oxygen (dioxygen). Molecules of triplet oxygen contain two unpaired electrons, making triplet oxygen an unusual example of a stable and commonly encountered diradical: it is more stable as a triplet than a singlet. According to molecular orbital theory, the electron configuration of triplet oxygen has two electrons occupying two π molecular orbitals (MOs) of equal energy (that is, degenerate MOs). In accordance with Hund's rules, they remain unpaired and spin-parallel, which accounts for the paramagnetism of molecular oxygen. These half-filled orbitals are antibonding in character, reducing the overall bond order of the molecule to 2 from the maximum value of 3 that would occur when these antibonding orbitals remain fully unoccupied, as in dinitrogen. The molecular term symbol for triplet oxygen is 3Σ.
Spin
The s = 1/2 spins of the two electrons in degenerate orbitals give rise to 2 × 2 = 4 independent spin states in total. Exchange interaction splits these into a singlet state (total spin S = 0) and a set of three degenerate triplet states (S = 1). In agreement with Hund's rules, the triplet states are energetically more favorable and correspond to the ground state of the molecule, with a total electron spin of S = 1. Excitation to the S = 0 state results in much more reactive, metastable singlet oxygen.
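As a worked illustration, the four two-electron spin product states decompose into three symmetric triplet combinations and one antisymmetric singlet:

```latex
% 2 x 2 = 4 spin states = 3 (triplet, S = 1) + 1 (singlet, S = 0)
\begin{aligned}
S = 1:\quad & |{\uparrow\uparrow}\rangle, \qquad
  \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle
    + |{\downarrow\uparrow}\rangle\bigr), \qquad
  |{\downarrow\downarrow}\rangle \\
S = 0:\quad & \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle
    - |{\downarrow\uparrow}\rangle\bigr)
\end{aligned}
```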
Lewis structure
Because the molecule in its ground state has a non-zero spin magnetic moment, oxygen is paramagnetic; i.e., it can be attracted to the poles of a magnet. Thus, the Lewis structure O=O with all electrons in pairs does not accurately represent the nature of the bonding in molecular oxygen. However, the alternative structure •O–O• is also inadequate, since it implies single-bond character, while the experimentally determined bond length of 121 pm is much shorter than the single bond in hydrogen peroxide (HO–OH), which has a length of 147.5 pm. This indicates that triplet oxygen has a higher bond order. Molecular orbital theory must be used to account correctly for the observed paramagnetism and short bond length simultaneously. Under a molecular orbital theory framework, the oxygen–oxygen bond in triplet dioxygen is better described as one full σ bond plus two π half-bonds, each half-bond accounted for by two-center three-electron (2c-3e) bonding, to give a net bond order of two (1 + 2 × 1/2), while also accounting for the spin state (S = 1). In the case of triplet dioxygen, each 2c-3e bond consists of two electrons in a πu bonding orbital and one electron in a πg antibonding orbital, giving a net bond order contribution of 1/2.
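The half-bond accounting can be written out explicitly:

```latex
% One full sigma bond plus two 2c-3e pi half-bonds; each pi system
% holds 2 bonding (pi_u) and 1 antibonding (pi_g) electron.
\text{bond order}
  = \underbrace{1}_{\sigma\ \text{bond}}
  + 2 \times \underbrace{\frac{2-1}{2}}_{\text{per 2c-3e}\ \pi\ \text{bond}}
  = 1 + 2 \times \tfrac{1}{2} = 2
```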
The usual rules for constructing Lewis structures must be modified to accommodate molecules like triplet dioxygen or nitric oxide that contain 2c-3e bonds. There is no consensus in this regard; Pauling has suggested the use of three closely spaced collinear dots to represent the three-electron bond (see illustration).
Observation in liquid state
A common experimental way to observe the paramagnetism of dioxygen is to cool it into the liquid phase. When poured between the poles of strong magnets that are close together, liquid oxygen can be suspended; alternatively, a magnet can deflect the stream of liquid oxygen as it is poured. The net magnetic moment of the total electron spin explains these observations.
Reaction
The unusual electron configuration prevents molecular oxygen from reacting directly with many other molecules, which are often in the singlet state. Triplet oxygen will, however, readily react with molecules in a doublet state to form a new radical.
Conservation of the spin quantum number would require a triplet transition state in a reaction of triplet oxygen with a closed-shell molecule (one in a singlet state). The extra energy required is sufficient to prevent direct reaction at ambient temperatures with all but the most reactive substrates, e.g. white phosphorus. At higher temperatures or in the presence of suitable catalysts the reaction proceeds more readily. For instance, most flammable substances are characterised by an autoignition temperature at which they will undergo combustion in air without an external flame or spark.
| Physical sciences | Group 16 | Chemistry |
934744 | https://en.wikipedia.org/wiki/Nevus | Nevus | Nevus (: nevi) is a nonspecific medical term for a visible, circumscribed, chronic lesion of the skin or mucosa. The term originates from nævus, which is Latin for "birthmark"; however, a nevus can be either congenital (present at birth) or acquired. Common terms, including mole, birthmark, and beauty mark, are used to describe nevi, but these terms do not distinguish specific types of nevi from one another.
Classification
The term nevus is applied to a number of conditions caused by neoplasias and hyperplasias of melanocytes, as well as a number of pigmentation disorders, both hypermelanotic (containing increased melanin, the pigment responsible for skin color) and hypomelanotic (containing decreased melanin). Suspicious skin moles which are multi-colored or pink may be a finding in skin cancer.
Increased melanin
Usually acquired
Melanocytic nevus
Melanocytic nevi can be categorized based on the location of melanocytic cells
Junctional: epidermis
Intradermal: dermis
Compound: epidermis and dermis
Atypical (dysplastic) nevus: This type of nevus must be diagnosed based on histological features. Clinically, atypical nevi are characterized by variable pigmentation and irregular borders.
Becker's nevus
Blue nevus (rarely congenital): A classic blue nevus is usually smaller than 1 cm, flat, and blue-black in color.
Hori's nevus
Nevus spilus (speckled lentiginous nevus): This lesion includes dark speckles within a tan-brown background.
Pigmented spindle cell nevus
Spitz nevus
Zosteriform lentiginous nevus
Usually congenital
Congenital melanocytic nevus
These nevi are often categorized based on size; however, the lesions usually grow in proportion to the body over time, so the category may change over an individual's life. This categorization is important because large congenital melanocytic nevi are associated with an increased risk of melanoma, a serious type of skin cancer.
Small: <1.5 cm
Medium: 1.5–19.9 cm
Large: ≥ 20 cm
Nevus of Ito
Nevus of Ota
Decreased melanin
Acquired
Nevus anemicus
Congenital
Nevus depigmentosus
Additional types of nevi do not involve disorders of pigmentation or melanocytes. These additional nevi represent hamartomatous proliferations of the epithelium, connective tissue, and vascular malformations.
Epidermal nevi
These nevi represent excess growth of specific cells types found in the skin, including those that make up oil and sweat glands.
Verrucous epidermal nevus
Nevus sebaceous
Nevus comedonicus
Eccrine nevus
Apocrine nevus
Connective tissue nevi
Connective tissue nevi represent abnormalities of collagen in the dermis, the deep layer of the skin.
Collagenoma
Elastoma
Vascular nevi
These nevi represent excess growth of blood vessels, including capillaries.
Nevus simplex (nevus flammeus nuchae), also known as a stork bite or salmon patch.
Intramucosal nevi
An intramucosal nevus is a nevus within the mucosa, as found, for example, in the mouth and genital areas. In the mouth, they are found most frequently on the hard palate. They are typically light brown and dome-shaped. Intramucosal nevi account for 64% of all reported cases of oral nevi.
Diagnosis
Nevi are typically diagnosed clinically with the naked eye or using dermatoscopy. More advanced imaging tests are available for distinguishing melanocytic nevi from melanoma, including computerized dermoscopy and image analysis. The management of nevi depends on the type of nevus and the degree of diagnostic uncertainty. Some nevi are known to be benign and may simply be monitored over time. Others may warrant more thorough examination and biopsy for histopathological examination (looking at a sample of skin under a microscope to detect unique cellular features). For example, a clinician may want to determine whether a pigmented nevus is a type of melanocytic nevus, dysplastic nevus, or melanoma, as some of these skin lesions pose a risk for malignancy. The ABCDE criteria (asymmetry, border irregularity, color variegation, diameter > 6 mm, and evolution) are often used to distinguish nevi from melanomas in adults, while modified criteria (amelanosis, bleeding or bumps, uniform color, small diameter or de novo, and evolution) can be used when evaluating suspicious lesions in children. In addition to histopathological examination, some lesions may also warrant additional tests to aid in diagnosis, including special stains, immunohistochemistry, and electron microscopy. Nevi that have been present since childhood are typically harmless.
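To illustrate how the ABCDE mnemonic works as a simple checklist, here is a toy sketch (the function and its boolean inputs are illustrative assumptions; the criteria and the 6 mm diameter cutoff are those listed above):

```python
def abcde_count(asymmetry: bool, border_irregularity: bool,
                color_variegation: bool, diameter_mm: float,
                evolving: bool) -> int:
    """Count how many of the adult ABCDE criteria a pigmented lesion meets."""
    criteria = [asymmetry, border_irregularity, color_variegation,
                diameter_mm > 6.0, evolving]
    return sum(criteria)

# Example: a 7 mm, evolving lesion with irregular borders meets 3 criteria,
# which would typically prompt closer examination.
print(abcde_count(False, True, False, 7.0, True))  # 3
```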
Differential diagnoses
Hypermelanotic nevi must be differentiated from other types of pigmented skin lesions, including:
Lentigo simplex
Solar lentigo
Café au lait macule
Ink-spot lentigo
Mucosal melanotic macule
Mongolian spot (dermal melanocytosis)
Management
The management of a nevus depends on the specific diagnosis, however, the options for treatment generally include the following modalities:
Observation
Destruction
Chemical peels
Cryotherapy
Dermabrasion
Electrodessication
Laser ablation
Surgery
The decision to observe or treat a nevus may depend on a number of factors, including cosmetic concerns, irritative symptoms (e.g., pruritus), ulceration, infection, and concern for potential malignancy.
Syndromes
The term nevus is included in the names of multiple dermatologic syndromes:
Basal cell nevus syndrome
Blue rubber bleb nevus syndrome
Dysplastic nevus syndrome
Epidermal nevus syndrome
Linear nevus sebaceous syndrome
| Biology and health sciences | Symptoms and signs | Health |
934777 | https://en.wikipedia.org/wiki/Tui%20na | Tui na | Tui na (; ) is a form of alternative medicine similar to shiatsu. As a branch of traditional Chinese medicine, it is often used in conjunction with acupuncture, moxibustion, fire cupping, Chinese herbalism, tai chi or other Chinese internal martial arts, and qigong.
Background
Tui na is a hands-on body treatment that uses Chinese Daoist principles in an effort to bring the eight principles of traditional Chinese medicine into balance. The practitioner may brush, knead, roll, press, and rub the areas between each of the joints, known as the eight gates, to attempt to open the body's defensive qi (wei qi) and get the energy moving in the meridians and the muscles. Techniques may be gentle or quite firm. The name comes from two of the actions: tui means "to push" and na means "to lift and squeeze." Other strokes include shaking and tapotement. The practitioner can then use a range of motion, traction, and the stimulation of acupressure points. These techniques are claimed to aid in the treatment of both acute and chronic musculoskeletal conditions, as well as many non-musculoskeletal conditions.
As with many other traditional Chinese medical practices, different schools vary in their approach to the discipline. In traditional Korean medicine it is known as chu na (), and it is related also to Japanese massage or anma and its derivatives shiatsu and sekkotsu. In the West, tui na is taught as a part of the curriculum at some acupuncture schools.
Efficacy
A collaborative study between researchers in China and Germany concluded that the use of Tui na techniques can be a safe, low-cost method to reduce back and neck pain.
| Biology and health sciences | Alternative and traditional medicine | Health |
935178 | https://en.wikipedia.org/wiki/Trypanosoma | Trypanosoma | Trypanosoma is a genus of kinetoplastids (family Trypanosomatidae), a monophyletic group of unicellular parasitic flagellate protozoa. Trypanosoma is part of the phylum Euglenozoa. The name is derived from the Ancient Greek trypano- (borer) and soma (body) because of their corkscrew-like motion. Most trypanosomes are heteroxenous (requiring more than one obligatory host to complete their life cycle) and most are transmitted via a vector. The majority of species are transmitted by blood-feeding invertebrates, but there are different mechanisms among the varying species. Trypanosoma equiperdum is spread between horses and other equine species by sexual contact. Trypanosomes are generally found in the intestine of their invertebrate host, but normally occupy the bloodstream or an intracellular environment in the vertebrate host.
Trypanosomes infect a variety of hosts and cause various diseases, including the fatal human diseases sleeping sickness, caused by Trypanosoma brucei, and Chagas disease, caused by Trypanosoma cruzi.
The mitochondrial genome of the Trypanosoma, as well as of other kinetoplastids, known as the kinetoplast, is made up of a highly complex series of catenated circles and minicircles and requires a cohort of proteins for organisation during cell division.
History
In 1841, Gabriel Valentin found flagellates that today are included in Trypanoplasma in the blood of trout.
The genus was named by Gruby in 1843, after parasites (T. sanguinis) found in the blood of frogs.
In 1903, David Bruce identified the protozoan parasite and the tsetse fly vector of African trypanosomiasis.
Taxonomy
A number of different methods demonstrate that the traditional Trypanosoma genus is not monophyletic, with the biflagellate Bodonida nested within. The American and African trypanosomes constitute distinct clades, implying that the major human disease agents T. cruzi (cause of Chagas' disease) and T. brucei (cause of African sleeping sickness) are not closely related to each other.
Phylogenetic analyses suggest an ancient split between a branch containing all Salivarian trypanosomes and a branch containing all non-Salivarian lineages. The latter branch in turn splits into a clade containing bird, reptilian and the Stercorarian trypanosomes infecting mammals, and a clade with a branch of fish trypanosomes and a branch of reptilian or amphibian lineages.
Salivarians are trypanosomes of the subgenera Duttonella, Trypanozoon, Pycnomonas and Nannomonas, which are passed to the vertebrate recipient in the saliva of the tsetse fly (Glossina spp.). Antigenic variation is a characteristic shared by the Salivaria, which has been particularly well studied in T. brucei. The Trypanozoon subgenus contains the species Trypanosoma brucei, T. rhodesiense and T. equiperdum. The subgenus Duttonella contains the species T. vivax. Nannomonas contains T. congolense.
Stercorians are trypanosomes passed to the recipient in the feces of insects from the subfamily Triatominae (most importantly Triatoma infestans). This group includes Trypanosoma cruzi, T. lewisi, T. melophagium, T. nabiasi, T. rangeli, T. theileri, T. theodori. The subgenus Herpetosoma contains the species T. lewisi.
The subgenus Schizotrypanum contains T. cruzi and a number of bat trypanosomes. The bat species include Trypanosoma cruzi marinkellei, Trypanosoma dionisii, Trypanosoma erneyi, Trypanosoma livingstonei and Trypanosoma wauwau. Other related species include Trypanosoma conorhini and Trypanosoma rangeli.
Evolution
The ancestor of modern trypanosomes absorbed a green alga around one billion years ago and co-opted some of its genetic material. This has resulted in modern trypanosomes such as T. brucei containing essential genes for the breakdown of sugars that are most closely related to plants. This difference may be used as the target of therapies.
The relationships between the species have not been worked out to date. It has been suggested that T. evansi arose from a clone of T. equiperdum which lost its maxicircles. It has also been proposed that T. evansi should be classified as a subspecies of T. brucei.
It has been shown that T. equiperdum emerged at least once in Eastern Africa and T. evansi on two independent occasions in Western Africa.
Selected species
Species of Trypanosoma include the following:
T. ambystomae, in amphibians
T. antiquus, extinct (fossil in Miocene amber)
T. avium, which infects birds and blackflies
T. bennetti, which infects birds and biting midges
T. boissoni, in elasmobranch
T. brucei, which causes sleeping sickness in humans and nagana in cattle
T. cruzi, which causes Chagas disease in humans
Trypanosoma culicavium, which infects birds and mosquitoes
T. congolense, which causes nagana in ruminant livestock, horses and a wide range of wildlife
T. equinum, in South American horses, transmitted via Tabanidae
T. equiperdum, which causes dourine or covering sickness in horses and other Equidae; it can be spread through coitus
T. evansi, which causes one form of the disease surra in certain animals including camels (a single case report of human infection in 2005 in India was successfully treated with suramin)
T. everetti, in birds
T. hosei, in amphibians
T. irwini, in koalas
T. lewisi, in rats
T. melophagium, in sheep, transmitted via Melophagus ovinus
T. parroti, in amphibians
T. percae, in the species Perca fluviatilis
T. phedinae
T. rangeli, believed to be nonpathogenic to humans
T. rotatorium, in amphibians
T. rugosae, in amphibians
T. sergenti, in amphibians
T. simiae, which causes nagana in pigs. Its main reservoirs are warthogs and bush pigs
T. sinipercae, in fishes
T. suis, which causes a different form of surra
T. theileri, a large trypanosome infecting ruminants and transmitted by a variety of vectors including tabanids and mosquitoes
T. thomasbancrofti, an avian trypanosome with culicine mosquito vector
T. triglae, in marine teleosts
T. tungarae, in frogs
T. vivax, which causes the disease nagana, mainly in West Africa, although it has spread to South America
Hosts, life cycle and morphologies
Two different types of trypanosomes exist, with different life cycles: the salivarian species and the stercorarian species.
Stercorarian trypanosomes infect insects, most often the triatomid kissing bug, by developing in the posterior gut followed by release into the feces and subsequent depositing on the skin of the vertebrate host. The organism then penetrates and can disseminate throughout the body. Insects become infected when taking a blood meal.
Salivarian trypanosomes develop in the anterior gut of insects, most importantly the Tsetse fly, and infective organisms are inoculated into the host by the insect bite before it feeds.
As trypanosomes progress through their life cycle they undergo a series of morphological changes as is typical of trypanosomatids. The life cycle often consists of the trypomastigote form in the vertebrate host and the trypomastigote or promastigote form in the gut of the invertebrate host. Intracellular lifecycle stages are normally found in the amastigote form. The trypomastigote morphology is unique to species in the genus Trypanosoma.
Meiosis
Evidence has been obtained for meiosis in T. cruzi, and for genetic exchange. T. brucei is able to undergo meiosis within the salivary glands of its tsetse fly host, and meiosis is considered to be an intrinsic part of the T. brucei developmental cycle. An adaptive benefit of meiosis for T. cruzi and T. brucei may be the recombinational repair of DNA damage acquired in the hostile environment of their respective hosts.
| Biology and health sciences | Excavata | Plants |
935979 | https://en.wikipedia.org/wiki/Tesla%20%28unit%29 | Tesla (unit) | The tesla (symbol: T) is the unit of magnetic flux density (also called magnetic B-field strength) in the International System of Units (SI).
One tesla is equal to one weber per square metre. The unit was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Serbian-American electrical and mechanical engineer Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin.
Definition
A particle carrying a charge of one coulomb (C) and moving perpendicularly through a magnetic field of one tesla at a speed of one metre per second (m/s) experiences a force with magnitude one newton (N), according to the Lorentz force law. That is, 1 T = 1 N/(C⋅m/s) = 1 N⋅s/(C⋅m).
As an SI derived unit, the tesla can also be expressed in terms of other units. For example, a magnetic flux of 1 weber (Wb) through a surface of one square metre is equal to a magnetic flux density of 1 tesla. That is, 1 T = 1 Wb/m².
Expressed only in SI base units, 1 tesla is: 1 T = 1 kg/(A⋅s²) = 1 kg⋅s⁻²⋅A⁻¹, where A is ampere, kg is kilogram, and s is second.
Additional equivalences result from the derivation of coulombs from amperes (A), 1 C = 1 A⋅s, giving 1 T = 1 N/(A⋅m);
the relationship between newtons and joules (J), 1 J = 1 N⋅m, giving 1 T = 1 J/(A⋅m²);
and the derivation of the weber from volts (V), 1 Wb = 1 V⋅s, giving 1 T = 1 V⋅s/m².
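These equivalences can be checked mechanically. The following sketch (an illustration, not from the source) encodes each unit as a tuple of SI base-unit exponents (kg, m, s, A) and verifies that every expression above for the tesla reduces to the same base-unit form:

```python
# Each unit is a tuple of exponents of the SI base units (kg, m, s, A);
# multiplying units adds exponents, dividing subtracts them.
KG = (1, 0, 0, 0)
M  = (0, 1, 0, 0)
S  = (0, 0, 1, 0)
A  = (0, 0, 0, 1)

def mul(u, v):
    return tuple(a + b for a, b in zip(u, v))

def div(u, v):
    return tuple(a - b for a, b in zip(u, v))

N  = div(mul(KG, M), mul(S, S))   # newton  = kg*m/s^2
C  = mul(A, S)                    # coulomb = A*s
J  = mul(N, M)                    # joule   = N*m
V  = div(J, C)                    # volt    = J/C
WB = mul(V, S)                    # weber   = V*s

tesla_forms = [
    div(N, mul(C, div(M, S))),    # N/(C*m/s)
    div(N, mul(A, M)),            # N/(A*m)
    div(J, mul(A, mul(M, M))),    # J/(A*m^2)
    div(WB, mul(M, M)),           # Wb/m^2
    div(KG, mul(A, mul(S, S))),   # kg/(A*s^2)
]

# Every expression reduces to the same base-unit form: kg^1 s^-2 A^-1.
assert all(form == tesla_forms[0] for form in tesla_forms)
print(tesla_forms[0])  # (1, 0, -2, -1)
```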
Electric vs. magnetic field
In the production of the Lorentz force, the difference between electric fields and magnetic fields is that a force from a magnetic field on a charged particle is generally due to the charged particle's movement, while the force imparted by an electric field on a charged particle is not due to the charged particle's movement. This may be appreciated by looking at the units for each. The unit of electric field in the MKS system of units is newtons per coulomb, N/C, while the magnetic field (in teslas) can be written as N/(C⋅m/s). The dividing factor between the two types of field is metres per second (m/s), which is velocity. This relationship immediately highlights the fact that whether a static electromagnetic field is seen as purely magnetic, or purely electric, or some combination of these, is dependent upon one's reference frame (that is, one's velocity relative to the field).
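Written out (a worked restatement, not text from the source), the dimensional argument is a single division: [E]/[B] = (N/C) / (N/(C⋅m/s)) = m/s. The factor that relates a magnetic-field magnitude to an electric-field magnitude therefore has the dimensions of a velocity.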
In ferromagnets, the movement creating the magnetic field is the electron spin (and to a lesser extent electron orbital angular momentum). In a current-carrying wire (electromagnets) the movement is due to electrons moving through the wire (whether the wire is straight or circular).
Conversion to non-SI units
One tesla is equivalent to 10,000 gauss (G), the corresponding unit in the centimetre–gram–second (CGS) system, and to 10⁹ gamma (γ), a unit used in geophysics.
For the relation to the units of the magnetising field (ampere per metre or oersted), see the article on permeability.
Multiples
Commonly encountered SI submultiples of the tesla include the millitesla (1 mT = 10⁻³ T), the microtesla (1 μT = 10⁻⁶ T), and the nanotesla (1 nT = 10⁻⁹ T).
Examples
The following examples are listed in the ascending order of the magnetic-field strength.
31.869 μT – strength of Earth's magnetic field at 0° latitude, 0° longitude
40 μT – walking under a high-voltage power line
5 mT – the strength of a typical refrigerator magnet
0.3 T – the strength of solar sunspots
1 T to 2.4 T – coil gap of a typical loudspeaker magnet
1.5 T to 3 T – strength of medical magnetic resonance imaging systems in practice, experimentally up to 17 T
4 T – strength of the superconducting magnet built around the CMS detector at CERN
5.16 T – the strength of a specially designed room temperature Halbach array
8 T – the strength of LHC magnets
11.75 T – the strength of INUMAC magnets, largest MRI scanner
13 T – strength of the superconducting ITER magnet system
14.5 T – highest magnetic field strength ever recorded for an accelerator steering magnet at Fermilab
16 T – magnetic field strength required to levitate a frog (by diamagnetic levitation of the water in its body tissues) according to the 2000 Ig Nobel Prize in Physics
17.6 T – strongest field trapped in a superconductor in a lab as of July 2014
20 T – strength of the large-scale high-temperature superconducting magnet developed by MIT and Commonwealth Fusion Systems to be used in fusion reactors
27 T – maximal field strengths of superconducting electromagnets at cryogenic temperatures
35.4 T – the current (2009) world record for a superconducting electromagnet in a background magnetic field
45 T – the current (2015) world record for continuous field magnets
97.4 T – strongest magnetic field produced by a "non-destructive" magnet
100 T – approximate magnetic field strength of a typical white dwarf star
1200 T – the field, lasting for about 100 microseconds, formed using the electromagnetic flux-compression technique
10⁹ T – Schwinger limit above which the electromagnetic field itself is expected to become nonlinear
10⁸–10¹¹ T (100 MT – 100 GT) – magnetic strength range of magnetar neutron stars
| Physical sciences | Electromagnetism | null |
937029 | https://en.wikipedia.org/wiki/Arabian%20oryx | Arabian oryx | The Arabian oryx or white oryx (Oryx leucoryx) is a medium-sized antelope with a distinct shoulder bump, long, straight horns, and a tufted tail. It is a bovid, and the smallest member of the genus Oryx, native to desert and steppe areas of the Arabian Peninsula. The Arabian oryx was extinct in the wild by the early 1970s, but was saved in zoos and private reserves, and was reintroduced into the wild starting in 1980.
In 1986, the Arabian oryx was classified as endangered on the IUCN Red List, and in 2011, it was the first animal to revert to vulnerable status after previously being listed as extinct in the wild. It is listed in CITES Appendix I. In 2016, populations were estimated at 1,220 individuals in the wild, including 850 mature individuals, and 6,000–7,000 in captivity worldwide.
Etymology
The taxonomic name Oryx leucoryx is from the Greek óryx (a gazelle or antelope) and leukós (white). The Arabian oryx is also called the white oryx in English, and has equivalent names in Hebrew and Arabic.
Taxonomy
The name "oryx" was introduced by Peter Simon Pallas in 1767 for the common eland as Antilope oryx. He also scientifically described the Arabian oryx as Oryx leucoryx, giving its range as "Arabia, and perhaps Libya". In 1816, Henri Marie Ducrotay de Blainville subdivided the antelope group, adopted Oryx as a genus name, and changed the species name Antilope oryx to Oryx gazella. In 1826, Martin Lichtenstein confused matters by transferring the name Oryx leucoryx to the scimitar oryx, now Oryx dammah. The Zoological Society of London obtained the first living individual in Europe in 1857. Not realizing this might be the Oryx leucoryx of previous authors, John Edward Gray proposed calling it Oryx beatrix after Princess Beatrice of the United Kingdom. Oldfield Thomas renamed the scimitar oryx as Oryx algazal in 1903 and gave the Arabian oryx its original name.
Description
The Arabian oryx's coat is an almost luminous white; the undersides and legs are brown, and black stripes occur where the head meets the neck, on the forehead, on the nose, and running from the horn down across the eye to the mouth. Both sexes have long, straight or slightly curved, ringed horns. The Arabian oryx stands about a metre tall at the shoulder and weighs around 70 kg.
Distribution and habitat
Historically, the Arabian oryx probably ranged throughout most of the Middle East. In the early 1800s, they could still be found in the Sinai, Palestine, the Transjordan, much of Iraq, and most of the Arabian Peninsula. During the 19th and early 20th centuries, their range was pushed back towards Saudi Arabia, and by 1914, only a few survived outside that country. A few were reported in Jordan into the 1930s, but by the mid-1930s, the only remaining populations were in the Nafud Desert in northwestern Saudi Arabia and the Rub' al Khali in the south.
In the 1930s, Arabian princes and oil company clerks started hunting Arabian oryxes with automobiles and rifles. Hunts grew in size, and some were reported to employ as many as 300 vehicles. By the middle of the 20th century, the northern population was effectively extinct. The last Arabian oryx in the wild before reintroduction was reported in 1972.
Arabian oryxes prefer to range in gravel deserts or hard sand, where their speed and endurance will protect them from most predators and hunters on foot. In the sand deserts in Saudi Arabia, they used to be found in the hard sand areas of the flats between the softer dunes and ridges.
Arabian oryxes have been reintroduced to Oman, Saudi Arabia, Israel, the United Arab Emirates, Syria, and Jordan. A small population was introduced on Hawar Island, Bahrain, and large semi-managed populations exist at several sites in Qatar and the UAE. The total reintroduced population is now estimated to be around 1,000. This puts the Arabian oryx well over the threshold of 250 mature individuals needed to qualify for endangered status. However, the majority of the population is concentrated in Saudi Arabia.
Behaviour and ecology
Arabian oryxes rest during the heat of the day. They can detect rainfall and move towards it, meaning they have huge ranges; a herd in Oman can range over 3,000 km². Herds are of mixed sex and usually contain between 2 and 15 animals, though herds of up to 100 have been reported. Arabian oryxes are generally not aggressive toward one another, which allows herds to exist peacefully for some time.
Feeding
The diet of the Arabian oryx consists mainly of grasses, but they eat a large variety of vegetation, including buds, herbs, fruit, tubers and roots. Herds of Arabian oryxes follow infrequent rains to eat the new plants that grow afterwards. They can go for several weeks without water. In Oman, the species primarily eats grasses of the genus Stipagrostis; Stipagrostis flowers are highest in crude protein and water, while the leaves appear to be a more dependable food source than other vegetation.
Behaviour
When the Arabian oryx is not wandering its habitat or eating, it digs shallow depressions in the soft ground under shrubs or trees for resting. They can detect rainfall from a distance and follow in the direction of fresh plant growth. The number of individuals in a herd can vary greatly (up to 100 have been reported occasionally), but the average is 10 or fewer individuals. Bachelor herds do not occur, and single territorial males are rare. Herds establish a straightforward hierarchy that involves all females and males above the age of about seven months. Arabian oryxes tend to maintain visual contact with other herd members, with subordinate males taking positions between the main body of the herd and the outlying females. If separated, males will search areas where the herd last visited, settling into a solitary existence until the herd's return. Where water and grazing conditions permit, male Arabian oryxes establish territories. Bachelor males are solitary. A dominance hierarchy is created within the herd by posturing displays, which avoid the danger of serious injury their long, sharp horns could potentially inflict. Males and females use their horns to defend the sparse territorial resources against interlopers.
Adaptations for desert environments
The Arabian oryx changes its physiology and behaviour at different times of the year to increase survival when food and water are in limited supply. During the summer, when droughts are common in the desert environments where it lives, the Arabian oryx drastically reduces its minimal fasting metabolic rate by lying completely inactive beneath shade trees during the day and ranging over smaller areas at night to forage. By letting its body temperature rise during the heat of the day, it uses less evaporative cooling and retains more body water, and at night, the cool night air lowers its temperature back to the normal range. Cooling of the oryx's arterial blood is aided by a network of small arterial vessels with a large surface area called the rete mirabile, which branches from the two carotid arteries to the brain and allows heat exchange between the warm arterial blood and the cooler blood in the sinus cavities. Because of these changes in behaviour and physiology, Arabian oryx have been shown to reduce their urine volume, faecal water loss, and resting metabolic rate by at least 50%.
Wolves are the Arabian oryx's only predator. In captivity and safe conditions in the wild, Arabian oryxes have a lifespan of up to 20 years. In periods of drought, though, their life expectancy may be significantly reduced by malnutrition and dehydration. Other causes of death include fights between males, snakebites, disease, and drowning during floods.
Importance to humans
The Arabian oryx is the national animal of Jordan, Oman, the United Arab Emirates, Bahrain, and Qatar.
The Arabian oryx is also the namesake of several businesses on the Arabian peninsula, notably Al Maha Airways and Al Maha Petroleum.
In the King James Version of the Bible, the word re'em is translated as 'unicorn'. In Modern Hebrew, the name meaning 'white oryx' is used, in error, for the scimitar-horned oryxes living in the Yotvata Hai-Bar sanctuary near Eilat. The Arabic name ri'ïm is the equivalent of the Hebrew name, also meaning white oryx, suggesting a borrowing from the Early Modern Era.
A Qatari oryx named "Orry" was chosen as the official games mascot for the 2006 Asian Games in Doha, and is shown on tailfins of planes belonging to Middle Eastern airline Qatar Airways.
Unicorn myth
The myth of the one-horned unicorn may be based on oryxes that have lost one horn. Aristotle and Pliny the Elder held that the oryx was the unicorn's "prototype". From certain angles, the oryx may seem to have one horn rather than two, and given that its horns are made from hollow bone that cannot be regrown, if an Arabian oryx were to lose one of its horns, for the rest of its life, it would have only one.
Another source for the concept may be the translation of the Hebrew word re'em into Greek as μονόκερως (monókerōs, 'one-horned') in the Septuagint. In Psalm 22:21, the word keren, meaning horn, is written in the singular. The Roman Catholic Vulgate and the Douay-Rheims Bible translated it as rhinoceros; other translations use names for a wild bull, wild ox, buffalo, or gaur, but in some languages a word for unicorn is maintained. The Arabic translation is the etymologically correct choice, meaning 'white oryx'.
Conservation
The Phoenix Zoo and the Fauna and Flora Preservation Society of London (now Fauna and Flora International), with financial help from the World Wildlife Fund, are credited with saving the Arabian oryx from extinction. In 1962, these groups started the first captive-breeding herd in any zoo, at the Phoenix Zoo, sometimes referred to as "Operation Oryx". Starting with nine animals, the Phoenix Zoo has had over 240 successful births. From Phoenix, Arabian oryxes were sent to other zoos and parks to start new herds.
In 1968, Sheikh Zayed bin Sultan Al Nahyan of Abu Dhabi, out of concern for the land's wildlife, particularly ungulates such as the Arabian oryx, founded the Al Ain Zoo to conserve them.
Arabian oryxes were hunted to extinction in the wild by 1972. By 1980, the number of Arabian oryxes in captivity had increased to the point that reintroduction to the wild could begin. The first release, to Oman, was attempted with Arabian oryxes from the San Diego Wild Animal Park. Although numbers in Oman have declined, there are now wild populations in Saudi Arabia and Israel as well. One of the largest populations is found in the Mahazat as-Sayd Protected Area, a large, fenced reserve in Saudi Arabia covering more than 2,000 km².
On June 28, 2007, Oman's Arabian Oryx Sanctuary was the first site ever to be removed from the UNESCO World Heritage List. UNESCO's reason for this was the Omani government's decision to open 90% of the site to oil prospecting. The Arabian oryx population on the site has been reduced from 450 in 1996 to only 65 in 2007. Now, fewer than four breeding pairs are left on the site.
In June 2011, the Arabian oryx was relisted as vulnerable by the IUCN Red List. The IUCN estimated there were more than 1,200 Arabian oryx in the wild in 2016, with 6,000–7,000 held in captivity worldwide in zoos, preserves, and private collections. Some of these are in large, fenced enclosures (free-roaming), including those in Syria (Al Talila), Bahrain, Qatar, and the UAE. This was the first time the IUCN had reclassified a species as vulnerable after it had been listed as extinct in the wild. The Arabian oryx is also listed in CITES Appendix I.
| Biology and health sciences | Bovidae | Animals |
937446 | https://en.wikipedia.org/wiki/Articulated%20bus | Articulated bus | An articulated bus, also referred to as a slinky bus, bendy bus, tandem bus, vestibule bus, stretch bus, or an accordion bus, is an articulated vehicle, typically a motor bus or trolleybus, used in public transportation. It is usually a single-decker, and comprises two or more rigid sections linked by a pivoting joint (articulation) enclosed by protective bellows inside and outside and a cover plate on the floor. This allows a longer legal length than rigid-bodied buses, and hence a higher passenger capacity (94–120), while still allowing the bus to maneuver adequately.
Due to their high passenger capacity, articulated buses are often used as part of bus rapid transit schemes, and can include mechanical guidance. Articulated buses are typically about 18 m long, in contrast to standard rigid buses at 11 to 14 m. The common arrangement of an articulated bus is to have a forward section with two axles leading a rear section with a single axle, with the driving axle mounted on either the front or the rear section. Some articulated buses have a steering arrangement on the rearmost axle which turns slightly in opposition to the front steering axle, allowing the vehicle to negotiate tighter turns, similar to hook-and-ladder fire trucks operating in city environments. A less common variant of the articulated bus is the bi-articulated bus, where the vehicle has two trailer sections rather than one. Such vehicles have a capacity of around 200 people and a length of about 25 m; as such, they are used almost exclusively on high-capacity, high-frequency arterial routes and on bus rapid transit services.
History
The first example of an articulated bus appeared in Milan, Italy, in 1937. In 1938, Twin Coach built an articulated bus for the city of Baltimore; this bus, which had four axles on a long body, was articulated only in the vertical direction, to accommodate steep grades. Fifteen examples of the "Super Twin" were built in 1948, but the design was not developed further. According to contemporary coverage, the Super Twin had a capacity of 58 seated and 120 passengers in total.
In Budapest, the first prototypes of the Ikarus 180 (named for its 180-passenger capacity) were shown in 1961. In 2010, the Hungarian Technical and Transportation Museum in Budapest held an exhibition titled "The articulated bus is 50 years old". The Ikarus 180 went into limited production in 1963 and entered serial production in 1966; it was discontinued in 1973, when its successor, the Ikarus 280, was released.
In the mid-1960s, AC Transit in California pioneered the American use of a modern articulated bus, operating the experimental commuter coach "XMC 77" (based on Continental Trailways' Bus & Car Co. Super Golden Eagle model) on some of its transbay lines. The XMC-77, which AC Transit dubbed the "Freeway Train", was originally built in 1958, purchased by the District in October 1965, and made its debut run for Line N on 14 March 1966; passengers on the inaugural run were presented with special souvenir tickets. The XMC-77 was later exhibited to the public at various locations in the East Bay and the Transbay Terminal. It offered seats for 77 passengers (finished in brown and orange) and an observation lounge, complete with a card table to seat a quartet. The coach was powered by a Cummins engine. Engineering for the XMC-77 was carried out by the local firm of DeLeuw Cather & Co.
In the United States, articulated buses were imported from Europe and deployed in the late 1970s and early 1980s. During this time, rising operating costs led to public takeovers of transit systems, and the pressure to reduce labor (driver) costs in turn meant transporting more passengers in a single vehicle. King County Metro and Caltrans led a Pooled Purchase Consortium, formed in 1976, which later awarded a contract to the AM General/MAN joint venture responsible for assembling MAN SG 220 (from Germany) articulated buses in America. Contemporaneously, Crown entered an agreement with Ikarus to produce the Crown-Ikarus 286, coupling American-made powertrains with the Hungarian Ikarus 280 chassis.
Articulated buses have also been used in Australia, Austria (Gräf & Stift), Italy, Germany (Gaubschat, Emmelmann, Göppel, Duewag, Vetter), Canada (LFS Articulated), Hungary (Ikarus), Poland (Jelcz AP02), Romania (DAC 117 UD). The first modern British "bendy buses" (as they are referred to in the UK and Canada) were built by Leyland-DAB and used in the city of Sheffield in the 1980s. They were subsequently withdrawn from service because they proved to be expensive to maintain.
Advantages and disadvantages
Advantages
The main benefits of an articulated bus over the double-decker bus are rapid simultaneous boarding and disembarkation through more and larger doors, increased stability arising from a lower centre of gravity, a smaller frontal area giving less air resistance than double-deckers and thus better fuel efficiency, often a smaller turning radius, a higher maximum service speed, the ability to pass under low bridges, and improved accessibility for people with disabilities and the elderly.
Disadvantages
In some circumstances of urban operation (such as in areas with narrow streets and tight turns), articulated buses may also be involved in significantly more accidents than conventional buses. Estimates for London's articulated buses put their involvement in accidents involving pedestrians at over five times the rate of all other buses, and over twice as high for accidents involving cyclists. In a period when articulated buses made up approximately 5% of the London bus fleet, they were involved in 20% of all bus-related deaths, statistics which eventually led to their replacement. However, these safety statistics may be partly skewed because the buses were used on the busiest routes in the most crowded areas of the city, making them look worse than the buses they were being compared with. Other commentators have described these types of buses as "flaccid and unappealing", preferring the sturdiness of their rigid-body counterparts. A final disadvantage is that an articulated bus may require a specially trained driver.
Use
An articulated bus is a long vehicle and usually requires a specially trained driver, as maneuvering (particularly reversing) can be difficult. The trailer section of a "puller" bus can be subject to unusual centripetal forces, which many people can find uncomfortable, although this is not an issue with "pushers". Nonetheless, the articulated bus is highly successful in Budapest, Hungary, where the BKV city transit company has been operating more than 1000 of them every day since the early 1970s. The Hungarian company Volán also runs hundreds of articulated buses on intercity lines.
Europe
Articulated buses have been used in most European countries for many years, having become popular in mainland Europe because of their increased capacity compared with regular buses. In many cities, low railway bridge clearances have precluded the use of double-deck vehicles, which have never achieved great popularity there. Overhead wires for trams, trolleybuses and similar systems are not a significant obstacle, as the minimum normal clearance above road level is standard across the EU and is well in excess of the height of a double-deck vehicle; otherwise, many freight vehicles would encounter severe problems in the course of normal operation.
Malta
From 3 July 2011 to 28 August 2013, articulated Mercedes-Benz Citaro buses purchased from London were used in Malta by the company Arriva on a number of routes across the country. A number of serious engine fires resulted in their withdrawal from service; they were also blamed for an increase in traffic congestion and for accidents involving pedestrians and cyclists.
United Kingdom
Until 1980, articulated buses were illegal on the UK's roads. A 1979 experiment by South Yorkshire Passenger Transport Executive with buses manufactured by MAN and Leyland-DAB led to a change in the law, but the experiment was abandoned in 1981 because double-decker buses were generally considered less expensive both to purchase and to operate. The cost and weight of the strengthened deck framing and staircase of a double-decker was lower than the cost and weight of the additional axle(s) and coupling mechanism of an articulated bus. Modern technology has reduced the weight disadvantage, and the benefits of a continuous low floor allowing easier access plus additional entrance doors for smoother loading have led to reconsideration of the use of articulated buses in the UK.
In London, articulated buses were used on some routes from 2001 until 2011, but they were not a success. Boris Johnson, former Mayor of London, promised in the run-up to the 2008 mayoral election to rid the city of the controversial buses, and subsequently replaced them with double-deckers.
Elsewhere in the UK, they are generally operated on particular routes in order to increase passenger numbers, rather than across entire networks. With unsupervised "open boarding" through three doors and the requirement for pre-purchase of tickets, levels of fare-dodging on the new vehicles were found to be at least three times higher than on conventional buses, where the entry of passengers is monitored by the driver or conductor. The only way of checking for free riders was to use large teams of ticket inspectors to swamp the bus and inspect all tickets while keeping the doors closed, delaying the further progress of the bus. Since the articulated buses tended to serve areas of relative deprivation, it is suspected that this was a contributory factor in Transport for London (TfL) turning against the concept.
Many of the articulated buses from London went on to serve with regional operators. Aside from limited use in regional cities, articulated buses may now be found at airports as park and ride shuttles.
A batch of nine Mercedes-Benz Citaros currently runs on First Aberdeen routes 1 and 2, and five others run with First York on York park-and-ride services 2 and 3, though these are being phased out in favour of more modern Wright StreetDecks and Optare Metrodeckers.
In 2020, twenty-one brand-new Mercedes-Benz Citaros entered service at Stansted Airport; the Mercedes-Benz Citaro is the only articulated bus available in the United Kingdom at present.
The last public Wright Eclipse Fusion bendy buses ran on 26 March 2023 on service 888 between Luton Airport and Luton Airport Parkway station, the service being replaced by the Luton DART automated people mover.
Asia
China
In Asia, many major Chinese cities had fleets of articulated buses prior to the late 1990s. Most of these fleets have since been replaced by single-section units; exceptions include Beijing, Shanghai, and Hangzhou. In the 2000s, a surge in BRT construction reintroduced or re-purposed articulated bus fleets for rapid transit use in cities such as Changzhou, Chengdu, Dalian, Guangzhou, Jinan, Kunming, Xiamen, Yancheng, Zaozhuang, and Zhengzhou.
Indonesia
Indonesia first operated articulated buses in 1993, when Jakarta's bus company PPD began to operate Ikarus articulated buses from Hungary on several busy lines. Later, the company also imported Chinese-made articulated buses. PPD dominated Jakarta city bus service until 2004, when Transjakarta was established; it operates one of the longest BRT systems in the world. Transjakarta has been using articulated buses manufactured by Scania for some of its busiest routes since 2015. Prior to the Scania buses' introduction, Chinese-made Huanghai, Zhong Tong, Yutong and Ankai buses, along with locally made INKA Inobus and AAI Komodo buses, had been in service since 2010.
Israel
In Israel, the use of articulated buses, commonly called accordion buses (אוטובוס אקורדיון), is widespread, particularly in Gush Dan and Jerusalem, the two great urban centers of the country, as well as in Haifa (for the Metronit BRT system) and other cities such as Beersheba. The long buses are considered reliable and useful, and have been in service in Israel since the mid-1970s. During the Israeli–Palestinian conflict, such buses were often targeted by suicide bombers during rush hours, since a crowded articulated bus can hold more than 100 passengers.
Macau
In Macau, China, Transmac (Transportes Urbanos de Macau S.A.R.L.) imported a Yutong ZK6180HGH 18-metre articulated bus and put it into operation on 6 January 2018, following multiple tests and adjustments. The bus operated on routes 51 and 25BS during peak hours. It also served route 25AX before Typhoon Wipha and during the golden week holidays in 2019, as well as route 26S after the annual international fireworks shows. In 2023, Transmac imported two 18-metre Higer KLQ6186GHEV extended-range electric articulated buses, which run on the same routes 51, 25BS and 26S as their predecessor; they were first put into operation on 21 January 2024 as after-show shuttles. The older ZK6180HGH is now used for non-franchised services, such as event shuttles and casino employee shuttles, as the government intends to phase out all diesel buses from franchised services.
Singapore
In Singapore, articulated buses were first introduced in 1996 by Trans-Island Bus Services (now SMRT Buses) with the Mercedes-Benz O405G buses (bodied in Hispano Carrocera (MK1/MK2), Hispano Habit and Volgren CR221). In 2015, SMRT introduced 40 MAN NG363F A24 buses to replace the first batch of O405Gs, while the subsequent batches were replaced by double deck buses issued by the Land Transport Authority. All Hispano Habit-bodied O405Gs have been retired from service as of December 2020 as part of Land Transport Authority's policy of a 100% wheelchair-accessible bus fleet.
Singapore Bus Services (SBS, present-day SBS Transit) introduced one Duple Metsec-bodied Volvo B10MA and one Volgren-bodied Mercedes-Benz O405G in 1996 and 1997 respectively, to evaluate the suitability of articulated buses for high-capacity single-deck bus operations. The trial was, however, unsuccessful, and SBS stuck with 12-metre double-deck "Superbuses". The two articulated buses were eventually sold to Bayes Coachlines of Dairy Flat, Auckland, New Zealand, in March 2006. SBS Transit only began to operate articulated buses again from March 2018, when ten ex-SMRT MAN A24 buses were transferred to SBS Transit in batches by the Land Transport Authority as part of the Seletar Bus Package under the Bus Contracting Model.
In March 2021, Tower Transit took over 1 ex-SMRT MAN A24 bus from SMRT Buses in preparation for the takeover of Sembawang-Yishun Bus Package. Tower Transit took over more units as part of the transition in September 2021.
Taiwan
Articulated buses were first used in Taiwan in 2014, on the Taichung BRT. The BRT system was abolished a year later, and the articulated buses now run as regular buses along the same route.
Vietnam
In Vietnam, articulated bus service was first introduced on 16 October 2010, operated by Transerco in Hanoi. It was added to route 07, from My Dinh Bus Station to Noi Bai Airport, as a test run. The bus was part of the Hanoi Ecotrans project, subsidized by the EU and Île-de-France. It was a Mercedes Euro II Galaxy, first manufactured in December 2003 by Mercedes-Benz Vietnam and previously used for the SEA Games in Hanoi. The bus was painted yellow instead of the traditional white-yellow-red (from top to bottom) and had two ticket sellers on board instead of one. The bus received positive reviews from passengers, but it no longer operates in Hanoi; route 07 is now served by Daewoo BC095 buses.
North America
United States
Articulated buses are commonplace in US urban centers such as Albuquerque, Austin, Baltimore, Boston, Chicago, Cleveland, Denver, Honolulu, Indianapolis, Los Angeles, West Palm Beach, Miami, Minneapolis-St.Paul, New York City, Newark, Orange County (California), Orlando, Philadelphia, Phoenix, Pittsburgh, Portland (Oregon), Rochester (New York), San Diego, San Francisco, Seattle, Washington, D.C., and Westchester County (New York). In Eugene, Lane Transit District uses articulated buses on some high-traffic routes, as well as on their Emerald Express (EmX) Bus Rapid Transit Service. In Vancouver, Washington, C-Tran (Washington) uses articulated buses on their BRT service, The Vine (bus rapid transit).
Canada
In Canada, they are used in Brampton, Calgary, Durham Region, Edmonton, Gatineau, Halifax, Hamilton, London, Longueuil, Mississauga, Montreal, Niagara Region, Ottawa, Quebec City, Regina, Saskatoon, St. Albert, Toronto, York Region, Metro Vancouver and Winnipeg.
Mexico
Articulated buses in Mexico are usually used on BRT lines, such as Mexico City's Metrobús, Guadalajara's Macrobús, Monterrey's Ecovía and León's Optibús.
South America
In South America, they are used in Quito, São Paulo, Santiago, Curitiba, Barranquilla, Cali, Bucaramanga, Pereira, Cartagena, Medellín and Bogotá.
Oceania
Australia
The first articulated bus in Australia operated in Canberra in the Australian Capital Territory in 1974. They remain in service for Transport Canberra serving both rapid and feeder routes.
In Adelaide, articulated buses are used on the O-Bahn Busway, reaching speeds of 100 km/h. The first articulated buses to use it were the Mercedes-Benz O305G buses; however, three MAN SG280H buses are also equipped for O-Bahn use. In recent years, it has proven problematic to find suitable low-floor articulated buses to replace the 1984-manufactured Mercedes buses, because the design of the O-Bahn track precludes the use of most modern articulated buses. Sydney has seen the operation of articulated buses for many years, and currently operates a fleet of various models, with eighty Volvo B12BLEA buses joining the Sydney Buses fleet in 2005 and 2006, increasing capacity along many busy corridors. A number of prototype vehicles, featuring different chassis, body types and internal layouts, were delivered in 2008 and 2009 to operate on Sydney Buses' first Metrobus route, the M10 from Leichhardt to Kingsford and Maroubra Junction. The articulated Volvo B12BLEA buses are fully wheelchair-accessible and air-conditioned; have visual and audible next-stop passenger information systems, large electronic destination displays and cloth seating; and feature stepless entry to assist less-mobile passengers, with flip-up seats in the front part of the bus to accommodate passengers with wheelchairs, strollers and prams. In 2009–2010, 150 further Volvo B12BLEA articulated buses were introduced into the Sydney Buses fleet, many of them as part of the expanded Metrobus program.
Design
Doors
Articulated buses typically feature at least two and sometimes three curbside doors for passengers. Because the articulation joint is close behind the middle axle, usually four potential passenger door positions are possible on a single-articulated bus:
Between the windshield and front axle (A)
Between the front axle and middle axle (B)
Between the articulation joint and rear axle (X)
Between the rear axle and rear bulkhead (Y)
Powertrain
Articulated buses can be of "pusher" or "puller" configuration. Very few companies specialise in manufacturing the articulated section for the buses. One that does is ATG Autotechnik GmbH in Siek near Hamburg.
Puller
In most puller articulated buses, the engine is mounted under the floor between the front and middle axles, and only the middle axle is powered. Some consider this an outdated design, as it prevents low floor levels and can produce passenger discomfort through high noise and vibration levels. On the other hand, such buses can be used in very narrow or severely potholed streets. This type of bus also performs better in snowy or icy conditions, as the thrust from the driving wheels does not cause the vehicle to jackknife. Newer models such as the Van Hool AG300 feature low floors while maintaining the puller design by placing the engine block off-centre, opposite the second door. In addition, the unpowered rear axle is much simpler and carries no engine weight, facilitating the installation of counter-steering mechanisms to further decrease the turning radius.
A typical puller model is the Hungarian-made Ikarus 280, the articulated version of the Ikarus 260; 60,993 were manufactured between 1973 and 2002, mostly for Soviet-bloc customers. (This type accounted for two-thirds of the articulated buses built in the 1970s.) Puller-type articulated buses are now built in smaller numbers, but are still available in Scandinavia and South America; examples include the Volvo B9S and Volvo B12M.
Pusher
The pusher bus needs a damping system in the articulation joint to reduce the risk of jackknifing and fishtailing; this was developed by FFG Fahrzeugwerkstätten Falkenried in Germany. The production cost of a pusher bus was lower than that of a puller bus: the puller bus was a completely different construction from a solo bus, and was often fabricated by external body-construction firms because of its lower production numbers. The pusher concept enabled the bus manufacturer to simply join the forward and rear parts of a solo bus and build the articulated bus entirely in-house, reducing production costs.
In pusher buses, only the rear axle is powered by a rear-mounted internal combustion engine, and the longitudinal stability of the vehicle is maintained by active hydraulics mounted under the turntable. This modern system makes it possible to build buses without steps and having low floors along their entire length, which simplifies access for passengers with limited mobility.
Modern low-floor pusher articulated buses also tend to suffer from suspension problems because their wheels lack sufficient travel to enable them to absorb typical road surface unevenness. This also leads to passenger discomfort and relatively rapid disintegration of the vehicle's superstructure.
Makers of pusher-type articulated buses include Mercedes-Benz, New Flyer Industries, MAN, Volvo and Scania. The Renault PR 180 and PR 180.2 (articulated versions of the PR 100 and PR 100.2) were a special variation of the pusher design in which both the middle and the rear axles were driven, with a driveshaft passing through the turntable between the two driving axles.
Alternative power
Although the majority of articulated buses utilise diesel engines for their motive power, a number of operators (primarily outside North America and by LACMTA) have adopted compressed natural gas (CNG) power in order to reduce pollution. Many other transit authorities in the United States and Canada are adopting articulated buses that are diesel-electric hybrids, such as the New Flyer DE60LF. There are also articulated trolleybuses, which use catenary cables to power electric traction motors. Electric articulated trolleybuses principally operate in hilly locations like Mexico City, San Francisco, Seattle, and Vancouver, B.C., where the steep grades preclude the use of combustion engines for motive power.
The New Flyer Xcelsior Charge NG battery-electric articulated buses are equipped with traction motors on both the middle and rear axles; the middle axle uses in-wheel motors.
Articulation
Super-articulated buses
Super-articulated buses are similar to normal articulated buses, but have a fourth (tag) axle next to the rear axle, increasing the rated load. Typical capacity ranges up to 200 passengers, with a correspondingly greater length. Examples include the Mercedes-Benz Citaro CapaCity, MAN Lion's City GXL (A43), Mercedes-Benz O500 DA, BYD D11, and Scania K IA.
Bi-articulated buses
Since the late 1980s, the concept of the articulated bus has been extended further with the addition of a second trailer section that extends the bus almost to tram length and capacity, to create a bi-articulated bus, also called a triple bus. The Chinese manufacturer Zhejiang Youngman (Jinhua Neoplan) has developed the JNP6280G bi-articulated bus, deemed the "world's largest", which will be used in Beijing.
Bi-articulated buses are still rare, having been trialled and rejected in some places. Because of their length, they have a role on very high-capacity routes or as a component of a bus rapid transit scheme. Notable examples of bi-articulated buses playing a major role in bus rapid transit can be found in Curitiba, Bogotá, Mexico City, Quito, and elsewhere. Volvo is a major supplier of puller-type bi-articulated buses in these markets. In the Netherlands, bi-articulated buses came into service on busy routes in Utrecht (2002) and Groningen (2014).
Decks
Most articulated buses use a single passenger level or deck.
Double-decker articulated buses
A few attempts have been made to design a double-decker articulated bus. NEOPLAN Bus GmbH built a handful of Neoplan Jumbocruisers between 1975 and 1992. In these models, only the upper deck allows movement between the two sections, so each section has its own doors and set of stairs.
Driver licensing
In some countries of the European Union, as well as in Canada, an articulated bus can be driven with the same license used to drive a rigid bus (D in Europe), while a bus towing a normal trailer requires a bus + trailer (D+E) license.
There is some confusion as to how the U.S. treats articulated buses. A common misconception in professional circles is that they are treated as combination vehicles, which would require drivers to hold a Class A commercial driver license (CDL) with a passenger (P) endorsement, requiring the passage of both a written and a driving exam. However, based on guidelines from the Federal Motor Carrier Safety Administration (FMCSA), an articulated motorcoach (bus) driver is required to possess a Class B CDL. The driving exam can be completed in the Class B bus the driver wishes to operate. It is common for restrictions to be issued on the driver's license disallowing operation of higher classes of vehicles than those tested in. A common example is a driver with a Class B CDL who performs the P-endorsement test in a Class B bus, resulting in a license bearing the "no Class A passenger vehicle" restriction, notated with an 'M', in addition to the P endorsement. These restrictions can be removed through further testing at a later date.
In the UK, it is only necessary to hold a D licence for articulated buses where the driven axle is in the rear section.
Because the sections are permanently joined and the rear cannot be detached and driven separately, articulated buses are not considered to have a trailer for licensing purposes, so the E entitlement is not required. However, special training is needed for bi-articulated buses.
In popular culture
In a campaign associated with the Transformers: Revenge of the Fallen film, a 2014 Transformers character was created using parts from an articulated bus with the actual vehicle as its intended alternate form and dubbed "Bendy-Bus Prime."
Examples of articulated buses
AKSM-333
BYD K11M
Chavdar B14-20
Chavdar 141
Classic
Crown-Ikarus 286
BMC Procity 18M
DAC 117UD
De Simon IS.2
Ikarbus IK-5B
Ikarbus IK-160
Ikarbus IK-201
Ikarbus IK-218
Ikarus 280
Ikarus 417/435/435T
Inbus AID 280FT
Irisbus Citelis 18
Iveco Urbanway 18
Karosa B 741
Karosa B 941
Karosa B 961
Karosa C 744
Karosa C 943
Leyland-DAB articulated bus
LiAZ-6213
MAN Lion's City G
MAZ-105
MAZ-205
MAZ-215
Mercedes-Benz Citaro G
Mercedes-Benz O500UA/UDA
NABI BRT
NABI LFW
NABI SFW
Neobus 405 GZ
Neobus Citta LEA
Neoplan AN460
Neoplan Centroliner
New Flyer Xcelsior
Nova Bus LFS Artic
Orion-Ikarus 286
Otokar Kent C Articulated
Renault PR180.2
Rocar DAC 217E
Sanos S 200
Scania Citywide LFA
Škoda 15Tr
Škoda 25Tr Irisbus
Škoda 27Tr Solaris
Škoda 31Tr SOR
Škoda 33Tr SOR
Škoda 35Tr Iveco
Škoda 706 RTO-K
Solaris Urbino 18
SOR NB 18
SOR NS 18
TAM 260A180 M
TAM 272A180 M
TEDOM C 18
Van Hool AG300
Volvo 7700
Volvo 7900
Volvo 8700LEA
Volvo B5LH
Volvo B7LA
Volvo B8RLEA
Volvo B9LA
Volvo B9S
Volvo B10LA
Volvo B10MA
Volvo B12BLEA
VDL Citea SLFA
Wright Eclipse Fusion
Wright Solar Fusion
| Technology | Motorized road transport | null |
937971 | https://en.wikipedia.org/wiki/Endemism | Endemism | Endemism is the state of a species being found only in a single defined geographic location, such as an island, state, nation, country or other defined zone; organisms that are indigenous to a place are not endemic to it if they are also found elsewhere. For example, the Cape sugarbird is found exclusively in southwestern South Africa and is therefore said to be endemic to that particular part of the world. An endemic species can also be referred to as an endemism or, in scientific literature, as an endemite. Similarly, many species found in the Western Ghats of India are examples of endemism.
Endemism is an important concept in conservation biology for measuring biodiversity in a particular place and evaluating the risk of extinction for species. Endemism is also of interest in evolutionary biology, because it provides clues about how changes in the environment cause species to undergo range shifts (potentially expanding their range into a larger area, or becoming extirpated from an area they once lived), go extinct, or diversify into more species.
The extreme opposite of an endemic species is one with a cosmopolitan distribution, having a global or widespread range.
A rare alternative term for a species that is endemic is "precinctive", which applies to species (and other taxonomic levels) that are restricted to a defined geographical area. Other terms that sometimes are used interchangeably, but less often, include autochthonal, autochthonic, and indigenous; however, these terms do not reflect the status of a species that specifically belongs only to a determined place.
Etymology
History of the concept
The word endemic is from Neo-Latin endēmicus, from Greek ἔνδημος, éndēmos, "native". Endēmos is formed of en, meaning "in", and dēmos, meaning "the people". The word entered the English language as a loan word from French endémique, and originally seems to have been used in the sense of diseases that occur at a constant rate in a country, as opposed to epidemic diseases, whose case numbers surge. The word was first used in biology by Charles Darwin in 1872, to mean a species restricted to a specific location.
The less common term 'precinctive' has been used by some entomologists as the equivalent of 'endemic'. Precinctive was coined in 1900 by David Sharp when describing the Hawaiian insects, as he was uncomfortable with the fact that the word 'endemic' is often associated with diseases. 'Precinctive' was first used in botany by Vaughan MacCaughey in Hawaii in 1917.
Overview
A species is considered to be endemic to the area where it is found naturally, to the exclusion of other areas; presence in captivity or botanical gardens does not disqualify a species from being endemic. In theory, the term "endemic" could be applied on any scale; for example, the cougar is endemic to the Americas, and all known life is endemic to Earth. However, endemism is normally used only when a species has a relatively small or restricted range. This usage of "endemic" contrasts with "cosmopolitan." Endemics are not necessarily rare; some might be common where they occur. Likewise, not all rare species are endemics; some may have a large range but be rare throughout this range.
Origins
The evolutionary history of a species can lead to endemism in multiple ways. Allopatric speciation, or geographic speciation, occurs when two populations of a species become geographically separated from each other and as a result develop into different species. In isolated areas where there is little possibility for organisms to disperse to new places, or to receive new gene flow from outside, the rate of endemism is particularly high. For example, many endemic species are found on remote islands, such as Hawaii, the Galápagos Islands and Socotra. Populations on an island are isolated, with little opportunity to interbreed with outside populations, which eventually causes reproductive isolation and separation into different species. Darwin's finches in the Galápagos archipelago are examples of species endemic to islands. Similarly, isolated mountainous regions like the Ethiopian Highlands, or large bodies of water far from other lakes, like Lake Baikal, can also have high rates of endemism.
Endemism can also be created in areas which act as refuges for species during times of climate change like ice ages. These changes may have caused species to become repeatedly restricted to regions with unusually stable climate conditions, leading to high concentrations of endemic species in areas resistant to climate fluctuations. Endemic species that used to exist in a much larger area, but died out in most of their range, are called paleoendemic, in contrast to neoendemic species, which are new species that have not dispersed beyond their range. The ginkgo tree, Ginkgo biloba, is one example of a paleoendemic species.
In many cases, biological factors, such as low rates of dispersal or fidelity to the spawning area (philopatry), can give a particular group of organisms high speciation rates and thus many endemic species. For example, cichlids in the East African Rift lakes have diversified into many more endemic species than the other fish families in the same lakes, possibly due to such factors. Plants that become endemic on isolated islands are often those with a high rate of dispersal, able to reach such islands by being carried by birds. Although birds' capacity for dispersal by flight makes them less likely to be endemic to a region, over 2,500 bird species are nonetheless considered endemic, each being restricted to a comparatively small area.
Microorganisms were traditionally thought not to form endemics. The hypothesis 'everything is everywhere', first stated in Dutch by Lourens G.M. Baas Becking in 1934, holds that the distribution of organisms smaller than 2 mm is cosmopolitan: they occur wherever habitats exist that support their growth.
Subtypes and definitions
Endemism can reflect a wide variety of evolutionary histories, so researchers often use more specialized terms that categorize endemic species based upon how they came to be endemic to an area. Different categorizations of endemism also capture the uniqueness and irreplaceability of biodiversity hotspots differently and impact how those hotspots are defined, affecting how resources for conservation are allocated.
The first subcategories were introduced by Claude P. E. Favarger and Juliette Contandriopoulos in 1961: schizoendemics, apoendemics, and patroendemics. Building on this work, Ledyard Stebbins and Jack Major introduced the concepts of neoendemics and paleoendemics in 1965 to describe the endemics of California. Endemic taxa can also be classified into autochthonous, allochthonous, taxonomic relicts, and biogeographic relicts.
Paleoendemism refers to species that were formerly widespread but are now restricted to a smaller area. Neoendemism refers to species that have recently arisen, such as through divergence and reproductive isolation or through hybridization and polyploidy in plants, and have not dispersed beyond a limited range.
Paleoendemism is more or less synonymous with the concept of a 'relict species': a population or taxon of organisms that was more widespread or more diverse in the past. A 'relictual population' is a population that currently occurs in a restricted area but whose original range was far wider during a previous geologic epoch. Similarly, a 'relictual taxon' is a taxon (e.g., a species or other lineage) that is the sole surviving representative of a formerly diverse group.
The concept of phylogenetic endemism has also been used to measure the relative uniqueness of the species endemic to an area. In measurements that incorporate phylogenetic endemism, branches of the evolutionary tree are weighted by how narrowly they are distributed. This captures not only the total number of taxa endemic to the area (taxonomic endemism), but also how distant those species are from their living relatives.
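One concrete formulation (a sketch following the phylogenetic endemism measure of Rosauer and colleagues; the symbols here are illustrative, not drawn from this text) sums the length of each branch divided by the size of the geographic range over which that branch occurs:

$$\mathrm{PE} = \sum_{b \in T} \frac{L_b}{R_b}$$

where $T$ is the set of branches in the tree of the area's taxa, $L_b$ is the length of branch $b$, and $R_b$ is the combined range size of the taxa descended from $b$. Under this weighting, a long branch confined to a tiny range contributes far more to an area's score than a short or widespread branch.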
Schizoendemics, apoendemics, and patroendemics can all be classified as types of neoendemics. Schizoendemics arise from a more widely distributed taxon that has become reproductively isolated without becoming (potentially) genetically isolated; a schizoendemic has the same chromosome count as the parent taxon from which it evolved. An apoendemic is a polyploid of the parent taxon (or taxa, in the case of allopolyploids), whereas a patroendemic has a lower, diploid chromosome count than the related, more widely distributed polyploid taxon. Mikio Ono coined the term 'aneuendemics' in 1991 for species that have more or fewer chromosomes than their relatives due to aneuploidy.
Pseudoendemics are taxa that may have evolved recently from a mutation. 'Holoendemic' is a concept introduced by Richardson in 1978 to describe taxa that have remained endemic to a restricted distribution for a very long time.
In a 2000 paper, Myers and de Grave further attempted to redefine the concept. In their view, everything is endemic: even cosmopolitan species are endemic to Earth, and earlier definitions restricting endemics to specific locations are wrong. The subdivisions neoendemics and paleoendemics are therefore, they argue, without merit for the study of distributions, because these concepts assume that an endemic has a distribution limited to one place. Instead, they propose four categories: holoendemics, euryendemics, stenoendemics, and rhoendemics, with cryptoendemics and euendemics as further subdivisions of rhoendemics. In their scheme, a holoendemic is a cosmopolitan species. Stenoendemics, also known as local endemics, have a reduced distribution and are synonymous with the word 'endemics' in the traditional sense, whereas euryendemics have a larger distribution; both have distributions that are more or less continuous. A rhoendemic has a disjunct distribution. Where a disjunct distribution is caused by vicariance, the taxon is a euendemic if the vicariance was geologic in nature, such as the movement of tectonic plates, and a cryptoendemic if the disjunct distribution arose through the extinction of the intervening populations. Yet another situation can cause a disjunct distribution: a species may colonize new territories by crossing areas of unsuitable habitat, as when plants colonize an island. Myers and de Grave dismiss this situation as extremely rare and do not name it. Traditionally, only their stenoendemics would be considered endemics.
Environments
Some environments are particularly conducive to the development of endemic species, either because they allow the persistence of relict taxa that were extirpated elsewhere, or because they provide mechanisms for isolation and opportunities to fill new niches.
Soil
Serpentine soils act as 'edaphic islands' of low fertility, and these soils lead to high rates of endemism. Such soils are found in the Balkan Peninsula, Turkey, the Alps, Cuba, New Caledonia, South Africa, Zimbabwe, and the North American Appalachians, with scattered occurrences in California, Oregon, and Washington, among other places. For example, Mayer and Soltis considered the widespread subspecies Streptanthus glandulosus subsp. glandulosus, which grows on normal soils, to be a paleoendemic, whereas closely related endemic forms of S. glandulosus occurring on serpentine soil patches are neoendemics that recently evolved from subsp. glandulosus.
Caves
Obligate cave-dwelling species, known as troglobites, are often endemic to small areas, sometimes even to single caves, because cave habitats are by nature restricted, isolated, and fragmented. A high level of adaptation to a cave environment limits an organism's ability to disperse, since caves are often not connected to each other. One hypothesis for how closely related troglobite species became isolated from one another in different caves is that their common ancestor may have been less restricted to cave habitats. When climate conditions became unfavorable, the ancestral species was extirpated from the surface, but some populations survived in caves and diverged into different species for lack of gene flow between them.
Islands
Isolated islands commonly develop a number of endemics. Many species and higher taxonomic groups are confined to very small terrestrial or aquatic 'islands', which restrict their distribution. The Devil's Hole pupfish, Cyprinodon diabolis, has its whole native population restricted to a spring measuring roughly 20 by 3 meters in Nevada's Mojave Desert. This 'aquatic island' is connected to an underground basin, but the population in the pool remains isolated.
Other areas comparable to the Galápagos Islands of the Pacific Ocean also foster high rates of endemism. The Socotra Archipelago of Yemen, located in the Indian Ocean, is home to a recently described endemic species of parasitic leech, Myxobdella socotrensis. This species is restricted to freshwater springs, where it may attach to and feed upon native crabs.
Mountains
Mountains can be seen as 'sky islands': refugia of endemics, because species that live in the cool climates of mountain peaks are geographically isolated. For example, in the Alpes-Maritimes department of France, Saxifraga florulenta is an endemic plant that may have evolved in the Late Miocene and could once have been widespread across the Mediterranean Basin.
Volcanoes also tend to harbor a number of endemic species. Plants on volcanoes tend to fill specialized ecological niches with very restrictive ranges, owing to the unique environmental characteristics. The Kula Volcano, one of fourteen volcanoes in Turkey, is home to 13 endemic species of plants.
Conservation
Endemics may more easily become endangered or extinct because their distribution is already restricted. This puts endemic plants and animals at greater risk than widespread species during the rapid climate change of this century. Some scientists hold that the presence of endemic species is a good way to identify geographical regions that should be considered priorities for conservation. Endemism can thus be studied as a proxy for measuring the biodiversity of a region.
The concept of finding endemic species that occur in the same region to designate 'endemism hotspots' was first proposed by Paul Müller in a 1973 book. According to him, this is only possible where (1) the taxonomy of the species in question is not in dispute, (2) the species' distributions are accurately known, and (3) the species have relatively small distributional ranges.
In a 2000 article, Myers et al. used the standard of having more than 0.5% of the world's plant species being endemic to the region to designate 25 geographical areas of the world as biodiversity hotspots.
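For a rough sense of the threshold's scale: assuming roughly 300,000 vascular plant species worldwide (a commonly cited ballpark, not a figure from this text), the criterion works out to

$$0.005 \times 300{,}000 = 1{,}500$$

endemic plant species, matching the minimum of 1,500 endemic vascular plants that Myers et al. used to qualify a region as a hotspot.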
In response to the above, the World Wildlife Fund has split the world into a few hundred geographical 'ecoregions'. These have been designed to include as many species as possible that occur only in a single ecoregion; such species are thus 'endemics' of those ecoregions. Since many of these ecoregions contain a high prevalence of endemics, numerous national parks have been established around or within them to further promote conservation. The Caparaó National Park was created in the Atlantic Forest, a biodiversity hotspot located in Brazil, to help protect valuable and vulnerable species.
Other scientists have argued that endemism is not an appropriate measure of biodiversity, because levels of threat and biodiversity are not actually correlated with areas of high endemism. Using bird species as an example, it was found that only 2.5% of biodiversity hotspots show a correlation between endemism and the threatened status of a geographic region. A similar pattern has been found for mammals, Lasioglossum bees, Plusiinae moths, and swallowtail butterflies in North America: these different groups of taxa did not correlate geographically with one another in endemism and species richness. Using mammals as flagship species, in particular, proved a poor way to identify and protect areas of high invertebrate biodiversity. In response, other scientists defended the concept by using WWF ecoregions and reptiles, finding that most reptile endemics occur in WWF ecoregions with high biodiversity.
Other conservation efforts for endemics include keeping captive and semi-captive populations in zoological parks and botanical gardens. These are ex situ ("off-site") conservation methods. Such methods may not only offer refuge and protection for individuals of declining or vulnerable populations, but may also give biologists valuable opportunities to study them.
Lancelet
The lancelets, also known as amphioxi (singular: amphioxus), consist of 32 described species of "fish-like" benthic filter-feeding chordates in the subphylum Cephalochordata, class Leptocardii, and family Branchiostomatidae.
Lancelets diverged from other chordates during or prior to the Cambrian period. A number of fossil chordates have been suggested to be closely related to lancelets, including Pikaia and Cathaymyrus from the Cambrian and Palaeobranchiostoma from the Permian, but their close relationship to lancelets has been doubted by other authors. Molecular clock analysis suggests that modern lancelets probably diversified much more recently, during the Cretaceous or Cenozoic.
Zoologists are interested in them because they provide evolutionary insight into the origins of vertebrates. Lancelets contain many organs and organ systems that are homologous to those of modern fish, but in a more primitive form. Therefore, they provide a number of examples of possible evolutionary exaptation. For example, the gill-slits of lancelets are used for feeding only, and not for respiration. The circulatory system carries food throughout their body, but does not have red blood cells or hemoglobin for transporting oxygen.
Lancelet genomes hold clues about the early evolution of vertebrates: by comparing genes from lancelets with the same genes in vertebrates, changes in gene expression, function, and number during vertebrate evolution can be discovered. The genomes of a few species in the genus Branchiostoma have been sequenced: B. floridae, B. belcheri, and B. lanceolatum.
In Asia, lancelets are harvested commercially as food for humans. In Japan, amphioxus (B. belcheri) has been listed in the registry of "Endangered Animals of Japanese Marine and Fresh Water Organisms".
Ecology
Habitat
Adult amphioxus typically inhabit the seafloor, burrowing into well-ventilated substrates characterized by a soft texture and minimal organic content. While various species have been observed in different types of substrate, such as fine sand, coarse sand, and shell deposits, most exhibit a distinct preference for coarse sand with low levels of fine particles. For instance, Branchiostoma nigeriense along the west coast of Africa, Branchiostoma caribaeum in Mississippi Sound and along the coast from South Carolina to Georgia, B. senegalense in the offshore shelf region off North West Africa, and B. lanceolatum along the Mediterranean coast of southern France all demonstrate this preference (Webb and Hill, 1958; Webb, 1958; Boschung and Gunter, 1962; Cory and Pierce, 1967; Gosselck and Spittler, 1979; Caccavale et al., 2021b; Desdevises et al., 2011). However, Branchiostoma floridae from Tampa Bay, Florida, appears to be an exception to this trend, favoring fine sand bottoms instead (Stokes and Holland, 1996a; Stokes, 1996).
All amphioxus species exhibit gonochorism, with only rare instances of hermaphroditism reported in Branchiostoma lanceolatum and B. belcheri. In these cases, a small number of female gonads were observed within male individuals, typically ranging from 2 to 5 gonads out of a total of 45–50. An extraordinary occurrence of complete sex reversal was documented in B. belcheri, where a female amphioxus raised in laboratory conditions underwent a transformation into a male (Zhang et al., 2001).
Feeding
Their habitat preference reflects their feeding method: they expose only the front end to the water and filter-feed on plankton by means of a branchial ciliary current that passes water through a mucous sheet. Branchiostoma floridae can trap particles from microbial to small phytoplankton size, while B. lanceolatum preferentially traps bigger particles (>4 μm).
Reproduction and spawning
Lancelets are gonochoric animals, i.e., having two sexes, and they reproduce via external fertilization. They reproduce only during their spawning season, which varies slightly between species, usually corresponding to the spring and summer months. All lancelet species spawn shortly after sunset, either synchronously (e.g., Branchiostoma floridae, about once every two weeks during the spawning season) or asynchronously (Branchiostoma lanceolatum, spawning gradually through the season).
Nicholas and Linda Holland were the first researchers to describe a method of obtaining amphioxus embryos by induction of spawning in captivity and in vitro fertilization. Spawning can be artificially induced in the lab by electric or thermal shock.
History
Taxonomy
The first representative organism of the group to be described was Branchiostoma lanceolatum. It was described by Peter Simon Pallas in 1774 as a molluscan slug in the genus Limax. It was not until 1834 that Gabriel Costa brought the phylogenetic position of the group closer to the agnathan vertebrates (hagfish and lampreys), including it in the new genus Branchiostoma (from the Greek branchio, "gills", and stoma, "mouth"). In 1836, Yarrell renamed the genus Amphioxus (from the Greek: "pointed on both sides"), now considered an obsolete synonym of Branchiostoma. Today, the term "amphioxus" is still used as a common name for the Amphioxiformes, along with "lancelet", especially in English.
All living lancelets are placed in the family Branchiostomatidae, class Leptocardii, and subphylum Cephalochordata. The family was first named by Charles Lucien Bonaparte in 1846, though he used the incorrect spelling "Branchiostomidae". One year earlier, Johannes Müller had introduced the name Leptocardii as a subclass.
Finally, the subphylum name Cephalochordata is attributed to Ernst Haeckel (1866). At the taxonomic rank of order, lancelets are sometimes placed in the order Amphioxi Bonaparte, 1846, Amphioxiformes Berg, 1937, or Branchiostomiformes Fowler, 1947. Another name sometimes used for high-ranked taxa for the lancelets is Acrania Haeckel, 1866.
Anatomy
Observations of amphioxus anatomy began in the middle of the 19th century. First, the adult then the embryonic anatomy were described.
Alexander Kowalevsky first described the key anatomical features of the adult amphioxus (hollow dorsal nerve tube, endostyle, segmented body, postanal tail). De Quatrefages first completely described the nervous system of amphioxus. Other important contributions to amphioxus adult anatomy were given by Heinrich Rathke and John Goodsir.
Kowalevsky also released the first complete description of amphioxus embryos, while Schultze and Leuckart were the first to describe the larvae. Other important contributions to amphioxus embryonic anatomy were given by Hatschek, Conklin and later by Tung (experimental embryology).
Anatomy
The larvae are extremely asymmetrical, with the mouth and anus on the left side and the gill slits on the right side. Organs associated with the pharynx are positioned either exclusively on the left or on the right side of the body. In addition, segmented muscle blocks and parts of the nervous system are asymmetrical. After metamorphosis the anatomy becomes more symmetrical, but some asymmetrical traits persist in adults, such as the nervous system and the location of the gonads, which are found on the right side in Asymmetron and Epigonichthys (in Branchiostoma, gonads develop on both sides of the body).
The maximum length of lancelets varies with the species; Branchiostoma belcheri and B. lanceolatum are among the largest. Except for size, the species are very similar in general appearance, differing mainly in the number of myotomes and the pigmentation of their larvae. They have a translucent, somewhat fish-like body, but without any paired fins or other limbs. A relatively poorly developed tail fin is present, so they are not especially good swimmers. While they do possess some cartilage material stiffening the gill slits, mouth, and tail, they have no true complex skeleton.
Nervous system and notochord
In common with vertebrates, lancelets have a hollow nerve cord running along the back, pharyngeal slits and a tail that runs past the anus. Also like vertebrates, the muscles are arranged in blocks called myomeres.
Unlike vertebrates, the dorsal nerve cord is protected not by bone but by a simpler notochord made up of a cylinder of cells closely packed in collagen fibers to form a toughened rod. The lancelet notochord, unlike the vertebrate spine, extends into the head. This gives the subphylum Cephalochordata its name (Greek kephalē means 'head'). The fine structure of the notochord and the cellular basis of its adult growth are best known for the Bahamas lancelet, Asymmetron lucayanum.
The nerve cord is only slightly larger in the head region than in the rest of the body, so that lancelets do not appear to possess a true brain. However, developmental gene expression and transmission electron microscopy indicate the presence of a diencephalic forebrain, a possible midbrain, and a hindbrain. Recent studies involving a comparison with vertebrates indicate that the vertebrate thalamus, pretectum, and midbrain areas jointly correspond to a single, combined region in the amphioxus, which has been termed di-mesencephalic primordium (DiMes).
Visual system
Lancelets have four known kinds of light-sensing structures: Joseph cells, Hesse organs, an unpaired anterior eye and lamellar body, all of which utilize opsins as light receptors. All of these organs and structures are located in the neural tube, with the frontal eye at the front, followed by the lamellar body, the Joseph cells, and the Hesse organs.
Joseph cells and Hesse organs
Joseph cells are bare photoreceptors surrounded by a band of microvilli. These cells bear the opsin melanopsin. The Hesse organs (also known as dorsal ocelli) consist of a photoreceptor cell surrounded by a band of microvilli and bearing melanopsin, but half enveloped by a cup-shaped pigment cell. The peak sensitivity of both cells is ~470 nm (blue).
Both the Joseph cells and Hesse organs are in the neural tube, the Joseph cells forming a dorsal column, the Hesse organs in the ventral part along the length of the tube. The Joseph cells extend from the caudal end of the anterior vesicle (or cerebral vesicle) to the boundary between myomeres three and four, where the Hesse organs begin and continue nearly to the tail.
Frontal eye
The frontal eye consists of a pigment cup, a group of photoreceptor cells (termed Row 1), three rows of neurons (Rows 2–4), and glial cells. The frontal eye, which expresses the PAX6 gene, has been proposed as the homolog of the vertebrate paired eyes or of the vertebrate pineal eye; the pigment cup as the homolog of the retinal pigment epithelium (RPE); the putative photoreceptors as homologs of vertebrate rods and cones; and the Row 2 neurons as homologs of the retinal ganglion cells.
The pigment cup is oriented concave dorsally. Its cells contain the pigment melanin.
The putative photoreceptor cells, Row 1, are arranged in two diagonal rows, one on either side of the pigment cup, symmetrically positioned with respect to the ventral midline. The cells are flask-shaped, with long, slender ciliary processes (one cilium per cell). The main bodies of the cells lie outside of the pigment cup, while the cilia extend into the pigment cup before turning and exiting. The cells bear the opsin c-opsin 1, except for a few which carry c-opsin 3.
The Row 2 cells are serotonergic neurons in direct contact with Row 1 cells. Row 3 and 4 cells are also neurons. Cells of all four rows have axons that project into the left and right ventrolateral nerves. For Row 2 neurons, axon projections have been traced to the tegmental neuropil. The tegmental neuropil has been compared with locomotor control regions of the vertebrate hypothalamus, where paracrine release modulates locomotor patterns such as feeding and swimming.
Fluorescent proteins
Lancelets naturally express green fluorescent proteins (GFPs) inside their oral tentacles and near the eye spot. Depending on the species, GFP can also be expressed in the tail and gonads, though this has been reported only in the genus Asymmetron. Multiple fluorescent protein genes have been recorded in lancelet species throughout the world; Branchiostoma floridae alone has 16 GFP-encoding genes. However, the GFP produced by lancelets is more similar to that of copepods than to that of jellyfish (Aequorea victoria).
GFP is suspected to play multiple roles in lancelets, such as attracting plankton towards the mouth. Since lancelets are filter feeders, the natural current would draw nearby plankton into the digestive tract. GFP is also expressed in larvae, suggesting it may be used for photoprotection by converting higher-energy blue light to less harmful green light.
The fluorescent proteins from lancelets have been adapted for use in molecular biology and microscopy. The yellow fluorescent protein from Branchiostoma lanceolatum exhibits unusually high quantum yield (~0.95). It has been engineered into a monomeric green fluorescent protein known as mNeonGreen, which is the brightest known monomeric green or yellow fluorescent protein.
Feeding and digestive system
Lancelets are passive filter feeders, spending most of the time half-buried in sand with only their frontal part protruding. They eat a wide variety of small planktonic organisms, such as bacteria, fungi, diatoms, and zooplankton, and they will also take detritus. Little is known about the diet of the lancelet larvae in the wild, but captive larvae of several species can be maintained on a diet of phytoplankton, although this apparently is not optimal for Asymmetron lucayanum.
Lancelets have oral cirri, thin tentacle-like strands that hang in front of the mouth and act as sensory devices and as a filter for the water passing into the body. Water passes from the mouth into the large pharynx, which is lined by numerous gill-slits. The ventral surface of the pharynx contains a groove called the endostyle, which, connected to a structure known as Hatschek's pit, produces a film of mucus. Ciliary action pushes the mucus in a film over the surface of the gill slits, trapping suspended food particles as it does so. The mucus is collected in a second, dorsal groove, known as the epipharyngeal groove, and passed back to the rest of the digestive tract. Having passed through the gill slits, the water enters an atrium surrounding the pharynx, then exits the body via the atriopore.
Both adults and larvae exhibit a "cough" reflex to clear the mouth or throat of debris or items too large to swallow. In larvae the action is mediated by the pharyngeal muscles while in the adult animal it is accomplished by atrial contraction.
The remainder of the digestive system consists of a simple tube running from the pharynx to the anus. The hepatic caecum, a single blind-ending caecum, branches off from the underside of the gut, with a lining able to phagocytize the food particles, a feature not found in vertebrates. Although it performs many functions of a liver, it is not considered a true liver but a homolog of the vertebrate liver.
Other systems
Lancelets have no respiratory system, breathing solely through their skin, which consists of a simple epithelium. Despite the name, little if any respiration occurs in the "gill" slits, which are solely devoted to feeding. The circulatory system does resemble that of primitive fish in its general layout, but is much simpler, and does not include a heart. There are no blood cells, and no hemoglobin.
The excretory system consists of segmented "kidneys" containing protonephridia instead of nephrons, and quite unlike those of vertebrates. Also unlike vertebrates, there are numerous, segmented gonads.
Model organism
Lancelets became famous in the 1860s when Ernst Haeckel began promoting them as a model for the ancestor of all vertebrates. By 1900, lancelets had become a model organism. By the mid-20th century they had fallen out of favor for a variety of reasons, including a decline of comparative anatomy and embryology, and due to the belief that lancelets were more derived than they appeared, e.g., the profound asymmetry in the larval stage. More recently, the fundamental symmetric and twisted development of vertebrates is the topic of the axial twist theory. According to this theory, there is a deep agreement between the vertebrates and cephalochordates, and even all chordates.
With the advent of molecular genetics lancelets are once again regarded as a model of vertebrate ancestors, and are used again as a model organism.
As a result of their use in science, methods of keeping and breeding lancelets in captivity have been developed for several species: initially the European Branchiostoma lanceolatum, but later also the West Pacific Branchiostoma belcheri and Branchiostoma japonicum, the Gulf of Mexico and West Atlantic Branchiostoma floridae, and the circumtropical Asymmetron lucayanum (though genetic evidence suggests its Atlantic and Indo-Pacific populations should be recognized as separate). They can reach an age of up to 7–8 years.
As human food
The animals are edible and are harvested in some parts of the world. They are eaten both fresh, tasting like herring, and as a food additive in dry form after being roasted in oil. When their gonads start to ripen in the spring, their flavor is affected, making them taste bad during the breeding season.
Phylogeny and taxonomy
The lancelets were traditionally seen as the sister lineage to the vertebrates; in turn, these two groups together (sometimes called Notochordata) were considered the sister group to the Tunicata (also called Urochordata and including sea squirts). Consistent with this view, at least ten morphological features are shared by lancelets and vertebrates, but not tunicates. Newer research suggests this pattern of evolutionary relationship is incorrect. Extensive molecular phylogenetic analysis has shown convincingly that the Cephalochordata is the most basal subphylum of the chordates, with tunicates being the sister group of the vertebrates. This revised phylogeny of chordates suggests that tunicates have secondarily lost some of the morphological characters that were formerly considered to be synapomorphies (shared, derived characters) of vertebrates and lancelets. Lancelets have turned out to be among the most genetically diverse animals sequenced to date, due to high rates of genetic changes like exon shuffling and domain combination.
Among the three extant (living) genera, Asymmetron is basal. Molecular clock studies have come to different conclusions on their divergence, with some suggesting that Asymmetron diverged from other lancelets more than 100 million years ago, while others suggest it occurred about 46 million years ago. According to the younger estimate, Branchiostoma and Epigonichthys diverged from each other about 38.3 million years ago. Despite this deep separation, hybrids between Asymmetron lucayanum and Branchiostoma floridae are viable, making them among the most deeply split species known to be able to produce such hybrids.
The following are the species recognised by WoRMS. Other sources recognize about thirty species. It is likely that currently unrecognized cryptic species remain.
Class Leptocardii
Family Branchiostomatidae Bonaparte 1846
Genus Asymmetron Andrews 1893 [Amphioxides Gill 1895]
Asymmetron inferum Nishikawa 2004
Asymmetron lucayanum Andrews 1893 (Sharptail lancelet)
Genus Branchiostoma Costa 1834 non Newport 1845 non Banks 1905 [Amphioxus Yarrell 1836; Limax Pallas 1774 non Linnaeus 1758 non Férussac 1819 non Martyn 1784; Dolichorhynchus Willey 1901 non Mulk & Jairajpuri 1974]
Branchiostoma africae Hubbs 1927
Branchiostoma arabiae Webb 1957
Branchiostoma bazarutense Gilchrist 1923
Branchiostoma belcheri (Gray 1847) (Belcher's lancelet)
Branchiostoma bennetti Boschung & Gunter 1966 (Mud lancelet)
Branchiostoma bermudae Hubbs 1922
Branchiostoma californiense Andrews 1893 (Californian lancelet)
Branchiostoma capense Gilchrist 1902
Branchiostoma caribaeum Sundevall 1853 (Caribbean lancelet)
Branchiostoma elongatum (Sundevall 1852)
Branchiostoma floridae Hubbs 1922 (Florida lancelet)
Branchiostoma gambiense Webb 1958
Branchiostoma indicum (Willey 1901)
Branchiostoma japonicum (Willey 1897) (Pacific lancelet)
Branchiostoma lanceolatum (Pallas 1774) (European lancelet)
Branchiostoma leonense Webb 1956
Branchiostoma longirostrum Boschung 1983 (Shellhash lancelet)
Branchiostoma malayanum Webb 1956
Branchiostoma moretonense Kelly 1966; nomen dubium
Branchiostoma nigeriense Webb 1955
Branchiostoma platae Hubbs 1922
Branchiostoma senegalense Webb 1955
Branchiostoma tattersalli Hubbs 1922
Branchiostoma virginiae Hubbs 1922 (Virginian lancelet)
Genus Epigonichthys Peters 1876 [Amphipleurichthys Whitley 1932; Bathyamphioxus Whitley 1932; Heteropleuron Kirkaldy 1895; Merscalpellus Whitley 1932; Notasymmetron Whitley 1932; Paramphioxus Haekel 1893; Zeamphioxus Whitley 1932]
Epigonichthys australis (Raff 1912)
Epigonichthys bassanus (Günther 1884)
Epigonichthys cingalensis (Kirkaldy 1894); nomen dubium
Epigonichthys cultellus Peters 1877
Epigonichthys hectori (Benham 1901) (Hector's lancelet)
Epigonichthys maldivensis (Foster Cooper 1903)
The phylogeny (family tree) of lancelets can be illustrated as a cladogram following a simplified version of the relationships found by Igawa, T.; M. Nozawa; D.G. Suzuki; J.D. Reimer; A.R. Morov; Y. Wang; Y. Henmi; K. Yasui (2017).
Medicinal chemistry
Medicinal or pharmaceutical chemistry is a scientific discipline at the intersection of chemistry and pharmacy involved with designing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships (QSAR).
Medicinal chemistry is a highly interdisciplinary science combining organic chemistry with biochemistry, computational chemistry, pharmacology, molecular biology, statistics, and physical chemistry.
Compounds used as medicines are most often organic compounds, which are often divided into the broad classes of small organic molecules (e.g., atorvastatin, fluticasone, clopidogrel) and "biologics" (infliximab, erythropoietin, insulin glargine), the latter of which are most often medicinal preparations of proteins (natural and recombinant antibodies, hormones, etc.). Medicines can also be inorganic and organometallic compounds, commonly referred to as metallodrugs (e.g., platinum-, lithium-, and gallium-based agents such as cisplatin, lithium carbonate, and gallium nitrate, respectively). The discipline of medicinal inorganic chemistry investigates the role of metals in medicine (metallotherapeutics), which involves the study and treatment of diseases and health conditions associated with inorganic metals in biological systems. Several metallotherapeutics are approved for the treatment of cancer (e.g., agents containing Pt, Ru, Gd, Ti, Ge, V, and Ga), as antimicrobials (e.g., Ag, Cu, and Ru), for diabetes (e.g., V and Cr), as broad-spectrum antibiotics (e.g., Bi), and for bipolar disorder (e.g., Li). Other areas of study include metallomics, genomics, proteomics, diagnostic agents (e.g., MRI: Gd, Mn; X-ray: Ba, I), and radiopharmaceuticals (e.g., 99mTc for diagnostics, 186Re for therapeutics).
In particular, medicinal chemistry in its most common practice—focusing on small organic molecules—encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure–activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.
At the biological interface, medicinal chemistry combines to form a set of highly interdisciplinary sciences, setting its organic, physical, and computational emphases alongside biological areas such as biochemistry, molecular biology, pharmacognosy and pharmacology, toxicology and veterinary and human medicine; these, with project management, statistics, and pharmaceutical business practices, systematically oversee altering identified chemical agents such that after pharmaceutical formulation, they are safe and efficacious, and therefore suitable for use in treatment of disease.
In the path of drug discovery
Discovery
Discovery is the identification of novel active chemical compounds, often called "hits", which are typically found by assaying compounds for a desired biological activity. Initial hits can come from repurposing existing agents toward new pathologic processes and from observations of biologic effects of new or existing natural products from bacteria, fungi, plants, etc. In addition, hits routinely originate from structural observations of small-molecule "fragments" bound to therapeutic targets (enzymes, receptors, etc.), where the fragments serve as starting points for developing more chemically complex forms by synthesis. Finally, hits regularly originate from en-masse testing of chemical compounds against biological targets using biochemical or chemoproteomic assays, where the compounds may come from novel synthetic chemical libraries known to have particular properties (kinase inhibitory activity, diversity or drug-likeness, etc.), or from historic chemical compound collections or libraries created through combinatorial chemistry. While a number of approaches toward the identification and development of hits exist, the most successful techniques are based on chemical and biological intuition developed in team environments through years of rigorous practice aimed solely at discovering new therapeutic agents.
Hit to lead and lead optimization
Further chemistry and analysis are necessary, first to triage out the hit compounds that do not provide series displaying suitable SAR and chemical characteristics associated with long-term development potential, and then to improve the remaining hit series with respect to the desired primary activity, as well as secondary activities and physicochemical properties, such that the agent will be useful when administered to real patients. In this regard, chemical modifications can improve the recognition and binding geometries (pharmacophores) of the candidate compounds, and hence their affinities for their targets, as well as the physicochemical properties of the molecule that underlie the necessary pharmacokinetic/pharmacodynamic (PK/PD) and toxicologic profiles (stability toward metabolic degradation; lack of geno-, hepato-, and cardiotoxicity; etc.), such that the chemical compound or biologic is suitable for introduction into animal and human studies.
Process chemistry and development
The final synthetic chemistry stages involve the production of a lead compound in suitable quantity and quality to allow large scale animal testing, and then human clinical trials. This involves the optimization of the synthetic route for bulk industrial production, and discovery of the most suitable drug formulation. The former of these is still the bailiwick of medicinal chemistry, the latter brings in the specialization of formulation science (with its components of physical and polymer chemistry and materials science). The synthetic chemistry specialization in medicinal chemistry aimed at adaptation and optimization of the synthetic route for industrial scale syntheses of hundreds of kilograms or more is termed process synthesis, and involves thorough knowledge of acceptable synthetic practice in the context of large scale reactions (reaction thermodynamics, economics, safety, etc.). Critical at this stage is the transition to more stringent GMP requirements for material sourcing, handling, and chemistry.
Synthetic analysis
The synthetic methodology employed in medicinal chemistry is subject to constraints that do not apply to traditional organic synthesis. Owing to the prospect of scaling the preparation, safety is of paramount importance. The potential toxicity of reagents affects methodology.
Structural analysis
The structures of pharmaceuticals are assessed in many ways, in part as a means to predict efficacy, stability, and accessibility. Lipinski's rule of five focuses on the number of hydrogen bond donors and acceptors, the number of rotatable bonds, surface area, and lipophilicity. Other parameters by which medicinal chemists assess or classify their compounds include synthetic complexity, chirality, flatness, and aromatic ring count.
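As an illustration of how such parameters are computed in practice, the sketch below uses the open-source RDKit toolkit to check a molecule against rule-of-five-style cutoffs. The specific thresholds and the example SMILES string are illustrative assumptions, not part of the original text:

```python
# Minimal sketch using RDKit (https://www.rdkit.org) to compute
# common rule-of-five descriptors for a candidate molecule.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_report(smiles: str) -> dict:
    """Return rule-of-five descriptors and a pass/fail flag for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    props = {
        "mol_weight": Descriptors.MolWt(mol),             # daltons; cutoff < 500
        "logP": Descriptors.MolLogP(mol),                 # Crippen logP; cutoff < 5
        "h_bond_donors": Lipinski.NumHDonors(mol),        # cutoff <= 5
        "h_bond_acceptors": Lipinski.NumHAcceptors(mol),  # cutoff <= 10
    }
    props["passes_rule_of_five"] = (
        props["mol_weight"] < 500
        and props["logP"] < 5
        and props["h_bond_donors"] <= 5
        and props["h_bond_acceptors"] <= 10
    )
    return props

# Example: aspirin (acetylsalicylic acid)
print(rule_of_five_report("CC(=O)Oc1ccccc1C(=O)O"))
```

In practice such filters are applied in bulk to screening libraries as a cheap first pass, long before any compound is synthesized.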
Structural analysis of lead compounds is often performed through computational methods prior to actual synthesis of the ligand(s), chiefly for reasons of time and cost. Once the ligand of interest has been synthesized in the laboratory, analysis is performed by traditional methods (TLC, NMR, GC/MS, and others).
Training
Medicinal chemistry is by nature an interdisciplinary science, and practitioners have a strong background in organic chemistry, which must eventually be coupled with a broad understanding of biological concepts related to cellular drug targets. Scientists in medicinal chemistry are principally industrial scientists (but see below), working as part of an interdisciplinary team that uses their chemistry abilities, especially their synthetic abilities, to apply chemical principles to the design of effective therapeutic agents. The training is long and intense: practitioners are often required to attain a 4-year bachelor's degree followed by a 4–6-year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship of two or more years after receiving a Ph.D. in chemistry, making the total length of training range from 10 to 12 years of higher education. However, employment opportunities at the master's level also exist in the pharmaceutical industry, and at that and the Ph.D. level there are further opportunities for employment in academia and government.
Graduate level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry, rather than medicinal chemistry, and the preponderance of positions are in research, where the net is necessarily cast widest, and most broad synthetic activity occurs.
In research of small molecule therapeutics, an emphasis on training that provides for breadth of synthetic experience and "pace" of bench operations is clearly present (e.g., for individuals with pure synthetic organic and natural products synthesis in Ph.D. and post-doctoral positions, ibid.). In the medicinal chemistry specialty areas associated with the design and synthesis of chemical libraries or the execution of process chemistry aimed at viable commercial syntheses (areas generally with fewer opportunities), training paths are often much more varied (e.g., including focused training in physical organic chemistry, library-related syntheses, etc.).
As such, most entry-level workers in medicinal chemistry, especially in the U.S., do not have formal training in medicinal chemistry but receive the necessary medicinal chemistry and pharmacologic background after employment—at entry into their work in a pharmaceutical company, where the company provides its particular understanding or model of "medichem" training through active involvement in practical synthesis on therapeutic projects. (The same is somewhat true of computational medicinal chemistry specialties, but not to the same degree as in synthetic areas.)
Flight recorder
A flight recorder is an electronic recording device placed in an aircraft for the purpose of facilitating the investigation of aviation accidents and incidents. The device may often be referred to colloquially as a "black box", an outdated name that has become a misnomer: flight recorders are now required to be painted bright orange to aid in their recovery after accidents.
There are two types of flight recording devices: the flight data recorder (FDR) preserves the recent history of the flight through the recording of dozens of parameters collected several times per second; the cockpit voice recorder (CVR) preserves the recent history of the sounds in the cockpit, including the conversation of the pilots. The two devices may be combined into a single unit. Together, the FDR and CVR objectively document the aircraft's flight history, which may assist in any later investigation.
The two flight recorders are required by international regulation, overseen by the International Civil Aviation Organization, to be capable of surviving the conditions likely to be encountered in a severe aircraft accident. For this reason, they are typically specified to withstand an impact of 3,400 g and extremely high temperatures, as required by EUROCAE ED-112. They have been a mandatory requirement in commercial aircraft in the United States since 1967. After the unexplained disappearance of Malaysia Airlines Flight 370 in 2014, commentators called for live streaming of flight data to the ground, as well as extending the battery life of the underwater locator beacons.
History
In seafaring, a device that recorded the positions of vessels in case of an accident was patented by John Sen Inches Thomson in January 1897.
Early designs
One of the earliest proven attempts was made by François Hussenot and Paul Beaudouin in 1939 at the Marignane flight test center, France, with their "type HB" flight recorder. These were essentially photograph-based flight recorders, the record being made on a long scroll of photographic film. The latent image was made by a thin ray of light deflected by a mirror tilted according to the magnitude of the data to be recorded (altitude, speed, etc.). A pre-production run of 25 "HB" recorders was ordered in 1941, and HB recorders remained in use in French flight test centers well into the 1970s.
In 1947, Hussenot founded the Société Française des Instruments de Mesure (SFIM) with Beaudouin and another associate to market his invention, which was also known as the "hussenograph". This company went on to become a major supplier of data recorders, used not only aboard aircraft but also on trains and other vehicles. SFIM is today part of the Safran group and is still present in the flight recorder market. The advantage of the film technology was that it could easily be developed afterwards and provided a durable visual record of the flight parameters without any playback device. On the other hand, unlike magnetic tape or later flash memory-based technology, photographic film cannot be erased and reused, and so had to be changed periodically. The technology was therefore reserved for one-shot uses, mostly during planned test flights; it was not mounted aboard civilian aircraft during routine commercial flights. Cockpit conversation was not recorded either.
Another form of flight data recorder was developed in the UK during World War II. Len Harrison and Vic Husband developed a unit that could withstand a crash and fire to keep the flight data intact. The unit was the forerunner of today's recorders, in being able to withstand conditions that aircrew could not. It used copper foil as the recording medium, with various styli, corresponding to various instruments or aircraft controls, indenting the foil. The foil was periodically advanced at set time intervals, giving a history of the aircraft's instrument readings and control settings. The unit was developed at Farnborough for the Ministry of Aircraft Production. At the war's end the Ministry got Harrison and Husband to sign over their invention to it and the Ministry patented it under British patent 19330/45.
The first modern flight data recorder, called "Mata-Hari", was created in 1942 by Finnish aviation engineer Veijo Hietala. This black high-tech mechanical box was able to record all required data during test flights of fighter aircraft that the Finnish Air Force repaired or built in its main aviation factory in Tampere, Finland.
During World War II both British and American air forces successfully experimented with aircraft voice recorders. In August 1943 the USAAF conducted an experiment with a magnetic wire recorder to capture the inter-phone conversations of a B-17 bomber flight crew on a combat mission over Nazi-occupied France. The recording was broadcast back to the United States by radio two days afterwards.
Australian designs
In 1953, while working at the Aeronautical Research Laboratories (ARL) of the Defence Science and Technology Organisation in Melbourne, Australian research scientist David Warren conceived a device that would record not only the instrument readings, but also the voices in the cockpit. In 1954 he published a report entitled "A Device for Assisting Investigation into Aircraft Accidents".
Warren built a prototype FDR called "The ARL Flight Memory Unit" in 1956, and in 1958 he built the first combined FDR/CVR prototype. It was designed with civilian aircraft in mind, explicitly for post-crash examination purposes. Aviation authorities from around the world were largely uninterested at first, but this changed in 1958 when Sir Robert Hardingham, the secretary of the British Air Registration Board, visited the ARL and was introduced to David Warren. Hardingham realized the significance of the invention and arranged for Warren to demonstrate the prototype in the UK.
The ARL assigned an engineering team to help Warren develop the prototype to the airborne stage. The team, consisting of electronics engineers Lane Sear, Wally Boswell, and Ken Fraser, developed a working design that incorporated a fire-resistant and shockproof case, a reliable system for encoding and recording aircraft instrument readings and voice on one wire, and a ground-based decoding device. The ARL system, made by the British firm of S. Davall & Sons Ltd, in Middlesex, was named the "Red Egg" because of its shape and bright red color.
The units were redesigned in 1965 and relocated to the rear of aircraft to increase the probability of successful data retrieval after a crash.
Carriage of data recording equipment became mandatory in UK-registered aircraft in two phases: the first, for new turbine-engined public transport category aircraft above a specified weight, was mandated in 1965, with a further requirement in 1966 for heavier piston-engined transports, and the earlier requirement extended to all jet transports. One of the first UK uses of data recovered from an aircraft accident involved the Royston "Midas" data recorder on board the British Midland Argonaut involved in the Stockport air disaster of 1967.
American designs
A flight recorder was invented and patented in the United States by James J. Ryan. Ryan's "Flight Recorder" patent was filed in August 1953 and approved on November 8, 1960, as US Patent 2,959,459. A second patent by Ryan for a "Coding Apparatus For Flight Recorders" is US Patent 3,075,192 dated January 22, 1963.
A "Cockpit Sound Recorder" (CSR) was independently invented and patented by Edmund A. Boniface Jr., an aeronautical engineer at Lockheed Aircraft Corporation. He originally filed with the US Patent Office on February 2, 1961, as an "Aircraft Cockpit Sound Recorder". The 1961 invention was viewed by some as an "invasion of privacy". Subsequently, Boniface filed again on February 4, 1963, for a "Cockpit Sound Recorder" (US Patent 3,327,067) with the addition of a spring-loaded switch which allowed the pilot to erase the audio/sound tape recording at the conclusion of a safe flight and landing.
Boniface had participated in aircraft crash investigations in the 1940s and in the investigations into the loss of a wing at cruise altitude on each of two Lockheed Electra turboprop aircraft (Flight 542, operated by Braniff Airlines in 1959, and Flight 710, operated by Northwest Orient Airlines in 1961). This led him to wonder what the pilots may have said just prior to the wing loss and during the descent, as well as the type and nature of any sounds or explosions that may have preceded or accompanied the wing loss.
His patent was for a device for recording audio of pilot remarks and engine or other sounds to be "contained with the in-flight recorder within a sealed container that is shock mounted, fireproofed and made watertight" and "sealed in such a manner as to be capable of withstanding extreme temperatures during a crash fire". The CSR was an analog device which provided a continuous erasing/recording loop (lasting 30 or more minutes) of all sounds (explosion, voice, and the noise of any aircraft structural components undergoing serious fracture and breakage) which could be overheard in the cockpit.
On November 1, 1966, the director of the Bureau of Safety of the Civil Aeronautics Board Bobbie R. Allen and the chief of Technical Services Section John S. Leak presented "The Potential Role of Flight Recorders in Aircraft Accident Investigation" at the AIAA/CASI Joint Meeting on Aviation Safety, Toronto, Canada.
Terminology
The term "black box" was a World War II British phrase, originating with the development of radio, radar, and electronic navigational aids in British and Allied combat aircraft. These often-secret electronic devices were encased in non-reflective black boxes or housings. The earliest identified reference to "black boxes" occurs in a May 1945 Flight article, "Radar for Airlines", describing the application of wartime RAF radar and navigational aids to civilian aircraft: "The stowage of the 'black boxes' and, even more important, the detrimental effect on performance of external aerials, still remain as a radio and radar problem." (The term "black box" is used with a different meaning in science and engineering, describing a system exclusively by its inputs and outputs, with no information whatsoever about its inner workings.)
Magnetic tape and wire voice recorders had been tested on RAF and USAAF bombers by 1943, adding to the assemblage of fielded and experimental electronic devices employed on Allied aircraft. As early as 1944, aviation writers envisioned the use of these recording devices on commercial aircraft to aid incident investigations. When modern flight recorders were proposed to the British Aeronautical Research Council in 1958, the term "black box" was in colloquial use among experts.
By 1967, when flight recorders were mandated by leading aviation countries, the expression had found its way into general use: "These so-called 'black boxes' are, in fact, of fluorescent flame-orange in colour." The formal names of the devices are flight data recorder and cockpit voice recorder. The recorders must be housed in boxes that are bright orange in color to make them more visually conspicuous in the debris after an accident.
Components
Flight data recorder
A flight data recorder (FDR; also ADR, for accident data recorder) is an electronic device employed to record instructions sent to any electronic systems on an aircraft.
The data recorded by the FDR are used for accident and incident investigation. Due to their importance in investigating accidents, these ICAO-regulated devices are carefully engineered and constructed to withstand the force of a high speed impact and the heat of an intense fire. Contrary to the popular term "black box", the exterior of the FDR is coated with heat-resistant bright orange paint for high visibility in wreckage, and the unit is usually mounted in the aircraft's tail section, where it is more likely to survive a crash. Following an accident, the recovery of the FDR is usually a high priority for the investigating body, as analysis of the recorded parameters can often detect and identify causes or contributing factors.
Modern-day FDRs receive inputs via specific data frames from the flight-data acquisition units. They record significant flight parameters, including the control and actuator positions, engine information, and time of day. There are 88 parameters required as a minimum under current US federal regulations (only 29 were required until 2002), but some systems monitor many more variables. Generally, each parameter is recorded a few times per second, though some units store "bursts" of data at a much higher frequency if the data begin to change quickly. Most FDRs record approximately 17–25 hours of data in a continuous loop. Regulations require an annual FDR verification check (readout) to confirm that all mandatory parameters are being recorded. Many aircraft today are equipped with an "event" button in the cockpit that can be activated by the crew if an abnormality occurs in flight. Pushing the button places a signal on the recording, marking the time of the event.
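To make the continuous-loop behaviour concrete, here is a minimal sketch of a fixed-duration circular buffer that silently overwrites the oldest samples once full; the 25-hour duration and 8 Hz sample rate are illustrative assumptions, not any vendor's design:

```python
from collections import deque

class LoopRecorder:
    """Fixed-duration recorder: once full, the oldest samples are overwritten."""

    def __init__(self, duration_s: float, rate_hz: float):
        # A bounded deque drops its oldest entry when a new one is appended,
        # mimicking the continuous erase/record loop described above.
        self.buffer = deque(maxlen=int(duration_s * rate_hz))

    def record(self, timestamp: float, sample: dict) -> None:
        self.buffer.append((timestamp, sample))

    def mark_event(self, timestamp: float) -> None:
        # Analogue of the cockpit "event" button: a marker in the data stream.
        self.buffer.append((timestamp, {"EVENT": True}))

# Illustrative: 25 hours of parameters sampled 8 times per second.
fdr = LoopRecorder(duration_s=25 * 3600, rate_hz=8)
fdr.record(0.0, {"altitude_ft": 35000, "heading_deg": 270})
```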
Modern FDRs are typically double-wrapped in strong corrosion-resistant stainless steel or titanium, with high-temperature insulation inside. They are accompanied by an underwater locator beacon that emits an ultrasonic "ping" to aid in detection when submerged. These beacons operate for up to 30 days and are able to operate while immersed to a depth of up to .
Cockpit voice recorder
A cockpit voice recorder (CVR) is a flight recorder used to record the audio environment in the flight deck of an aircraft for the purpose of investigation of accidents and incidents. This is typically achieved by recording the signals of the microphones and earphones of the pilots' headsets and of an area microphone in the roof of the cockpit. The current applicable FAA TSO is C123b titled Cockpit Voice Recorder Equipment.
Where an aircraft is required to carry a CVR and uses digital communications, the CVR is required to record such communications with air traffic control unless they are recorded elsewhere. It is an FAA requirement that the recording duration be a minimum of two hours. The European Aviation Safety Agency increased the recording duration to 25 hours in 2021. In 2023, the FAA proposed extending requirements to 25 hours to help in investigations like runway incursions. In a January 2024 press conference on Alaska Airlines Flight 1282, National Transportation Safety Board (NTSB) chair Jennifer Homendy again called for extending retention to 25 hours, rather than the currently mandated two hours, on all existing devices rather than only newly manufactured ones.
A standard CVR is capable of recording four channels of audio data for a period of two hours. The original requirement was for a CVR to record for 30 minutes, but this has been found to be insufficient in many cases because significant parts of the audio data needed for a subsequent investigation occurred more than 30 minutes before the end of the recording.
The earliest CVRs used analog wire recording, later replaced by analog magnetic tape. Some of the tape units used two reels, with the tape automatically reversing at each end. The original was the ARL Flight Memory Unit produced in 1957 by Australian David Warren and instrument maker Tych Mirfield.
Other units used a single reel, with the tape spliced into a continuous loop, much as in an 8-track cartridge. The tape would circulate and old audio information would be overwritten every 30 minutes. Recovery of sound from magnetic tape often proves difficult if the recorder is recovered from water and its housing has been breached. Thus, the latest designs employ solid-state memory and use fault tolerant digital recording techniques, making them much more resistant to shock, vibration and moisture. With the reduced power requirements of solid-state recorders, it is now practical to incorporate a battery in the units, so that recording can continue until flight termination, even if the aircraft electrical system fails.
Like the FDR, the CVR is typically mounted in the rear of the airplane fuselage to maximize the likelihood of its survival in a crash.
Combined units
With the advent of digital recorders, the FDR and CVR can be manufactured in one fireproof, shock proof, and waterproof container as a combined digital cockpit voice and data recorder (CVDR). Currently, CVDRs are manufactured by L3Harris Technologies and Hensoldt among others.
Solid-state recorders became commercially practical in 1990, having the advantage of not requiring scheduled maintenance and making the data easier to retrieve. Solid-state technology was extended to the two-hour voice recording in 1995.
Additional equipment
Since the 1970s, most large civil jet transports have been additionally equipped with a "quick access recorder" (QAR). This records data on a removable storage medium. Access to the FDR and CVR is necessarily difficult because they must be fitted where they are most likely to survive an accident; they also require specialized equipment to read the recording. The QAR recording medium is readily removable and is designed to be read by equipment attached to a standard desktop computer. In many airlines, the quick access recordings are scanned for "events", an event being a significant deviation from normal operational parameters. This allows operational problems to be detected and eliminated before an accident or incident results.
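A minimal sketch of the kind of exceedance scan described above; the parameter names and limits are invented for illustration and do not come from any airline's actual event set:

```python
# Hypothetical exceedance scan over quick-access-recorder samples.
LIMITS = {
    "vertical_accel_g": (0.5, 2.0),   # (min, max) normal range, assumed
    "pitch_deg": (-15.0, 25.0),
}

def scan_for_events(samples):
    """Yield (timestamp, parameter, value) for each out-of-limits sample."""
    for timestamp, frame in samples:
        for name, (lo, hi) in LIMITS.items():
            value = frame.get(name)
            if value is not None and not (lo <= value <= hi):
                yield (timestamp, name, value)

flight = [(12.0, {"vertical_accel_g": 1.1}), (13.0, {"vertical_accel_g": 2.4})]
print(list(scan_for_events(flight)))  # [(13.0, 'vertical_accel_g', 2.4)]
```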
A flight-data acquisition unit (FDAU) is a unit that receives various discrete, analog and digital parameters from a number of sensors and avionic systems and then routes them to the FDR and, if installed, to the QAR. Information from the FDAU to the FDR is sent via specific data frames, which depend on the aircraft manufacturer.
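Frame layouts are proprietary and manufacturer-specific, but the idea can be sketched as each parameter owning a fixed word slot in a fixed-length subframe; the 64-word size and slot assignments below are assumptions for illustration only:

```python
# Illustrative fixed-layout subframe; slot numbers are invented.
SLOT_MAP = {"altitude_ft": 3, "airspeed_kt": 7, "heading_deg": 12}

def build_subframe(values: dict, words: int = 64) -> list:
    """Pack named parameters into their fixed slots; unused slots stay 0."""
    frame = [0] * words
    for name, slot in SLOT_MAP.items():
        frame[slot] = int(values.get(name, 0))
    return frame

subframe = build_subframe({"altitude_ft": 35000, "airspeed_kt": 460})
assert subframe[3] == 35000 and subframe[7] == 460
```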
Many modern aircraft systems are digital or digitally controlled. Very often, the digital system will include built-in test equipment which records information about the operation of the system. This information may also be accessed to assist with the investigation of an accident or incident.
Specifications
The design of today's FDR is governed by the internationally recognized standards and recommended practices relating to flight recorders contained in ICAO Annex 6, which makes reference to industry crashworthiness and fire protection specifications such as those found in the European Organisation for Civil Aviation Equipment documents EUROCAE ED-55, ED-56A and ED-112 (Minimum Operational Performance Specification for Crash Protected Airborne Recorder Systems). In the United States, the Federal Aviation Administration (FAA) regulates all aspects of US aviation, and cites design requirements in its Technical Standard Orders, based on the EUROCAE documents (as do the aviation authorities of many other countries).
Currently, EUROCAE specifies that a recorder must be able to withstand an acceleration of 3,400 g (33 km/s²) for 6.5 milliseconds. This is roughly equivalent to an impact velocity of and a deceleration or crushing distance of . Additionally, there are requirements for penetration resistance, static crush, high and low temperature fires, deep sea pressure, sea water immersion, and fluid immersion.
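As a back-of-envelope check, assuming a constant deceleration of 3,400 g sustained for the full 6.5 ms, the implied velocity change and stopping distance are:

```latex
a = 3400\,g \approx 3.34 \times 10^{4}\ \mathrm{m/s^2}, \qquad
\Delta v = a\,t \approx 3.34 \times 10^{4} \times 6.5 \times 10^{-3}
         \approx 217\ \mathrm{m/s},
\qquad
d = \tfrac{1}{2} a t^{2} \approx \tfrac{1}{2} \times 3.34 \times 10^{4}
    \times \left(6.5 \times 10^{-3}\right)^{2} \approx 0.7\ \mathrm{m}.
```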
EUROCAE ED-112 (Minimum Operational Performance Specification for Crash Protected Airborne Recorder Systems) defines the minimum specification to be met for all aircraft requiring flight recorders for recording of flight data, cockpit audio, images and CNS / ATM digital messages and used for investigations of accidents or incidents. When issued in March 2003, ED-112 superseded previous ED-55 and ED-56A that were separate specifications for FDR and CVR. FAA TSOs for FDR and CVR reference ED-112 for characteristics common to both types.
In order to facilitate recovery of the recorder from an aircraft accident site, they are required to be coloured bright yellow or orange with reflective surfaces. All are lettered "Flight recorder do not open" on one side in English and "Enregistreur de vol ne pas ouvrir" in French on the other side. To assist recovery from submerged sites they must be equipped with an underwater locator beacon which is automatically activated in the event of an accident.
Regulation
The first regulatory attempt to require flight data recorders occurred in April 1941, when the Civil Aeronautics Board (CAB) required flight recorders on passenger aircraft that would record the aircraft's altitude and whether the radio transmitter was turned on or off. The compliance deadline for that regulation was extended several times, until June 1944, when the requirement was rescinded due to maintenance problems and the lack of parts during World War II. A similar regulation was adopted in September 1947, which required recorders in aircraft of or more, but that requirement was again rescinded in July 1948 because of a lack of availability of reliable devices. In August 1957, the CAB adopted amendments to flight regulations that required the installation of flight recorders by July 1958 in all aircraft over and that were operated at altitudes over 25,000 feet. The requirements were further amended in September 1959, requiring the retention of records for 60 days, and the operation of the flight recorders continuously from the start of the takeoff roll to the completion of the landing roll.
In the investigation of the 1960 crash of Trans Australia Airlines Flight 538 at Mackay, Queensland, the inquiry judge strongly recommended that flight recorders be installed in all Australian airliners. Australia became the first country in the world to make cockpit-voice recording compulsory.
The United States' first cockpit voice recorder rules were passed in 1964, requiring all turbine and piston aircraft with four or more engines to have CVRs by March 1, 1967. It is an FAA requirement that the CVR recording duration be a minimum of two hours, following the NTSB recommendation that it be increased from the previously mandated 30-minute duration. Since 2014, the United States has required flight data recorders and cockpit voice recorders on aircraft that have 20 or more passenger seats, or on aircraft that have six or more passenger seats, are turbine-powered, and require two pilots.
For US air carriers and manufacturers, the NTSB is responsible for investigating accidents and safety-related incidents. The NTSB also serves in an advisory role for many international investigations not under its formal jurisdiction. The NTSB does not have regulatory authority, but must depend on legislation and other government agencies to act on its safety recommendations. In addition, after the public outcry that followed recordings released for the crash of Delta Air Lines Flight 1141 in 1988, 49 USC Section 1114(c) prohibits the NTSB from making the audio recordings public except when related to a safety investigation, and in such cases the release is only in the form of a written transcript.
The ARINC Standards are prepared by the Airlines Electronic Engineering Committee (AEEC). The 700 Series of standards describe the form, fit, and function of avionics equipment installed predominately on transport category aircraft. The FDR is defined by ARINC Characteristic 747. The CVR is defined by ARINC Characteristic 757.
Post-incident overwriting of voice data by Nigerian crews led to a 2023 All Operators Letter reinforcing that this practice is forbidden.
Proposed requirements
Deployable recorders
The NTSB recommended in 1999 that operators be required to install two sets of CVDR systems, with the second CVDR designed to be ejected from the aircraft prior to impact with the ground or water. Ejection would be initiated by computer based on sensor information indicating an accident is imminent. A deployable recorder combines the cockpit voice/flight data recorders and an emergency locator transmitter (ELT) in a single unit. The unit would be designed to eject and float away from the aircraft and survive its descent to the ground, or float on water indefinitely. It would be equipped with satellite technology to aid in prompt recovery. Deployable CVDR technology has been used by the US Navy since 1993.
While the recommendations would involve a massive, expensive retrofit program, proponents argued that government funding would meet cost objections from manufacturers and airlines: operators would receive both sets of recorders (including the currently used fixed recorder) free of charge. The cost of the second deployable/ejectable CVDR (or black box) was estimated at US$30 million for installation in 500 new aircraft (about $60,000 per new commercial plane).
In the United States, the proposed SAFE Act calls for implementing the NTSB 1999 recommendations. However, so far the proposed legislation has failed to pass Congress, having been introduced in 2003 (H.R. 2632), in 2005 (H.R. 3336), and in 2007 (H.R. 4336). Originally the Safe Aviation Flight Enhancement (SAFE) Act of 2003 was introduced on June 26, 2003, by Congressman David Price (D-NC) and Congressman John Duncan (R-Tenn.) in a bipartisan effort to ensure investigators have access to information immediately following accidents to transport category aircraft.
On July 19, 2005, a revised proposal for a SAFE Act was introduced and referred to the Committee on Transportation and Infrastructure of the US House of Representatives. The bill was referred to the House Subcommittee on Aviation during the 108th, 109th, and 110th Congresses.
After Malaysia Airlines Flight 370
In the United States, on March 12, 2014, in response to the missing Malaysia Airlines Flight 370, David Price re-introduced the SAFE Act in the US House of Representatives.
The disappearance of Malaysia Airlines Flight 370 demonstrated the limits of contemporary flight recorder technology, namely how physical possession of the flight recorder device is necessary to help investigate the cause of an aircraft incident. Considering the advances of modern communication technology, commentators called for flight recorders to be supplemented or replaced by a system that provides "live streaming" of data from the aircraft to the ground. Furthermore, commentators called for the underwater locator beacon's range and battery life to be extended, as well as the outfitting of civil aircraft with the deployable flight recorders typically used in military aircraft. Prior to MH370, the investigators of the 2009 crash of Air France Flight 447 had urged that the battery life be extended as "rapidly as possible" after that crash's flight recorders went unrecovered for over a year.
After Indonesia AirAsia Flight 8501
On December 28, 2014, Indonesia AirAsia Flight 8501, en route from Surabaya, Indonesia, to Singapore, crashed in bad weather, killing all 155 passengers and seven crew on board.
On January 8, 2015, before the recovery of the flight recorders, an anonymous ICAO representative said: "The time has come that deployable recorders are going to get a serious look." A second ICAO official said that public attention had "galvanized momentum in favour of ejectable recorders on commercial aircraft".
Boeing 737 MAX
Live flight data streaming, as demonstrated on the Boeing 777F ecoDemonstrator, plus 20 minutes of data before and after a triggering event, could have removed the uncertainty before the Boeing 737 MAX groundings following the March 2019 Ethiopian Airlines Flight 302 crash. In the Alaska Airlines Flight 1282 accident, the cockpit voice recorder functioned properly, but because it remained powered and running after the incident, more than two hours of post-incident sounds overwrote the critical accident audio before a maintenance crew could enter the aircraft and power the CVR down.
Image recorders
The NTSB has asked for the installation of cockpit image recorders in large transport aircraft to provide information that would supplement existing CVR and FDR data in accident investigations. They have recommended that image recorders be placed into smaller aircraft that are not required to have a CVR or FDR. The rationale is that what is seen on an instrument by the pilots of an aircraft is not necessarily the same as the data sent to the display device. This is particularly true of aircraft equipped with electronic displays (CRT or LCD). A mechanical instrument panel is likely to preserve its last indications, but this is not the case with an electronic display. Such systems, estimated to cost less than $8,000 installed, typically consist of a camera and microphone located in the cockpit to continuously record cockpit instrumentation, the outside viewing area, engine sounds, radio communications, and ambient cockpit sounds. As with conventional CVRs and FDRs, data from such a system is stored in a crash-protected unit to ensure survivability. Since the recorders can sometimes be crushed into unreadable pieces, or even located in deep water, some modern units are self-ejecting (taking advantage of kinetic energy at impact to separate themselves from the aircraft) and also equipped with radio emergency locator transmitters and sonar underwater locator beacons to aid in their location.
Cultural references
The artwork for the band Rammstein's album Reise, Reise is made to look like a CVR; it also includes a recording from a crash. The recording is from the last 1–2 minutes of the CVR of Japan Air Lines Flight 123, which crashed on August 12, 1985, killing 520 people; JAL123 is the deadliest single-aircraft disaster in history.
Members of the performing arts collective Collective:Unconscious made a theatrical presentation of a play called Charlie Victor Romeo with a script based on transcripts from CVR voice recordings of nine aircraft emergencies. The play features the famous United Airlines Flight 232 that crash-landed in a cornfield near Sioux City, Iowa, after suffering a catastrophic failure of one engine and most flight controls.
Survivor, a novel by American author Chuck Palahniuk, is about a cult member who dictates his life story to a flight recorder before the plane runs out of fuel and crashes.
In stand-up comedy, many jokes have been made asking why the entire airplane is not made out of the material used to make black boxes, given that the black box survives the crash. This is referenced in the 2001 Chris Rock movie Down to Earth, although the original joke is widely credited to George Carlin.
| Technology | Aircraft components | null |
938631 | https://en.wikipedia.org/wiki/Antineutron | Antineutron | The antineutron is the antiparticle of the neutron with symbol . It differs from the neutron only in that some of its properties have equal magnitude but opposite sign. It has the same mass as the neutron, and no net electric charge, but has opposite baryon number (+1 for neutron, −1 for the antineutron). This is because the antineutron is composed of antiquarks, while neutrons are composed of quarks. The antineutron consists of one up antiquark and two down antiquarks.
Background
The antineutron was discovered in proton–antiproton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by the team of Bruce Cork, Glen Lambertson, Oreste Piccioni, and William Wenzel in 1956, one year after the antiproton was discovered.
Since the antineutron is electrically neutral, it cannot easily be observed directly. Instead, the products of its annihilation with ordinary matter are observed. In theory, a free antineutron should decay into an antiproton, a positron, and a neutrino in a process analogous to the beta decay of free neutrons. There are theoretical proposals of neutron–antineutron oscillations, a process that implies the violation of the baryon number conservation.
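For comparison, the predicted antineutron decay and ordinary neutron beta decay can be written side by side:

```latex
\bar{n} \to \bar{p} + e^{+} + \nu_{e} \quad\text{(antineutron)}
\qquad\qquad
n \to p + e^{-} + \bar{\nu}_{e} \quad\text{(neutron)}
```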
Magnetic moment
The magnetic moment of the antineutron is the opposite of that of the neutron. It is for the antineutron but for the neutron (relative to the direction of the spin). Here μN is the nuclear magneton.
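For reference, the commonly quoted values, in units of the nuclear magneton, are approximately:

```latex
\mu_{\bar{n}} \approx +1.91\,\mu_N, \qquad \mu_{n} \approx -1.91\,\mu_N .
```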
| Physical sciences | Antimatter | Physics |
33112993 | https://en.wikipedia.org/wiki/Pelvis | Pelvis | The pelvis (plural: pelves or pelvises) is the lower part of an anatomical trunk, between the abdomen and the thighs (sometimes also called pelvic region), together with its embedded skeleton (sometimes also called bony pelvis or pelvic skeleton).
The pelvic region of the trunk includes the bony pelvis, the pelvic cavity (the space enclosed by the bony pelvis), the pelvic floor, below the pelvic cavity, and the perineum, below the pelvic floor. The pelvic skeleton is formed in the area of the back, by the sacrum and the coccyx and anteriorly and to the left and right sides, by a pair of hip bones.
The two hip bones connect the spine with the lower limbs. They are attached to the sacrum posteriorly, connected to each other anteriorly, and joined with the two femurs at the hip joints. The gap enclosed by the bony pelvis, called the pelvic cavity, is the section of the body underneath the abdomen and mainly consists of the reproductive organs and the rectum, while the pelvic floor at the base of the cavity assists in supporting the organs of the abdomen.
In mammals, the bony pelvis has a gap in the middle, significantly larger in females than in males. Their offspring pass through this gap when they are born.
Structure
The pelvic region of the trunk is the lower part of the trunk, between the abdomen and the thighs. It includes several structures: the bony pelvis, the pelvic cavity, the pelvic floor, and the perineum. The bony pelvis (pelvic skeleton) is the part of the skeleton embedded in the pelvic region of the trunk. It is subdivided into the pelvic girdle and the pelvic spine. The pelvic girdle is composed of the appendicular hip bones (ilium, ischium, and pubis) oriented in a ring, and connects the pelvic region of the spine to the lower limbs. The pelvic spine consists of the sacrum and coccyx.
The pelvic cavity is typically defined as a small part of the space enclosed by the bony pelvis, delimited by the pelvic brim above and the pelvic floor below; alternatively, it is sometimes defined as the whole space enclosed by the pelvic skeleton, subdivided into the greater (or false) pelvis, above the pelvic brim, and the lesser (or true) pelvis, below the pelvic brim. The pelvic floor (or pelvic diaphragm) lies below the pelvic cavity, and the perineum lies below the pelvic floor.
Pelvic bone
The pelvic skeleton is formed posteriorly (in the area of the back), by the sacrum and the coccyx and laterally and anteriorly (forward and to the sides), by a pair of hip bones.
Each hip bone consists of three sections: ilium, ischium, and pubis. During childhood, these sections are separate bones, joined by the triradiate cartilage. During puberty, they fuse together to form a single bone.
Pelvic cavity
The pelvic cavity is a body cavity that is bounded by the bones of the pelvis and which primarily contains reproductive organs and the rectum.
A distinction is made between the lesser or true pelvis inferior to the terminal line, and the greater or false pelvis above it. The pelvic inlet or superior pelvic aperture, which leads into the lesser pelvis, is bordered by the promontory, the arcuate line of ilium, the iliopubic eminence, the pecten of the pubis, and the upper part of the pubic symphysis. The pelvic outlet or inferior pelvic aperture is the region between the subpubic angle or pubic arch, the ischial tuberosities and the coccyx.
Ligaments: obturator membrane, inguinal ligament (lacunar ligament, iliopectineal arch)
Alternatively, the pelvis is divided into three planes: the inlet, midplane, and outlet.
Pelvic floor
The pelvic floor has two inherently conflicting functions: One is to close the pelvic and abdominal cavities and bear the load of the visceral organs; the other is to control the openings of the rectum and urogenital organs that pierce the pelvic floor and make it weaker. To achieve both these tasks, the pelvic floor is composed of several overlapping sheets of muscles and connective tissues.
The pelvic diaphragm is composed of the levator ani and the coccygeus muscle. These arise between the symphysis and the ischial spine and converge on the coccyx and the anococcygeal ligament which spans between the tip of the coccyx and the anal hiatus. This leaves a slit for the anal and urogenital openings. Because of the width of the genital aperture, which is wider in females, a second closing mechanism is required. The urogenital diaphragm consists mainly of the deep transverse perineal which arises from the inferior ischial and pubic rami and extends to the urogenital hiatus. The urogenital diaphragm is reinforced posteriorly by the superficial transverse perineal.
The external anal and urethral sphincters close the anus and the urethra. The former is surrounded by the bulbospongiosus which narrows the vaginal introitus in females and surrounds the corpus spongiosum in males. Ischiocavernosus squeezes blood into the corpora cavernosa penis and clitoridis.
Variation
Modern humans are to a large extent characterized by bipedal locomotion and large brains. Because the pelvis is vital to both locomotion and childbirth, natural selection has been confronted by two conflicting demands: a wide birth canal and locomotion efficiency, a conflict referred to as the "obstetrical dilemma". The female pelvis, or gynecoid pelvis, has evolved to its maximum width for childbirth—a wider pelvis would make women unable to walk. In contrast, human male pelvises are not constrained by the need to give birth and therefore are more optimized for bipedal locomotion.
The principal differences between male and female true and false pelvis include:
The female pelvis is larger and broader than the male pelvis which is taller, narrower, and more compact. The female pelvis is lighter and thinner than the male pelvis.
The female inlet is larger and oval in shape, while the male sacral promontory projects further (i.e. the male inlet is more heart-shaped).
The sides of the male pelvis converge from the inlet to the outlet, whereas the sides of the female pelvis are wider apart.
The angle between the inferior pubic rami is acute (70 degrees) in men, but obtuse (90–100 degrees) in women. Accordingly, the angle is called subpubic angle in men and pubic arch in women. Additionally, the bones forming the angle/arch are more concave in females but straight in males.
The distance between the ischia bones is small in males, making the outlet narrow, but large in females, who have a relatively large outlet. The ischial spines and tuberosities are heavier and project farther into the pelvic cavity in males. The greater sciatic notch is wider in females.
The iliac crests are higher and more pronounced in males, making the male false pelvis deeper and more narrow than in females.
The male sacrum is long, narrow, more straight, and has a pronounced sacral promontory. The female sacrum is shorter, wider, more curved posteriorly, and has a less pronounced promontory.
The acetabula are wider apart in females than in males. In males, the acetabulum faces more laterally, while it faces more anteriorly in females. Consequently, when males walk the leg can move forwards and backwards in a single plane. In females, the leg must swing forward and inward, from where the pivoting head of the femur moves the leg back in another plane. This change in the angle of the femoral head gives the female gait its characteristic (i.e. swinging of hips).
Development
Each side of the pelvis is formed as cartilage, which ossifies as three main bones which stay separate through childhood: ilium, ischium, pubis. At birth the whole of the hip joint (the acetabulum area and the top of the femur) is still made of cartilage (but there may be a small piece of bone in the great trochanter of the femur); this makes it difficult to detect congenital hip dislocation by X-raying.
There is preliminary evidence that the pelvis continues to widen over the course of a lifetime.
Functions
The skeleton of the pelvis is a basin-shaped ring of bones connecting the vertebral column to the femora; the connection runs through the two hip bones.
Its primary functions are to bear the weight of the upper body when sitting and standing, transferring that weight from the axial skeleton to the lower appendicular skeleton when standing and walking, and providing attachments for and withstanding the forces of the powerful muscles of locomotion and posture. Compared to the shoulder girdle, the pelvic girdle is thus strong and rigid.
Its secondary functions are to contain and protect the pelvic and abdominopelvic viscera (inferior parts of the urinary tracts, internal reproductive organs), providing attachment for external reproductive organs and associated muscles and membranes.
As a mechanical structure
The pelvic girdle consists of the two hip bones. The hip bones are connected to each other anteriorly at the pubic symphysis, and posteriorly to the sacrum at the sacroiliac joints to form the pelvic ring. The ring is very stable and allows very little mobility, a prerequisite for transmitting loads from the trunk to the lower limbs.
As a mechanical structure the pelvis may be thought of as four roughly triangular and twisted rings. Each superior ring is formed by the iliac bone; the anterior side stretches from the acetabulum up to the anterior superior iliac spine; the posterior side reaches from the top of the acetabulum to the sacroiliac joint; and the third side is formed by the palpable iliac crest. The lower ring, formed by the rami of the pubic and ischial bones, supports the acetabulum and is twisted 80–90 degrees in relation to the superior ring.
An alternative approach is to consider the pelvis part of an integrated mechanical system based on the tensegrity icosahedron as an infinite element. Such a system is able to withstand omnidirectional forces—ranging from weight-bearing to childbearing—and, as a low energy requiring system, is favoured by natural selection.
The pelvic inclination angle is the single most important element of the human body posture and is adjusted at the hips. It is also one of the rare things that can be measured at the assessment of the posture. A simple method of measurement was described by the British orthopedist Philip Willes and is performed by using an inclinometer.
As an anchor for muscles
The lumbosacral joint, between the sacrum and the last lumbar vertebra, has, like all vertebral joints, an intervertebral disc, anterior and posterior ligaments, ligamenta flava, interspinous and supraspinous ligaments, and synovial joints between the articular processes of the two bones. In addition to these ligaments the joint is strengthened by the iliolumbar and lateral lumbosacral ligaments. The iliolumbar ligament passes between the tip of the transverse process of the fifth lumbar vertebra and the posterior part of the iliac crest. The lateral lumbosacral ligament, partly continuous with the iliolumbar ligament, passes down from the lower border of the transverse process of the fifth vertebra to the ala of the sacrum. The movements possible in the lumbosacral joint are flexion and extension, a small amount of lateral flexion (from 7 degrees in childhood to 1 degree in adults), but no axial rotation. Between ages 2–13 the joint is responsible for as much as 75% (about 18 degrees) of flexion and extension in the lumbar spine. From age 35 the ligaments considerably limit the range of motions.
The three extracapsular ligaments of the hip joint—the iliofemoral, ischiofemoral, and pubofemoral ligaments—form a twisting mechanism encircling the neck of the femur. When sitting, with the hip joint flexed, these ligaments become lax permitting a high degree of mobility in the joint. When standing, with the hip joint extended, the ligaments get twisted around the femoral neck, pushing the head of the femur firmly into the acetabulum, thus stabilizing the joint. The zona orbicularis assists in maintaining the contact in the joint by acting like a buttonhole on the femoral head. The intracapsular ligament, the ligamentum teres, transmits blood vessels that nourish the femoral head.
Junctions
The two hip bones are joined anteriorly at the pubic symphysis by a fibrous cartilage covered by a hyaline cartilage, the interpubic disk, within which a non-synovial cavity might be present. Two ligaments, the superior and inferior pubic ligaments, reinforce the symphysis.
Both sacroiliac joints, formed between the auricular surfaces of the sacrum and the two hip bones, are amphiarthroses, almost immobile joints enclosed by very taut joint capsules. This capsule is strengthened by the ventral, interosseous, and dorsal sacroiliac ligaments. The most important accessory ligaments of the sacroiliac joint are the sacrospinous and sacrotuberous ligaments, which stabilize the hip bone on the sacrum and prevent the promontory from tilting forward. Additionally, these two ligaments transform the greater and lesser sciatic notches into the greater and lesser foramina, a pair of important pelvic openings. The iliolumbar ligament is a strong ligament which connects the tip of the transverse process of the fifth lumbar vertebra to the posterior part of the inner lip of the iliac crest. It can be thought of as the lower border of the thoracolumbar fascia and is occasionally accompanied by a smaller ligamentous band passing between the fourth lumbar vertebra and the iliac crest. The lateral lumbosacral ligament is partly continuous with the iliolumbar ligament. It passes from the transverse process of the fifth vertebra to the ala of the sacrum, where it intermingles with the anterior sacroiliac ligament.
The joint between the sacrum and the coccyx, the sacrococcygeal symphysis, is strengthened by a series of ligaments. The anterior sacrococcygeal ligament is an extension of the anterior longitudinal ligament (ALL) that runs down the anterior side of the vertebral bodies. Its irregular fibers blend with the periosteum. The posterior sacrococcygeal ligament has a deep and a superficial part; the former is a flat band corresponding to the posterior longitudinal ligament (PLL), and the latter corresponds to the ligamenta flava. Several other ligaments complete the foramen of the last sacral nerve.
Shoulder and intrinsic back
The inferior parts of latissimus dorsi, one of the muscles of the upper limb, arise from the posterior third of the iliac crest. Its actions on the shoulder joint are internal rotation, adduction, and retroversion. It also contributes to respiration (i.e. coughing). When the arm is adducted, latissimus dorsi can pull it backward and medially until the back of the hand covers the buttocks.
In a longitudinal osteofibrous canal on either side of the spine there is a group of muscles called the erector spinae, which is subdivided into a lateral superficial and a medial deep tract. In the lateral tract, the iliocostalis lumborum and longissimus thoracis originate on the back of the sacrum and the posterior part of the iliac crest. Contracting these muscles bilaterally extends the spine, and unilateral contraction bends the spine to the same side. The medial tract has a "straight" (interspinales, intertransversarii, and spinalis) and an "oblique" (multifidus and semispinalis) component, both of which stretch between vertebral processes; the former acts similarly to the muscles of the lateral tract, while the latter functions bilaterally as spine extensors and unilaterally as spine rotators. In the medial tract, the multifidi originate on the sacrum.
Abdomen
The muscles of the abdominal wall are subdivided into a superficial and a deep group.
The superficial group is subdivided into a lateral and a medial group. In the medial superficial group, on both sides of the centre of the abdominal wall (the linea alba), the rectus abdominis stretches from the cartilages of ribs V-VII and the sternum down to the pubic crest. At the lower end of the rectus abdominis, the pyramidalis tenses the linea alba. The lateral superficial muscles, the transversus and external and internal oblique muscles, originate on the rib cage and on the pelvis (iliac crest and inguinal ligament) and are attached to the anterior and posterior layers of the sheath of the rectus.
Flexing the trunk (bending forward) is essentially a movement of the rectus muscles, while lateral flexion (bending sideways) is achieved by contracting the obliques together with the quadratus lumborum and intrinsic back muscles. Lateral rotation (rotating either the trunk or the pelvis sideways) is achieved by contracting the internal oblique on one side and the external oblique on the other. The transversus' main function is to produce abdominal pressure in order to constrict the abdominal cavity and pull the diaphragm upward.
There are two muscles in the deep or posterior group. Quadratus lumborum arises from the posterior part of the iliac crest and extends to rib XII and lumbar vertebrae I–IV. It unilaterally bends the trunk to the side and bilaterally pulls the 12th rib down and assists in expiration. The iliopsoas consists of psoas major (and occasionally psoas minor) and iliacus, muscles with separate origins but a common insertion on the lesser trochanter of the femur. Of these, only iliacus is attached to the pelvis (the iliac fossa). However, psoas passes through the pelvis, and because it acts on two joints, it is topographically classified as a posterior abdominal muscle but functionally as a hip muscle. Iliopsoas flexes and externally rotates the hip joints, while unilateral contraction bends the trunk laterally and bilateral contraction raises the trunk from the supine position.
Hip and thigh
The muscles of the hip are divided into a dorsal and a ventral group.
The dorsal hip muscles are either inserted into the region of the lesser trochanter (anterior or inner group) or the greater trochanter (posterior or outer group). Anteriorly, the psoas major (and occasionally psoas minor) originates along the spine between the rib cage and pelvis. The iliacus originates on the iliac fossa to join psoas at the iliopubic eminence to form the iliopsoas which is inserted into the lesser trochanter. The iliopsoas is the most powerful hip flexor.
The posterior group includes the gluteus maximus, gluteus medius, and gluteus minimus. Maximus has a wide origin stretching from the posterior part of the iliac crest and along the sacrum and coccyx, and has two separate insertions: a proximal which radiates into the iliotibial tract and a distal which inserts into the gluteal tuberosity on the posterior side of the femoral shaft. It is primarily an extensor and lateral rotator of the hip joint, but, because of its bipartite insertion, it can both adduct and abduct the hip. Medius and minimus arise on the external surface of the ilium and are both inserted into the greater trochanter. Their anterior fibers are medial rotators and flexors while the posterior fibers are lateral rotators and extensors. The piriformis has its origin on the ventral side of the sacrum and is inserted on the greater trochanter. It abducts and laterally rotates the hip in the upright posture and assists in extension of the thigh. The tensor fasciae latae arises on the anterior superior iliac spine and inserts into the iliotibial tract. It presses the head of the femur into the acetabulum and flexes, medially rotates, and abducts the hip.
The ventral hip muscles are important in the control of the body's balance. The internal and external obturator muscles together with the quadratus femoris are lateral rotators of the hip. Together they are stronger than the medial rotators and therefore the feet point outward in the normal position to achieve a better support. The obturators have their origins on either sides of the obturator foramen and are inserted into the trochanteric fossa on the femur. Quadratus arises on the ischial tuberosity and is inserted into the intertrochanteric crest. The superior and inferior gemelli, arising from the ischial spine and ischial tuberosity respectively, can be thought of as marginal heads of the obturator internus, and their main function is to assist this muscle.
The muscles of the thigh can be subdivided into adductors (medial group), extensors (anterior group), and flexors (posterior group). The extensors and flexors act on the knee joint, while the adductors mainly act on the hip joint.
The thigh adductors have their origins on the inferior ramus of the pubic bone and are, with the exception of gracilis, inserted along the femoral shaft. Together with sartorius and semitendinosus, gracilis reaches beyond the knee to their common insertion on the tibia.
The anterior thigh muscles form the quadriceps, which is inserted on the patella with a common tendon. Three of the four muscles have their origins on the femur, while rectus femoris arises from the anterior inferior iliac spine and is thus the only one of the four acting on two joints.
The posterior thigh muscles have their origins on the inferior ischial ramus, with the exception of the short head of the biceps femoris. The semitendinosus and semimembranosus are inserted on the tibia on the medial side of the knee, while biceps femoris is inserted on the fibula, on the knee's lateral side.
In pregnancy and childbirth
In later stages of pregnancy the fetus's head aligns inside the pelvis. Also joints of bones soften due to the effect of pregnancy hormones. These factors may cause pelvic joint pain (symphysis pubis dysfunction or SPD). As the end of pregnancy approaches, the ligaments of the sacroiliac joint loosen, letting the pelvis outlet widen somewhat; this is easily noticeable in the cow.
During childbirth (unless by Cesarean section) the fetus passes through the maternal pelvic opening.
Clinical significance
Hip fractures often affect the elderly and occur more often in females; this is frequently due to osteoporosis. There are also different types of pelvic fracture, often resulting from traffic accidents.
Pelvic pain can affect anybody and has a variety of causes, including bowel adhesions, irritable bowel syndrome, interstitial cystitis, and endometriosis in women.
There are many anatomical variations of the pelvis. In the female the pelvis can be of a much larger size than normal, known as a giant pelvis or pelvis justo major, or it can be much smaller, known as a reduced pelvis or pelvis justo minor. Other variations include an android pelvis, a pelvis of the normal male shape in a female, which can prove problematic in childbirth.
History
Caldwell–Moloy classification
Throughout the 20th century pelvimetric measurements were made on pregnant women to determine whether a natural birth would be possible, a practice today limited to cases where a specific problem is suspected or following a caesarean delivery. William Edgar Caldwell and Howard Carmen Moloy studied collections of skeletal pelves and thousands of stereoscopic radiograms and finally recognized three types of female pelves plus the masculine type. In 1933 and 1934 they published their typology, including the Greek names since then frequently quoted in various handbooks: Gynaecoid (gyne, woman), anthropoid (anthropos, human being), platypelloid (platys, flat), and android (aner, man).
The gynaecoid pelvis is the so-called normal female pelvis. Its inlet is either slightly oval, with a greater transverse diameter, or round. The interior walls are straight, the subpubic arch wide, the sacrum shows an average to backward inclination, and the greater sciatic notch is well rounded. Because this type is spacious and well proportioned there is little or no difficulty in the birth process. Caldwell and his co-workers found gynaecoid pelves in about 50 per cent of specimens. This type has been described as giving a rounded contour to the gluteal and hip regions and to the side profile (https://www.ncbi.nlm.nih.gov/books/NBK519068/).
The platypelloid pelvis has a transversally wide, flattened shape, is wide anteriorly, has greater sciatic notches of the male type, and has a short sacrum that curves inwards, reducing the diameters of the lower pelvis. This is similar to the rachitic pelvis, in which the softened bones widen laterally because of the weight from the upper body, resulting in a reduced anteroposterior diameter. Giving birth with this type of pelvis is associated with problems, such as transverse arrest. Less than 3 per cent of women have this pelvis type. It has been described as giving an inverted-triangle contour to the gluteal and hip regions and a straight side profile (https://www.ncbi.nlm.nih.gov/books/NBK519068/).
The android pelvis is a female pelvis with masculine features, including a wedge- or heart-shaped inlet caused by a prominent sacrum and a triangular anterior segment. The reduced pelvic outlet often causes problems during childbirth. In 1939 Caldwell found this type in one-third of white women and in one-sixth of non-white women, though it has also been reported to be frequent among sub-Saharan African women. This type has been described as giving a trapezoidal contour to the gluteal and hip regions and to the side profile, a shape that has been linked to the characteristic form of steatopygia (https://www.ncbi.nlm.nih.gov/books/NBK519068/).
The anthropoid pelvis is characterized by an oval shape with a greater anteroposterior diameter. It has straight walls, a small subpubic arch, and large sacrosciatic notches. The sciatic spines are placed widely apart and the sacrum is usually straight, resulting in a deep, non-obstructed pelvis. Caldwell found this type in one-quarter of white women and almost half of non-white women. It has been described as giving a square contour to the gluteal and hip regions and to the side profile (https://www.ncbi.nlm.nih.gov/books/NBK519068/).
However, Caldwell and Moloy then complicated this simple fourfold scheme by dividing the pelvic inlet into posterior and anterior segments. They named a pelvis according to the anterior segment and affixed another type according to the character of the posterior segment (i.e. anthropoid-android) and ended up with no less than 14 morphologies. Notwithstanding the popularity of this simple classification, the pelvis is much more complicated than this as the pelvis can have different dimensions at various levels of the birth canal.
Caldwell and Moloy also classified the physique of women according to their types of pelves: the gynaecoid type has small shoulders, a small waist and wide hips; the android type looks square-shaped from behind; and the anthropoid type has wide shoulders and narrow hips. Lastly, in their article they described all non-gynaecoid or "mixed" types of pelves as "abnormal", a word which has stuck in the medical world even though at least 50 per cent of women have these "abnormal" pelves.
The classification of Caldwell and Moloy was influenced by earlier classifications attempting to define the ideal female pelvis, treating any deviations from this ideal as dysfunctions and the cause of obstructed labour. In the 19th century anthropologists and others saw an evolutionary scheme in these pelvic typologies, a scheme since refuted by archaeology. Since the 1950s malnutrition has been thought to be one of the chief factors affecting pelvic shape in the Third World, even though there is at least some genetic component to variation in pelvic morphology.
Nowadays obstetric suitability of the female pelvis is assessed by ultrasound. The dimensions of the head of the fetus and of the birth canal are accurately measured and compared, and the feasibility of labor can be predicted.
Other animals
The pelvic girdle was present in early vertebrates, and can be tracked back to the paired fins of fish that were some of the earliest chordates.
The shape of the pelvis, most notably the orientation of the iliac crests and shape and depth of the acetabula, reflects the style of locomotion and body mass of an animal. In bipedal mammals, the iliac crests are parallel to the vertically oriented sacroiliac joints, where in quadrupedal mammals they are parallel to the horizontally oriented sacroiliac joints. In heavy mammals, especially in quadrupeds, the pelvis tend to be more vertically oriented because this allows the pelvis to support greater weight without dislocating the sacroiliac joints or adding torsion to the vertebral column.
In ambulatory mammals, the acetabula are shallow and open to allow a wider range of hip movements, including significant abduction, than in cursorial mammals. The lengths of the ilium and ischium and their angles relative to the acetabulum are functionally important as they determine the moment arms for the hip extensor muscles that provide momentum during locomotion.
In addition to this, the relatively wide shape (front to back) of the pelvis provides greater leverage for the gluteus medius and minimus. These muscles are responsible for hip abduction which plays an integral role in upright balance.
Primates
In primates, the pelvis consists of four parts - the left and the right hip bones which meet in the mid-line ventrally and are fixed to the sacrum dorsally and the coccyx. Each hip bone consists of three components, the ilium, the ischium, and the pubis, and at the time of sexual maturity these bones become fused together, though there is never any movement between them. In humans, the ventral joint of the pubic bones is closed.
Larger apes, such as Pongo (orangutans), Gorilla (gorillas), Australopithecus afarensis, and Pan troglodytes (chimpanzees), have longer pelves, with all three pelvic planes having their maximum diameter in the sagittal plane.
Evolution
The present-day morphology of the pelvis is inherited from the pelvis of our quadrupedal ancestors. The most striking feature of the evolution of the pelvis in primates is the widening and shortening of the blade called the ilium. Because of the stresses involved in bipedal locomotion, the muscles of the thigh move the thigh forward and backward, providing the power for bipedal and quadrupedal locomotion.
The drying of the environment of East Africa in the period since the creation of the Red Sea and the African Rift Valley saw open woodlands replace the previous closed canopy forest. The apes in this environment were compelled to travel from one clump of trees to another across open country. This led to a number of complementary changes to the human pelvis. It is suggested that bipedalism was the result.
Additional images
| Biology and health sciences | Skeletal system | null |
6864370 | https://en.wikipedia.org/wiki/Outgoing%20longwave%20radiation | Outgoing longwave radiation | In climate science, longwave radiation (LWR) is electromagnetic thermal radiation emitted by Earth's surface, atmosphere, and clouds. It is also referred to as terrestrial radiation. This radiation is in the infrared portion of the spectrum, but is distinct from the shortwave (SW) near-infrared radiation found in sunlight.
Outgoing longwave radiation (OLR) is the longwave radiation emitted to space from the top of Earth's atmosphere. It may also be referred to as emitted terrestrial radiation. Outgoing longwave radiation plays an important role in planetary cooling.
Longwave radiation generally spans wavelengths from about 3 to 100 micrometres (μm). A cutoff of 4 μm is sometimes used to differentiate sunlight from longwave radiation. Less than 1% of sunlight has wavelengths greater than 4 μm. Over 99% of outgoing longwave radiation has wavelengths between 4 μm and 100 μm.
The flux of energy transported by outgoing longwave radiation is typically measured in units of watts per square metre (W⋅m⁻²). In the case of global energy flux, the W/m² value is obtained by dividing the total energy flow over the surface of the globe (measured in watts) by the surface area of the Earth, .
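As a worked example, taking Earth's surface area to be about 5.1 × 10¹⁴ m² (a standard approximate value used here as an assumption), a whole-planet energy flow of roughly 1.2 × 10¹⁷ W corresponds to a global-mean flux near 239 W/m², the OLR figure quoted later in this article:

```python
EARTH_SURFACE_AREA_M2 = 5.1e14  # approximate surface area of Earth, m^2

def global_mean_flux(total_watts: float) -> float:
    """Convert a whole-Earth energy flow (W) into a mean flux (W/m^2)."""
    return total_watts / EARTH_SURFACE_AREA_M2

print(round(global_mean_flux(1.22e17)))  # ~239 W/m^2
```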
Emitting outgoing longwave radiation is the only way Earth loses energy to space, i.e., the only way the planet cools itself. Radiative heating from absorbed sunlight, and radiative cooling to space via OLR power the heat engine that drives atmospheric dynamics.
The balance between OLR (energy lost) and incoming solar shortwave radiation (energy gained) determines whether the Earth is experiencing global heating or cooling (see Earth's energy budget).
Planetary energy balance
Outgoing longwave radiation (OLR) constitutes a critical component of Earth's energy budget.
The principle of conservation of energy says that energy cannot appear or disappear. Thus, any energy that enters a system but does not leave must be retained within the system. So, the amount of energy retained on Earth (in Earth's climate system) is governed by an equation:
[change in Earth's energy] = [energy arriving] − [energy leaving].
Energy arrives in the form of absorbed solar radiation (ASR). Energy leaves as outgoing longwave radiation (OLR). Thus, the rate of change in the energy in Earth's climate system is given by Earth's energy imbalance (EEI):
EEI = ASR − OLR.
When energy is arriving at a higher rate than it leaves (i.e., ASR > OLR, so that EEI is positive), the amount of energy in Earth's climate increases. Temperature is a measure of the amount of thermal energy in matter. So, under these circumstances, temperatures tend to increase overall (though temperatures might decrease in some places as the distribution of energy changes). As temperatures increase, the amount of thermal radiation emitted also increases, leading to more outgoing longwave radiation (OLR), and a smaller energy imbalance (EEI).
Similarly, if energy arrives at a lower rate than it leaves (i.e., ASR < OLR, so that EEI is negative), the amount of energy in Earth's climate decreases, and temperatures tend to decrease overall. As temperatures decrease, OLR decreases, bringing the imbalance closer to zero.
In this fashion, a planet continually adjusts its temperature so as to keep the energy imbalance small. If more solar radiation is absorbed than OLR is emitted, the planet will heat up. If there is more OLR than absorbed solar radiation, the planet will cool. In both cases, the temperature change works to shift the energy imbalance towards zero. When the energy imbalance is zero, a planet is said to be in radiative equilibrium. Planets naturally tend toward a state of approximate radiative equilibrium.
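The adjustment toward radiative equilibrium can be illustrated with a zero-dimensional energy-balance model. Every number below is an assumed, rough value: an ASR of 240 W/m², an effective planetary emissivity of 0.61 (chosen so equilibrium lands near 288 K), and a heat capacity representing an ocean mixed layer:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ASR = 240.0        # absorbed solar radiation, W/m^2 (assumed global mean)
EPS_EFF = 0.61     # effective planetary emissivity (assumed)
C = 4.0e8          # heat capacity per unit area, J m^-2 K^-1 (assumed)
DT = 86400.0       # time step: one day, in seconds

T = 270.0          # arbitrary cold starting temperature, K
for _ in range(40000):            # roughly 110 years of daily steps
    olr = EPS_EFF * SIGMA * T**4  # outgoing longwave radiation
    eei = ASR - olr               # energy imbalance
    T += eei * DT / C             # warms while EEI > 0, cools while EEI < 0

print(round(T, 1))  # settles near 288 K, where ASR equals OLR
```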
In recent decades, energy has been measured to be arriving on Earth at a higher rate than it leaves, corresponding to planetary warming. The energy imbalance has been increasing. It can take decades to centuries for oceans to warm and planetary temperature to shift sufficiently to compensate for an energy imbalance.
Emission
Thermal radiation is emitted by nearly all matter, in proportion to the fourth power of its absolute temperature.
In particular, the emitted energy flux j (measured in W/m²) is given by the Stefan–Boltzmann law for non-blackbody matter:

j = εσT⁴

where T is the absolute temperature, σ is the Stefan–Boltzmann constant, and ε is the emissivity. The emissivity is a value between zero and one which indicates how much less radiation is emitted compared to what a perfect blackbody would emit.
Surface
The emissivity of Earth's surface has been measured to be in the range 0.65 to 0.99 (based on observations in the 8-13 micron wavelength range) with the lowest values being for barren desert regions. The emissivity is mostly above 0.9, and the global average surface emissivity is estimated to be around 0.95.
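As a worked example of the law above, combining the estimated global-mean surface emissivity of 0.95 with an assumed typical surface temperature of 288 K gives a surface emission of roughly 370 W/m²:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temperature_k: float, emissivity: float) -> float:
    """Stefan-Boltzmann law for a grey (non-blackbody) emitter."""
    return emissivity * SIGMA * temperature_k**4

print(round(emitted_flux(288.0, 0.95)))  # ~371 W/m^2
```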
Atmosphere
The most common gases in air (i.e., nitrogen, oxygen, and argon) have a negligible ability to absorb or emit longwave thermal radiation. Consequently, the ability of air to absorb and emit longwave radiation is determined by the concentration of trace gases like water vapor and carbon dioxide.
According to Kirchhoff's law of thermal radiation, the emissivity of matter is always equal to its absorptivity, at a given wavelength. At some wavelengths, greenhouse gases absorb 100% of the longwave radiation emitted by the surface. So, at those wavelengths, the emissivity of the atmosphere is 1 and the atmosphere emits thermal radiation much like an ideal blackbody would. However, this applies only at wavelengths where the atmosphere fully absorbs longwave radiation.
Although greenhouse gases in air have a high emissivity at some wavelengths, this does not necessarily correspond to a high rate of thermal radiation being emitted to space. This is because the atmosphere is generally much colder than the surface, and the rate at which longwave radiation is emitted scales as the fourth power of temperature. Thus, the higher the altitude at which longwave radiation is emitted, the lower its intensity.
Atmospheric absorption
The atmosphere is relatively transparent to solar radiation, but it is nearly opaque to longwave radiation. The atmosphere typically absorbs most of the longwave radiation emitted by the surface. Absorption of longwave radiation prevents that radiation from reaching space.
At wavelengths where the atmosphere absorbs surface radiation, some portion of the radiation that was absorbed is replaced by a lesser amount of thermal radiation emitted by the atmosphere at a higher altitude.
When absorbed, the energy transmitted by this radiation is transferred to the substance that absorbed it. However, overall, greenhouse gases in the troposphere emit more thermal radiation than they absorb, so longwave radiative heat transfer has a net cooling effect on air.
Atmospheric window
Assuming no cloud cover, most of the surface emissions that reach space do so through the atmospheric window. The atmospheric window is a region of the electromagnetic wavelength spectrum between 8 and 11 μm where the atmosphere does not absorb longwave radiation (except for the ozone band between 9.6 and 9.8 μm).
Gases
Greenhouse gases in the atmosphere are responsible for a majority of the absorption of longwave radiation in the atmosphere. The most important of these gases are water vapor, carbon dioxide, methane, and ozone.
The absorption of longwave radiation by gases depends on the specific absorption bands of the gases in the atmosphere. The specific absorption bands are determined by their molecular structure and energy levels. Each type of greenhouse gas has a unique group of absorption bands that correspond to particular wavelengths of radiation that the gas can absorb.
Clouds
The OLR balance is affected by clouds, dust, and aerosols in the atmosphere. Clouds tend to block penetration of upwelling longwave radiation, causing a lower flux of long-wave radiation penetrating to higher altitudes. Clouds are effective at absorbing and scattering longwave radiation, and therefore reduce the amount of outgoing longwave radiation.
Clouds have both cooling and warming effects. They have a cooling effect insofar as they reflect sunlight (as measured by cloud albedo), and a warming effect, insofar as they absorb longwave radiation. For low clouds, the reflection of solar radiation is the larger effect; so, these clouds cool the Earth. In contrast, for high thin clouds in cold air, the absorption of longwave radiation is the more significant effect; so these clouds warm the planet.
Details
The interaction between emitted longwave radiation and the atmosphere is complicated due to the factors that affect absorption. The path of the radiation in the atmosphere also determines radiative absorption: longer paths through the atmosphere result in greater absorption because of the cumulative absorption by many layers of gas. Lastly, the temperature and altitude of the absorbing gas also affect its absorption of longwave radiation.
OLR is affected by Earth's surface skin temperature (i.e., the temperature of the top layer of the surface), skin surface emissivity, atmospheric temperature, the water vapor profile, and cloud cover.
Day and night
The net all-wave radiation is dominated by longwave radiation during the night and in the polar regions. While there is no absorbed solar radiation during the night, terrestrial radiation continues to be emitted, primarily as a result of solar energy absorbed during the day.
Relationship to greenhouse effect
The reduction of the outgoing longwave radiation (OLR), relative to longwave radiation emitted by the surface, is at the heart of the greenhouse effect.
More specifically, the greenhouse effect may be defined quantitatively as the amount of longwave radiation emitted by the surface that does not reach space. On Earth as of 2015, about 398 W/m2 of longwave radiation was emitted by the surface, while OLR, the amount reaching space, was 239 W/m2. Thus, the greenhouse effect was 398 − 239 = 159 W/m2, or 159/398 = 40% of surface emissions not reaching space.
Effect of increasing greenhouse gases
When the concentration of a greenhouse gas (such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), or water vapor (H2O)) is increased, this has a number of effects. At a given wavelength:
the fraction of surface emissions that are absorbed is increased, decreasing OLR (unless 100% of surface emissions at that wavelength are already being absorbed);
the altitude from which the atmosphere emits at that wavelength to space increases (since the altitude at which the atmosphere becomes transparent to that wavelength increases); if the emission altitude is within the troposphere, the temperature of the emitting air will be lower, which results in a reduction in OLR at that wavelength.
The size of the reduction in OLR will vary by wavelength. Even if OLR does not decrease at certain wavelengths (e.g., because 100% of surface emissions are absorbed and the emission altitude is in the stratosphere), increased greenhouse gas concentration can still lead to significant reductions in OLR at other wavelengths where absorption is weaker.
When OLR decreases, this leads to an energy imbalance, with energy received being greater than energy lost, causing a warming effect. Therefore, an increase in the concentrations of greenhouse gases causes energy to accumulate in Earth's climate system, contributing to global warming.
Surface budget fallacy
If the absorptivity of the gas is high and the gas is present in a high enough concentration, the absorption at certain wavelengths becomes saturated. This means there is enough gas present to completely absorb the radiated energy at that wavelength before the upper atmosphere is reached.
It is sometimes incorrectly argued that this means an increase in the concentration of this gas will have no additional effect on the planet's energy budget. This argument neglects the fact that outgoing longwave radiation is determined not only by the amount of surface radiation that is absorbed, but also by the altitude (and temperature) at which longwave radiation is emitted to space. Even if 100% of surface emissions are absorbed at a given wavelength, the OLR at that wavelength can still be reduced by increased greenhouse gas concentration, since the increased concentration leads to the atmosphere emitting longwave radiation to space from a higher altitude. If the air at that higher altitude is colder (as is true throughout the troposphere), then thermal emissions to space will be reduced, decreasing OLR.
False conclusions about the implications of absorption being "saturated" are examples of the surface budget fallacy, i.e., erroneous reasoning that results from focusing on energy exchange at the surface, instead of focusing on the top-of-atmosphere (TOA) energy balance.
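The emission-altitude mechanism can be sketched numerically. Assuming, for illustration only, a 288 K surface, a constant tropospheric lapse rate of 6.5 K/km, and blackbody emission from the effective emission altitude of a saturated band:

```python
# Why a higher emission altitude means less OLR at a saturated band.
# Illustrative assumptions: 288 K surface, 6.5 K/km lapse rate, and
# blackbody emission from the band's effective emission altitude.
SIGMA = 5.67e-8  # W m^-2 K^-4

def emission_temperature(altitude_km, t_surface=288.0, lapse=6.5):
    """Air temperature at the emission altitude, in kelvin."""
    return t_surface - lapse * altitude_km

for z in (5.0, 6.0, 7.0):  # emission altitude rising with more GHG
    t = emission_temperature(z)
    print(f"z = {z:.0f} km: T = {t:.1f} K, flux = {SIGMA * t**4:.0f} W/m^2")
```

In this sketch, raising the emission altitude from 5 km to 7 km cuts the band's emitted flux from roughly 242 to 196 W/m2, even though absorption of surface emissions is already complete.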
Measurements
Measurements of outgoing longwave radiation at the top of the atmosphere and of longwave radiation back towards the surface are important to understand how much energy is retained in Earth's climate system: for example, how thermal radiation cools and warms the surface, and how this energy is distributed to affect the development of clouds. Observing this radiative flux from a surface also provides a practical way of assessing surface temperatures on both local and global scales. This energy distribution is what drives atmospheric thermodynamics.
OLR
Outgoing long-wave radiation (OLR) has been monitored and reported since 1970 by a progression of satellite missions and instruments.
Earliest observations were with infrared interferometer spectrometer and radiometer (IRIS) instruments developed for the Nimbus program and deployed on Nimbus-3 and Nimbus-4. These Michelson interferometers were designed to span wavelengths of 5 to 25 μm.
Improved measurements were obtained starting with the Earth Radiation Balance (ERB) instruments on Nimbus-6 and Nimbus-7.
These were followed by the Earth Radiation Budget Experiment scanners and non-scanner on NOAA-9, NOAA-10 and the Earth Radiation Budget Satellite; the Clouds and the Earth's Radiant Energy System (CERES) instruments aboard Aqua, Terra, Suomi NPP and NOAA-20; and the Geostationary Earth Radiation Budget (GERB) instrument on the Meteosat Second Generation (MSG) satellite.
Surface LW radiation
Longwave radiation at the surface (both outward and inward) is mainly measured by pyrgeometers. The most notable ground-based network for monitoring surface longwave radiation is the Baseline Surface Radiation Network (BSRN), which provides crucial, well-calibrated measurements for studying global dimming and brightening.
Data
Data on surface longwave radiation and OLR is available from a number of sources including:
NASA GEWEX Surface Radiation Budget (1983-2007)
NASA Clouds and the Earth's Radiant Energy System (CERES) project (2000-2022)
OLR calculation and simulation
Many applications call for calculation of long-wave radiation quantities. Local radiative cooling by outgoing longwave radiation, suppression of radiative cooling (by downwelling longwave radiation cancelling out energy transfer by upwelling longwave radiation), and radiative heating through incoming solar radiation drive the temperature and dynamics of different parts of the atmosphere.
By using the radiance measured from a particular direction by an instrument, atmospheric properties (like temperature or humidity) can be inversely inferred.
Calculations of these quantities solve the radiative transfer equations that describe radiation in the atmosphere. Usually the solution is done numerically by atmospheric radiative transfer codes adapted to the specific problem.
Another common approach is to estimate values using surface temperature and emissivity, then compare to satellite top-of-atmosphere radiance or brightness temperature.
There are online interactive tools that allow one to see the spectrum of outgoing longwave radiation that is predicted to reach space under various atmospheric conditions.
| Physical sciences | Climate change | Earth science |
37168114 | https://en.wikipedia.org/wiki/Ironstone%20china | Ironstone china | Ironstone china, ironstone ware or most commonly just ironstone, is a type of vitreous pottery first made in the United Kingdom in the early 19th century. It is often classed as earthenware although in appearance and properties it is similar to fine stoneware. It was developed in the 19th century by potters in Staffordshire, England, as a cheaper, mass-produced alternative for porcelain.
The formulation quoted in the original patent (Brit. Pat. 3724, 1813) by Charles James Mason, is four parts china clay, four parts china stone, four parts calcined flint, three parts prepared ironstone and a trace of cobalt oxide. However, it has long been known that no ironstone was used; its mention, and the name of the product, was used to suggest high strength.
Ironstone in Britain's Staffordshire potteries was closely associated with the company founded by Mason following his patent of 1813, with the name subsequently becoming generic. The strength of Mason's ironstone body enabled the company to produce ornamental objects of considerable size including vestibule vases 1.5 metres high and mantelpieces assembled from several large sections.
Antique ironstone wares are collectable, and in particular items made by Mason's.
History
Ironstone was patented by the British potter Mason in 1813. His father, Miles Mason (1752–1822) married the daughter of Richard Farrar, who had a business selling imported Oriental porcelain in London. Subsequently, Mason continued this business, but after the East India Company ceased the bulk importation of Oriental porcelain in 1791 he began to manufacture his own wares. His first manufacturing venture was a partnership with Thomas Wolfe and John Lucock in Liverpool, and he later formed a partnership with George Wolfe to manufacture pottery in Staffordshire.
Subsequently other manufacturers produced ironstone, with James Edwards (1805–1867) of the Dalehall Pottery in Staffordshire also credited as its pioneer. Other sources also attribute the invention of ironstone to William Turner of Longton, and Josiah Spode who is known to have been producing ironstone ware by 1805, "which he exported in immense quantities to France and other countries". The popularity of Spode's ironstone surpassed the traditional faience pottery in France.
A variety of ironstone types was being produced by the mid-19th century. "Derbyshire ironstone" became a particularly popular variety in the 19th century, as well as "yellow ironstone". Patterns with raised edges became popular in the mid-19th century, including "cane-coloured" Derbyshire ironstone. Some of the most well-known and collectable British ironstone manufacturers of the 19th century include:
Church Gresley Pottery
Edge, Malkin, Burslem, Staffordshire
Hartshorne Pottery (founded by James Onions around 1790)
Hartshorne Potteries (founded in 1818 by Joseph Thompson)
Hill Top Works
Old Midway Pottery
Rawdon Pottery
Sharpe Brothers
Spode
Spode and Copeland
Swadlincote Potteries
T&R Boote
Waterloo Pottery
Wooden Box Pottery
Woodville Pottery (founded in 1833 by Thomas Hall and William Davenport)
Woodville Potteries (founded in 1810 by Mr Watts)
United States
In the United States, ironstone ware was being manufactured from the 1850s onward. The earliest American ironstone potters were in operation around Trenton, New Jersey. Before this, white ironstone ware was imported to the United States from England, beginning in the 1840s. Undecorated tableware was most popular in the United States, and British potteries produced white ironstone ware, known as "White Ironstone" or "White Granite" ware, for the American market. During the mid-19th century it was the largest export market for Staffordshire's potteries. In the 1860s, British manufacturers began adding agricultural motifs, such as wheat, to their products to appeal to the American market. These patterns became known as "farmers' china" or "threshers' china". Plain white ironstone ware was widely marketed in the United States until the end of the 19th century.
Notable 19th-century ironstone manufacturers in the United States include:
Empire Pottery
Onondaga Pottery, Syracuse China
Walter Scott Lenox
Homer Laughlin
Types of ironstone ware
Transferware
Transfer-printed designs were applied to ironstone by Mason's in an attempt to copy Chinese porcelain cheaply. Transferware is most often in one colour against a white background, such as blue, red, green or brown. Some patterns included detail colours that were added on top of the main transfer after the glaze had been applied.
Transferware designs range from dense patterns that cover the piece, to small motifs applied sparingly to give a delicate appearance, as with floral motifs.
| Technology | Materials | null |
21510668 | https://en.wikipedia.org/wiki/Pascal%27s%20law | Pascal's law | Pascal's law (also Pascal's principle or the principle of transmission of fluid-pressure) is a principle in fluid mechanics given by Blaise Pascal that states that a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere. The law was established by French mathematician Blaise Pascal in 1653 and published in 1663.
Definition
Pascal's principle is defined as: a change in pressure at any point in an enclosed incompressible fluid at rest is transmitted undiminished to all points in the fluid.
Fluid column with gravity
For a fluid column in a uniform gravity (e.g. in a hydraulic press), this principle can be stated mathematically as:

$\Delta p = \rho g \Delta h$

where $\Delta p$ is the hydrostatic pressure difference between two points in the fluid column, $\rho$ is the fluid density, $g$ is the acceleration due to gravity, and $\Delta h$ is the difference in elevation between the two points.
The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. Alternatively, the result can be interpreted as a pressure change caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures. Therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid.
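As a numerical sketch of the fluid-column formula, using standard values for water (the depths chosen are arbitrary):

```python
# Hydrostatic pressure difference: delta_p = rho * g * delta_h.
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
P_ATM = 101325.0    # standard atmospheric pressure, Pa

def pressure_at_depth(depth_m):
    """Absolute pressure at the given depth under water, in pascals."""
    return P_ATM + RHO_WATER * G * depth_m

for depth in (0.0, 10.0, 20.0):
    print(f"{depth:4.0f} m: {pressure_at_depth(depth) / 1000:.1f} kPa")
# Each 10 m of water adds about 98 kPa, close to one atmosphere.
```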
The formula is a specific case of Navier–Stokes equations without inertia and viscosity terms.
Applications
If a U-tube is filled with water and pistons are placed at each end, pressure exerted by the left piston will be transmitted throughout the liquid and against the bottom of the right piston (the pistons are simply "plugs" that can slide freely but snugly inside the tube). The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. Using $p = F/A$, it follows that $F_1/A_1 = F_2/A_2$. Suppose the tube on the right side is made 50 times wider, so that the right piston has 50 times the area of the left one. If a 1 N load is placed on the left piston, an additional pressure due to the weight of the load is transmitted throughout the liquid and up against the right piston. This additional pressure on the right piston will cause an upward force which is 50 times bigger than the force on the left piston. The difference between force and pressure is important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load, fifty times the load on the smaller piston.
Forces can be multiplied using such a device. One newton input produces 50 newtons output. By further increasing the area of the larger piston (or reducing the area of the smaller piston), forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters. The input force multiplied by the distance moved by the smaller piston is equal to the output force multiplied by the distance moved by the larger piston; this is one more example of a simple machine operating on the same principle as a mechanical lever.
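A minimal sketch of the force and distance relationships just described, with piston areas chosen to reproduce the 1:50 example above:

```python
# Hydraulic press: equal pressure on both pistons gives
# F_out / A_out = F_in / A_in, and energy conservation gives
# d_out = d_in * A_in / A_out (the larger piston moves less).
def press(force_in, area_in, area_out, distance_in):
    pressure = force_in / area_in          # same throughout the fluid
    force_out = pressure * area_out        # multiplied output force
    distance_out = distance_in * area_in / area_out
    return force_out, distance_out

# 1 N on the small piston; the large piston has 50 times the area
# and the small piston is pushed down 1.0 m (areas are assumed values).
f_out, d_out = press(force_in=1.0, area_in=0.01, area_out=0.5,
                     distance_in=1.0)
print(f_out, d_out)  # 50.0 N and 0.02 m: force x50, distance /50
```

The product of force and distance is the same on both sides, so no energy is gained.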
A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations (the hydraulic jack). Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir. The oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved.
Other applications:
Force amplification in the braking system of most motor vehicles.
Used in artesian wells, water towers, and dams.
Scuba divers must understand this principle. Starting from normal atmospheric pressure, about 100 kilopascals (kPa), the pressure increases by about 100 kPa for each 10 m increase in depth.
Pascal's rule is usually applied to confined fluids (static conditions), but because the flow process is continuous, the principle can also be applied to the oil-lift mechanism (which can be represented as a U-tube with pistons on either end).
Pascal's barrel
Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into an (otherwise sealed) barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst.
The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster");
nevertheless the experiment remains associated with Pascal in many elementary physics textbooks.
| Physical sciences | Fluid mechanics | Physics |
3879451 | https://en.wikipedia.org/wiki/Gaming%20computer | Gaming computer | A gaming computer, also known as a gaming PC, is a specialized personal computer designed for playing PC games at high standards. They typically differ from mainstream personal computers by using high-performance graphics cards, a high core-count CPU with higher raw performance and higher-performance RAM. Gaming PCs are also used for other demanding tasks such as video editing. While often in desktop form, gaming PCs may also be laptops or handhelds.
History
Early history
The Nimrod, designed by John Makepeace Bennett, built by Raymond Stuart-Williams and exhibited in the 1951 Festival of Britain, is regarded as the first gaming computer. Bennett did not intend for it to be a real gaming computer, however, as it was supposed to be an exercise in mathematics as well as to prove computers could "carry out very complex practical problems", not purely for enjoyment.
A few years later, game consoles like the Magnavox Odyssey (released in 1972) and the Atari 2600 (released in 1977) laid the groundwork for the future of not just gaming consoles, but gaming computers as well, through their increasing popularity with families everywhere. Long before these, the first "modern" computer had been built in 1942: the Atanasoff–Berry Computer (ABC for short). Unlike modern desktops and laptops, the ABC was a gargantuan machine that occupied "1,800 square feet… weighing almost 50 tons". When the Apple II and the Commodore 64 were released in 1977 and 1982 respectively, personal computers became more appealing for general consumer use.
The Commodore 64 was an affordable and relatively powerful computer for its time in 1982, featuring a MOS Technology 6510 CPU with 64 KB of RAM. It could display up to "40 columns and 25 lines of text" along with 16 colors at a 320x200 resolution.
The Apple II cost around US$1,298 in 1977 ($5,633 adjusted for inflation in 2021) and the Commodore 64 cost around , making them expensive for most consumers. However, their overall computing power, efficiency, and compact size were advanced compared with even the most capable computers of the time.
Since 1990s and current market
IBM PC compatibles have been the dominant types of PCs globally, both in the mainstream and by extension in gaming, since the 1990s. During that decade a number of special PC product lines focused on pre-built gaming desktops were created by OEMs, such as Alienware, formed in 1997 and later the gaming division of Dell, and HP's OMEN division, whose lineage dates back to 1991 under the defunct VoodooPC brand; both continue to be marketed today.
From the mid-1990s as 3D gaming was taking off, companies like 3dfx (with their Voodoo) and Nvidia (with their RIVA 128) advanced the market with their new graphical processing units.
More manufacturers started making gaming PC lines (or were started for this purpose) during the 2000s and 2010s, such as Toshiba's now-defunct Qosmio; Asus's ROG (Republic of Gamers) and TUF; Acer's Predator line; Lenovo's Legion; and Razer. During this time, gaming laptops started to gain popularity. More recently, in the 2020s, portable handheld gaming PCs running full desktop x86 (the de facto standard) platforms have started to gain traction. These began with GPD's Win and Alienware's UFO concept, inspired by the Nintendo Switch (which is not a PC), and have been popularized by Valve Corporation's Steam Deck.
65.1 million gaming products have been sold overall as of 2021, of which 27.9 million are gaming notebooks, 19.7 million are gaming monitors, and 17.5 million are gaming desktops.
Hardware
Technically, any computer can be considered a "gaming computer"; however, the most common ones are typically built around an x86-based CPU with a graphics accelerator card, a sufficient amount of high-performance RAM, and fast storage drives.
In a desktop configuration, a case is also needed, and gaming cases are often modified or manufactured with extra LED lights or see-through panels for aesthetic reasons. Individual components are typically attached to a motherboard through different bus slots, including the CPU, RAM, and graphics card, or wired to it with SATA or IDE cabling (for hard disks or optical drives). Laptops also share a similar format, but with smaller and less power hungry components.
Gamers and computer enthusiasts may choose to overclock their CPUs and GPUs in order to gain extra performance. The added power draw needed to overclock either processing unit often requires additional cooling, usually by air cooling or water cooling.
These configurations mostly date back to the 1990s, when Intel and Microsoft first began to dominate the PC marketplace, and have not changed significantly since then. Hardware specifications continue to improve over time due to the graphical demands of games, especially with architectural and other changes in CPU and GPU designs.
Form factors
Senior editor of Tom's Hardware Andrew Freedman says that "Gaming rigs aren't one-size fits all", and that there are certain instances where a gaming desktop will be more appropriate than a laptop and other circumstances where a laptop is more appropriate than a desktop. Each platform has its pros and cons, which may change depending on a person's needs. For example, someone looking for maximum portability may choose a laptop over a desktop since it is all self-contained in one unit, whereas a desktop setup is split up into multiple components: a monitor, keyboard, mouse, and the desktop itself. Freedman states that laptops are ideal candidates for LAN parties, especially ones equipped with "Nvidia's Max-Q GPUs" which "can easily fit into a backpack and don't pack outrageously large chargers".
Desktop
Gaming desktop computers are the most versatile type of gaming computer. People usually buy gaming PCs because they want the performance expected of them. The majority of this potential lies in desktop parts, which can be overclocked for more performance and can withstand such abuse because of their higher durability. The usually large chassis of a desktop also allows for more fans, improving cooling and heat dissipation, which ultimately leads to better gaming performance.
Pre-built desktops may use "proprietary motherboards that aren't standard sizes". These uniquely shaped motherboards can limit the owner's ability to upgrade components in the future, but owners can still generally change out "the RAM, GPU and… CPU". Razer Inc.'s Project Christine (2014) proposed the use of modules to allow fast replacement of computer parts.
Laptop
Laptop gaming computers make gaming possible on portable machines. The usable space inside a laptop is much more limited than in a desktop. There are also fewer items that can be changed out on a laptop, such as RAM and storage, compared to a desktop, where almost all the components, including motherboards and CPUs, can be swapped out for the latest technology available at the time.
Handheld
Handheld PCs built for gaming are a relatively recent form factor. Due to their mobile chassis, they are the most limited types of gaming computers as components generally can't be upgraded. Handheld gaming PCs may come with a physical keyboard or may discard it entirely to be styled like a handheld gaming console.
Build types
As stated before, there are trade-offs PC gamers take into account when deciding to build their own unit versus buying a pre-built one. There are not many options when it comes to the laptop configuration, but they do exist. Jason Clarke, a contributor to Chillblast, mentioned that a number of builders deal specifically with laptops, with some adding configurable features that were not originally there, such as the ability to change CPUs and GPUs. These builders build from scratch, and the possibility of changing out CPUs and GPUs after they have been installed is slim. Clarke also advised that people should not, and generally cannot, build their own laptops because of how complex and compact everything is.
Many PC gamers and journalists, like Clarke and Freedman, advise people to start with gaming desktops as they are the way to go when seeking pure performance. Pre-built desktops like Alienware's Aurora R11 are ready-to-go systems with a history behind them, but some claim that their systems are over-priced. This is mainly due to the cost of building the PC and ease of access for components for the consumer. Marshall Honorof, a writer for Tom's Guide, explains that the steps on how to build a gaming PC from scratch "can be a daunting process, particularly for newcomers" but it could be one of the best technological decisions someone can make. According to his research, Honorof found that $1,500 is enough to buy a "powerful, but not quite top-of-the-line" computer and one can choose his or her own components.
| Technology | Computer hardware | null |
3879598 | https://en.wikipedia.org/wiki/Tension%20%28physics%29 | Tension (physics) | Tension is the pulling or stretching force transmitted axially along an object such as a string, rope, chain, rod, truss member, or other object, so as to stretch or pull apart the object. In terms of force, it is the opposite of compression. Tension might also be described as the action-reaction pair of forces acting at each end of an object.
At the atomic level, when atoms or molecules are pulled apart from each other and gain potential energy with a restoring force still existing, the restoring force might create what is also called tension. Each end of a string or rod under such tension could pull on the object it is attached to, in order to restore the string/rod to its relaxed length.
Tension (as a transmitted force, as an action-reaction pair of forces, or as a restoring force) is measured in newtons in the International System of Units (or pounds-force in Imperial units). The ends of a string or other object transmitting tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are also called "passive forces". There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration, and therefore a net force is present in the system.
Tension in one dimension
Tension in a string is a non-negative vector quantity. Zero tension is slack. A string or rope is often idealized as one dimension, having fixed length but being massless with zero cross section. If there are no bends in the string, as occur with vibrations or pulleys, then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's third law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have constant tension along its length in the idealized situation that the pulleys are massless and frictionless. A vibrating string vibrates with a set of frequencies that depend on the string's tension. These frequencies can be derived from Newton's laws of motion. Each microscopic segment of the string pulls on and is pulled upon by its neighboring segments, with a force equal to the tension at that position along the string.
If the string has curvature, then the two pulls on a segment by its two neighbors will not add to zero, and there will be a net force on that segment of the string, causing an acceleration. This net force is a restoring force, and the motion of the string can include transverse waves that solve the equation central to Sturm–Liouville theory:
$-\frac{d}{dx}\left[\tau(x)\,\frac{d\rho(x)}{dx}\right] = \omega^2\,\sigma(x)\,\rho(x)$

where $\tau(x)$ is the force constant per unit length [units force per area], $\sigma(x)$ is the mass density per unit length, $\rho(x)$ is the transverse displacement, and $\omega^2$ are the eigenvalues for resonances of transverse displacement on the string, with solutions that include the various harmonics on a stringed instrument.
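For a uniform string with constant tension $T$ and linear mass density $\mu$, this eigenvalue problem reduces to the familiar harmonic series $f_n = \frac{n}{2L}\sqrt{T/\mu}$. A minimal numerical sketch, with string parameters that are assumed, roughly guitar-like values:

```python
# Harmonics of a uniform string fixed at both ends:
# f_n = (n / (2 * L)) * sqrt(T / mu), which follows from the wave
# equation above when tension and mass density are constant.
from math import sqrt

def harmonic_frequencies(tension, mass_per_length, length, n_max=4):
    """First n_max harmonic frequencies, in hertz."""
    return [n / (2 * length) * sqrt(tension / mass_per_length)
            for n in range(1, n_max + 1)]

# Assumed values: 70 N tension, 0.6 g/m density, 0.65 m scale length.
for n, f in enumerate(harmonic_frequencies(70.0, 6e-4, 0.65), start=1):
    print(f"harmonic {n}: {f:.1f} Hz")
```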
Tension of three dimensions
Tension is also used to describe the force exerted by the ends of a three-dimensional, continuous material such as a rod or truss member. In this context, tension is analogous to negative pressure. A rod under tension elongates. The amount of elongation and the load that will cause failure both depend on the force per cross-sectional area rather than the force alone, so stress = axial force / cross sectional area is more useful for engineering purposes than tension. Stress is a 3x3 matrix called a tensor, and the element of the stress tensor is tensile force per area, or compression force per area, denoted as a negative number for this element, if the rod is being compressed rather than elongated.
Thus, one can obtain a scalar analogous to tension by taking the trace of the stress tensor.
System in equilibrium
A system is in equilibrium when the sum of all forces is zero.
For example, consider a system consisting of an object that is being lowered vertically by a string with tension, T, at a constant velocity. The system has a constant velocity and is therefore in equilibrium because the tension in the string, which is pulling up on the object, is equal to the weight force, mg ("m" is mass, "g" is the acceleration caused by the gravity of Earth), which is pulling down on the object.
System under net force
A system has a net force when an unbalanced force is exerted on it, in other words the sum of all forces is not zero. Acceleration and net force always exist together.
For example, consider the same system as above, but suppose the object is now being lowered with an increasing velocity downwards (positive acceleration), so there exists a net force somewhere in the system. In this case, taking downward as negative, the negative acceleration would indicate that $mg > T$.
In another example, suppose that two bodies A and B, having masses $m_A$ and $m_B$ respectively, are connected to each other by an inextensible string over a frictionless pulley. There are two forces acting on body A: its weight ($m_A g$) pulling down, and the tension $T$ in the string pulling up. Therefore, the net force on body A is $m_A g - T$, so $m_A a = m_A g - T$. In an extensible string, Hooke's law applies.
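A short sketch of this pulley example, solving for the common acceleration and the string tension (the masses used are arbitrary assumed values):

```python
# Atwood machine: masses m_a and m_b over a frictionless, massless
# pulley. Newton's second law on each body gives
#   m_a * a = m_a * g - T   and   m_b * a = T - m_b * g,
# which solve to the expressions below.
G = 9.81  # m/s^2

def atwood(m_a, m_b):
    a = (m_a - m_b) * G / (m_a + m_b)  # common acceleration
    t = m_a * (G - a)                  # tension, uniform along string
    return a, t

a, t = atwood(3.0, 2.0)  # assumed masses, in kg
print(f"a = {a:.2f} m/s^2, T = {t:.2f} N")
# With equal masses, a = 0 and T = m * g: the equilibrium case above.
```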
Strings in modern physics
String-like objects in relativistic theories, such as the strings used in some models of interactions between quarks, or those used in the modern string theory, also possess tension. These strings are analyzed in terms of their world sheet, and the energy is then typically proportional to the length of the string. As a result, the tension in such strings is independent of the amount of stretching.
| Physical sciences | Solid mechanics | Physics |
3881049 | https://en.wikipedia.org/wiki/Chongqing%20Rail%20Transit | Chongqing Rail Transit | The Chongqing Rail Transit (branded as CRT; also known as Chongqing Metro) is the rapid transit system in the city of Chongqing, China. In operation since 2005, it serves the transportation needs of the city's main business and entertainment downtown areas and inner suburbs. , CRT consisted of eleven lines, with a total track length of . Lines 1, 4, 5, 6, 9, 10, 18, the Loop line and Jiangtiao line are conventional heavy-rail metro lines, while Lines 2 and 3 are high-capacity monorails. To keep up with urban growth, construction is under way on Line 18 and several other lines, in addition to extensions to Lines 5, 6 and 10.
The Chongqing Rail Transit is unique among transit systems in China because of Chongqing's geography: a densely populated but mountainous city with multiple river valleys. Two lines use heavy monorail technology, combining the ability to negotiate steep grades and tight curves with rapid transit capacity; they are capable of transporting 32,000 passengers per hour per direction, though the busiest section of Line 3 reached a peak passenger volume of 37,700 pphpd in 2019. At , the system's two monorail lines form the longest monorail system in the world, with Line 3 being the world's longest single monorail line even if the Konggang branch is excluded. The length and capacity of its monorail network also make it the world's busiest monorail system, with a total of 94 million and 250 million rides in 2015 on Line 2 and Line 3, respectively. The latter ridership statistic also makes Line 3 the world's busiest single monorail line.
The extreme difference in elevation between the river valleys and the hilly plateaus of Chongqing poses a unique challenge in designing alignments for conventional rail transit lines. The network currently has the world's highest metro-only bridge, the Caijia Rail Transit Bridge for Line 6, spanning the Jialing River valley, with the bridge deck approximately above the water. Hongyancun station is the deepest metro station in China and in the world, reaching below the surface and surpassing the Kyiv Metro's Arsenalna station. Hongtudi station and Liyuchi station, both on Line 10, are the second and third deepest stations in China, being and below the surface respectively. Additionally, Hualongqiao station is a six-story structure with Line 9 trains stopping 48 meters above the surface, making it the tallest metro station in the world, surpassing Smith–Ninth Streets station in New York.
The Chongqing Rail Transit system possesses a number of extremely long metro-only bridges. The long Egongyan Rail Transit Bridge carries the southern arc of the Loop line across the Yangtze River on a long suspension main span, making it the longest metro-only suspension bridge by main span in the world. The Nanjimen Bridge carries Line 10 trains across a cable-stayed bridge with a main span of , making it the longest metro-only cable-stayed bridge by main span in the world. The Gaojia Huayuan Jialing River Rail Transit Bridge carries the western arc of the Loop line over the Jialing River on a long bridge with a main span of . Additionally, the Chongqing Rail Transit system has numerous double-deck bridges carrying vehicle and metro traffic, such as the Chaotianmen Bridge, which is the world's longest arch bridge.
Network
Loop line
The Loop line (coded as "Line 0") is a rapid transit loop line. The northeastern section was opened on 28 December 2018. The southern section with the Egongyan Rail Transit Bridge opened on 30 December 2019. Three major railway stations in Chongqing are also linked by this line: Chongqing North railway station, Shapingba railway station, and Chongqing West railway station. Loop Line's color is yellow.
Line 1
Line 1 runs from Chaotianmen, in the central west, to Shapingba and then to Bishan, with a total length of . It is the first heavy-rail subway line in Chongqing and the second in Western China. The passenger capacity is 36,000 passengers per hour in each direction. The line serves as the system's backbone, connecting the densest areas, including the main central business districts of Jiefangbei, Lianglukou, Daping, and Shapingba. It is the first conventional subway, running in a deep-bored tunnel below Yuzhong and Shapingba Districts.
Line 1 has interchange stations with Line 6 at Xiaoshizi and with Line 2 at Jiaochangkou, both in the Jiefangbei CBD, as well as with Line 2 at Daping and with Line 3 at Lianglukou, which is near Chongqing railway station in central Yuzhong. Line 1 is also transferable with the Loop line at Shapingba, although an out-of-station transfer is currently needed due to construction setbacks on the interchange channel and concourse connecting the two subway lines.
In 1992, the Chongqing government signed a Build-Operate-Transfer agreement with a Hong Kong company and provided the land for the project, but work ceased in 1997 because of legal issues. Work resumed from Chaotianmen to Shapingba on 9 June 2009, and a limited opening occurred on 28 July 2011. Thales provided an operations control centre for the line. Line 1's color is red.
Line 2
Line 2, a monorail line, runs and has 25 stations. It begins as a subway under downtown Jiefangbei, then runs west along the southern bank of the Jialing River on an elevated alignment, and then turns south into the southwestern inner suburbs, looping back east to terminate at Yudong, in Ba'nan District. It runs mostly elevated, but a section is underground, including three of its 18 original stations in the Jiefangbei CBD and central Daping areas in the extremely dense part of Yuzhong District. Line 2 runs through four administrative districts in the central city (Yuzhong, Jiulongpo, Dadukou, and Ba'nan). In 2010, Line 2 served 45 million passengers. It also runs through the Daping CBD and the Yangjiaping CBD in Jiulongpo District, and past Chongqing Zoo. Most trains have four cars, and six-car trains began to operate in September 2012. Line 2 was the first rapid transit line to open in the Interior West of China, in 2005. In 2013, more six-car trains were brought into service because of overcrowding and increasing demand. Line 2's color is green.
Line 3
Line 3 is the longest and busiest monorail in the world. It runs from north to south and links the districts separated by the Yangtze (Chang Jiang) and the Jialing Rivers. The initial segment, from Lianglukou to Yuanyang (18 stations, ), opened on 29 September 2011, with a northern extension, from Yuanyang to Jiangbei Airport, opening on 30 December 2011. A southern extension, from Ertang to Yudong, opened on 28 December 2012.
Most trains have six cars, more than on the older Line 2. The line began adding eight-car trains in 2014, which are now in operation. There are interchange stations in Yuzhong District with Line 1, at Lianglukou (Caiyuanba Intercity Railway/Coach Station), and with Line 2, at . Line 3's color is indigo.
Line 4
Line 4 is a rapid transit line. In June 2018, debugging of the first segment of Phase I commenced. The line began operating on 28 December that year. Line 4's color is orange.
Line 5
Line 5 is a northeast–southwest heavy-rail line crossing the centre; the northern and southern sections of Phase 1 and the northern extension have opened. It will connect the Yubei, Jiangbei, Yuzhong, Jiulongpo, Shapingba and Dadukou districts. New six-car trains were introduced on the line. Line 5's color is light blue.
Line 6
Line 6 is the second heavy-rail subway line of Chongqing. Opened on 28 September 2012, it connects Nan'an, Yuzhong, Jiangbei and Yubei districts in central Chongqing.
A northern branch, from Lijia to Wulukou, Beibei District, opened on 31 December 2013, long, with five stations. Phase 1 of the Chayuan extension opened in 2014. Thales provided an operations control centre for the line. Line 6's color is pink.
Line 9
The first phase of Line 9 opened on 25 January 2022. Line 9's colour is crimson.
Line 10
The line serves the North Railway station and the airport terminals. The first phase (Liyuchi to Wangjiazhuang) opened on 28 December 2017, and the second phase will connect Yuzhong and Nan'an districts by crossing the Jialing and the Yangtze rivers. Two new bridges, Zengjiayan Jialing River Bridge and Nanjimen Rail Transit Bridge, are under construction for train services to the south. Line 10's color is purple.
Jiangtiao line
Jiangtiao line is a suburban rapid transit line connecting Jiangjin District with the metropolitan area. There will be through service between Line 5 and Jiangtiao line in the future. Jiangtiao line's color is blue.
Ticketing
Transport cards
CRT accepts the Life & Transport Card (Chongqing Universal Card, issued by Chongqing City Card Payment Co., Ltd.) and compatible cards issued by partner companies in other Chinese cities. A 10% discount applies to the Regular Card when it is used on public transit in the city. For transfers between bus and metro within one hour (not including metro-to-metro, based on payment time), the higher of the two fares is charged. The Regular Card can be purchased at any CRT station, and the deposit is refunded when the card is returned with its receipt. In addition, the card can be used in many shops, cinemas, restaurants, etc. in Chongqing. The Students' Card and the Elders' Card cannot be used directly on the metro, since their monthly fee covers only buses, unless a cash sub-account, which allows a 50% discount, is added to the card for free at the service points.
Time limit
All trips must be completed in 3 hours upon entering the fare-paid area, or the highest ticket price in the system will be charged in addition.
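A hypothetical sketch of how such a rule could be expressed in code; the function name, fare values, and surcharge logic below are illustrative assumptions, not CRT's actual fare system:

```python
# Hypothetical illustration of the 3-hour rule: a trip that exceeds
# the limit is charged the system's highest ticket price in addition
# to the normal fare. All names and values are assumptions.
TIME_LIMIT_S = 3 * 60 * 60  # three hours, in seconds
MAX_FARE = 10.0             # assumed highest ticket price

def total_charge(normal_fare, entry_time_s, exit_time_s):
    """Fare charged on exit, including any overstay surcharge."""
    charge = normal_fare
    if exit_time_s - entry_time_s > TIME_LIMIT_S:
        charge += MAX_FARE  # overstay surcharge
    return charge

print(total_charge(4.0, 0, 2 * 3600))  # 4.0  (within the limit)
print(total_charge(4.0, 0, 4 * 3600))  # 14.0 (limit exceeded)
```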
Operation
During times of heavy use, such as major events, CRT may close some stations to avoid overcrowding. In 2018, CRT closed the Xiaoshizi, Jiaochangkou, Qixinggang, Lianglukou, Xiaolongkan, and Shapingba stations of Line 1; the Jiaochangkou and Linjiangmen stations of Line 2; the Lianglukou, Huaxinjie, Guanyinqiao, and Hongqihegou stations of Line 3; and the Shangxinjie, Xiaoshizi, Grand Theater, Jiangbeicheng, and Hongqihegou stations of Line 6 after 20:00 on Christmas Eve and Christmas Day, and after 19:00 on New Year's Eve. It also closed the Shapingba and Shangxinjie stations of the Loop line after 19:00 on New Year's Eve.
From 9 to 12 November 2018, CRT closed the Grand Theater and Jiangbeicheng stations from 10:00 to 15:00 because of heavy use during the Flower Expo; from 1 to 7 November 2019, it closed the same stations from 10:00 to 16:00 for the same reason.
Accessibility
Almost every station has accessible elevators and toilets, and almost every train has wheelchair locks. Only the oldest rolling stock and toilets of Line 2 are not fully accessible. In addition, many older interchange channels between lines are not designed with accessibility in mind, which means the disabled there must transfer via the main concourse.
Luggage rack
The trains on Line 10, which links Jiangbei Airport and Chongqing North railway station, are equipped with a luggage rack on each car.
History
The CRT is part of the central government's project to develop the Western regions. The Japan Bank for International Cooperation provided some of the funding. Construction was carried out in co-operation between Changchun Railway Vehicles Co., Ltd. and Hitachi Monorail, using advanced Japanese monorail technology. Construction on Line 2 began in 1999, and the line was officially opened in June 2005 from Jiaochangkou (Jiefangbei CBD) to Zoo (Chongqing Zoo).
Early concepts and attempts
1946 plan: The Nationalist government made a plan for a high-speed tram system. The rail weighed 47.77 kg/m, with a rail gauge of 1000 mm, a maximum slope of 9%, and a minimum radius of curvature of . The top speed was in the urban area and in the suburban area. Each train was 8 m long and 1.8 m wide, with two 35-horsepower motors and a trailer, and took 240 passengers. The headway was designed to be 10 minutes. The system was expected to carry 1 million passengers per day. Some of the tracks were underground.
Line A, Longmenhao – Ciqikou, 9 stations,
Line B, Longmenhao – Nanwenquan, 7 stations,
Line C, Longmenhao – Datiankan, 3 stations,
1958 attempt: "Yuzhong District Subway Engineering Unit" was started in late 1958, only to be suspended one year later.
1960 plan: A underground rapid rail transit system, linking the city center with Xinpaifang, Xiaolongkan, Yangjiaping, Shiqiaopu, Lianglukou, and other populated areas, was planned.
1965 attempt: The unit was reinstated. It had 4 units, with more than 1000 workers in total. Construction was stopped again in late 1966 by the Cultural Revolution, and the unit was officially disbanded in 1971. The completed tunnel sections were taken over by the civil air defense authorities.
1983 plan: A subway line (Chaotianmen – Yangjiaping) was planned. It is the precursor to today's Line 2.
1988 attempt: Some Hong Kong businessmen arrived to start a metro company in Lianglukou. The tunnels from previous attempts were extended.
1991 plan: A 4-line monorail system was planned.
Official long-term plans
1998 plan: Has 5 lines in total, with a length of about .
2003 and 2007 plans: Two similar expansions, each including 10 lines, with a total length of about . Line 4 in the previous blueprint received a major update and was renamed the Loop line, in accordance with its new shape.
2011 plan: Features 8 new lines, with a length of about .
2019 plan: Targets a 30-line network by 2050, with a length of about .
Commencement and expansions
Incidents
At around 14:00 on 8 January 2019, an improperly secured air defense lock was struck by an in-service Loop line train, derailing it and causing serious damage to the cab car. The accident injured three employees and one passenger. One of the employees, the driver, died of their injuries shortly after being taken to hospital.
Technology
Visual design
Unlike most metro systems of other cities in China, CRT did not follow the design style of MTR Corporation in Hong Kong. The signage system was designed by GK Design Group in Japan, and the monorail lines are based on Hitachi Monorail technology. That gives the Chongqing Rail Transit a distinctive Japanese aesthetic, in contrast to other metro systems in China.
CRT also gave each line a theme about the local culture, and the stations on the line will have some art works in the theme.
Expansion
CRT is expected to have 8 lines criss-crossing the urban districts by 2020 and a loop line connecting the commercial areas in the urban area. The other 9 lines are expected to be in operation by 2050.
Phase 3 projects
Phase 4 projects
The short-term plan, including Line 4 (western extension), Line 6 (extension to Chongqing East Station), Lines 7, 15, 17, 24, 27 and Line 18 (Phase 2), was approved by the NDRC. Construction on several lines started in March 2021. In April 2021, Lines 7 and 17 were redesigned from mostly elevated heavy monorails (similar to Lines 2 and 3) into conventional underground Type A metro lines akin to Lines 4, 5, 9 and 10.
| Technology | China | null |
3882225 | https://en.wikipedia.org/wiki/Erythroxylum | Erythroxylum | Erythroxylum is a genus of tropical flowering plants in the family Erythroxylaceae. Many of the approximately 200 species contain the tropane alkaloid cocaine, and two of the species within this genus, Erythroxylum coca and Erythroxylum novogranatense, both native to South America, are the main commercial source of cocaine and of the mild stimulant coca tea. Another species, Erythroxylum vaccinifolium (also known as catuaba) is used as an aphrodisiac in Brazilian drinks and herbal medicine.
Erythroxylum species are food sources for the larvae of some butterflies and moths, including several Morpho species and Dalcera abrasa, which has been recorded on E. deciduum, and the species of Agrias.
Species
Kew's Plants of the World Online listed 259 species:
| Biology and health sciences | Malpighiales | Plants |
1436148 | https://en.wikipedia.org/wiki/Chital | Chital | The chital or cheetal (Axis axis; ), also known as the spotted deer, chital deer and axis deer, is a deer species native to the Indian subcontinent. It was first described and given a binomial name by German naturalist Johann Christian Polycarp Erxleben in 1777. A moderate-sized deer, male chital reach and females at the shoulder. While males weigh , females weigh around . It is sexually dimorphic; males are larger than females, and antlers are present only on males. The upper parts are golden to rufous, completely covered in white spots. The abdomen, rump, throat, insides of legs, ears, and tail are all white. The antlers, three-pronged, are nearly long.
Etymology
The vernacular name "chital" (pronounced ) comes from cītal (), derived from the Sanskrit word (चित्रल), meaning "variegated" or "spotted". The name of the cheetah has a similar origin. Variations of "chital" include "cheetal" and "cheetul". Other common names for the chital are Indian spotted deer (or simply the spotted deer) and axis deer.
Taxonomy and phylogeny
The chital was first described by Johann Christian Polycarp Erxleben in 1777 as Cervus axis. In 1827, Charles Hamilton Smith placed the chital in its own subgenus Axis under the genus Cervus. Axis was elevated to generic status by Colin P. Groves and Peter Grubb in 1987. The genus Hyelaphus was considered a subgenus of Axis. However, a morphological analysis showed significant differences between Axis and Hyelaphus. A phylogenetic study later that year showed that Hyelaphus is closer to the genus Rusa than Axis. Axis was revealed to be paraphyletic and distant from Hyelaphus in the phylogenetic tree; the chital was found to form a clade with the barasingha (Rucervus duvaucelii) and the Schomburgk's deer (Rucervus schomburgki). The chital was estimated to have genetically diverged from the Rucervus lineage in the Early Pliocene about . The following cladogram is based on a 2006 phylogenetic study:
Fossils of extinct Axis species dating to the early to Middle Pliocene were excavated from Iran in the west to Indochina in the east. Remains of the chital were found in the Middle Pleistocene deposits of Thailand along with sun bear, Stegodon, gaur, wild water buffalo and other living and extinct mammals.
Description
The chital is a moderately sized deer. Males reach up to and females at the shoulder; the head-and-body length is around . While immature males weigh , the lighter females weigh . Mature stags can weigh up to . The tail, long, is marked by a dark stripe that stretches along its length. The species is sexually dimorphic; males are larger than females, and antlers are present only on males.
The dorsal (upper) parts are golden to rufous, completely covered in white spots. The abdomen, rump, throat, insides of legs, ears, and tail are all white. A conspicuous black stripe runs along the spine (back bone). The chital has well-developed preorbital glands (near the eyes) with stiff hairs. It also has well-developed metatarsal glands and pedal glands located in its hind legs. The preorbital glands, larger in males than in females, are frequently opened in response to certain stimuli.
Each of the antlers bears three tines. The brow tine (the first division in the antler) is roughly perpendicular to the beam (the central stalk of the antler). The antlers, three-pronged, are nearly long. Antlers, as in most other cervids, are shed annually. The antlers emerge as soft tissues (known as velvet antlers) and progressively harden into bony structures (known as hard antlers), following mineralisation and blockage of blood vessels in the tissue, from the tip to the base. A study of the mineral composition of the antlers of captive barasingha, chital, and hog deer showed that the antlers of the three deer are very similar. The mineral content of the chital's antlers was determined to be (per kg) copper, cobalt, and zinc.
Hooves measure between in length; hooves of the fore legs are longer than those of the hind legs. The toes taper to a point. The dental formula is , same as the elk. The milk canine, nearly long, falls off before one year of age, but is not replaced by a permanent tooth as in other cervids. Compared to the hog deer, the chital has a more cursorial build. The antlers and brow tines are longer than those in the hog deer. The pedicles (the bony cores from which antlers arise) are shorter, and the auditory bullae are smaller in the chital. The chital may be confused with the fallow deer. Chital have several white spots, whereas fallow deer usually have white splotches. Fallow also have palmate antlers whereas chital have 3 distinct points on each side. The chital has a prominent white patch on its throat, while the throat of the fallow deer is completely white. The biggest distinction is the dark brown stripe running down the chital's back. The hairs are smooth and flexible.
Distribution and habitat
The chital ranges over 8–30°N in India, Nepal, Bhutan, Bangladesh and Sri Lanka. The western limit of its range is eastern Rajasthan and Gujarat; its northern limit runs through the Terai and northern West Bengal, Sikkim to western Assam and forested valleys in Bhutan below an elevation of . It also occurs in the Sundarbans and some eco parks around the Bay of Bengal, but is locally extinct in central and north-eastern Bangladesh. The Andaman and Nicobar Islands and Sri Lanka are the southern limits of its distribution. It occurs sporadically in forested areas throughout the Indian peninsula.
Australia
The chital was the first species of deer introduced into Australia in the early 1800s by John Harris, surgeon to the New South Wales Corps, and he had about 400 of these animals on his property by 1813. These did not survive, and the primary range of the chital is now confined to a few cattle stations in North Queensland near Charters Towers and several feral herds on the NSW north and south coasts. While some of the stock originated from Sri Lanka (Ceylon), the Indian race likely is also represented.
United States
In the 1860s, chital were introduced to the island of Molokai, Hawaii, as a gift from Hong Kong to King Kamehameha V. By 2021, there were approximately 50,000 to 70,000 Axis deer on Molokai, as opposed to a human population of 7,500 people. During a drought that extended into 2021, hundreds of the deer died of starvation.
Chital were also introduced to Lanai island, and soon became plentiful on both islands. They were introduced to Maui island in the 1950s to increase hunting opportunities. Because the chital has no natural predators on the Hawaiian islands, the population had been growing by 20 to 30% each year, causing serious damage to agriculture and natural areas. To help control the excess population on Maui, a company called Maui Nui was founded in 2017 to hunt the deer and sell venison. In 2022, the company took 9,526 deer and sold the resulting venison. The deer are harvested at night using infrared technology, accompanied by a USDA representative.
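To put the reported 20 to 30% annual growth in perspective, a brief back-of-the-envelope calculation (mine, not taken from the sources above): a population growing at a constant annual rate r multiplies by (1 + r) each year, so

    N(t) = N_0 (1 + r)^t, \qquad T_{\text{double}} = \frac{\ln 2}{\ln(1 + r)}

which gives a doubling time of about ln 2 / ln 1.2 ≈ 3.8 years at 20% growth and ln 2 / ln 1.3 ≈ 2.6 years at 30%; left unchecked, such a herd doubles roughly every three to four years.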
Releasing them on the island of Hawaii was planned, but was abandoned after pressure from scientists over damage to landscapes caused by the chital on other islands. In 2012, chital were spotted on the island of Hawaii; wildlife officials think that people had flown them by helicopter and transported them by boat onto the island. In August 2012, a helicopter pilot pleaded guilty to transporting four chital from Maui to Hawaii. Hawaii law now prohibits "the intentional possession or interisland transportation or release of wild or feral deer."
In 1932, chital were introduced to Texas. In 1988, self-sustaining herds were present in 27 counties in Central and South Texas. The chital is most populous on the Edwards Plateau.
Croatia
Chital of unknown origin were introduced to the islands of Brijuni in 1911. They also live on Rab Island. The population on the islands comprised about 200 individuals as of 2010. Attempts by hunters to introduce the species to the mainland of Croatia were unsuccessful.
Colombia
There have been sightings of herds of introduced chital in an interandean valley near the municipality of Puerto Triunfo in Antioquia Department.
Behaviour and ecology
Chital are active throughout the day. In the summer, time is spent resting in shade, and the sun's glare is avoided once temperatures climb; activity peaks as dusk approaches. As days grow cooler, foraging begins before sunrise and peaks by early morning. Activity slows down during midday, when the animals rest or loiter about slowly. Foraging recommences by late afternoon and continues till midnight. They fall asleep a few hours before sunrise, typically in the forest, which is cooler than the glades. When on the move, typically in search of food and water, these deer travel in single file along specific tracks, keeping a distance of two to three times their body width between individuals. A study in the Gir National Park (Gujarat, India) showed that chital travel the most in summer of all seasons.
When cautiously inspecting its vicinity, the chital stands motionless and listens with rapt attention, facing the potential danger, if any. This stance may be adopted by nearby individuals as well. As an antipredator measure, chital flee in groups (unlike the hog deer, which disperse on alarm); sprints are often followed by hiding in dense undergrowth. The running chital has its tail raised, exposing the white underparts. The chital can leap over high fences but prefers to dive under them, and it stays close to cover.
A gregarious animal, the chital forms matriarchal herds comprising an adult female and her offspring of the previous and the present year, which may be associated with individuals of any age and either sex, male herds, and herds of juveniles and mothers. Small herds are common, though aggregations of as many as 100 individuals have been observed. Groups are loose and disband frequently, save for the juvenile-mother herd. Herd membership in Texas is typically up to 15; herds can have five to 40 members in India. Studies in the Nallamala Hills (Andhra Pradesh, India) and the Western Ghats (western coast of India) showed seasonal variation in the sex ratio of herds; this was attributed to the tendency of females to isolate themselves ahead of parturition. Similarly, rutting males leave their herds during the mating season, hence altering the herd composition. Large herds are most common in monsoon, observed foraging in the grasslands.
Predators of chitals include Indian wolves, tigers, Asiatic lions, leopards, pythons, dholes, Indian pariah dogs, and crocodiles. Fishing cats, jungle cats, foxes, golden jackals and eagles target juveniles. Males are less vulnerable than females and juveniles.
A vocal animal, the chital, akin to the North American elk, gives out bellows and alarm barks. Its calls are, however, not as strong as those of elk or red deer; they are mainly coarse bellows or loud growls. Bellowing coincides with rutting. Dominant males guarding females in oestrus make high-pitched growls at less powerful males. Males may moan during aggressive displays or while resting. Chital, mainly females and juveniles, bark persistently when alarmed or if they encounter a predator. Fawns in search of their mother often squeal. The chital can respond to the alarm calls of several animals, such as the common myna and langurs.
Marking behaviour is pronounced in males, which have well-developed preorbital glands (near the eyes). They stand on their hind legs to reach tall branches and rub the open preorbital glands to deposit their scent there. This posture is also used while foraging. Urine marking is also observed; the smell of urine is typically stronger than that of the deposited scent. Sparring between males begins with the larger male displaying his dominance before the other; this display consists of hissing while moving away from the other male with the tail facing him, the nose pointing to the ground, the ears down, the antlers upright, and the upper lip raised. The fur often bristles during the display. The male approaches the other in a slow gait. Males with velvet antlers may hunch over instead of standing erect as males with hard antlers do. The opponents then interlock their antlers and push against each other, with the smaller male at times producing a sound louder than that made by sambar deer, though not as loud as the barasingha's. The fight terminates with the males stepping backward, or simply leaving and foraging. Fights are not generally serious.
Individuals may occasionally bite one another. Common mynas are often attracted to the chital. An interesting relationship has been observed between herds of chital and troops of the northern plains grey langurs, a widespread South Asian monkey. Chital benefit from the langurs' eyesight and ability to post a lookout from trees, while the langurs benefit from the chital's strong sense of smell; together these help keep a check on potential danger. The chital also benefit from fruits dropped by langurs from trees such as Terminalia bellirica and Phyllanthus emblica. The chital has been observed foraging with sambar deer in the Western Ghats.
Diet
Grazers as well as browsers, the chital mainly feed on grasses throughout the year. They prefer young shoots; in their absence, tall and coarse grasses are nibbled off at the tips. Browse forms a major portion of the diet only in the winter (October to January), when the grasses, tall or dried up, are no longer palatable. Browse includes herbs, shrubs, foliage, fruits, and forbs; Moghania species are often preferred while browsing. Fruits eaten by chital in the Kanha National Park (Madhya Pradesh, India) include those of Ficus species from January to May, Cordia myxa from May to June, and Syzygium cumini from June to July. Individuals tend to group together and forage while moving slowly. Chital are generally silent when grazing together. Males often stand on their hind legs to reach tall branches. Water holes are visited nearly twice daily, with great caution. In the Kanha National Park, mineral licks rich in calcium and phosphorus pentoxide are scraped at with the incisors. Chital also gnaw bones and fallen antlers for their minerals; males in velvet indulge in such osteophagia to a greater extent. Chital in the Sundarbans may be omnivores; remains of red crabs have been found in the rumen of individuals.
Reproduction
Breeding takes place throughout the year, with peaks that vary geographically. Sperm is produced year-round, though testosterone levels register a fall during the development of the antlers. Females have regular oestrus cycles, each lasting three weeks. The female can conceive again two weeks to four months after the birth. Males sporting hard antlers are dominant over those in velvet or those without antlers, irrespective of their size. Courtship is based on tending bonds. A rutting male fasts during the mating season and follows and guards a female in oestrus. The pair does several bouts of chasing and mutual licking before copulation.
The newborn is hidden for a week after birth, a period much shorter than in most other deer. The mother-fawn bond is not very strong, as the two are often separated, though they reunite easily because the herds are cohesive. If the fawn dies, the mother can breed again so as to give birth twice that year. Males continue to grow until seven to eight years of age. The average lifespan in captivity is nearly 22 years; longevity in the wild, however, is merely five to ten years.
The chital is found in large numbers in dense deciduous or semi-evergreen forests and open grasslands. The highest numbers of chital are found in the forests of India, where they feed upon tall grass and shrubs. Chital have been also spotted in Phibsoo Wildlife Sanctuary in Bhutan, which has the only remaining natural sal (Shorea robusta) forest in the country. They do not occur at high altitudes, where they are usually replaced by other species such as the sambar deer. They also prefer heavy forest cover for shade and avoid direct sunlight.
Conservation status
The chital is listed on the IUCN Red List as least concern "because it occurs over a very wide range within which there are many large populations". Currently, no range-wide threats to chitals are present, and they live in many protected areas. However, population densities are below ecological carrying capacity in many places due to hunting and competition with domestic livestock. Hunting for the deer's meat has caused substantial declines and local extinctions. The axis deer is protected under Schedule III of the Indian Wildlife Protection Act (1972) and under the Wildlife (Preservation) (Amendment) Act, 1974 of Bangladesh. Two primary reasons for its good conservation status are its legal protection as a species and a network of functioning protected areas.
The chital has been introduced to the Andaman Islands, Argentina, Australia, Brazil, Chile, Mexico, Paraguay, Uruguay, Alabama, Point Reyes National Seashore in California, Florida, Hawaii, Mississippi, and Texas in the United States, and the Veliki Brijun Island in the Brijuni Archipelago of the Istrian Peninsula in Croatia.
With effect from 2 August 2022, the European Union added the chital to the list of invasive alien species and banned its import into the EU.
| Biology and health sciences | Deer | Animals |
1437123 | https://en.wikipedia.org/wiki/Red-billed%20quelea | Red-billed quelea | The red-billed quelea (Quelea quelea), also known as the red-billed weaver or red-billed dioch, is a small, migratory, sparrow-like bird of the weaver family, Ploceidae, native to Sub-Saharan Africa.
It was named by Linnaeus in 1758, who considered it a bunting, but Ludwig Reichenbach assigned it in 1850 to the new genus Quelea. Three subspecies are recognised, with Quelea quelea quelea occurring roughly from Senegal to Chad, Q. q. aethiopica from Sudan to Somalia and Tanzania, and Q. q. lathamii from Gabon to Mozambique and South Africa. Non-breeding birds have light underparts, striped brown upper parts, yellow-edged flight feathers and a reddish bill. Breeding females attain a yellowish bill. Breeding males have a black (or rarely white) facial mask, surrounded by a purplish, pinkish, rusty or yellowish wash on the head and breast. The species avoids forests, deserts and colder areas such as those at high altitude and in southern South Africa. It constructs oval roofed nests woven from strips of grass hanging from thorny branches, sugar cane or reeds. It breeds in very large colonies.
The quelea feeds primarily on seeds of annual grasses, but also causes extensive damage to cereal crops. Therefore, it is sometimes called "Africa's feathered locust". The usual pest-control measures are spraying avicides or detonating fire-bombs in the enormous colonies during the night. Extensive control measures have been largely unsuccessful in limiting the quelea population. When food runs out, the species migrates to locations of recent rainfall and plentiful grass seed; hence it exploits its food source very efficiently. It is regarded as the most numerous undomesticated bird on earth, with the total post-breeding population sometimes peaking at an estimated 1.5 billion individuals. It feeds in huge flocks of millions of individuals, with birds that run out of food at the rear flying over the entire group to a fresh feeding zone at the front, creating an image of a rolling cloud. The conservation status of red-billed quelea is least concern according to the IUCN Red List.
Taxonomy and naming
The red-billed quelea was one of the many birds described originally by Linnaeus in the landmark 1758 10th edition of his Systema Naturae. Classifying it in the bunting genus Emberiza, he gave it the binomial name of Emberiza quelea. He incorrectly mentioned that it originated in India, probably because ships from the East Indies picked up birds when visiting the African coast during their return voyage to Europe. It is likely that he had seen a draft of Ornithologia, sive Synopsis methodica sistens avium divisionem in ordines, sectiones, genera, species, ipsarumque varietates, a book written by Mathurin Jacques Brisson that was to be published in 1760, and which contained a black and white drawing of the species.
The erroneous type locality of India was corrected to Africa in the 12th edition of Systema Naturae of 1766, and Brisson was cited. Brisson mentions that the bird originates from Senegal, where it had been collected by Michel Adanson during his 1748-1752 expedition. He called the bird Moineau à bec rouge du Senegal in French and Passer senegalensis erythrorynchos in Latin, both meaning "red-billed Senegalese sparrow".
Also in 1766, George Edwards illustrated the species in colour, based on a live male specimen owned by a Mrs Clayton in Surrey. He called it the "Brazilian sparrow", despite being unsure whether it came from Brazil or Angola. In 1850, Ludwig Reichenbach thought the species was not a true bunting, but rather a weaver, and created the genus name Quelea, as well as the new combination Q. quelea. The white-faced morph was described as a separate species, Q. russii by Otto Finsch in 1877 and named after the aviculturist Karl Russ.
Three subspecies are recognised. In the field, these are distinguished by differences in male breeding plumage.
The nominate subspecies, Quelea quelea quelea, is native to west and central Africa, where it has been recorded from Mauritania, western and northern Senegal, Gambia, central Mali, Burkina Faso, southwestern and southern Niger, northern Nigeria, Cameroon, south-central Chad and northern Central African Republic.
Loxia lathamii was described by Andrew Smith in 1836, but later assigned to Q. quelea as its subspecies lathamii. It ranges across central and southern Africa, where it has been recorded from southwestern Gabon, southern Congo, Angola (except the northeast and arid coastal southwest), southern Democratic Republic of Congo and the mouth of the Congo River, Zambia, Malawi and western Mozambique across to Namibia (except the coastal desert) and central, southern and eastern South Africa.
Ploceus aethiopicus was described by Carl Jakob Sundevall in 1850, but later assigned to Q. quelea as its subspecies aethiopica. It is found in eastern Africa where it occurs in southern Sudan, eastern South Sudan, Ethiopia and Eritrea south to the northeastern parts of the Democratic Republic of Congo, Uganda, Kenya, central and eastern Tanzania and northwestern and southern Somalia.
Formerly, two other subspecies were described. Q. quelea spoliator was described by Phillip Clancey in 1960 on the basis of the more greyish nonbreeding plumage of populations in wetter habitats of northeastern South Africa, Eswatini and southern Mozambique. However, further analysis indicated no clear distinction in plumage between it and Q. quelea lathamii, with no evidence of genetic isolation; hence it is not recognised as distinct. Q. quelea intermedia, described by Anton Reichenow in 1886 from east Africa, is regarded as a synonym of subspecies aethiopica.
Etymology and vernacular names
Linnaeus himself did not explain the name quelea. Quelea quelea is locally called kwelea domo-jekundu in Swahili, enzunge in Kwangali, chimokoto in Shona, inyonyane in Siswati, thaha in Sesotho and ndzheyana in the Tsonga language. M.W. Jeffreys suggested that the term came from medieval Latin qualea, meaning "quail", linking the prodigious numbers of queleas to the hordes of quail that fed the Israelites during the Exodus from Egypt.
The subspecies lathamii is probably named in honor of the ornithologist John Latham.
The name of the subspecies aethiopica refers to Ethiopia, and its type was collected in the neighbouring Sennar province in today's Sudan.
"Red-billed quelea" has been designated the official name by the International Ornithological Committee (IOC). Other names in English include black-faced dioch, cardinal, common dioch, Latham's weaver-bird, pink-billed weaver, quelea finch, quelea weaver, red-billed dioch, red-billed weaver, Russ' weaver, South-African dioch, Sudan dioch and Uganda dioch.
Phylogeny
Based on recent DNA analysis, the red-billed quelea is the sister group of a clade that contains the two other species of the genus Quelea, namely the cardinal quelea (Q. cardinalis) and the red-headed quelea (Q. erythrops). The genus belongs to the group of true weavers (subfamily Ploceinae) and is most closely related to the fodies (Foudia), a genus of six or seven species that occur on the islands of the western Indian Ocean. These two genera are in turn the sister clade to the Asian species of the genus Ploceus.
Interbreeding between red-billed and red-headed queleas has been observed in captivity.
Description
The red-billed quelea is a small sparrow-like bird with a heavy, cone-shaped bill, which is red (in males and in females outside the breeding season) or orange to yellow (in females during the breeding season).
Over 75% of males have a black facial "mask", comprising a black forehead, cheeks, lores and higher parts of the throat. Occasionally males have a white mask. The mask is surrounded by a variable band of yellow, rusty, pink or purple. White masks are sometimes bordered by black. This colouring may only reach the lower throat or extend along the belly, with the rest of the underparts light brown or whitish with some dark stripes. The upperparts have light and dark brown longitudinal stripes, particularly at midlength, and are paler on the rump. The tail and upper wing are dark brown. The flight feathers are edged greenish or yellow. The eye has a narrow naked red ring and a brown iris. The legs are orangey in colour. The bill is bright raspberry red. Outside the breeding season, the male lacks bright colours; it has a grey-brown head with dark streaks, whitish chin and throat, and a faint light stripe above the eyes. At this time, the bill becomes pink or dull red and the legs turn flesh-coloured.
The females resemble the males in non-breeding plumage, but have a yellow or orangey bill and eye-ring during the breeding season. At other times, the female bill is pink or dull red.
Newborns have white bills and are almost naked with some wisps of down on the top of the head and the shoulders. The eyes open during the fourth day, at the same time as the first feathers appear. Older nestlings have a horn-coloured bill with a hint of lavender, though it turns orange-purple before the post-juvenile moult. Young birds change feathers two to three months after hatching, after which the plumage resembles that of non-breeding adults, although the head is grey, the cheeks whitish, and wing coverts and flight feathers have buff margins. At an age of about five months they moult again and their plumage starts to look like that of breeding adults, with a pinkish-purple bill.
Different subspecies are distinguished by different colour patterns of the male breeding plumage. In the typical subspecies, Q. quelea quelea, breeding males have a buff crown, nape and underparts and the black mask extends high up the forehead. In Q. quelea lathamii the mask also extends high up the forehead, but the underparts are mainly white. In Q. quelea aethiopica the mask does not extend far above the bill, and the underparts may have a pink wash. There is much variability within subspecies, and some birds cannot be ascribed to a subspecies based on outward appearance alone. Because of interbreeding, specimens intermediate between subspecies may occur where the ranges of the subspecies overlap, such as at Lake Chad.
The female pin-tailed whydah could be mistaken for the red-billed quelea in non-breeding plumage, since both are sparrow-like birds with conical red-coloured bills, but the whydah has a whitish brow between a black stripe through the eye and a black stripe above.
Sound
Flying flocks make a distinct sound due to the many wing beats. After arriving at the roost or nest site, birds keep moving around and make a lot of noise for about half an hour before settling in. Both males and females call. The male sings in short bursts, starting with some chatter, followed by a warbling tweedle-toodle-tweedle.
Distribution and habitat
The red-billed quelea is mostly found in tropical and subtropical areas with a seasonal semi-arid climate, resulting in dry thornbush grassland, including the Sahel, and its distribution covers most of sub-Saharan Africa. It avoids forests, however, including miombo woodlands and rainforests such as those in central Africa, and is generally absent from western parts of South Africa and arid coastal regions of Namibia and Angola. It was introduced to the island of Réunion in 2000. Occasionally it can be found at high elevations, but it mostly resides lower down. It visits agricultural areas, where it feeds on cereal crops, although it is thought to prefer seeds of wild annual grasses. It needs to drink daily and is only found within a limited distance of the nearest body of water. It is found in wet habitats, congregating at the shores of waterbodies, such as Lake Ngami, during flooding. It needs shrubs, reeds or trees to nest and roost.
Red-billed queleas migrate seasonally over long distances in anticipation of the availability of their main natural food source, seeds of annual grasses. The presence of these grass seeds is the result of the beginning of rains weeks earlier, and the rainfall varies in a seasonal geographic pattern. The temporarily wet areas do not form a single zone that periodically moves back and forth across the entirety of Sub-Saharan Africa, but rather consist of five or six regions, within which the wet areas "move" or "jump". Red-billed quelea populations thus migrate between the temporarily wet areas within each of these five to six geographical regions. Each of the subspecies, as distinguished by different male breeding plumage, is confined to one or more of these geographical regions.
In Nigeria, the nominate subspecies generally travels southwards during the start of the rains in the north during June and July, when the grass seed germinates, and is no longer eaten by the queleas. When they reach the Benoue River valley, for instance, the rainy season has already passed and the grass has produced new seeds. After about six weeks, the birds migrate northwards to find a suitable breeding area, nurture a generation, and then repeat this sequence moving further north. Some populations may also move northwards when the rains have started, to eat the remaining ungerminated seeds. In Senegal migration is probably between the southeast and the northwest.
In eastern Africa, the subspecies aethiopica is thought to consist of two sub-populations. One moves from Central Tanzania to southern Somalia, to return to breed in Tanzania in February and March, followed by successive migrations to breed ever further north, the season's last usually occurring in central Kenya during May. The second group moves from northern and central Sudan and central Ethiopia in May and June, to breed in southern Sudan, South Sudan, southern Ethiopia and northern Kenya, moving back north from August to October.
In southern Africa, the total population of the subspecies Q. quelea lathamii converges in October on the Zimbabwean Highveld. In November, part of the population migrates northwest to northwestern Angola, while the remainder migrates southeast to southern Mozambique and eastern South Africa, but no proof has been found that these migration cohorts are genetically or morphologically divergent.
Ecology and behaviour
The red-billed quelea is regarded as the most numerous undomesticated bird on earth, with the total post-breeding population sometimes peaking at an estimated 1.5 billion individuals. The species specialises in feeding on the seeds of annual grasses, which may be ripe or still green but must not yet have germinated. Since the availability of these seeds varies in time and space, occurring in the weeks after the local onset of rains, queleas migrate as a strategy to ensure year-round food availability. Consuming large amounts of energy-rich food is necessary for queleas to gain enough fat for migration to new feeding areas.
When breeding, it selects areas such as lowveld with thorny or spiny vegetation, typically Acacia species, at lower elevations. While foraging, the birds may fly long distances each day, returning to the roosting or nesting site in the evening. Small groups of red-billed queleas often mix with various weaver birds (Ploceus) and bishops (Euplectes), and in western Africa they may join the Sudan golden sparrow (Passer luteus) and various estrildids. Red-billed queleas may also roost together with weavers, estrildids and barn swallows. Their life expectancy is two to three years in the wild, but one captive bird lived for eighteen years.
Breeding
The red-billed quelea needs sufficient precipitation to breed, with nest building usually commencing four to nine weeks after the onset of the rains. Nests are usually built in stands of thorny trees such as umbrella thorn acacia (Vachellia tortilis), blackthorn (Senegalia mellifera) and sicklebush (Dichrostachys cinerea), but sometimes in sugar cane fields or reeds. Colonies can consist of millions of nests, at densities of 30,000 per ha (12,000 per acre). Over 6,000 nests have been counted in a single tree.
At Malilangwe in Zimbabwe, one exceptionally large colony was recorded. In southern Africa, suitable branches are stripped of leaves a few days before the onset of nest construction. The male starts the nest by twining strips of grass around both branches of a hanging forked twig to create a ring, and from there bridges the gaps in the circle his beak can reach, keeping one foot on each of the branchlets and using the same footholds and the same orientation throughout the building process. Two parallel stems of reeds or sugar cane can also be used to attach the nest to. The birds use both their bills and feet to tie the initial knots needed.
As soon as the ring is finished the male displays, trying to attract a female, after which the nest may be completed in two days. The nest chamber is created in front of the ring. The entrance may be constructed after the egg laying has started, while the male works from the outside. A finished nest looks like a small oval or globular ball of grass, around 18 cm (7 in) high and 16 cm (6 in) wide, with a 2.5 cm (1 in) wide entrance high up one side, sheltered by a shallow awning. About six to seven hundred fresh, green grass strips are used for each nest. This species may nest several times per year when conditions are favourable.
In the breeding season, males are diversely coloured. These differences in plumage do not signal condition, probably serving instead for the recognition of individual birds. However, the intensity of the red on the bill is regarded as an indicator of the animal's quality and social dominance. Red-billed quelea males mate with only one female within one breeding cycle. There are usually three eggs in each clutch, though the full range is one to five. The eggs are light bluish or greenish in colour, sometimes with some dark spots. Some clutches contain six eggs, but large clutches may be the result of other females dumping an egg in a stranger's nest.
Both sexes share the incubation of the eggs during the day, but the female alone does so during the cool night, and feeds during the day when air temperatures are high enough to sustain the development of the embryo. The breeding cycle of the red-billed quelea is one of the shortest known in any bird. Incubation takes nine or ten days. After the chicks hatch, they are fed for some days with protein-rich insects. Later the nestlings mainly get seeds. The young birds fledge after about two weeks in the nest. They are sexually mature in one year.
Feeding
Flocks of red-billed queleas usually feed on the ground, with birds at the rear constantly leap-frogging those at the front to exploit the next strip of fallen seeds. This behaviour creates the impression of a rolling cloud and enables efficient exploitation of the available food. The birds also take seeds directly from the grass ears. They prefer grains of a particular size. Red-billed queleas feed mainly on grass seeds, including a large number of annual species from the genera Echinochloa, Panicum, Setaria, Sorghum, Tetrapogon and Urochloa.
One survey at Lake Chad showed that two-thirds of the seeds eaten belonged to only three species: African wild rice (Oryza barthii), Sorghum purpureosericeum and jungle rice (Echinochloa colona). When the supply of these seeds runs out, seeds of cereals such as barley (Hordeum distichum), teff (Eragrostis tef), sorghum (Sorghum bicolor), manna (Setaria italica), millet (Panicum miliaceum), rice (Oryza sativa), wheat (Triticum) and oats (Avena sativa), as well as buckwheat (Fagopyrum esculentum) and sunflower (Helianthus annuus), are eaten on a large scale. Red-billed queleas have also been observed feeding on crushed corn from cattle feedlots, but entire maize kernels are too big for them to swallow. A single bird may eat several grams of seeds each day. As much as half of the diet of nestlings consists of insects, such as grasshoppers, ants, beetles, bugs, caterpillars, flies and termites, as well as snails and spiders.
Insects are generally eaten during the breeding season, though winged termites are eaten at other times. Breeding females consume snail-shell fragments and calcareous grit, presumably to enable egg-shell formation. One colony in Namibia, of an estimated five million adults and five million chicks, was calculated to consume vast quantities of insects and grass seeds during its breeding cycle. At sunrise the birds form flocks that cooperate to find food. After a successful search, they settle to feed. In the heat of the day, they rest in the shade, preferably near water, and preen. Birds seem to prefer drinking at least twice a day. In the evening, they once again fly off in search of food.
Predators and parasites
Natural enemies of the red-billed quelea include other birds, snakes, warthogs, squirrels, galagos, monkeys, mongooses, genets, civets, foxes, jackals, hyaenas, cats, lions and leopards. Bird species that prey on queleas include the lanner falcon, tawny eagle and marabou stork. The diederik cuckoo is a brood parasite that probably lays eggs in nests of queleas. Some predators, such as snakes, raid nests and eat eggs and chicks.
Nile crocodiles sometimes attack drinking queleas, and an individual in Ethiopia hit birds out of the vegetation on the bank into the water with its tail, subsequently eating them. Queleas drinking at a waterhole were grabbed from below by African helmeted turtles in Etosha. Among the invertebrates that kill and eat youngsters are the armoured bush cricket (Acanthoplus discoidalis) and the scorpion Cheloctonus jonesii. Internal parasites found in queleas include Haemoproteus and Plasmodium.
Interactions with humans
The red-billed quelea is caught and eaten in many parts of Africa. Around Lake Chad, three traditional methods are used to catch red-billed queleas. Trappers belonging to the Hadjerai tribe use triangular hand-held nets, which are both selective and efficient. Each team of six trappers caught about twenty thousand birds each night. An estimated five to ten million queleas are trapped near N'Djamena each year, representing a market value of approximately US$37,500–75,000. Between 13 June and 21 August 1994 alone, 1.2 million queleas were caught. Birds were taken from roosts in the trees during the moonless period each night. The feathers were plucked and the carcasses fried the following morning, dried in the sun, and transported to the city to be sold on the market.
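A quick consistency check on those figures (my arithmetic, not from the source): both ends of the stated range imply the same price per bird,

    \frac{\$37{,}500}{5{,}000{,}000} = \frac{\$75{,}000}{10{,}000{,}000} = \$0.0075,

that is, roughly three-quarters of a US cent per bird.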
The Sara people use standing fishing nets with a very fine mesh, while Masa and Musgum fishermen cast nets over groups of birds. The impact of hunting on the quelea population (about 200 million individuals in the Lake Chad Basin) is deemed insignificant. Woven traps made from star grass (Cynodon nlemfuensis) are used to catch hundreds of these birds daily in the Kondoa District, Tanzania.
Guano is collected from under large roosts in Nigeria and used as a fertiliser. Tourists like to watch the large flocks of queleas, such as during visits of the Kruger National Park. The birds themselves eat pest insects such as migratory locusts, and the moth species Helicoverpa armigera and Spodoptera exempta. The animal's large distribution and population resulted in a conservation status listed as least concern on the IUCN Red List.
Aviculture
The red-billed quelea is sometimes kept and bred in captivity by hobbyists. It thrives if kept in large and high cages with space to fly, which minimises the risk of obesity. A sociable bird, the red-billed quelea tolerates mixed-species aviaries, and keeping many individuals mimics its natural occurrence in large flocks. This species withstands frosts but requires shelter from rain and wind. Affixing hanging branches, such as hawthorn, in the cage facilitates nesting. Adults are typically given a diet of tropical seeds enriched with grass seeds, augmented during the breeding season by live insects such as mealworms and spiders, or by boiled shredded egg. Fine stone grit and calcium sources, such as shell grit and cuttlebone, provide nutrients as well. If provided with nesting material such as fresh grass or coconut fibre, the birds can be bred.
Pest management
Sometimes called "Africa's feathered locust", the red-billed quelea is considered a serious agricultural pest in Sub-Saharan Africa.
The governments of Botswana, Ethiopia, Kenya, South Africa, Sudan, Tanzania, and Zimbabwe have regularly made attempts to lessen quelea populations. The most common method to kill members of problematic flocks was by spraying the organophosphate avicide fenthion from the air on breeding colonies and roosts. In Botswana and Zimbabwe, spraying was also executed from ground vehicles and manually. Kenya and South Africa regularly used fire-bombs. Attempts during the 1950s and '60s to eradicate populations, at least regionally, failed. Consequently, management is at present directed at removing those congregations that are likely to attack vulnerable fields. In eastern and southern Africa, the control of quelea is often coordinated by the Desert Locust Control Organization for Eastern Africa (DLCO-EA) and the International Red Locust Control Organization for Central and Southern Africa (IRLCO-CSA), which make their aircraft available for this purpose.
Gallery
| Biology and health sciences | Passerida | Animals |
1437235 | https://en.wikipedia.org/wiki/Audiobook | Audiobook | An audiobook (or a talking book) is a recording of a book or other work being read out loud. A reading of the complete text is described as "unabridged", while readings of shorter versions are abridgements.
Spoken audio has been available in schools and public libraries and to a lesser extent in music shops since the 1930s. Many spoken word albums were made prior to the age of cassettes, compact discs, and downloadable audio, often of poetry and plays rather than books. It was not until the 1980s that the medium began to attract book retailers, and then book retailers started displaying audiobooks on bookshelves rather than in separate displays.
Etymology
The term "talking book" came into being in the 1930s with government programs designed for blind readers, while the term "audiobook" came into use during the 1970s when audiocassettes began to replace phonograph records. In 1994, the Audio Publishers Association established the term "audiobook" as the industry standard.
History
Spoken word recordings first became possible with the invention of the phonograph by Thomas Edison in 1877. "Phonographic books" were one of the original applications envisioned by Edison which would "speak to blind people without effort on their part." The initial words spoken into the phonograph were Edison's recital of "Mary Had a Little Lamb", the first instance of recorded verse. In 1878, a demonstration at the Royal Institution in Britain included "Hey Diddle Diddle, the Cat and the Fiddle" and a line of Tennyson's poetry thus establishing from the very beginning of the technology an association with spoken literature.
United States
Beginnings to 1970
Many short, spoken word recordings were sold on cylinder in the late 19th and early 20th century; however, the round cylinders were limited to about 4 minutes each, making books impractical. Flat platters increased this to 12 minutes, but this too was impractical for longer works. "One early listener complained that he would need a wheelbarrow to carry around talking books recorded on discs with such limited storage capacity." By the 1930s, close-grooved records increased playing time to 20 minutes, making longer narratives possible.
In 1931, the American Foundation for the Blind (AFB) and Library of Congress Books for the Adult Blind Project established the "Talking Books Program" (Books for the Blind), which was intended to provide reading material for veterans injured during World War I and other visually impaired adults. The first test recordings in 1932 included a chapter from Helen Keller's Midstream and Edgar Allan Poe's "The Raven". The organization received congressional approval for exemption from copyright and free postal distribution of talking books. The first recordings made for the Talking Books Program in 1934 included sections of the Bible; the Declaration of Independence and other patriotic documents; plays and sonnets by Shakespeare; and fiction by Gladys Hasty Carroll, E. M. Delafield, Cora Jarrett, Rudyard Kipling, John Masefield, and P. G. Wodehouse. To save costs and quickly build inventories of audiobooks, Britain and the United States shared recordings in their catalogs. By looking at old catalogs, historian Matthew Rubery has "probably" identified the first British-produced audiobook as Agatha Christie's The Murder of Roger Ackroyd, read by Anthony McDonald in 1934.
Recording for the Blind & Dyslexic (RFBD, later renamed Learning Ally) was founded in 1948 by Anne T. Macdonald, a member of the New York Public Library's Women's Auxiliary, in response to an influx of inquiries from soldiers who had lost their sight in combat during World War II. The newly passed GI Bill of Rights guaranteed a college education to all veterans, but texts were mostly inaccessible to the recently blinded veterans, who did not read Braille and had little access to live readers. Macdonald mobilized the women of the Auxiliary under the motto "Education is a right, not a privilege". Members of the Auxiliary transformed the attic of the New York Public Library into a studio, recording textbooks using then state-of-the-art six-inch vinyl SoundScriber phonograph discs that played approximately 12 minutes of material per side. In 1952, Macdonald established recording studios in seven additional cities across the United States.
Caedmon Records was a pioneer in the audiobook business. It was the first company dedicated to selling spoken word recordings to the public and has been called the "seed" of the audiobook industry. Caedmon was formed in New York in 1952 by college graduates Barbara Holdridge and Marianne Roney. Their first release was a collection of poems by Dylan Thomas as read by the author. The LP's B-side contained A Child's Christmas in Wales, which was added as an afterthought. The story was obscure, and Thomas himself could not remember its title when asked what to use to fill up the B-side, but this recording went on to become one of his most loved works and launched Caedmon into a successful company. The original 1952 recording was selected for the 2008 United States National Recording Registry, with the citation that it is "credited with launching the audiobook industry in the United States". Caedmon used LP records, invented in 1948, which made longer recordings more affordable and practical; however, most of their works were poems, plays and other short pieces, not unabridged books, due to the LP's limitation of about a 45-minute playing time (combined sides).
Listening Library was also a pioneering company; it was one of the first to distribute children's audiobooks to schools, libraries and other special markets, including VA hospitals. It was founded by Anthony Ditlow and his wife in 1955 in their Red Bank, New Jersey home; Ditlow was partially blind. Another early pioneering company was Spoken Arts, founded in 1956 by Arthur Luce Klein and his wife; they produced over 700 recordings and were best known for poetry and drama recordings used in schools and libraries. Like Caedmon, Listening Library and Spoken Arts benefited from the new technology of LPs, and also from increased governmental funding for schools and libraries beginning in the 1950s and '60s.
1970 to 1996
Though spoken recordings were popular in vinyl record format for schools and libraries into the early 1970s, the beginning of the modern retail market for audiobooks can be traced to the wide adoption of cassette tapes during the 1970s. Cassette tapes were invented in 1962, and a few libraries, such as the Library of Congress, began distributing books on cassette by 1969. However, during the 1970s, a number of technological innovations allowed the cassette tape wider usage in libraries and also spawned the creation of a new commercial audiobook market. These innovations included the introduction of small and cheap portable players such as the Walkman, and the widespread use of cassette decks in cars, particularly imported Japanese models which flooded the market during the multiple energy crises of the decade.
In the early 1970s, instructional recordings were among the first commercial products sold on cassette. There were 8 companies distributing materials on cassette, with titles such as Managing and Selling Companies (12 cassettes, $300) and Executive Seminar in Sound on a series of 60-minute cassettes. In libraries, most books on cassette were still made for blind and disabled people; however, some new companies saw the opportunity to make audiobooks for a wider audience, such as Voice Over Books, which produced abridged best-sellers read by professional actors. Early pioneers included Olympic gold medalist Duvall Hecht, who in 1975 founded the California-based Books on Tape as a direct-to-consumer mail-order rental service for unabridged audiobooks; the company later expanded by selling to libraries, and its audiobooks gained popularity with commuters and travelers. In 1978, Henry Trentman, a traveling salesman who listened to sales tapes while driving long distances, had the idea to create quality unabridged recordings of classic literature read by professional actors. His company, the Maryland-based Recorded Books, followed the model of Books on Tape but with higher-quality studio recordings and actors. Recorded Books and Chivers Audio Books were the first to develop integrated production teams and to work with professional actors.
By 1984, there were eleven audiobook publishing companies, including Caedmon, Metacom, Newman Communications, Recorded Books, Brilliance and Books on Tape. The companies were small; the largest had a catalog of 200 titles. Some abridged titles were sold in bookstores such as Walden Books but had negligible sales figures; many were sold by mail-order subscription or through libraries. However, in 1984, Brilliance Audio invented a technique for recording twice as much on the same cassette, allowing for affordable unabridged editions. The technique involved recording a separate mono programme on each of the two channels of each stereo track. This opened the market to new opportunities, and by September 1985, Publishers Weekly identified twenty-one audiobook publishers. These included new major publishers such as Harper and Row, Random House, and Warner Communications.
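The channel-splitting idea can be illustrated with a minimal conceptual sketch in Python (the function names and the simple channel assignment are illustrative assumptions, not Brilliance's actual mastering process): each channel of a stereo recording carries an independent mono programme, so one tape side holds twice the material, and a listener recovers a programme by playing back a single channel.

    import numpy as np

    def pack_two_programmes(prog_a, prog_b):
        """Pack two mono programmes into one stereo signal.

        Conceptual sketch of the dual-channel idea: the left channel
        carries programme A and the right channel carries programme B,
        doubling the playing time held by one tape side.
        """
        n = max(len(prog_a), len(prog_b))
        stereo = np.zeros((n, 2), dtype=float)
        stereo[:len(prog_a), 0] = prog_a  # left channel  = programme A
        stereo[:len(prog_b), 1] = prog_b  # right channel = programme B
        return stereo

    def play_programme(stereo, channel):
        """Recover one programme by listening to a single channel (on
        period hardware, via the balance control turned fully to one
        side)."""
        return stereo[:, channel]

On playback, channel 0 yields programme A and channel 1 programme B, which is why an ordinary stereo deck could play such cassettes with the balance turned hard to one side.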
1986 has been identified as the turning point in the industry, when it matured from an experimental curiosity. A number of events happened: the Audio Publishers Association, a professional non-profit trade association, was established by publishers who joined to promote awareness of spoken word audio and provide industry statistics. Time-Life began offering members audiobooks. The Book-of-the-Month Club began offering audiobooks to its members, as did the Literary Guild. Other clubs, such as the History Book Club, Get Rich Club, Nostalgia Book Club and the Scholastic club for children, all began offering audiobooks. Publishers began releasing religious and inspirational titles in Christian bookstores. By May 1987, Publishers Weekly initiated a regular column to cover the industry. By the end of 1987, the audiobook market was estimated at $200 million, and audiobooks on cassette were being sold in 75% of regional and independent bookstores surveyed by Publishers Weekly. By August 1988 there were forty audiobook publishers, about four times as many as in 1984.
By the middle of the 1990s, the audio publishing business grew to 1.5 billion dollars a year in retail value. In 1996, the Audio Publishers Association established the Audie Awards for audiobooks, which is equivalent to the Oscar for the audiobook industry. The nominees are announced each year by February. The winners are announced at a gala banquet in May, usually in conjunction with BookExpo America.
1996 to present
With the spread of the Internet to consumers in the 1990s, faster download speeds from broadband technologies, new compressed audio formats and portable media players, the popularity of audiobooks increased significantly during the late 1990s and 2000s. In 1997, Audible pioneered the world's first mass-market digital media player, named "The Audible Player"; it retailed for $200, held 2 hours of audio and was touted as being "smaller and lighter than a Walkman", the popular cassette player of the time. Digital audiobooks were a significant new milestone, as they freed listeners from physical media such as cassettes and CDs, which required transportation through the mail, allowing instead instant download access from online libraries of unlimited size and portability on comparatively small and lightweight devices. Audible.com was the first to establish a website, in 1998, from which digital audiobooks could be purchased.
Another innovation was the creation of LibriVox in 2005 by Montreal-based writer Hugh McGuire who posed the question on his blog: "Can the net harness a bunch of volunteers to help bring books in the public domain to life through podcasting?" Thus began the creation of public domain audiobooks by volunteer narrators. By the end of 2021, LibriVox had a catalog of over 16,870 works.
The transition from vinyl to cassette, CD, MP3 CD and digital download has been documented by the Audio Publishers Association in annual surveys (the earlier transition from record to cassette is described in the section on the 1970s). The final year that cassettes represented more than 50% of total market sales was 2002. Cassettes were replaced by CDs as the dominant medium during 2003–2004. CDs reached a peak of 78% of sales in 2008, then began to decline in favor of digital downloads. The 2012 survey found CDs accounted for "nearly half" of all sales, meaning the CD was no longer the dominant medium (the APA did not report digital download figures for 2012, but in 2011 CDs accounted for 53% and digital downloads for 41%). The APA estimates that digital-format audiobook sales in 2015 increased by 34% over 2014.
The resurgence of audio storytelling is widely attributed to advances in mobile technologies such as smartphones, tablets, and multimedia entertainment systems in cars, also known as connected car platforms. Audio drama recordings are also now podcast over the internet.
In 2014, Bob and Debra Deyan of Deyan Audio opened the Deyan Institute of Vocal Artistry and Technology, the world's first campus and school for teaching the art and technology of audiobook production.
In 2018, approximately 50,000 audiobooks were recorded in the United States with a sales growth of 20 percent year over year. U.S. audiobook sales in 2019 totaled 1.2 billion dollars, up 16% from the previous year. In addition to the sales increase, Edison Research's national survey of American audiobook listeners ages 18 and up found that the average number of audiobooks listened to per year increased from 6.8 in 2019 to 8.1 in 2020.
Germany
The evolution and use of audiobooks in Germany (Hörbuch, "book for listening") closely parallels that of the US. A special example of its use is the West German Audio Book Library for the Blind, founded in 1955. Actors from the municipal theater in Münster recorded the first audiobooks for the visually impaired in an improvised studio lined with egg cartons. Because trams rattled past, these first productions took place at night. Later, texts were recorded by trained speakers in professional studios and distributed to users by mail. Until the 1970s recordings were on tape reels, then later cassettes. Since 2004, the offerings have been recorded in the DAISY Digital Talking Book MP3 standard, which provides additional features for visually impaired users to both listen and navigate written material aurally.
India
Audiobooks in India appeared somewhat later than in the rest of the world, gaining mainstream popularity in the Indian market only by 2010. This was primarily due to a lack of earlier organized efforts by publishers and authors. Marketing efforts and wider availability have since made India one of the fastest-growing audiobook markets in the world.
The lifestyle of the urban Indian population, with some of the longest daily commute times in the world, has also helped make audiobooks popular in the region. Business and self-help titles have widespread appeal and have been more popular than fiction or general non-fiction, because audiobooks are primarily seen as an avenue for self-improvement and education rather than entertainment.
Audiobooks are being released in various Indian languages. In Malayalam, the first audio novel, titled Ouija Board, was released by Kathacafe in 2018.
South Korea
In the Korean publishing sector, an audiobook business launched in 2000 faded away after failing to achieve meaningful results.
Nearly 20 years later, the growth of mobile devices renewed interest in audiobooks in 2019, but several problems remained to be solved.
First, the audiobook industry lacked policy support; second, the audiobook production infrastructure was insufficient; and third, research and technology development, including in academia, had not been active enough for the industry to keep growing.
Production
Producing an audiobook consists of a narrator sitting in a recording booth reading the text, while a studio engineer and a director record and direct the performance. If a mistake is made, the recording is stopped and the narrator reads the passage again. With recent advancements in recording technology, many audiobooks are now also recorded in home studios by narrators working independently. Audiobooks produced by major publishing houses undergo a proofing and editing process after narration is recorded.
Narrators are usually paid on a finished-recorded-hour basis, meaning that if it took 20 hours to produce a 5-hour book, the narrator is paid for 5 hours, thus providing an incentive not to make mistakes. The rate per finished hour varies with the narrator. Many narrators also work as producers and deliver fully produced audiobooks, which have been edited, mastered, and proofed. They may charge an extra $75–$125 per finished hour in addition to their narration fee to coordinate and pay for the post-production services. The overall cost to produce an audiobook can vary significantly, as longer books require more studio time and better-known narrators command a premium. According to a representative at Audible, the cost of recording an audiobook fell substantially between the late 1990s and 2014.
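As a worked example of the finished-hour arithmetic described above, here is a minimal sketch in Python; only the $75–$125 post-production surcharge comes from the paragraph above, and the narration rate is a hypothetical placeholder, since the text gives no narration figures.

    def narrator_invoice(finished_hours, narration_rate, post_rate=100.0):
        """Estimate a narrator-producer's fee for an audiobook.

        Pay is based on finished (recorded) hours, not studio hours: a
        5-hour book that took 20 hours in the booth still pays for 5.
        post_rate stands in for the $75-$125 per-finished-hour
        post-production surcharge quoted above; narration_rate is a
        hypothetical figure, not one given in this text.
        """
        return finished_hours * (narration_rate + post_rate)

    # A 5-hour book at an assumed $200 per finished hour, plus $100 per
    # finished hour for post-production: 5 * (200 + 100) = 1500.0,
    # regardless of the 20 studio hours spent recording it.
    print(narrator_invoice(5, 200))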
Formats
Audiobooks are distributed on any audio format available, but primarily these are records, cassette tapes, CDs, MP3 CDs, downloadable digital formats (e.g., MP3 (.mp3), Windows Media Audio (.wma), Advanced Audio Coding (.aac)), and solid state preloaded digital devices in which the audio content is preloaded and sold together with a hardware device.
In 1955, a German inventor introduced the Sound Book cassette system based on the Tefifon format, in which, instead of magnetic tape, the sound was recorded on a continuous loop of grooved vinylite ribbon, similar in form to the old 8-track tape. Although the original Tefifon on which it was based ran at 19 CPS and could hold a maximum of 4 hours, one Sound Book could hold eight hours of recordings, as it ran at half the speed, or 9.5 CPS. However, just like the Tefifon, the format never came into widespread use.
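The doubling of capacity follows directly from the halved speed: for a fixed ribbon loop of length L played at speed v, the playing time is

    T = \frac{L}{v}, \qquad T_{9.5} = \frac{L}{19/2} = 2 \cdot \frac{L}{19} = 2\,T_{19},

so running at 9.5 CPS instead of 19 CPS exactly doubles the playing time, consistent with the jump from four to eight hours noted above.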
A small number of books are recorded for radio broadcast, usually in abridged form and sometimes serialized. Audiobooks may come as fully dramatized versions of the printed book, sometimes calling upon a complete cast, music, and sound effects. Effectively audio dramas, these audiobooks are known as full-cast audiobooks. BBC radio stations Radio 3, Radio 4, and Radio 4 Extra have broadcast such productions as the William Gibson novel Neuromancer.
An audio-first production is a spoken word audio work that is an original production, not based on a book. Examples include Dark Carousel (2018), a Vinyl First audiobook by Joe Hill, the son of Stephen King, which came in a 2-LP vinyl set or as a downloadable MP3 but with no published text, and Spin, The Audiobook Musical (2018), a musical rendition of Rumpelstiltskin narrated by Jim Dale and featuring a cast of Broadway musical stars.
Use
Audiobooks have been used to teach children to read and to increase reading comprehension. They are also useful for the blind. The Library of Congress in the U.S. and the CNIB Library in Canada provide free audiobook library services to the visually impaired; requested books are mailed out at no cost to clients. Founded in 1996, Assistive Media of Ann Arbor, Michigan was the first organization to produce and deliver spoken-word recordings of written journalistic and literary works via the Internet to serve people with visual impairments.
About 40 percent of all audiobook consumption occurs through public libraries, with the remainder served primarily through retail book stores. Library download programs are currently experiencing rapid growth (more than 5,000 public libraries offer free downloadable audiobooks). Libraries are also popular places to check out audiobooks in the CD format. According to the National Endowment for the Arts' study, "Reading at Risk: A Survey of Literary Reading in America" (2004), audiobook listening increases general literacy.
Listening practices
Audiobooks are considered a valuable tool because of their format: unlike traditional books or video programs, an audiobook can be listened to while doing other tasks, such as laundry, exercising, or weeding. The most popular general use of audiobooks by adults is while commuting by automobile or traveling on public transport, as an alternative to radio or music. Many people also listen simply to relax or as they drift off to sleep.
A recent survey released by the Audio Publishers Association found that the overwhelming majority of audiobook users listen in the car, and more than two-thirds of audiobook buyers described audiobooks as relaxing and a good way to multitask. Another stated reason for choosing audiobooks over other formats is that an audio performance makes some books more interesting.
Common practices of listening include:
Replaying: Depending upon one's degree of attention and interest, it is often necessary to listen to segments of an audiobook more than once to allow the material to be understood and retained satisfactorily. Replaying may be done immediately or after extended periods of time.
Learning: People may listen to an audiobook (usually an unabridged one) while following along in a printed book. This helps them learn words that they might not learn correctly from reading alone. It can also be a very effective way to learn a new language.
Multitasking: Many audiobook listeners choose the format because it allows multitasking during otherwise mundane or routine tasks such as exercising, crafting, or cooking.
Entertainment: Audiobooks have become a popular form of travel entertainment for families or commuters.
Charitable and nonprofit organizations
Founded in 1948, Learning Ally serves more than 300,000 K–12, college and graduate students, veterans and lifelong learners—all of whom cannot read standard print due to blindness, visual impairment, dyslexia, or other learning disabilities. Learning Ally's collection of more than 80,000 human-narrated textbooks and literature titles can be downloaded on mainstream smartphones and tablets, and is the largest of its kind in the world.
Founded in 2002, Bookshare is an online library of computer-read audiobooks in accessible formats for people with print disabilities.
Founded in 2005, LibriVox is also an online library of downloadable audiobooks and a free, not-for-profit organisation started by Hugh McGuire. It offers public domain audiobooks in several languages.
Calibre Audio Library is a UK charity providing a subscription-free service of unabridged audiobooks for people with sight problems, dyslexia or other disabilities, who cannot read print. They have a library of over 8,550 fiction and non-fiction titles which can be borrowed by post on MP3 CDs and memory sticks or via streaming.
Listening Books is a UK audiobook charity providing internet streaming, download, and postal services to anyone who has a disability or illness that makes it difficult to hold a book, turn its pages, or read in the usual way; this includes people with visual, physical, learning or mental health difficulties. It offers audiobooks for both leisure and learning, with a library of over 7,500 titles recorded in its own digital studios or commercially sourced.
The Royal National Institute of Blind People (RNIB) is a UK charity which offers a Talking Books library service. The audiobooks are provided in DAISY format and delivered to the reader's house by post as a CD or USB memory stick. There are over 30,000 audiobooks available to borrow, which are free to print disabled library members. RNIB subsidises the Talking Books service by around £4 million a year.
| Technology | Printing | null |
1437768 | https://en.wikipedia.org/wiki/Manilkara%20zapota | Manilkara zapota | Manilkara zapota, commonly known as sapodilla (), sapote, chicozapote, chicoo, chicle, naseberry, nispero, or
soapapple, among other names, is an evergreen tree native to southern Mexico and Central America. An example natural occurrence is in coastal Yucatán, in the Petenes mangroves ecoregion, where it is a subdominant plant species. It was introduced to the Philippines during Spanish colonization. It is grown in large quantities in Mexico and in tropical Asia, including India, Pakistan, Thailand, Malaysia, Cambodia, Indonesia, Vietnam, Bangladesh, as well as in the Caribbean.
Common names
Most of the common names of Manilkara zapota, like "sapodilla", "chiku", and "chicozapote", come from a Spanish diminutive of zapote, meaning "little sapote". Other common names in English include bully tree, soapapple tree, sawo, marmalade plum and dilly tree.
The specific epithet zapota is from the Spanish zapote, which ultimately derives from the Nahuatl word tzapotl, used for other similar-looking fruits.
Description
Sapodilla trees can live up to one hundred years. The tree can grow to more than tall with a trunk diameter of up to , but the average height of cultivated specimens is usually between with a trunk diameter not exceeding . It is wind-resistant, and the bark is rich in a white, gummy latex called chicle. Its leaves are elliptic to ovate with entire margins, on long petioles; they are medium green and glossy, with brown and slightly furry midribs, and are arranged alternately.
The trees can survive only in warm, typically tropical environments (although they have low tolerance to drought and heat in their early years), dying easily if the temperature drops below freezing. From germination, the sapodilla tree will usually take five to eight years to bear fruit. Sapodilla trees yield fruit twice a year, though flowering may continue year-round.
The white flowers are inconspicuous and bell-like, with a six-lobed corolla.
Fruit
The fruit is a large berry, in diameter. An unripe fruit has a firm outer skin and when picked, releases white chicle from its stem. A fully ripened fruit has saggy skin and does not release chicle when picked. Inside, its flesh ranges from a pale yellow to an earthy brown color with a grainy texture akin to that of a well-ripened pear. Each fruit contains one to six seeds. The seeds are hard, glossy, and black, resembling beans, with a hook at one end that can catch in the throat if swallowed.
The fruit has an exceptionally sweet, malty flavor. The unripe fruit is hard to the touch and contains high amounts of saponin, which has astringent properties similar to tannin, drying out the mouth.
Biological studies
Compounds extracted from the leaves showed anti-diabetic, antioxidant and hypocholesterolemic (cholesterol-lowering) effects in rats.
Acetone extracts of the seeds exhibited in vitro antibacterial effects against strains of Pseudomonas oleovorans and Vibrio cholerae.
Synonyms
Synonyms of this species include:
Uses
The fruit is edible and a favorite in the tropical Americas. Chicle from the bark is used to make chewing gum.
| Biology and health sciences | Tropical and tropical-like fruit | Plants |
1439761 | https://en.wikipedia.org/wiki/Serpentinite | Serpentinite | Serpentinite is a metamorphic rock composed predominantly of serpentine group minerals formed by serpentinization of mafic or ultramafic rocks. The ancient origin of the name is uncertain; it may be from the similarity of its texture or color to snake skin. Greek pharmacologist Dioscorides (AD 50) recommended eating this rock to prevent snakebite.
Serpentinite has been called serpentine or serpentine rock, particularly in older geological texts and in wider cultural settings.
Most of the chemical reactions necessary to synthesize acetyl-CoA, essential to basic biochemical pathways of life, take place during serpentinization. Serpentinite thermal vents are therefore considered a candidate for the origin of life on Earth.
Formation and mineralogy
Serpentinite is formed by near-to-complete serpentinization of mafic or ultramafic rocks. It forms when such rock is hydrated by carbon-dioxide-deficient seawater forced into the rock at great depths below the ocean floor. This occurs at mid-ocean ridges and in the forearc mantle of subduction zones.
The final mineral composition of serpentinite is usually dominated by antigorite, lizardite, chrysotile (minerals of the serpentine subgroup), and magnetite (Fe3O4), with brucite (Mg(OH)2) less commonly present. Lizardite, chrysotile, and antigorite all have approximately the formula Mg3Si2O5(OH)4, but differ in minor components and in form. Accessory minerals, present in small quantities, include awaruite, other native metal minerals, and sulfide minerals.
Hydrogen production
The serpentinization reaction involving the transformation of fayalite (Fe-end member of olivine) by water into magnetite and quartz also produces molecular hydrogen according to the following reaction:
3 Fe2SiO4 + 2 H2O -> 2 Fe3O4 + 3 SiO2 + 2 H2
This reaction closely resembles the Schikorr reaction, which also produces hydrogen gas through the oxidation of ferrous (Fe2+) ions to ferric (Fe3+) ions by the protons of water; two H+ ions are then reduced to H2.
3 Fe(OH)2 -> Fe3O4 + 2 H2O + H2
In the Schikorr reaction, the two protons (H+) reduced to molecular hydrogen (H2) come from two hydroxide anions (OH−); these are transformed into two oxide anions (O2−) that are directly incorporated into the magnetite crystal lattice, while the excess water is liberated as a reaction by-product.
Hydrogen produced by the serpentinization reaction is important because it can fuel microbial activity in the deep subsurface environment.
Hydrothermal vents and mud volcanoes
Deep sea hydrothermal vents located on serpentinite close to the axis of mid-ocean ridges generally resemble black smokers located on basalt, but emit complex hydrocarbon molecules. The Rainbow field of the Mid-Atlantic Ridge is an example of such hydrothermal vents. Serpentinization alone cannot provide the heat supply for these vents, which must be driven mostly by magmatism. However, the Lost City Hydrothermal Field, located off the axis of the Mid-Atlantic Ridge, may be driven solely by the heat of serpentinization. Its vents are unlike black smokers, emitting relatively cool fluids that are highly alkaline, high in magnesium, and low in hydrogen sulfide. The vents build up very large chimneys composed of carbonate minerals and brucite, and lush microbial communities are associated with them. Though the vents themselves are not composed of serpentinite, they are hosted in serpentinite. Sepiolite deposits on mid-ocean ridges may have formed through serpentinite-driven hydrothermal activity. However, geologists continue to debate whether serpentinization alone can account for the heat flux from the Lost City field.
The forearc of the Marianas subduction zone hosts large serpentinite mud volcanoes, which erupt serpentinite mud that rises through faults from the underlying serpentinized forearc mantle. Study of these mud volcanoes gives insights into subduction processes, and the high pH fluids emitted at the volcanoes support a microbial community.
Experimental drilling into the gabbro layer of oceanic crust near mid-ocean ridges has demonstrated the presence of a sparse population of hydrocarbon-degrading bacteria. These may feed on hydrocarbons produced by serpentinization of the underlying ultramafic rock.
Potential 'cradle of life'
Serpentinite thermal vents are a candidate for the environment in which life on Earth originated. Most of the chemical reactions necessary to synthesize acetyl-CoA, essential to basic biochemical pathways of life, take place during serpentinization. The sulfide-metal clusters that activate many enzymes resemble sulfide minerals formed during serpentinization.
Ecology
Soil cover over serpentinite bedrock tends to be thin or absent. Soil with serpentine is poor in calcium and other major plant nutrients, but rich in elements toxic to plants such as chromium and nickel. Some species of plants, such as Clarkia franciscana and certain species of manzanita, are adapted to living on serpentinite outcrops. However, because serpentinite outcrops are few and isolated, their plant communities are ecological islands and these distinctive species are often highly endangered. On the other hand, plant communities adapted to living on the serpentine outcrops of New Caledonia resist displacement by introduced species that are poorly adapted to this environment.
Serpentine soils are widely distributed on Earth, in part mirroring the distribution of ophiolites and other serpentine bearing rocks. There are outcroppings of serpentine soils in the Balkan Peninsula, Turkey, the island of Cyprus, the Alps, Cuba, and New Caledonia. In North America, serpentine soils also are present in small but widely distributed areas on the eastern slope of the Appalachian Mountains in the eastern United States, and in the Pacific Ranges of Oregon and California.
Occurrences
Notable occurrences of serpentinite are found at Thetford Mines, Quebec; Lake Valhalla, New Jersey; Gila County, Arizona; Lizard complex, Lizard Point, Cornwall; and in localities in Greece, Italy, and other parts of Europe. Notable ophiolites containing serpentinite include the Semail Ophiolite of Oman, the Troodos Ophiolite of Cyprus, the Newfoundland ophiolites, and the Main Ophiolite Belt of New Guinea.
Uses
Decorative stone in architecture and art
Serpentine group minerals have a Mohs hardness of 2.5 to 3.5, so serpentinite is easily carved. Grades of serpentinite higher in calcite, along with the verd antique (breccia form of serpentinite), have historically been used as decorative stones for their marble-like qualities. College Hall at the University of Pennsylvania, for example, is constructed out of serpentine. Popular sources in Europe before contact with the Americas were the mountainous Piedmont region of Italy and Larissa, Greece.
Serpentinites are used in many ways in the arts and crafts. For example, the rock has been turned in Zöblitz in Saxony for several hundred years.
By the Inuit
The Inuit and other indigenous peoples of the Arctic, and to a lesser extent those of more southern areas, carved bowl-shaped serpentinite lamps, known as qulliq or kudlik, in which oil or fat was burned with a wick for heat, light, and cooking. The Inuit also made tools from serpentinite and, more recently, carvings of animals for commerce.
As an ovenstone
A variety of chlorite talc schist associated with Alpine serpentinite is found in Val d'Anniviers, Switzerland, and was used for making "ovenstones", carved stone bases placed beneath cast-iron stoves.
Neutron shield in nuclear reactors
Serpentinite contains a significant amount of bound water, and hence abundant hydrogen atoms able to slow down neutrons by elastic collision (the neutron thermalization process). Because of this, serpentinite can be used as dry filler inside steel jackets in some designs of nuclear reactors. For example, in the RBMK series, as at Chernobyl, it was used as top radiation shielding to protect operators from escaping neutrons. Serpentine can also be added as aggregate to special concrete used in nuclear reactor shielding to increase the concrete's density and neutron capture cross section.
CO2 sequestration
Because it readily absorbs carbon dioxide, serpentinite may be of use for sequestering atmospheric carbon dioxide. To speed up the reaction, serpentinite may be reacted with carbon dioxide at elevated temperature in carbonation reactors. Carbon dioxide may also be reacted with alkaline mine waste from serpentine deposits, or carbon dioxide may be injected directly into underground serpentinite formations. Serpentinite may also be used as a source of magnesium in conjunction with electrolytic cells for CO2 scrubbing.
Cultural references
Serpentinite is the state rock of California, USA, and the California Legislature specified that serpentine was "the official State Rock and lithologic emblem." In 2010, a bill was introduced that would have removed serpentine's special status as state rock because it may contain chrysotile asbestos. The bill met with resistance from some California geologists, who noted that the chrysotile present is not hazardous unless it is mobilized in the air as dust.
| Physical sciences | Silicate minerals | Earth science |
1439783 | https://en.wikipedia.org/wiki/Serpentinization | Serpentinization | Serpentinization is a hydration and metamorphic transformation of ferromagnesian minerals, such as olivine and pyroxene, in mafic and ultramafic rock to produce serpentinite. Minerals formed by serpentinization include the serpentine group minerals (antigorite, lizardite, chrysotile), brucite, talc, Ni-Fe alloys, and magnetite. The mineral alteration is particularly important at the sea floor at tectonic plate boundaries.
Formation and petrology
Serpentinization is a form of low-temperature (0 to ~600 °C) metamorphism of ferromagnesian minerals in mafic and ultramafic rocks, such as dunite, harzburgite, or lherzolite. These are rocks low in silica, composed mostly of olivine ((Mg,Fe)2SiO4), pyroxene ((Mg,Fe)SiO3), and chromite (approximately FeCr2O4). Serpentinization is driven largely by hydration and oxidation of olivine and pyroxene to serpentine group minerals (antigorite, lizardite, and chrysotile), brucite (Mg(OH)2), talc (Mg3Si4O10(OH)2), and magnetite (Fe3O4). Under the unusual chemical conditions accompanying serpentinization, water is the oxidizing agent, and is itself reduced to hydrogen, H2. This leads to further reactions that produce rare iron-group native element minerals, such as awaruite (Ni3Fe) and native iron; methane and other hydrocarbon compounds; and hydrogen sulfide.
During serpentinization, large amounts of water are absorbed into the rock, increasing the volume, reducing the density, and destroying the original structure. The density changes from roughly 3.3 to 2.7 g/cm3, with a concurrent volume increase on the order of 30–40%. The reaction is highly exothermic, and rock temperatures can be raised substantially, providing an energy source for the formation of non-volcanic hydrothermal vents. The hydrogen, methane, and hydrogen sulfide produced during serpentinization are released at these vents and provide energy sources for deep sea chemotroph microorganisms.
Formation of serpentine minerals
Olivine is a solid solution of forsterite (Mg2SiO4), the magnesium endmember, and fayalite (Fe2SiO4), the iron endmember, with forsterite typically making up about 90% of the olivine in ultramafic rocks. Serpentine can form from olivine via several reactions, reconstructed below.
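The reaction equations themselves were lost in extraction; the standard olivine-hydration reactions, written here for the pure magnesium endmember as a simplifying assumption, are:

Reaction 1a (olivine + aqueous silica -> serpentine): 3 Mg2SiO4 + SiO2 + 4 H2O -> 2 Mg3Si2O5(OH)4

Reaction 1b (olivine + water -> serpentine + brucite): 2 Mg2SiO4 + 3 H2O -> Mg3Si2O5(OH)4 + Mg(OH)2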
Reaction 1a tightly binds silica, lowering its chemical activity to the lowest values seen in common rocks of the Earth's crust. Serpentinization then continues through the hydration of olivine to yield serpentine and brucite (Reaction 1b). The mixture of brucite and serpentine formed by Reaction 1b has the lowest silica activity in the serpentinite, so that the brucite phase is very important in understanding serpentinization. However, the brucite is often blended in with the serpentine such that it is difficult to identify except with X-ray diffraction, and it is easily altered under surface weathering conditions.
A similar suite of reactions involves pyroxene-group minerals, as reconstructed below.
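The equations for Reactions 2a and 2b were likewise lost; a plausible reconstruction, consistent with the surrounding text (2a consumes aqueous silica to form talc, while 2b yields serpentine plus talc once free silica is exhausted), is:

Reaction 2a: 3 MgSiO3 + SiO2 + H2O -> Mg3Si4O10(OH)2

Reaction 2b: 6 MgSiO3 + 3 H2O -> Mg3Si2O5(OH)4 + Mg3Si4O10(OH)2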
Reaction 2a quickly comes to a halt as silica becomes unavailable, and Reaction 2b takes over. When olivine is abundant, silica activity drops low enough that talc begins to react with olivine, as reconstructed below.
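The talc–olivine reaction was also lost in extraction; its standard form is:

Mg3Si4O10(OH)2 + 6 Mg2SiO4 + 9 H2O -> 5 Mg3Si2O5(OH)4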
This reaction requires higher temperatures than those at which brucite forms.
The final mineralogy depends on rock and fluid compositions, temperature, and pressure. Antigorite forms during higher-temperature metamorphism and is the serpentine group mineral stable at the highest temperatures. Lizardite and chrysotile can form at low temperatures very near the Earth's surface.
Breakdown of diopside and formation of rodingites
Ultramafic rocks often contain calcium-rich pyroxene (diopside), which breaks down during serpentinization according to the reaction reconstructed below.
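The equation was lost in extraction; a standard form consistent with the following paragraph (it consumes protons, raising pH, and releases calcium) is:

3 CaMgSi2O6 + 6 H+ -> Mg3Si2O5(OH)4 + 3 Ca2+ + 4 SiO2 + H2O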
This raises both the pH, often to very high values, and the calcium content of the fluids involved in serpentinization. These fluids are highly reactive and may transport calcium and other elements into surrounding mafic rocks. Fluid reaction with these rocks may create metasomatic reaction zones enriched in calcium and depleted in silica, called rodingites.
Formation of magnetite and hydrogen
In most crustal rock, the chemical activity of oxygen is prevented from dropping to very low values by the fayalite-magnetite-quartz (FMQ) buffer. The very low chemical activity of silica during serpentinization eliminates this buffer, allowing serpentinization to produce highly reducing conditions. Under these conditions, water is capable of oxidizing ferrous (Fe2+) ions in fayalite. The process is of interest because it generates hydrogen gas, via the reaction reconstructed below.
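The equation itself was lost in extraction; the standard fayalite oxidation reaction, the same one quoted in the serpentinite article above, is:

3 Fe2SiO4 + 2 H2O -> 2 Fe3O4 + 3 SiO2 + 2 H2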
However, studies of serpentinites suggest that iron minerals are first converted to ferroan brucite, that is, brucite containing Fe2+, which then undergoes the Schikorr reaction in the anaerobic conditions of serpentinization, as reconstructed below.
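The equation was lost here; the standard form of the Schikorr reaction, also quoted in the serpentinite article above, is:

3 Fe(OH)2 -> Fe3O4 + 2 H2O + H2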
Maximum reducing conditions, and the maximum rate of hydrogen production, occur at intermediate serpentinization temperatures and when fluids are undersaturated in carbonate. If the original ultramafic rock (the protolith) is peridotite, which is rich in olivine, considerable magnetite and hydrogen are produced. When the protolith is pyroxenite, which contains more pyroxene than olivine, iron-rich talc is produced with no magnetite and only modest hydrogen production. Infiltration of silica-bearing fluids during serpentinization can suppress both the formation of brucite and the subsequent production of hydrogen.
Chromite present in the protolith will be altered to chromium-rich magnetite at lower serpentinization temperatures. At higher temperatures, it will be altered to iron-rich chromite (ferrit-chromite). During serpentinization, the rock is enriched in chlorine, boron, fluorine, and sulfur. Sulfur will be reduced to hydrogen sulfide and sulfide minerals, though significant quantities are incorporated into serpentine minerals, and some may later be reoxidized to sulfate minerals such as anhydrite. The sulfides produced include nickel-rich sulfides, such as mackinawite.
Methane and other hydrocarbons
Laboratory experiments have confirmed that at elevated temperature and a pressure of 500 bars, olivine serpentinizes with release of hydrogen gas. In addition, methane and complex hydrocarbons are formed through reduction of carbon dioxide. The process may be catalyzed by magnetite formed during serpentinization. One possible reaction pathway is reconstructed below.
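The pathway equation was lost in extraction; a commonly cited possibility is a Sabatier-type reduction of carbon dioxide by serpentinization-derived hydrogen:

CO2 + 4 H2 -> CH4 + 2 H2O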
Metamorphism at higher pressure and temperature
Lizardite and chrysotile are stable at low temperatures and pressures, while antigorite is stable at higher temperatures and pressures. The presence of antigorite in a serpentinite therefore indicates either that serpentinization took place at unusually high pressure and temperature or that the rock experienced higher-grade metamorphism after serpentinization was complete.
Infiltration of CO2-bearing fluids into serpentinite causes distinctive talc-carbonate alteration. Brucite rapidly converts to magnesite, and serpentine minerals (other than antigorite) are converted to talc. The presence of pseudomorphs of the original serpentinite minerals shows that this alteration takes place after serpentinization.
Serpentinite may contain chlorite (a phyllosilicate mineral), tremolite (Ca2(Mg5.0-4.5Fe2+0.0-0.5)Si8O22(OH)2), and metamorphic olivine and diopside (calcium-rich pyroxene). This indicates that the serpentinite has been subject to more intense metamorphism, reaching the upper greenschist or amphibolite metamorphic facies.
At sufficiently high temperatures, antigorite begins to break down; thus serpentinite does not exist at higher metamorphic facies.
Extraterrestrial production of methane by serpentinization
The presence of traces of methane in the atmosphere of Mars has been hypothesized to be possible evidence for life on Mars if the methane was produced by bacterial activity. Serpentinization has been proposed as an alternative, non-biological source of the observed methane traces. In 2022 it was reported that microscopic examination of the ALH 84001 meteorite, which came from Mars, shows that the organic matter it contains was indeed formed by serpentinization, not by life processes.
Using data from the Cassini probe flybys obtained in 2010–12, scientists were able to confirm that Saturn's moon Enceladus likely has a liquid water ocean beneath its frozen surface. A model suggests that the ocean on Enceladus has an alkaline pH of 11–12. The high pH is interpreted as a key consequence of serpentinization of chondritic rock, which leads to the generation of H2, a geochemical source of energy that can support both abiotic and biological synthesis of organic molecules.
Environment of formation
Serpentinization occurs at mid-ocean ridges, in the forearc mantle of subduction zones, in ophiolite packages, and in ultramafic intrusions.
Mid-ocean ridges
Conditions are highly favorable for serpentinization at slow to ultraslow spreading mid-ocean ridges. Here the rate of crustal extension is high compared with the volume of magmatism, bringing ultramafic mantle rock very close to the surface where fracturing allows seawater to infiltrate the rock.
Serpentinization at slow spreading mid-ocean ridges can cause the seismic Moho discontinuity to be placed at the serpentinization front, rather than the base of the crust as defined by normal petrological criteria. The Lanzo Massif of the Italian Alps shows a sharp serpentinization front that may be a relict seismic Moho.
Subduction zones
Forearc mantle
Serpentinization is an important phenomenon in subduction zones that has a strong control on the water cycle and geodynamics of a subduction zone. Here mantle rock is cooled by the subducting slab to temperatures at which serpentinite is stable, and fluids are released from the subducting slab in great quantities into the ultramafic mantle rock. Direct evidence that serpentinization is taking place in the Mariana Islands island arc is provided by the activity of serpentinite mud volcanoes. Xenoliths of harzburgite and (less commonly) dunite are occasionally erupted by the mud volcanoes, giving clues to the nature of the protolith.
Because serpentinization lowers the density of the original rock, serpentinization may lead to uplift or exhumation of serpentinites to the surface, as has taken place with the serpentinite exposed at the Presidio of San Francisco following cessation of subduction.
Serpentinized ultramafic rock is found in many ophiolites. Ophiolites are fragments of oceanic lithosphere that have been thrust onto continents, a process called obduction. They typically consist of a layer of serpentinized harzburgite (sometimes called alpine peridotite in older writings), a layer of hydrothermally altered diabases and pillow basalts, and a layer of deep water sediments containing radiolarian ribbon chert.
Implications
Limitation on earthquake depth
Seismic wave studies can detect the presence of large bodies of serpentinite in the crust and upper mantle, since serpentinization has a large effect on shear wave velocity: a higher degree of serpentinization leads to lower shear wave velocity and higher Poisson's ratio. Seismic measurements confirm that serpentinization is pervasive in the forearc mantle. Serpentinization can produce an inverted Moho discontinuity, in which seismic velocity abruptly decreases across the crust-mantle boundary, the opposite of the usual behavior. Serpentinite is highly deformable, creating an aseismic zone in the forearc in which the plate interface slides stably at plate velocities. The presence of serpentinite may limit the maximum depth of megathrust earthquakes because it impedes rupture propagation into the forearc mantle.
| Physical sciences | Silicate minerals | Earth science |
1440511 | https://en.wikipedia.org/wiki/Isotope%20geochemistry | Isotope geochemistry | Isotope geochemistry is an aspect of geology based upon the study of natural variations in the relative abundances of isotopes of various elements. Variations in isotopic abundance are measured by isotope-ratio mass spectrometry, and can reveal information about the ages and origins of rock, air or water bodies, or processes of mixing between them.
Stable isotope geochemistry is largely concerned with isotopic variations arising from mass-dependent isotope fractionation, whereas radiogenic isotope geochemistry is concerned with the products of natural radioactivity.
Stable isotope geochemistry
For most stable isotopes, the magnitude of fractionation from kinetic and equilibrium fractionation is very small; for this reason, enrichments are typically reported in "per mil" (‰, parts per thousand). These enrichments (δ) represent the ratio of heavy isotope to light isotope in the sample over the ratio of a standard. That is,
δ = (R_sample / R_standard − 1) × 1000 ‰
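As a concrete illustration, the short Python sketch below evaluates this formula; the VPDB reference ratio and the sample ratio are assumed, illustrative values rather than measured data.

```python
# Worked example of delta notation in per mil.
# R_STD_VPDB is the approximate VPDB 13C/12C ratio; treat both
# numbers below as illustrative assumptions, not measurements.

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Return the delta value in per mil for a measured isotope ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_STD_VPDB = 0.011237   # approximate VPDB 13C/12C reference ratio
r_sample = 0.010956     # hypothetical 13C/12C measured in C3 plant tissue

print(delta_per_mil(r_sample, R_STD_VPDB))  # ~ -25 per mil, typical of C3 plants
```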
Hydrogen
Carbon
Carbon has two stable isotopes, 12C and 13C, and one radioactive isotope, 14C.
The stable carbon isotope ratio, δ13C, is measured against Vienna Pee Dee Belemnite (VPDB). The stable carbon isotopes are fractionated primarily by photosynthesis (Faure, 2004). The 13C/12C ratio is also an indicator of paleoclimate: a change in the ratio in the remains of plants indicates a change in the amount of photosynthetic activity, and thus in how favorable the environment was for the plants. During photosynthesis, organisms using the C3 pathway show different enrichments compared to those using the C4 pathway, allowing scientists not only to distinguish organic matter from abiotic carbon, but also to determine what type of photosynthetic pathway the organic matter was using. Occasional spikes in the global 13C/12C ratio have also been useful as stratigraphic markers for chemostratigraphy, especially during the Paleozoic.
The 14C ratio has been used to track ocean circulation, among other things.
Nitrogen
Nitrogen has two stable isotopes, 14N and 15N. The ratio between these is measured relative to nitrogen in ambient air. Nitrogen ratios are frequently linked to agricultural activities. Nitrogen isotope data has also been used to measure the amount of exchange of air between the stratosphere and troposphere using data from the greenhouse gas N2O.
Oxygen
Oxygen has three stable isotopes, 16O, 17O, and 18O. Oxygen ratios are measured relative to Vienna Standard Mean Ocean Water (VSMOW) or Vienna Pee Dee Belemnite (VPDB). Variations in oxygen isotope ratios are used to track water movement, paleoclimate, and atmospheric gases such as ozone and carbon dioxide. Typically, the VPDB oxygen reference is used for paleoclimate, while VSMOW is used for most other applications. Oxygen isotopes appear in anomalous ratios in atmospheric ozone, resulting from mass-independent fractionation. Isotope ratios in fossilized foraminifera have been used to deduce the temperature of ancient seas.
Sulfur
Sulfur has four stable isotopes, with the following abundances: 32S (0.9502), 33S (0.0075), 34S (0.0421) and 36S (0.0002). These abundances are compared to those found in Cañon Diablo troilite. Variations in sulfur isotope ratios are used to study the origin of sulfur in an orebody and the temperature of formation of sulfur-bearing minerals, as well as serving as a biosignature that can reveal the presence of sulfate-reducing microbes.
Radiogenic isotope geochemistry
Radiogenic isotopes provide powerful tracers for studying the ages and origins of Earth systems. They are particularly useful to understand mixing processes between different components, because (heavy) radiogenic isotope ratios are not usually fractionated by chemical processes.
Radiogenic isotope tracers are most powerful when used together with other tracers: the more tracers used, the tighter the control on mixing processes. An example of this application is the study of the evolution of the Earth's crust and mantle through geological time.
Lead–lead isotope geochemistry
Lead has four stable isotopes: 204Pb, 206Pb, 207Pb, and 208Pb.
Lead is created in the Earth via decay of actinide elements, primarily uranium and thorium.
Lead isotope geochemistry is useful for providing isotopic dates on a variety of materials. Because three of the lead isotopes are created by decay of different actinide elements, the ratios of the four lead isotopes to one another can be very useful in tracking the source of melts in igneous rocks, the source of sediments, and even the origin of people via isotopic fingerprinting of their teeth, skin and bones.
It has been used to date ice cores from the Arctic shelf, and provides information on the source of atmospheric lead pollution.
Lead–lead isotopes have been used successfully in forensic science to fingerprint bullets, because each batch of ammunition has its own distinctive 204Pb/206Pb and 207Pb/208Pb ratios.
Samarium–neodymium
Samarium–neodymium is an isotope system which can be utilised to provide a date as well as isotopic fingerprints of geological materials, and various other materials including archaeological finds (pots, ceramics).
147Sm decays to produce 143Nd with a half-life of 1.06×10^11 years.
Dating is usually achieved by producing an isochron from several minerals within a rock specimen, from which the initial 143Nd/144Nd ratio is determined.
This initial ratio is modelled relative to CHUR (the Chondritic Uniform Reservoir), which is an approximation of the chondritic material which formed the solar system. CHUR was determined by analysing chondrite and achondrite meteorites.
The difference in the ratio of the sample relative to CHUR can give information on a model age of extraction from the mantle (for which an assumed evolution has been calculated relative to CHUR) and on whether the material was extracted from a granitic source (depleted in radiogenic Nd), the mantle, or an enriched source.
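This deviation is conventionally expressed in epsilon units; the standard definition (not reproduced in the text above) is

εNd = [ (143Nd/144Nd)_sample / (143Nd/144Nd)_CHUR − 1 ] × 10^4

so that positive εNd values indicate a depleted, mantle-like source and negative values an enriched, crust-like source.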
Rhenium–osmium
Rhenium and osmium are siderophile elements present at very low abundances in the crust. 187Re undergoes radioactive decay to produce 187Os, so the ratio of radiogenic to non-radiogenic osmium varies through time.
Rhenium prefers to enter sulfides more readily than osmium. Hence, during melting of the mantle, rhenium is stripped out, preventing the osmium isotope ratio from changing appreciably. This locks in an initial osmium isotope ratio for the sample at the time of the melting event, and these initial ratios are used to determine the source characteristics and age of mantle melting events.
Noble gas isotopes
Natural isotopic variations amongst the noble gases result from both radiogenic and nucleogenic production processes. Because of their unique properties, it is useful to distinguish them from the conventional radiogenic isotope systems described above.
Helium-3
Helium-3 was trapped in the planet when it formed. Some 3He is being added by meteoric dust, primarily collecting on the bottom of oceans (although due to subduction, all oceanic tectonic plates are younger than continental plates). However, 3He will be degassed from oceanic sediment during subduction, so cosmogenic 3He is not affecting the concentration or noble gas ratios of the mantle.
Helium-3 is created by cosmic ray bombardment, and by lithium spallation reactions which generally occur in the crust. Lithium spallation is the process by which a high-energy neutron bombards a lithium atom, creating a 3He and a 4He ion. This requires significant lithium to adversely affect the 3He/4He ratio.
All degassed helium is lost to space eventually, due to the average speed of helium exceeding the escape velocity for the Earth. Thus, it is assumed the helium content and ratios of Earth's atmosphere have remained essentially stable.
It has been observed that 3He is present in volcano emissions and oceanic ridge samples. How 3He is stored in the planet is under investigation, but it is associated with the mantle and is used as a marker of material of deep origin.
Due to similarities in helium and carbon in magma chemistry, outgassing of helium requires the loss of volatile components (water, carbon dioxide) from the mantle, which happens at depths of less than 60 km. However, 3He is transported to the surface primarily trapped in the crystal lattice of minerals within fluid inclusions.
Helium-4 is created by radiogenic production (the decay of uranium- and thorium-series elements). The continental crust has become enriched in those elements relative to the mantle, and thus more 4He is produced in the crust than in the mantle.
The ratio (R) of 3He to 4He is often used to represent 3He content. R is usually given as a multiple of the present atmospheric ratio (Ra); a short numerical sketch of the conversion follows the list below.
Common values for R/Ra:
Old continental crust: less than 1
mid-ocean ridge basalt (MORB): 7 to 9
Spreading ridge rocks: 9.1 plus or minus 3.6
Hotspot rocks: 5 to 42
Ocean and terrestrial water: 1
Sedimentary formation water: less than 1
Thermal spring water: 3 to 11
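As a minimal sketch of how a measured 3He/4He ratio is expressed in R/Ra units, assuming a commonly quoted value for the atmospheric ratio Ra (treat it as an assumption):

```python
# Express a measured 3He/4He ratio as a multiple of the atmospheric
# ratio Ra. The value of RA below is an assumed, commonly quoted figure.
RA = 1.384e-6  # approximate atmospheric 3He/4He ratio

def r_over_ra(he3_he4: float) -> float:
    return he3_he4 / RA

print(r_over_ra(1.1e-5))  # ~ 8, within the MORB range of 7 to 9
```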
3He/4He isotope chemistry is being used to date groundwaters, estimate groundwater flow rates, track water pollution, and provide insights into hydrothermal processes, igneous geology and ore genesis.
Isotopes in actinide decay chains
Isotopes in the decay chains of actinides are unique amongst radiogenic isotopes because they are both radiogenic and radioactive. Because their abundances are normally quoted as activity ratios rather than atomic ratios, they are best considered separately from the other radiogenic isotope systems.
Protactinium/Thorium – 231Pa/230Th
Uranium is well mixed in the ocean, and its decay produces 231Pa and 230Th at a constant activity ratio (0.093). The decay products are rapidly removed by adsorption on settling particles, but not at equal rates: 231Pa has a residence time equivalent to the residence time of deep water in the Atlantic basin (around 1000 years), but 230Th is removed more rapidly (within centuries). Thermohaline circulation effectively exports 231Pa from the Atlantic into the Southern Ocean, while most of the 230Th remains in Atlantic sediments. As a result, there is a relationship between 231Pa/230Th in Atlantic sediments and the rate of overturning: faster overturning produces a lower sediment 231Pa/230Th ratio, while slower overturning increases this ratio. The combination of δ13C and 231Pa/230Th can therefore provide a more complete insight into past circulation changes.
Anthropogenic isotopes
Tritium/helium-3
Tritium was released to the atmosphere during atmospheric testing of nuclear bombs. Radioactive decay of tritium produces the noble gas helium-3. Comparing the ratio of tritium to helium-3 (3H/3He) allows estimation of the age of recent ground waters. A small amount of tritium is also produced naturally by cosmic ray spallation and spontaneous ternary fission in natural uranium and thorium, but due to the relatively short half-life of tritium and the relatively small quantities (compared to those from anthropogenic sources) those sources of tritium usually play only a secondary role in the analysis of groundwater.
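The age estimate follows from the decay law; the standard tritium–helium groundwater age equation, using the tritium half-life of about 12.3 years, is

t = (t_half / ln 2) × ln(1 + [3He_trit] / [3H])

where 3He_trit is the tritiogenic helium-3 accumulated since the water was isolated from the atmosphere.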
| Physical sciences | Geochemistry | Earth science |
2894782 | https://en.wikipedia.org/wiki/Facial%20tissue | Facial tissue | Facial tissue and paper handkerchief refers to a class of soft, absorbent, disposable papers that are suitable for use on the face. They are disposable alternatives for cloth handkerchiefs. The terms are commonly used to refer to the type of paper tissue, usually sold in boxes, that is designed to facilitate the expulsion of nasal mucus from the nose (nose-blowing) although it may refer to other types of facial tissues such as napkins and wipes.
Facial tissues are often referred to simply as "tissues", or (in Canada and the United States) by the generic trademark "Kleenex", which popularized the invention and its use outside of Japan.
Manufacture
Facial tissue and paper handkerchiefs are made from the lowest-basis-weight tissue paper (14–18 g/m2). The surface is often made smoother by light calendering. These papers usually consist of 2–3 plies. Because of high quality requirements, the base tissue is normally made entirely from pure chemical pulp, but it may contain selected added recycled fiber. The tissue paper may be treated with softeners, lotions, or added perfume to achieve the right properties or "feel". The finished facial tissues or handkerchiefs are folded and put into pocket-size packages or a box dispenser.
Facial tissue may contain non-biodegradable additives for strength.
History
Facial tissue has been used for centuries in Japan, in the form of washi () or Japanese tissue, as described in this 17th-century European account of the voyage of Hasekura Tsunenaga:
"They blow their noses in soft silky papers the size of a hand, which they never use twice, so that they throw them on the ground after usage, and they were delighted to see people around them precipitate themselves to pick them up."
In 1924, facial tissues as they are known today were first introduced by Kimberly-Clark as Kleenex, invented as a means of removing cold cream. Early advertisements linked Kleenex to Hollywood makeup departments and sometimes included endorsements from movie stars (Helen Hayes and Jean Harlow) who used Kleenex to remove their theatrical makeup with cold cream. It was customers who began using Kleenex as a disposable handkerchief: a reader survey conducted in 1926 by a newspaper in Peoria, Illinois found that 60% of users used it for blowing their nose, while the other 40% put it to various uses, including as napkins and toilet paper.
Kimberly-Clark also introduced pop-up, colored, printed, pocket, and 3-ply facial tissues.
Leading global players
The leading global players in the facial tissue market include:
Procter & Gamble
Seventh Generation
Kimberly-Clark
Essity
Asia Pulp and Paper
Hengan International
Vinda International
Georgia-Pacific
Sofidel Group
WEPA Group
Metsa Group
CMPC Tissue
Industrie Cartarie Tronchetti (ICT)
Kruger
Cascades
C&S Paper
Sallettalen y Briand s.l (sbpapel)
Notable brands in the United States
Puffs
Kleenex
Scotties
Seventh Generation
Fiora
Nice 'N Soft
Marcal
Green Forest
Notable brands outside the United States
Tempo (brand)
Renova (brand)
| Biology and health sciences | Hygiene products | Health |
2898991 | https://en.wikipedia.org/wiki/Planck%20postulate | Planck postulate | The Planck postulate (or Planck's postulate), one of the fundamental principles of quantum mechanics, is the postulate that the energy of oscillators in a black body is quantized, and is given by
E = n h ν

where n is an integer (1, 2, 3, ...), h is the Planck constant, and ν (the Greek letter nu) is the frequency of the oscillator.
The postulate was introduced by Max Planck in his derivation of his law of black body radiation in 1900. This assumption allowed Planck to derive a formula for the entire spectrum of the radiation emitted by a black body. Planck was unable to justify this assumption based on classical physics; he considered quantization as being purely a mathematical trick, rather than (as is now known) a fundamental change in the understanding of the world. In other words, Planck then contemplated virtual oscillators.
In 1905, Albert Einstein adapted the Planck postulate to explain the photoelectric effect, but Einstein proposed that the energy of photons themselves was quantized (with photon energy given by the Planck–Einstein relation), and that quantization was not merely a feature of microscopic oscillators. Planck's postulate was further applied to understanding the Compton effect, and was applied by Niels Bohr to explain the emission spectrum of the hydrogen atom and derive the correct value of the Rydberg constant.
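As a numerical illustration of the Planck–Einstein relation mentioned above, the Python sketch below computes the energy of a photon of green light; the 500 nm wavelength is an arbitrary example.

```python
# Numerical illustration of the Planck-Einstein relation E = h * nu.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

wavelength = 500e-9              # 500 nm, arbitrary green-light example
nu = C / wavelength              # frequency in Hz
energy_j = H * nu                # photon energy in joules
energy_ev = energy_j / 1.602176634e-19

print(f"{nu:.3e} Hz, {energy_ev:.2f} eV")  # ~6.0e14 Hz, ~2.48 eV
```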
| Physical sciences | Quantum mechanics | Physics |
25916521 | https://en.wikipedia.org/wiki/Plasma%20%28physics%29 | Plasma (physics) | Plasma () is one of four fundamental states of matter (the other three being solid, liquid, and gas) characterized by the presence of a significant portion of charged particles in any combination of ions or electrons. It is the most abundant form of ordinary matter in the universe, mostly in stars (including the Sun), but also dominating the rarefied intracluster medium and intergalactic medium.
Plasma can be artificially generated, for example, by heating a neutral gas or subjecting it to a strong electromagnetic field.
The presence of charged particles makes plasma electrically conductive, with the dynamics of individual particles and macroscopic plasma motion governed by collective electromagnetic fields and very sensitive to externally applied fields. The response of plasma to electromagnetic fields is used in many modern devices and technologies, such as plasma televisions or plasma etching.
Depending on temperature and density, a certain number of neutral particles may also be present, in which case plasma is called partially ionized. Neon signs and lightning are examples of partially ionized plasmas.
Unlike the phase transitions between the other three states of matter, the transition to plasma is not well defined and is a matter of interpretation and context. Whether a given degree of ionization suffices to call a substance "plasma" depends on the specific phenomenon being considered.
Early history
Plasma was first identified in the laboratory by Sir William Crookes. Crookes presented a lecture on what he called "radiant matter" to the British Association for the Advancement of Science, in Sheffield, on Friday, 22 August 1879.
Systematic studies of plasma began with the research of Irving Langmuir and his colleagues in the 1920s. Langmuir also introduced the term "plasma" as a description of ionized gas in 1928.
Lewi Tonks and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s, recall that Langmuir first used the term by analogy with the blood plasma. Mott-Smith recalls, in particular, that the transport of electrons from thermionic filaments reminded Langmuir of "the way blood plasma carries red and white corpuscles and germs."
Definitions
The fourth state of matter
Plasma is called the fourth state of matter after solid, liquid, and gas. It is a state of matter in which an ionized substance becomes highly electrically conductive to the point that long-range electric and magnetic fields dominate its behaviour.
Plasma is typically an electrically quasineutral medium of unbound positive and negative particles (i.e., the overall charge of a plasma is roughly zero). Although these particles are unbound, they are not "free" in the sense of not experiencing forces. Moving charged particles generate electric currents, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn, this governs collective behaviour with many degrees of variation.
Plasma is distinct from the other states of matter. In particular, describing a low-density plasma as merely an "ionized gas" is wrong and misleading, even though it is similar to the gas phase in that both assume no definite shape or volume.
Ideal plasma
Three factors define an ideal plasma; a numerical check of these criteria is sketched after the list:
The plasma approximation: The plasma approximation applies when the plasma parameter Λ, representing the number of charge carriers within the Debye sphere, is much higher than unity. It can readily be shown that this criterion is equivalent to smallness of the ratio of the plasma's electrostatic energy density to its thermal energy density. Such plasmas are called weakly coupled.
Bulk interactions: The Debye length is much smaller than the physical size of the plasma. This criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral.
Collisionlessness: The electron plasma frequency (measuring plasma oscillations of the electrons) is much larger than the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. Such plasmas are called collisionless.
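A short numerical check of these criteria, using rough, assumed discharge conditions (the density and temperature below are hypothetical), might look like this:

```python
# Evaluate the ideal-plasma criteria for assumed discharge conditions.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB   = 1.381e-23   # Boltzmann constant, J/K
E    = 1.602e-19   # elementary charge, C
ME   = 9.109e-31   # electron mass, kg

n_e = 1e16          # electron density, m^-3 (hypothetical)
T_e = 2.0 * 11604   # electron temperature: 2 eV expressed in kelvin

debye = math.sqrt(EPS0 * KB * T_e / (n_e * E**2))     # Debye length, m
big_lambda = (4.0 / 3.0) * math.pi * n_e * debye**3   # charges per Debye sphere
omega_pe = math.sqrt(n_e * E**2 / (EPS0 * ME))        # plasma frequency, rad/s

print(f"Debye length     ~ {debye:.2e} m")      # ~1e-4 m
print(f"Plasma parameter ~ {big_lambda:.2e}")   # >> 1: weakly coupled
print(f"omega_pe         ~ {omega_pe:.2e} rad/s")
```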
Non-neutral plasma
The strength and range of the electric force and the good conductivity of plasmas usually ensure that the densities of positive and negative charges in any sizeable region are equal ("quasineutrality"). A plasma with a significant excess of charge density, or, in the extreme case, one composed of a single species, is called a non-neutral plasma. In such a plasma, electric fields play a dominant role. Examples are charged particle beams, an electron cloud in a Penning trap, and positron plasmas.
Dusty plasma
A dusty plasma contains tiny charged particles of dust (typically found in space). The dust particles acquire high charges and interact with each other. A plasma that contains larger particles is called grain plasma. Under laboratory conditions, dusty plasmas are also called complex plasmas.
Properties and parameters
Density and ionization degree
For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density n_e, that is, the number of charge-contributing electrons per unit volume. The degree of ionization α is defined as the fraction of particles (neutral plus ionized) that are ionized:

α = n_i / (n_i + n_n)

where n_i is the ion density and n_n the neutral density (in number of particles per unit volume). In the case of fully ionized matter, α = 1. Because of the quasineutrality of plasma, the electron and ion densities are related by n_e = ⟨Z⟩ n_i, where ⟨Z⟩ is the average ion charge (in units of the elementary charge).
Temperature
Plasma temperature, commonly measured in kelvin or electronvolts, is a measure of the thermal kinetic energy per particle. High temperatures are usually needed to sustain ionization, which is a defining feature of a plasma. The degree of plasma ionization is determined by the electron temperature relative to the ionization energy (and more weakly by the density). In thermal equilibrium, the relationship is given by the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas.
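The Saha equation itself is not reproduced in the text; for a single ionization stage its standard form is

n_i n_e / n_n = (2 g_i / g_n) (2π m_e k_B T / h^2)^(3/2) exp(−ε_i / (k_B T))

where g_i and g_n are the statistical weights of the ion and neutral states and ε_i is the ionization energy.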
In most cases, the electrons and heavy plasma particles (ions and neutral atoms) separately have a relatively well-defined temperature; that is, their energy distribution function is close to a Maxwellian even in the presence of strong electric or magnetic fields. However, because of the large difference in mass between electrons and ions, their temperatures may be different, sometimes significantly so. This is especially common in weakly ionized technological plasmas, where the ions are often near the ambient temperature while electrons reach thousands of kelvin. The opposite case is the z-pinch plasma where the ion temperature may exceed that of electrons.
Plasma potential
Since plasmas are very good electrical conductors, electric potentials play an important role. The average potential in the space between charged particles, independent of how it can be measured, is called the "plasma potential", or the "space potential". If an electrode is inserted into a plasma, its potential will generally lie considerably below the plasma potential due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of "quasineutrality", which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (n_e ≈ ⟨Z⟩ n_i), but on the scale of the Debye length, there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths.
The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation, n_e ∝ exp(eΦ / (k_B T_e)), where Φ is the electrostatic potential.
Differentiating this relation provides a means to calculate the electric field from the density: E = −(k_B T_e / e) (∇n_e / n_e).
It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small; otherwise, it will be dissipated by the repulsive electrostatic force.
Magnetization
The existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. Plasma with a magnetic field strong enough to influence the motion of the charged particles is said to be magnetized. A common quantitative criterion is that a particle on average completes at least one gyration around the magnetic-field line before making a collision, i.e., ω_ce / ν_coll ≥ 1, where ω_ce is the electron gyrofrequency and ν_coll is the electron collision rate. It is often the case that the electrons are magnetized while the ions are not. Magnetized plasmas are anisotropic, meaning that their properties in the direction parallel to the magnetic field are different from those perpendicular to it. While electric fields in plasmas are usually small due to the plasma's high conductivity, the electric field associated with a plasma moving with velocity v in the magnetic field B is given by the usual Lorentz formula E = −v × B, and is not affected by Debye shielding.
Mathematical descriptions
To completely describe the state of a plasma, all of the particle locations and velocities that describe the electromagnetic field in the plasma region would need to be written down. However, it is generally not practical or necessary to keep track of all the particles in a plasma. Therefore, plasma physicists commonly use less detailed descriptions, of which there are two main types:
Fluid model
Fluid models describe plasmas in terms of smoothed quantities, like density and averaged velocity around each position (see Plasma parameters). One simple fluid model, magnetohydrodynamics, treats the plasma as a single fluid governed by a combination of Maxwell's equations and the Navier–Stokes equations. A more general description is the two-fluid plasma, where the ions and electrons are described separately. Fluid models are often accurate when collisionality is sufficiently high to keep the plasma velocity distribution close to a Maxwell–Boltzmann distribution. Because fluid models usually describe the plasma in terms of a single flow at a certain temperature at each spatial location, they can neither capture velocity space structures like beams or double layers, nor resolve wave-particle effects.
Kinetic model
Kinetic models describe the particle velocity distribution function at each point in the plasma and therefore do not need to assume a Maxwell–Boltzmann distribution. A kinetic description is often necessary for collisionless plasmas. There are two common approaches to kinetic description of a plasma. One is based on representing the smoothed distribution function on a grid in velocity and position. The other, known as the particle-in-cell (PIC) technique, includes kinetic information by following the trajectories of a large number of individual particles. Kinetic models are generally more computationally intensive than fluid models. The Vlasov equation may be used to describe the dynamics of a system of charged particles interacting with an electromagnetic field.
In magnetized plasmas, a gyrokinetic approach can substantially reduce the computational expense of a fully kinetic simulation.
Plasma science and technology
Plasmas are studied by the vast academic field of plasma science or plasma physics, including several sub-disciplines such as space plasma physics.
Plasmas can appear in nature in various forms and locations, a few examples of which are described below.
Space and astrophysics
Plasmas are by far the most common phase of ordinary matter in the universe, both by mass and by volume.
Above the Earth's surface, the ionosphere is a plasma, and the magnetosphere contains plasma. Within our Solar System, interplanetary space is filled with the plasma expelled via the solar wind, extending from the Sun's surface out to the heliopause. Furthermore, all the distant stars, and much of interstellar space or intergalactic space is also filled with plasma, albeit at very low densities. Astrophysical plasmas are also observed in accretion disks around stars or compact objects like white dwarfs, neutron stars, or black holes in close binary star systems. Plasma is associated with ejection of material in astrophysical jets, which have been observed with accreting black holes or in active galaxies like M87's jet that possibly extends out to 5,000 light-years.
Artificial plasmas
Most artificial plasmas are generated by the application of electric and/or magnetic fields through a gas. Plasma generated in a laboratory setting and for industrial use can be generally categorized by:
The type of power source used to generate the plasma—DC, AC (typically with radio frequency (RF)) and microwave
The pressure they operate at—vacuum pressure (< 10 mTorr or 1 Pa), moderate pressure (≈1 Torr or 100 Pa), atmospheric pressure (760 Torr or 100 kPa)
The degree of ionization within the plasma—fully, partially, or weakly ionized
The temperature relationships within the plasma—thermal plasma (electrons and heavy particles in thermal equilibrium, T_e ≈ T_i), non-thermal or "cold" plasma (T_e ≫ T_i, with the heavy particles close to ambient temperature)
The electrode configuration used to generate the plasma
The magnetization of the particles within the plasma—magnetized (both ion and electrons are trapped in Larmor orbits by the magnetic field), partially magnetized (the electrons but not the ions are trapped by the magnetic field), non-magnetized (the magnetic field is too weak to trap the particles in orbits but may generate Lorentz forces)
Generation of artificial plasma
Just like the many uses of plasma, there are several means for its generation. However, one principle is common to all of them: there must be energy input to produce and sustain it. In the simplest case, plasma is generated when an electric current is applied across a dielectric gas or fluid (an electrically non-conducting material), a discharge tube being a simple example (DC used for simplicity).
The potential difference and subsequent electric field pull the bound electrons (negative) toward the anode (positive electrode) while the cathode (negative electrode) pulls the nucleus. As the voltage increases, the field stresses the material (by electric polarization) beyond its dielectric limit (termed the dielectric strength) into a stage of electrical breakdown, marked by an electric spark, where the material transforms from an insulator into a conductor (as it becomes increasingly ionized). The underlying process is the Townsend avalanche, where collisions between electrons and neutral gas atoms create more ions and electrons: the first impact of an electron on an atom results in one ion and two electrons. The number of charged particles therefore increases rapidly, reaching the millions only "after about 20 successive sets of collisions", mainly owing to the small mean free path (average distance travelled between collisions).
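The quoted growth rate can be checked with a toy calculation: if each "set of collisions" doubles the electron population, about 20 sets already yield a million charge carriers. A minimal sketch of this idealized doubling (ignoring losses, attachment, and geometry):

```python
# Idealized Townsend avalanche: each electron-neutral impact yields
# one ion and two electrons, so each "set of collisions" doubles the
# electron count (a deliberate simplification of the text's description).

def avalanche(generations: int) -> int:
    electrons = 1
    for _ in range(generations):
        electrons *= 2  # every electron ionizes one atom, freeing another electron
    return electrons

if __name__ == "__main__":
    for k in (10, 20, 30):
        print(f"after {k} collision sets: {avalanche(k):,} electrons")
    # after 20 sets: 1,048,576 electrons -- "in the millions", as the text notes
```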
Electric arc
An electric arc is a continuous electric discharge between two electrodes, similar to lightning.
With ample current density, the discharge forms a luminous arc, in which the inter-electrode material (usually a gas) undergoes various stages: saturation, breakdown, glow, transition, and thermal arc. The voltage rises to its maximum in the saturation stage, and thereafter it fluctuates through the various stages while the current progressively increases. Electrical resistance along the arc creates heat, which dissociates more gas molecules and ionizes the resulting atoms. The electrical energy is thus given to electrons, which, owing to their great mobility and large numbers, disperse it rapidly by elastic collisions to the heavy particles.
Examples of industrial plasma
Plasmas find applications in many fields of research, technology and industry, for example, in industrial and extractive metallurgy, surface treatments such as plasma spraying (coating), etching in microelectronics, metal cutting and welding; as well as in everyday vehicle exhaust cleanup and fluorescent/luminescent lamps, fuel ignition, and even in supersonic combustion engines for aerospace engineering.
Low-pressure discharges
Glow discharge plasmas: non-thermal plasmas generated by the application of a DC or low-frequency RF (<100 kHz) electric field to the gap between two metal electrodes. Probably the most common plasma; this is the type of plasma generated within fluorescent light tubes.
Capacitively coupled plasma (CCP): similar to glow discharge plasmas, but generated with high frequency RF electric fields, typically 13.56 MHz. These differ from glow discharges in that the sheaths are much less intense. These are widely used in the microfabrication and integrated circuit manufacturing industries for plasma etching and plasma enhanced chemical vapor deposition.
Cascaded arc plasma source: a device to produce low-temperature (≈1 eV) high-density plasmas (HDP).
Inductively coupled plasma (ICP): similar to a CCP and with similar applications but the electrode consists of a coil wrapped around the chamber where plasma is formed.
Wave heated plasma: similar to CCP and ICP in that it is typically RF (or microwave). Examples include helicon discharge and electron cyclotron resonance (ECR).
Atmospheric pressure
Arc discharge: this is a high power thermal discharge of very high temperature (≈10,000 K). It can be generated using various power supplies. It is commonly used in metallurgical processes. For example, it is used to smelt minerals containing Al2O3 to produce aluminium.
Corona discharge: this is a non-thermal discharge generated by the application of high voltage to sharp electrode tips. It is commonly used in ozone generators and particle precipitators.
Dielectric barrier discharge (DBD): this is a non-thermal discharge generated by the application of high voltages across small gaps wherein a non-conducting coating prevents the transition of the plasma discharge into an arc. It is often mislabeled "Corona" discharge in industry and has similar application to corona discharges. A common usage of this discharge is in a plasma actuator for vehicle drag reduction. It is also widely used in the web treatment of fabrics. The application of the discharge to synthetic fabrics and plastics functionalizes the surface and allows for paints, glues and similar materials to adhere. The dielectric barrier discharge was used in the mid-1990s to show that low temperature atmospheric pressure plasma is effective in inactivating bacterial cells. This work and later experiments using mammalian cells led to the establishment of a new field of research known as plasma medicine. The dielectric barrier discharge configuration was also used in the design of low temperature plasma jets. These plasma jets are produced by fast propagating guided ionization waves known as plasma bullets.
Capacitive discharge: this is a nonthermal plasma generated by the application of RF power (e.g., 13.56 MHz) to one powered electrode, with a grounded electrode held at a small separation distance on the order of 1 cm. Such discharges are commonly stabilized using a noble gas such as helium or argon.
"Piezoelectric direct discharge plasma:" is a nonthermal plasma generated at the high side of a piezoelectric transformer (PT). This generation variant is particularly suited for high efficient and compact devices where a separate high voltage power supply is not desired.
MHD converters
A world effort was triggered in the 1960s to study magnetohydrodynamic converters in order to bring MHD power conversion to market with commercial power plants of a new kind, converting the kinetic energy of a high velocity plasma into electricity with no moving parts at a high efficiency. Research was also conducted in the field of supersonic and hypersonic aerodynamics to study plasma interaction with magnetic fields to eventually achieve passive and even active flow control around vehicles or projectiles, in order to soften and mitigate shock waves, lower thermal transfer and reduce drag.
Such ionized gases used in "plasma technology" ("technological" or "engineered" plasmas) are usually weakly ionized, in the sense that only a tiny fraction of the gas molecules are ionized. These kinds of weakly ionized gases are also nonthermal "cold" plasmas. In the presence of magnetic fields, the study of such magnetized nonthermal weakly ionized gases involves resistive magnetohydrodynamics at low magnetic Reynolds number, a challenging field of plasma physics where calculations require dyadic tensors in a 7-dimensional phase space. In combination with a high Hall parameter, a critical value triggers the problematic electrothermal instability, which has limited these technological developments.
Complex plasma phenomena
Although the underlying equations governing plasmas are relatively simple, plasma behaviour is extraordinarily varied and subtle: the emergence of unexpected behaviour from a simple model is a typical feature of a complex system. Such systems lie in some sense on the boundary between ordered and disordered behaviour and cannot typically be described either by simple, smooth, mathematical functions, or by pure randomness. The spontaneous formation of interesting spatial features on a wide range of length scales is one manifestation of plasma complexity. The features are interesting, for example, because they are very sharp, spatially intermittent (the distance between features is much larger than the features themselves), or have a fractal form. Many of these features were first studied in the laboratory, and have subsequently been recognized throughout the universe. Examples of complexity and complex structures in plasmas include:
Filamentation
Striations or string-like structures are seen in many plasmas, like the plasma ball, the aurora, lightning, electric arcs, solar flares, and supernova remnants. They are sometimes associated with larger current densities, and the interaction with the magnetic field can form a magnetic rope structure. | Physical sciences | States of matter | null |
25921922 | https://en.wikipedia.org/wiki/Impossible%20color | Impossible color | Impossible colors are colors that do not appear in ordinary visual functioning. Different color theories suggest different hypothetical colors that humans are incapable of perceiving for one reason or another, and fictional colors are routinely created in popular culture. While some such colors have no basis in reality, phenomena such as cone cell fatigue enable colors to be perceived in certain circumstances that would not be otherwise.
Opponent process
The color opponent process is a color theory that states that the human visual system interprets information about color by processing signals from cone and rod cells in an antagonistic manner. The three types of cone cells have some overlap in the wavelengths of light to which they respond, so it is more efficient for the visual system to record differences between the responses of cones, rather than each type of cone's individual response. The opponent color theory suggests that there are three opponent channels:
Red versus green
Blue versus yellow
Black versus white (this is achromatic and detects light–dark variation or luminance)
Responses to one color of an opponent channel are antagonistic to those to the other color, and signals output from a place on the retina can contain one or the other but not both, for each opponent pair.
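A toy numerical sketch of this antagonistic encoding is shown below; the difference weights are illustrative placeholders, not physiological constants, and the function name is invented for the example.

```python
# Toy opponent-process encoding of cone responses.
# L, M, S are long-, medium-, short-wavelength cone signals in [0, 1];
# the simple differences below are illustrative, not fitted to physiology.

def opponent_channels(L: float, M: float, S: float) -> dict:
    return {
        "red_vs_green": L - M,              # > 0 reddish, < 0 greenish
        "blue_vs_yellow": S - (L + M) / 2,  # > 0 bluish, < 0 yellowish
        "light_vs_dark": (L + M + S) / 3,   # achromatic luminance signal
    }

print(opponent_channels(0.9, 0.4, 0.1))  # a stimulus read as reddish and yellowish
```

The point of the encoding is the one made in the text: each chromatic channel carries a signed difference, so a single retinal location can signal "reddish" or "greenish", but not both at once.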
Imaginary colors
A fictitious color or imaginary color is a point in a color space that corresponds to combinations of cone cell responses in one eye that cannot be produced by the eye in normal circumstances seeing any possible light spectrum. No physical object can have an imaginary color.
The spectral sensitivity curve of medium-wavelength (M) cone cells overlaps those of short-wavelength (S) and long-wavelength (L) cone cells. Light of any wavelength that interacts with M cones also interacts with S or L cones, or both, to some extent. Therefore, no wavelength and no spectral power distribution excites only one sort of cone.
Imaginary colors in color spaces
Although they cannot be seen, imaginary colors are often found in the mathematical descriptions that define color spaces.
Any additive mixture of two real colors is also a real color. When colors are displayed in the CIE 1931 XYZ color space, additive mixture results in a color along the line between the colors being mixed. By mixing any three colors, one can therefore create any color contained in the triangle they describe; this is called the gamut formed by those three colors, which are called primary colors. Any colors outside of this triangle cannot be obtained by mixing the chosen primaries.
When defining primaries, the goal is often to leave as many real colors in gamut as possible. Since the region of real colors is not a triangle, it is not possible to pick three real colors that span the whole region. The gamut can be increased by selecting more than three real primary colors, but since the region of real colors is bounded by a smooth curve, there will always be some colors near its edges that are left out. For this reason, primary colors are often chosen that lie outside of the region of real colors (that is, imaginary or fictitious primary colors) in order to capture the greatest area of real colors.
In computer and television screen color displays, the corners of the gamut triangle are defined by commercially available phosphors chosen to be as near as possible to pure red, green, and blue, within the area of real colors. Because of this, these displays render the colors nearest to real colors lying within their gamut triangle, rather than exact matches to real colors that plot outside of it. The specific gamuts available to commercial display devices vary by manufacturer and model and are often defined as part of international standards; for example, the gamut of chromaticities defined by the sRGB color space was developed into a standard (IEC 61966-2-1:1999) by the International Electrotechnical Commission.
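The geometry behind these gamut statements is elementary: a chromaticity is displayable by a three-primary device exactly when it lies inside the primaries' triangle. A minimal sketch follows, using the published sRGB primary chromaticities; the point-in-triangle test itself is plain barycentric geometry, not anything color-specific.

```python
# Point-in-triangle test for a display gamut in CIE xy chromaticity space.

def in_gamut(p, a, b, c):
    """Return True if chromaticity p lies inside triangle (a, b, c)."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)   # all same sign -> inside

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)   # sRGB primaries (x, y)
print(in_gamut((0.3127, 0.3290), R, G, B))  # D65 white point -> True
print(in_gamut((0.08, 0.84), R, G, B))      # saturated spectral green -> False
```

Colors for which the test returns False are exactly those a three-primary sRGB display must approximate rather than match.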
Chimerical colors
A chimerical color is an imaginary color that can be seen temporarily by looking steadily at a strong color until some of the cone cells become fatigued, temporarily changing their color sensitivities, and then looking at a markedly different color. The direct trichromatic description of vision cannot explain these colors, which can involve saturation signals outside the physical gamut imposed by the trichromatic model. Opponent process color theories, which treat intensity and chroma as separate visual signals, provide a biophysical explanation of these chimerical colors. For example, staring at a saturated primary-color field and then looking at a white object results in an opposing shift in hue, causing an afterimage of the complementary color. Exploration of the color space outside the range of "real colors" by this means is major corroborating evidence for the opponent-process theory of color vision. Chimerical colors can be seen while seeing with one eye or with both eyes, and are not observed to reproduce simultaneously qualities of opposing colors (e.g. "yellowish blue"). Chimerical colors include:
Stygian colors: these are simultaneously dark and impossibly saturated. For example, to see "stygian blue", stare at bright yellow to cause a dark blue afterimage, then look at black: the blue is seen as blue against the black, yet it is also as dark as the black. The color is not possible to achieve through normal vision, because the lack of incident light (in the black) prevents saturation of the blue/yellow chromatic signal (the blue appearance).
Self-luminous colors: these mimic the effect of glowing material, even when viewed on a medium such as paper, which can only reflect and not emit its own light. For example, to see "self-luminous red", stare at green to cause a red afterimage, then look at white: the red is seen against the white and may seem to be brighter than the white.
Hyperbolic colors: these are impossibly highly saturated. For example, to see "hyperbolic orange", stare at bright cyan to cause an orange afterimage, then look at orange: the resulting afterimage seen against the orange background may appear purer than the purest orange that can be made by any normally seen light.
Colors outside physical color space
According to the opponent-process theory, under normal circumstances, there is no hue that could be described as a mixture of opponent hues; that is, as a hue looking "redgreen" or "yellowblue".
In 1983, Hewitt D. Crane and Thomas P. Piantanida performed tests using an eye-tracker device that had a field of a vertical red stripe adjacent to a vertical green stripe, or several narrow alternating red and green stripes (or in some cases, yellow and blue instead). The device could track involuntary movements of one eye (there was a patch over the other eye) and adjust mirrors so the image would follow the eye and the boundaries of the stripes were always on the same places on the eye's retina; the field outside the stripes was blanked with occluders. Under such conditions, the edges between the stripes seemed to disappear (perhaps due to edge-detecting neurons becoming fatigued) and the colors flowed into each other in the brain's visual cortex, overriding the opponency mechanisms and producing not the color expected from mixing paints or from mixing lights on a screen, but new colors entirely, which are not in the CIE 1931 color space, either in its real part or in its imaginary parts. For red-and-green, some saw an even field of the new color; some saw a regular pattern of just-visible green dots and red dots; some saw islands of one color on a background of the other color. Some of the volunteers for the experiment reported that afterward, they could still imagine the new colors for a period of time.
Some observers indicated that although they were aware that what they were viewing was a color (that is, the field was not achromatic), they were unable to name or describe the color. One of these observers was an artist with a large color vocabulary. Other observers of the novel hues described the first stimulus as a reddish-green.
In 2001, Vincent A. Billock, Gerald A. Gleason, and Brian H. Tsou set up an experiment to test a theory that the 1983 experiment did not control for variations in the perceived luminance of the colors from subject to subject: two colors are equiluminant for an observer when rapidly alternating between them produces the least impression of flickering. The 2001 experiment was similar but controlled for luminance. They made these observations:
Some subjects (4 out of 7) described transparency phenomena, as though the opponent colors originated in two depth planes and could be seen, one through the other. ...
We found that when colors were equiluminant, subjects saw reddish greens, bluish yellows, or a multistable spatial color exchange (an entirely novel perceptual phenomena); when the colors were nonequiluminant, subjects saw spurious pattern formation.
This led them to propose a "soft-wired model of cortical color opponency", in which populations of neurons compete to fire and in which the "losing" neurons go completely silent. In this model, eliminating competition by, for instance, inhibiting connections between neural populations can allow mutually exclusive neurons to fire together.
Hsieh and Tse in 2006 disputed the existence of colors forbidden by opponency theory and claimed they are, in reality, intermediate colors. However, by their own account their methods differed from Crane and Piantanida: "They stabilized the border between two colors on the retina using an eye tracker linked to deflector mirrors, whereas we relied on visual fixation." Hsieh and Tse do not compare their methods to Billock and Tsou, and do not cite their work, even though it was published five years earlier in 2001. | Physical sciences | Basics | Physics |
30600763 | https://en.wikipedia.org/wiki/Evolution%20of%20reptiles | Evolution of reptiles | Reptiles arose about 320 million years ago during the Carboniferous period. Reptiles, in the traditional sense of the term, are defined as animals that have scales or scutes, lay land-based hard-shelled eggs, and possess ectothermic metabolisms. So defined, the group is paraphyletic, excluding endothermic animals like birds that are descended from early traditionally-defined reptiles. A definition in accordance with phylogenetic nomenclature, which rejects paraphyletic groups, includes birds while excluding mammals and their synapsid ancestors. So defined, Reptilia is identical to Sauropsida.
Though few reptiles today are apex predators, many examples of apex reptiles have existed in the past. Reptiles have an extremely diverse evolutionary history that has led to biological successes, such as dinosaurs, pterosaurs, plesiosaurs, mosasaurs, and ichthyosaurs.
First reptiles
Early reptiles
The origin of the reptiles lies about 320–310 million years ago, in the swamps of the late Carboniferous period, when the first reptiles evolved from advanced labyrinthodonts.
The oldest known animal that may have been an amniote, a reptile rather than an amphibian, is Casineria (though it has also been argued to be a temnospondyl amphibian).
A series of footprints from the fossil strata of Nova Scotia, dated to 315 million years, show typical reptilian toes and imprints of scales.
The tracks are attributed to Hylonomus, the oldest unquestionable reptile known.
It was a small, lizard-like animal, about 20 to 30 cm (8–12 in) long, with numerous sharp teeth indicating an insectivorous diet.
Other examples include Westlothiana (sometimes considered a stem-amniote rather than a true amniote) and Paleothyris, both of similar build and presumably similar habit. One of the best known early reptiles is Mesosaurus, a genus from the Early Permian that had returned to water, feeding on fish.
The earliest reptiles were largely overshadowed by bigger labyrinthodont amphibians, such as Cochleosaurus, and remained a small, inconspicuous part of the fauna until after the small ice age at the end of the Carboniferous.
Anapsids, synapsids, diapsids and sauropsids
It was traditionally assumed that first reptiles were anapsids, having a solid skull with holes only for the nose, eyes, spinal cord, etc.; the discoveries of synapsid-like openings in the skull roof of the skulls of several members of Parareptilia, including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs made it more ambiguous and it is currently uncertain whether the ancestral reptile had an anapsid-like or synapsid-like skull. Very soon after the first reptiles appeared, they split into two branches. One branch, Synapsida (including modern mammals), had one opening in the skull roof behind each eye. The other branch, Sauropsida, is itself divided into two main groups. One of them, the aforementioned Parareptilia, contained taxa with anapsid-like skull, as well as taxa with one opening behind each eye (see above). Members of the other group, Diapsida, possessed a hole in their skulls behind each eye, along with a second hole located higher on the skull. The function of the holes in both synapsids and diapsids was to lighten the skull and give room for the jaw muscles to move, allowing for a more powerful bite.
Turtles have been traditionally believed to be surviving anapsids, on the basis of their skull structure. The rationale for this classification was disputed, with some arguing that turtles are diapsids that reverted to this primitive state in order to improve their armor (see Parareptilia). Later morphological phylogenetic studies with this in mind placed turtles firmly within Diapsida. All molecular studies have strongly upheld the placement of turtles within diapsids, most commonly as a sister group to extant archosaurs.
Mammalian evolution
A basic cladogram of the origin of mammals.
Important developments in the transition from reptile to mammal were the evolution of warm-bloodedness, of molar occlusion, of the three-ossicle middle ear, of hair, and of mammary glands. By the end of the Triassic, there were many species that looked like modern mammals and, by the Middle Jurassic, the lineages leading to the three extant mammal groups — the monotremes, the marsupials, and the placentals — had diverged.
Rise of dinosaurs
Permian reptiles
Near the end of the Carboniferous, while the terrestrial reptiliomorph labyrinthodonts were still present, the synapsids evolved the first fully terrestrial large vertebrates, the pelycosaurs such as Edaphosaurus. In the mid-Permian period, the climate turned drier, resulting in a change of fauna: The primitive pelycosaurs were replaced by the more advanced therapsids.
The anapsid reptiles, whose massive skull roofs had no postorbital holes, continued and flourished throughout the Permian. The pareiasaurs reached giant proportions in the late Permian, eventually disappearing at the close of the period.
Late in the period, the diapsid reptiles split into two main lineages, the archosaurs (ancestors of crocodiles and dinosaurs) and the lepidosaurs (predecessors of modern tuataras, lizards, and snakes). Both groups remained lizard-like and relatively small and inconspicuous during the Permian.
The Mesozoic era, the "Age of Reptiles"
The close of the Permian saw the greatest mass extinction known (see the Permian–Triassic extinction event). Most of the earlier anapsid/synapsid megafauna disappeared, being replaced by the archosauromorph diapsids. The archosaurs were characterized by elongated hind legs and an erect pose, the early forms looking somewhat like long-legged crocodiles. The archosaurs became the dominant group during the Triassic period, developing into the well-known dinosaurs and pterosaurs, as well as the pseudosuchians. The Mesozoic is often called the "Age of Reptiles", a phrase coined by the early 19th-century paleontologist Gideon Mantell who recognized the dinosaurs and the ancestors of the crocodilians as the dominant land vertebrates. Some of the dinosaurs were the largest land animals ever to have lived while some of the smaller theropods gave rise to the first birds.
The sister group to Archosauromorpha is Lepidosauromorpha, containing squamates and rhynchocephalians, as well as their fossil relatives. Lepidosauromorpha contained at least one major group of the Mesozoic sea reptiles: the mosasaurs, which emerged during the Cretaceous period. The phylogenetic placement of other main groups of fossil sea reptiles – the sauropterygians and the ichthyosaurs, which evolved in the early Triassic and in the Middle Triassic respectively – is more controversial. Different authors linked these groups either to lepidosauromorphs or to archosauromorphs, and ichthyosaurs were also argued to be diapsids that did not belong to the least inclusive clade containing lepidosauromorphs and archosauromorphs.
The therapsids came under increasing pressure from the dinosaurs in the Jurassic; the mammals and the tritylodontids were the only survivors of the line by the end of the period.
Bird evolution
The main points in the transition from reptile to bird are the evolution from scales to feathers, the evolution of the beak (although beaks evolved independently in other organisms), the hollowing of the bones, the development of flight, and warm-bloodedness.
The evolution of birds is thought to have begun in the Jurassic Period, with the earliest birds derived from theropod dinosaurs. Birds are categorized as a biological class, Aves. The earliest known species in Aves is Archaeopteryx lithographica, from the Late Jurassic period. Modern phylogenetics place birds in the dinosaur clade Theropoda. According to the current consensus, Aves and Crocodilia are the sole living members of an unranked clade, the Archosauria.
Simplified cladogram from Senter (2007).
Demise of the dinosaurs
The close of the Cretaceous period saw the demise of the Mesozoic-era reptilian megafauna. Along with the massive volcanic activity of the time, the meteor impact that created the Cretaceous–Paleogene boundary is accepted as the main cause of this mass extinction event. Of the large marine reptiles, only sea turtles are left, and, of the dinosaurs, only the small feathered theropods survived, in the form of birds. The end of the "Age of Reptiles" led to the "Age of Mammals". Despite the change in phrasing, reptile diversification continued throughout the Cenozoic; squamates now make up the majority of extant reptiles (over 90%). There are approximately 9,766 extant species of reptiles, compared with 5,400 species of mammals, so the number of reptilian species (excluding birds) is nearly twice the number of mammalian species.
Role reversal
After the Cretaceous–Paleogene extinction event wiped out all of the non-avian dinosaurs (birds are generally regarded as the surviving dinosaurs) and several mammalian groups, placental and marsupial mammals diversified into many new forms and ecological niches throughout the Paleogene and Neogene periods. Some reached enormous sizes and almost as wide a variation as the dinosaurs once did. Nevertheless, mammalian megafauna never quite reached the skyscraper heights of some sauropods.
Nonetheless, large reptiles still composed important megafaunal components, such as giant tortoises, large crocodilians and, more locally, large varanids.
The four orders of Reptilia
Testudines
Testudines, or turtles, may have evolved from anapsids, but their exact origin is unknown and heavily debated. Fossils date back to around 220 million years ago and are remarkably similar to modern turtles. These first turtles retained the same body plan as all modern testudines and were mostly herbivorous, with some feeding exclusively on small marine organisms. The trademark shell is believed to have evolved from extensions of the backbone and widened ribs that fused together. This is supported by the fossil of Odontochelys semitestacea, which has an incomplete shell originating from the ribs and backbone. That species also had teeth alongside its beak, giving more support to its status as a transitional fossil, although this claim is still controversial. The shell evolved to protect against predators, but it also slows down the land-based species by a great amount, and this has caused many species to go extinct in recent times. With alien species out-competing them for food and their inability to escape from humans, many species in this order are now endangered.
Sphenodontia
Sphenodontians arose in the mid-Triassic and now consist of a single genus, the tuatara (Sphenodon), which comprises two endangered species living on New Zealand and some of its minor surrounding islands. Their evolutionary history encompasses many species. Recent paleogenetic discoveries show that tuataras are prone to quick speciation.
Squamata
The most recent order of reptiles, squamates, are recognized by having a movable quadrate bone (giving them upper-jaw movement), horny scales, and hemipenes. They originate from the early Jurassic and comprise the three suborders Lacertilia (paraphyletic), Serpentes, and Amphisbaenia. Although they are the most recent order, squamates contain more species than any other reptilian order. Squamates are a monophyletic group included, with the Sphenodontia (e.g. tuataras), in the Lepidosauria. The latter superorder, together with some extinct animals like the plesiosaurs, constitutes the Lepidosauromorpha, the sister infraclass to the Archosauromorpha, the group that contains crocodiles, turtles, and birds. Although squamate fossils first appear in the early Jurassic, mitochondrial phylogenetics suggests that they evolved in the late Permian. Most evolutionary relationships within the squamates are not yet completely worked out, with the relationship of snakes to other groups being the most problematic. From morphological data, iguanid lizards were thought to have diverged from other squamates very early, but recent molecular phylogenies, both from mitochondrial and nuclear DNA, do not support this early divergence. Because snakes have a faster molecular clock than other squamates, and there are few early snake and snake-ancestor fossils, it is difficult to resolve the relationship between snakes and the other squamate groups.
Crocodilia
The first organisms to show characteristics similar to those of crocodilians were the Crurotarsi, which appeared during the early Triassic, 250 million years ago. These quickly gave rise to the Eusuchia, 220 million years ago, which would eventually lead to the order Crocodilia, the first members of which arose about 85 million years ago during the late Cretaceous. The earliest fossil evidence of eusuchians is of the genus Isisfordia. Early species mainly fed on fish and vegetation. They were land-based, most having long legs (compared to modern crocodiles), and many were bipedal. As diversification increased, many apex predators arose, all of which are now extinct. Modern crocodilians arose through a set of specific evolutionary changes. Bipedalism was completely lost in favor of a generally low quadrupedal stance, allowing an easy and less noticeable entrance into bodies of water. The shape of the skull and jaw changed to give a stronger grasp, along with upward-pointing nostrils and eyes. Mimicry is evident, as the backs of all crocodilians resemble some type of floating log, and their general color scheme of brown and green mimics moss or wood. The tail also took on a paddle shape to increase swimming speed. The only remaining groups of this order are the alligators, caimans, crocodiles, gharials, and false gharials.
| Biology and health sciences | Basics_4 | Biology |
30607828 | https://en.wikipedia.org/wiki/Gobiinae | Gobiinae | True gobies were a subfamily, the Gobiinae, of the goby family Gobiidae, although the 5th edition of Fishes of the World does not subdivide the Gobiidae into subfamilies. They are found in all oceans and a few rivers and lakes, but most live in warm waters. Altogether, the Gobiinae unite about 1149 described species in 160 genera, and new ones are still being discovered regularly.
Description and ecology
They are usually mid-sized to small ray-finned fishes; some are very colorful, while others are cryptic. Most true gobies are less than 10 cm (4 in) long when fully grown. The largest species, Glossogobius giuris, can reach up to 50 cm (20 in); the smallest known species as of 2010, Trimmatom nanus, is just about 1 cm in length when fully grown, making it one of the smallest vertebrates.
In many true gobies, the pelvic fins have grown together into a suction cup that they can use to hold on to the substrate. Most have two dorsal fins, the first made up of spiny fin rays, while the second has some spines in front followed by numerous soft rays.
They are most plentiful in the tropical and subtropical regions, but as a group are almost cosmopolitan in marine ecosystems. A few species tolerate brackish water, and some – Padogobius and Pomatoschistus species – even inhabit fresh water. They are generally benthic as adults (though the spawn can be distributed widely by ocean currents); only Sufflogobius bibarbatus is noted to be quite pelagic throughout its life. Most inhabit some sort of burrow or crevice and are somewhat territorial. In some cases, they live in symbiosis with unrelated animals, such as crustaceans.
The larger species are fished for food, in some cases on a commercial scale. Many Gobiinae species are popular aquarium fish. Especially popular are the colorful species, some of which are regularly traded. In general, their interesting behavior and bold habits make most true gobies seem attractive pets. However, their territoriality, and the fact that even the smallest species are fundamentally carnivorous and need live food to thrive, make them not easy to keep (particularly compared to the related family Eleotridae). As is typical for oceanic fishes, many Gobiinae tend to be almost impossible to breed in captivity, and some species have become rare through habitat destruction and overfishing.
Genera
This subfamily contains about 160 genera and 1120 species:
Aboma
Acentrogobius
Afurcagobius
Akko
Amblyeleotris
Amblygobius
Amoya
Anatirostrum
Ancistrogobius
Antilligobius
Aphia
Arcygobius
Arenigobius
Aruma
Asterropteryx
Aulopareia
Austrolethops
Babka
Barbulifer
Barbuligobius
Bathygobius
Benthophiloides
Benthophilus
Bollmannia
Bryaninops
Buenia
Cabillus
Caffrogobius
Callogobius
Caspiosoma
Chriolepis
Chromogobius
Corcyrogobius
Coryogalops
Coryphopterus
Cristatogobius
Croilia
Cryptocentroides
Cryptocentrus
Crystallogobius
Cryptopsilotris
Ctenogobiops
Deltentosteus
Didogobius
Discordipinna
Dotsugobius
Drombus
Ebomegobius
Echinogobius
Economidichthys
Egglestonichthys
Ego
Elacatinus
Eleotrica
Evermannia
Eviota
Exyrias
Favonigobius
Feia
Fusigobius
Gammogobius
Ginsburgellus
Gladiogobius
Glossogobius
Gobiodon
Gobiopsis
Gobiosoma
Gobius
Gobiusculus
Gobulus
Gorogobius
Grallenia
Gymneleotris
Hazeus
Hetereleotris
Heterogobius
Heteroplopomus
Hyrcanogobius
Istigobius
Kelloggella
Knipowitschia
Koumansetta
Larsonella
Lebetus
Lesueurigobius
Lobulogobius
Lophiogobius
Lophogobius
Lotilia
Lubricogobius
Luposicya
Lythrypnus
Macrodontogobius
Mahidolia
Mangarinus
Mauligobius
Mesogobius
Microgobius
Millerigobius
Minysicya
Myersina
Nematogobius
Neogobius
Nes
Nesogobius
Obliquogobius
Odondebuenia
Ophiogobius
Oplopomops
Oplopomus
Opua
Padogobius
Palatogobius
Palutrus
Parachaeturichthys
Paragobiodon
Paratrimma
Pariah
Parkraemeria
Parrella
Pascua
Phoxacromion
Phyllogobius
Platygobiopsis
Pleurosicya
Polyspondylogobius
Pomatoschistus
Ponticola
Porogobius
Priolepis
Proterorhinus
Psammogobius
Pseudaphya
Psilogobius
Psilotris
Pycnomma
Rhinogobiops
Risor
Robinsichthys
Signigobius
Silhouettea
Siphonogobius
Speleogobius
Stonogobiops
Sueviota
Sufflogobius
Thorogobius
Tigrigobius
Tomiyamichthys
Trimma
Trimmatom
Tryssogobius
Valenciennea
Vanderhorstia
Vanneaugobius
Varicus
Vomerogobius
Wheelerigobius
Yoga
Yongeichthys
Zebrus
Zosterisessor
| Biology and health sciences | Acanthomorpha | Animals |
2083108 | https://en.wikipedia.org/wiki/Triglidae | Triglidae | Triglidae, commonly known as gurnards or sea robins, are a family of bottom-feeding scorpaeniform ray-finned fish. The gurnards are distributed in temperate and tropical seas worldwide.
Taxonomy
Triglidae was first described as a family in 1815 by the French polymath and naturalist Constantine Samuel Rafinesque. In 1883 Jordan and Gilbert formally designated Trigla lyra, which had been described by Linnaeus in 1758, as the type species of the genus Trigla and so of the family Triglidae. The 5th edition of Fishes of the World classifies this family within the suborder Platycephaloidei in the order Scorpaeniformes. Other authorities differ and do not consider the Scorpaeniformes to be a valid order because the Perciformes order is not monophyletic without the taxa within the Scorpaeniformes being included. These authorities consider the Triglidae to belong to the suborder Triglioidei, along with the family Peristediidae, within the Perciformes. The family Peristediidae is included in the Triglidae as the subfamily Peristediinae by some authorities.
Etymology
Triglidae's name is based on that of Linnaeus's genus Trigla, a classical name for the red mullet (Mullus barbatus). Artedi thought the red mullet and the gurnards were the same, as fishes from both taxa are known to make sounds when taken out of the water, as well as being red in color. Linnaeus realized they were different and classified Trigla as a gurnard, in contradiction of the ancient usage. They get one of their common names, sea robin, from the orange ventral surface of the species in the genus Prionotus, and from their large pectoral fins, which resemble a bird's wings. When caught, they make a croaking noise similar to a frog, which has given them the onomatopoeic name gurnard.
Subfamilies and genera
Triglidae is divided into 3 subfamilies and 8 genera as listed below (including about 125 species). Some sources also include Trigloporus as a separate genus, but it is treated here as a subgenus of Chelidonichthys.
Prionotinae Kaup, 1873
Bellator Jordan & Evermann, 1896 (8 species)
Prionotus Lacépède, 1801 (23 species)
Pterygotriglinae Fowler, 1938
Bovitrigla Fowler, 1938 (monotypic)
Pterygotrigla Waite, 1899 (31 species)
Triglinae Rafinesque, 1815
Chelidonichthys Kaup, 1873 (10 species)
Eutrigla Fraser-Brunner, 1938 (monotypic)
Lepidotrigla Günther, 1860 (58 species)
Trigla Linnaeus, 1758 (monotypic)
These subfamilies have been given the rank of tribe, Prionotini, Pterygotriglini and Triglini, by some authorities. Prionotinae are regarded as the basal grouping with Triglinae being the most derived.
Characteristics
Triglidae gurnards have mouths which are either terminal or positioned slightly below the snout, the tip of which normally bears paired rostral projections, frequently armed with spines; these create the impression of a two-lobed snout when seen from above. There are no barbels on the head, and the preorbital bones typically project forward. The lower 3 rays of the pectoral fins are enlarged and free of the fin membrane. They have two separate dorsal fins, the first having between 7 and 11 spines while the second has 10 to 23 soft rays. The anal fin may have no spines, or it can have a single spine and 11 to 23 soft rays. The head is bony and resembles a casque. There are 9 or 10 branched rays in the caudal fin. The smallest species is the spotwing gurnard (Lepidotrigla spiloptera), which reaches a maximum total length of , while the largest is the tub gurnard (Chelidonichthys lucerna), which has a maximum published total length of .
Most species are around in length, with the females typically being larger than the males. They have an unusually solid skull, and many species also possess armored plates on their bodies. Another distinctive feature is the presence of a "drumming muscle" that makes sounds by beating against the swim bladder. The length of the swim bladder is negatively correlated with gonadal development. A sexual dimorphism in swim bladder size arises because this negative correlation is stronger in females than in males.
Sea robins have three "walking rays" on each side of their body. They are derived from the supportive structures in the pectoral fins, called fin-rays. During development, the fin-rays separate from the rest of the pectoral fin, developing into walking rays. These walking rays have specialized muscle divisions and unique anatomy that differ from typical fin-rays, allowing them to be used as supportive structures during underwater locomotion. The walking rays have been shown to be used for locomotion as well as for prey detection on the seafloor via chemoreception ("tasting"), being highly sensitive to the amino acids prevalent in some marine invertebrates.
Survival and reproduction
Classified as carnivores, gurnards mainly feed on crustaceans. Most species are opportunistic predators and will also feed on prey such as teleosts and mollusks. Gurnards have no primary predator; however, larger fish, marine mammals, birds, and humans will prey on them. They are bottom-dwelling fish, living down to 200 m (660 ft), although they can be found in much shallower water. Adult gurnards favor deeper water, while juveniles favor shallower water. The different genera of gurnards have diverse spawning periods, varying in length and time of year. For example, the tub gurnard's spawning period lasts from December to March, while red gurnard spawning takes place from September to May.
As food
Gurnard have firm white flesh that holds together well in cooking, making them well-suited to soups and stews. They are commonly used in the French dish bouillabaisse. One source describes gurnards as "rather bony and lacking in flavour"; others praise its flavour and texture.
They were often caught in British waters as a bycatch and discarded. However, as other species became less sustainable and more expensive they became more popular, with the wholesale price between 2007 and 2008 reported to have increased from £0.25 per kg to £4, and sales increasing tenfold by 2011. Gurnards also are now appearing in fish markets in the U.S.
Angling
Sea robins can be caught by dropping a variety of baits and lures to the seafloor, where they actively feed. Mackerel is believed to be the most efficient bait for catching sea robins, but crabs, bunker and other fish meat can also be used successfully depending on location. Sea robins can also be caught by lure fishing if lured near the substrate. They are often considered to be rough fish, caught when fishing for more desirable fish such as striped bass or flounder. Gurnard are also used as bait, for example by lobster fishermen.
| Biology and health sciences | Acanthomorpha | Animals |
2083650 | https://en.wikipedia.org/wiki/Dwarf%20rabbit | Dwarf rabbit | Dwarf rabbit refers either (formally) to a rabbit with the dwarfing gene, or (informally) to any small breed of domestic rabbit or specimen thereof, or (colloquially) to any small rabbit. Dwarfism is a genetic condition that may occur in humans and in many animals, including rabbits. True dwarfism is often associated with a cluster of physical abnormalities, including pituitary dwarfism. The process of dwarfing is used to selectively breed for smaller stature with each generation. Small stature is a characteristic of neoteny, which may account (in part) for the attraction of dwarf animals.
Small rabbits
The Netherland Dwarf is the smallest of the domestic rabbits. The American Rabbit Breeders Association (ARBA) accepts a weight range of , but is the maximum allowed by the British Rabbit Council (BRC). The small stature of the Netherland Dwarf was initially the result of the dwarfing gene: dw. Its short neck and rounded face are additional features of neoteny.
Many small rabbit breeds have the dwarfing gene, but the Polish and the Britannia Petite are among those that do not. They have attained their small stature solely through selective breeding of successively smaller generations (a process called dwarfing).
Some small rabbits (often mixed breeds) are false dwarfs, rabbits that did not inherit the dwarfing gene.
One of the smallest species of wild rabbit is the Marsh rabbit (Sylvilagus palustris), an excellent swimmer that weighs .
Smallest rabbit breeds
The following table includes rabbit breeds currently recognized by ARBA or by the BRC that have a maximum weight of . Also included is a small breed from Germany, the Teddy Dwarf.
| Biology and health sciences | Rabbits | Animals |
2083830 | https://en.wikipedia.org/wiki/Neutral%20particle%20oscillation | Neutral particle oscillation | In particle physics, neutral particle oscillation is the transmutation of a particle with zero electric charge into another neutral particle due to a change of a non-zero internal quantum number, via an interaction that does not conserve that quantum number. Neutral particle oscillations were first investigated in 1954 by Murray Gell-Mann and Abraham Pais.
For example, a neutron cannot transmute into an antineutron as that would violate the conservation of baryon number. But in those hypothetical extensions of the Standard Model which include interactions that do not strictly conserve baryon number, neutron–antineutron oscillations are predicted to occur.
Such oscillations can be classified into two types:
Particle–antiparticle oscillation (for example, K⁰–K̄⁰ oscillation).
Flavor oscillation (for example, ν_e–ν_μ oscillation).
In those cases where the particles decay to some final product, the system is not purely oscillatory, and an interference between oscillation and decay is observed.
History and motivation
CP violation
After the striking evidence for parity violation provided by Wu et al. in 1957, it was assumed that CP (charge conjugation–parity) is the quantity which is conserved. However, in 1964 Cronin and Fitch reported CP violation in the neutral kaon system. They observed the long-lived K_L (with CP = −1) undergoing decays into two pions (with CP = +1), thereby violating CP conservation.
In 2001, CP violation in the neutral B meson (B⁰–B̄⁰) system was confirmed by the BaBar and the Belle experiments. Direct CP violation in the B⁰ system was reported by both labs by 2005.
The K⁰–K̄⁰ and the B⁰–B̄⁰ systems can be studied as two-state systems, considering the particle and its antiparticle as the two states of a single particle.
The solar neutrino problem
The pp chain in the Sun produces an abundance of electron neutrinos ν_e. In 1968, R. Davis et al. first reported the results of the Homestake experiment. Also known as the Davis experiment, it used a huge tank of perchloroethylene in the Homestake mine (deep underground to eliminate the background from cosmic rays) in South Dakota. Chlorine nuclei in the perchloroethylene absorb ν_e to produce argon via the reaction
ν_e + ³⁷Cl → ³⁷Ar + e⁻,
which is essentially
ν_e + n → p + e⁻.
The experiment collected argon for several months. Because the neutrino interacts very weakly, only about one argon atom was collected every two days. The total accumulation was about one third of Bahcall's theoretical prediction.
In 1968, Bruno Pontecorvo showed that if neutrinos are not considered massless, then ν_e (produced in the Sun) can transform into some other neutrino species (ν_μ or ν_τ), to which the Homestake detector was insensitive. This explained the deficit in the results of the Homestake experiment. The final confirmation of this solution to the solar neutrino problem was provided in April 2002 by the SNO (Sudbury Neutrino Observatory) collaboration, which measured both the ν_e flux and the total neutrino flux.
This 'oscillation' between the neutrino species can first be studied considering any two, and then generalized to the three known flavors.
Description as a two-state system
Special case that only considers mixing
Caution: "mixing" discussed in this article is not the type obtained from mixed quantum states. Rather, "mixing" here refers to the superposition of "pure state" energy (mass) eigenstates, prescribed by a "mixing matrix" (e.g. the CKM or PMNS matricies).
Let H be the Hamiltonian of the two-state system, and let |1⟩ and |2⟩ be its orthonormal eigenvectors with eigenvalues E₁ and E₂ respectively.
Let |Ψ(t)⟩ be the state of the system at time t.
If the system starts as an energy eigenstate of H, say
|Ψ(0)⟩ = |1⟩,
then the time-evolved state, which is the solution of the Schrödinger equation
iħ (d/dt)|Ψ(t)⟩ = H|Ψ(t)⟩,
will be
|Ψ(t)⟩ = e^(−iE₁t/ħ) |1⟩.
But this is physically the same as |1⟩, since the exponential term is just a phase factor: it does not produce an observable new state. In other words, energy eigenstates are stationary eigenstates; that is, they do not yield observably distinct new states under time evolution.
Define {|1⟩, |2⟩} to be the basis in which the unperturbed Hamiltonian operator H₀ is diagonal:
H₀ = diag(E₁, E₂).
It can be shown that oscillation between the states will occur if and only if the off-diagonal terms of the Hamiltonian are non-zero.
Hence let us introduce a general perturbation W imposed on H₀ such that the resultant Hamiltonian H is still Hermitian. Then
H = H₀ + W, with W = ( W₁₁ W₁₂ ; W₂₁ W₂₂ ),
where W₁₁, W₂₂ ∈ ℝ and W₂₁ = W₁₂* ∈ ℂ.
The eigenvalues of the perturbed Hamiltonian H then change to E₊ and E₋, where
E_± = (E₁ + E₂ + W₁₁ + W₂₂)/2 ± √( (E₁ − E₂ + W₁₁ − W₂₂)²/4 + |W₁₂|² ).
Since H is a general 2×2 Hermitian matrix, it can be written as
H = a₀σ₀ + a (n̂·σ),
where σ₀ is the 2×2 identity matrix, σ = (σ₁, σ₂, σ₃) is the vector of Pauli matrices, a₀ and a are real numbers, and n̂ is a real unit vector. The following two results are clear:
(n̂·σ)² = σ₀. Proof: (n̂·σ)² = nᵢnⱼσᵢσⱼ = nᵢnⱼ(δᵢⱼσ₀ + iεᵢⱼₖσₖ) = |n̂|²σ₀ = σ₀, where the following results have been used: n̂ is a unit vector and hence nᵢnⱼδᵢⱼ = 1, and the Levi-Civita symbol εᵢⱼₖ is antisymmetric in any two of its indices (i and j in this case), and hence nᵢnⱼεᵢⱼₖ = 0.
The eigenvalues of n̂·σ are ±1 (since (n̂·σ)² = σ₀ and n̂·σ is traceless), and therefore the eigenvalues of H are E_± = a₀ ± a.
With the following parametrization (this parametrization helps as it normalizes the eigenvectors and also introduces an arbitrary phase φ, making the eigenvectors most general)
n̂ = (sin θ cos φ, sin θ sin φ, cos θ),
and using the above pair of results, the orthonormal eigenvectors of n̂·σ, and consequently those of H, are obtained as
|+⟩ = ( cos(θ/2), e^(iφ) sin(θ/2) )ᵀ and |−⟩ = ( −e^(−iφ) sin(θ/2), cos(θ/2) )ᵀ.
Writing the eigenvectors of H₀ in terms of those of H, we get
|1⟩ = cos(θ/2)|+⟩ − e^(iφ) sin(θ/2)|−⟩ and |2⟩ = e^(−iφ) sin(θ/2)|+⟩ + cos(θ/2)|−⟩.
Now if the particle starts out as an eigenstate of H₀ (say, |1⟩), that is
|Ψ(0)⟩ = |1⟩,
then under time evolution we get
|Ψ(t)⟩ = cos(θ/2) e^(−iE₊t/ħ)|+⟩ − e^(iφ) sin(θ/2) e^(−iE₋t/ħ)|−⟩,
which, unlike the previous case, is distinctly different from |1⟩.
We can then obtain the probability of finding the system in state |2⟩ at time t as
P₁₂(t) = |⟨2|Ψ(t)⟩|² = sin²θ sin²( (E₊ − E₋)t / 2ħ ),   (6)
which is called Rabi's formula. Hence, starting from one eigenstate of the unperturbed Hamiltonian H₀, the state of the system oscillates between the eigenstates of H₀ with the angular frequency (known as the Rabi frequency)
ω = (E₊ − E₋) / 2ħ.
From equation (6), we can conclude that oscillation will exist only if θ ≠ 0, i.e. only if W₁₂ ≠ 0 (in this parametrization sin θ = |W₁₂|/a). W₁₂ is hence known as the coupling term, as it connects the two eigenstates of the unperturbed Hamiltonian H₀ and thereby facilitates oscillation between the two.
Oscillation will also cease if the eigenvalues of the perturbed Hamiltonian are degenerate, i.e. E₊ = E₋. But this is a trivial case, as in such a situation the perturbation itself vanishes, H takes the (diagonal) form of H₀, and we are back to square one.
Hence, the necessary conditions for oscillation are:
Non-zero coupling, i.e. W₁₂ ≠ 0.
Non-degenerate eigenvalues of the perturbed Hamiltonian H, i.e. E₊ ≠ E₋.
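As a numerical check of Rabi's formula, the sketch below (with ħ = 1 and purely illustrative numbers for the energies and coupling) diagonalizes a perturbed two-state Hamiltonian directly and compares the exact transition probability with the closed form above:

```python
# Numerical check of Rabi's formula for a 2x2 Hermitian Hamiltonian.
import numpy as np

E1, E2 = 1.0, 1.5           # unperturbed eigenvalues (illustrative numbers)
W12 = 0.2 + 0.1j            # coupling term; oscillation requires W12 != 0
H = np.array([[E1, W12], [np.conj(W12), E2]])

evals, evecs = np.linalg.eigh(H)          # E-, E+ and their eigenvectors
t = np.linspace(0, 50, 500)

# exact evolution of |psi(0)> = |1> = (1, 0)
c = evecs.conj().T @ np.array([1.0, 0.0])   # amplitudes in the eigenbasis of H
psi_t = evecs @ (np.exp(-1j * np.outer(evals, t)) * c[:, None])
P12_exact = np.abs(psi_t[1])**2

# Rabi's formula: P12 = sin^2(theta) sin^2((E+ - E-) t / 2),
# with sin(theta) = 2|W12| / (E+ - E-)
dE = evals[1] - evals[0]
P12_rabi = (2 * abs(W12) / dE)**2 * np.sin(dE * t / 2)**2

print("max deviation:", np.max(np.abs(P12_exact - P12_rabi)))  # ~1e-16
```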
The general case: considering mixing and decay
If the particle(s) under consideration undergoes decay, then the Hamiltonian describing the system is no longer Hermitian. Since any matrix can be written as a sum of its Hermitian and anti-Hermitian parts, H can be written as
H = M − (i/2)Γ,
where M and Γ are Hermitian matrices (the mass matrix and the decay matrix respectively).
The eigenvalues of H are
μ_H = m_H − (i/2)γ_H and μ_L = m_L − (i/2)γ_L.
The suffixes H and L stand for Heavy and Light respectively (by convention), and this implies that Δm = m_H − m_L is positive.
The normalized eigenstates corresponding to μ_H and μ_L respectively, in the natural basis {|P⟩, |P̄⟩}, are
|P_H⟩ = p|P⟩ − q|P̄⟩ and |P_L⟩ = p|P⟩ + q|P̄⟩.
Here p and q are the mixing terms (with |p|² + |q|² = 1). Note that these eigenstates are no longer orthogonal.
Let the system start in the state |P⟩. That is,
|Ψ(0)⟩ = |P⟩ = (1/2p)( |P_L⟩ + |P_H⟩ ).
Under time evolution we then get
|Ψ(t)⟩ = g₊(t)|P⟩ − (q/p) g₋(t)|P̄⟩, where g_±(t) = ½( e^(−iμ_H t/ħ) ± e^(−iμ_L t/ħ) ).
Similarly, if the system starts in the state |P̄⟩, under time evolution we obtain
|Ψ̄(t)⟩ = g₊(t)|P̄⟩ − (p/q) g₋(t)|P⟩.
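The following sketch (with ħ = 1 and purely illustrative masses and widths, not measured values for any real meson) evolves the state using the g±(t) defined above and recovers the familiar damped-oscillation form of the particle-to-antiparticle probability:

```python
# Oscillation with decay: probability of finding the antiparticle at
# time t, starting from the particle. hbar = 1; numbers are illustrative.
import numpy as np

mH, mL = 1.0, 0.995      # heavy/light masses  -> dm = 0.005
gH, gL = 0.02, 0.10      # decay widths (the short-lived state decays faster)
q_over_p = 1.0           # |q/p| = 1 means no CP violation in mixing

t = np.linspace(0, 200, 1000)
muH = mH - 0.5j * gH
muL = mL - 0.5j * gL
g_minus = 0.5 * (np.exp(-1j * muH * t) - np.exp(-1j * muL * t))
P_particle_to_anti = np.abs(q_over_p * g_minus)**2

# equivalent closed form: damped oscillation at frequency dm = mH - mL
closed = 0.25 * abs(q_over_p)**2 * (np.exp(-gH * t) + np.exp(-gL * t)
         - 2 * np.exp(-0.5 * (gH + gL) * t) * np.cos((mH - mL) * t))
print("max deviation:", np.max(np.abs(P_particle_to_anti - closed)))  # ~1e-16
```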
CP violation as a consequence
If in a system |P⟩ and |P̄⟩ represent CP conjugate states (i.e. particle and antiparticle) of one another (i.e. CP|P⟩ = |P̄⟩ and CP|P̄⟩ = |P⟩), and certain other conditions are met, then CP violation can be observed as a result of this phenomenon. Depending on the condition, CP violation can be classified into three types:
CP violation through decay only
Consider the processes where {P, P̄} decay to final states {f, f̄}, where the barred and the unbarred kets of each set are CP conjugates of one another.
The probability of P decaying to f is governed by the decay amplitude
A_f = ⟨f|H|P⟩,
and that of its CP conjugate process by the amplitude
Ā_f̄ = ⟨f̄|H|P̄⟩.
If there is no CP violation due to mixing, then |q/p| = 1.
Now, the above two decay probabilities are unequal if
|Ā_f̄ / A_f| ≠ 1.
Hence, the decay becomes a CP violating process, as the probability of a decay and that of its CP conjugate process are not equal.
CP violation through mixing only
The probability (as a function of time) of observing P̄ starting from P is given by
℘(P → P̄; t) = | (q/p) g₋(t) |²,
and that of its CP conjugate process by
℘(P̄ → P; t) = | (p/q) g₋(t) |².
The above two probabilities are unequal if
|q/p| ≠ 1.
Hence, the particle–antiparticle oscillation becomes a CP violating process, as the particle and its antiparticle (say, P and P̄ respectively) are no longer equivalent eigenstates of CP.
CP violation through mixing-decay interference
Let f be a final state (a CP eigenstate) that both P and P̄ can decay to, with amplitudes A_f = ⟨f|H|P⟩ and Ā_f = ⟨f|H|P̄⟩, and define λ_f = (q/p)(Ā_f/A_f). Then, the decay probabilities are given by
℘(P → f; t) = |A_f|² | g₊(t) − λ_f g₋(t) |²,
and
℘(P̄ → f; t) = |A_f|² |p/q|² | λ_f g₊(t) − g₋(t) |².
From the above two quantities, it can be seen that even when there is no CP violation through mixing alone (i.e. |q/p| = 1) and there is no CP violation through decay alone (i.e. |Ā_f/A_f| = 1), the probabilities can still be unequal, provided that
Im(λ_f) ≠ 0.
The cross terms in the above expressions for the probability are thus associated with interference between mixing and decay.
An alternative classification
Usually, an alternative classification of CP violation is made: direct CP violation, defined as |Ā_f̄/A_f| ≠ 1, and indirect CP violation, which arises through mixing (|q/p| ≠ 1), through the interference of mixing and decay, or both.
Specific cases
Neutrino oscillation
Considering a strong coupling between two flavor eigenstates of neutrinos (for example, ν_e–ν_μ, ν_μ–ν_τ, etc.) and a very weak coupling between the third (that is, the third does not affect the interaction between the other two), equation (6) gives the probability of a neutrino of type α transmuting into type β as
℘_(α→β)(t) = sin²(2θ) sin²( (E₂ − E₁) t / 2ħ ),
where E₁ and E₂ are the energies of the mass eigenstates ν₁ and ν₂, and θ is the mixing angle relating the flavor eigenstates to the mass eigenstates (equal to half the polar angle θ of equation (6)).
Using the ultrarelativistic approximation Eᵢ ≈ E + mᵢ²c⁴/2E and t ≈ L/c, the above can be written as
℘_(α→β)(L) = sin²(2θ) sin²( Δm²c⁴L / 4ħcE ) = sin²(2θ) sin²( πL / λ_osc ),
where Δm² = m₂² − m₁², E is the beam energy, L is the distance travelled, and λ_osc = 4πħE/(Δm²c³) is the oscillation wavelength.
Thus, a coupling between the energy (mass) eigenstates produces the phenomenon of oscillation between the flavor eigenstates. One important inference is that neutrinos have a finite mass, although a very small one. Hence, their speed is not exactly the same as that of light but slightly lower.
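A rough numerical illustration of the two-flavor formula, in the practical units usually quoted for experiments; the angle and mass splitting below are illustrative round numbers of the order of the measured "atmospheric" parameters, not precise fitted values:

```python
# Two-flavor neutrino oscillation probability in practical units:
# P(nu_alpha -> nu_beta) = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E),
# with dm2 in eV^2, L in km, E in GeV (the 1.267 collects the
# factors of hbar and c from the formula in the text).
import math

def p_oscillation(theta: float, dm2_ev2: float, L_km: float, E_gev: float) -> float:
    return math.sin(2 * theta)**2 * math.sin(1.267 * dm2_ev2 * L_km / E_gev)**2

theta = math.radians(45)      # near-maximal mixing (illustrative)
dm2 = 2.5e-3                  # eV^2 (illustrative)
for L in (10, 100, 500, 1000):        # km, at E = 1 GeV
    print(L, "km:", round(p_oscillation(theta, dm2, L, 1.0), 3))
# At L much smaller than the oscillation length, P is near 0;
# the oscillation develops only over sufficiently long baselines.
```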
Neutrino mass splitting
With three flavors of neutrinos, there are three mass splittings:
(Δm²)₁₂ = m₁² − m₂², (Δm²)₂₃ = m₂² − m₃², (Δm²)₃₁ = m₃² − m₁².
But only two of them are independent, because
(Δm²)₁₂ + (Δm²)₂₃ + (Δm²)₃₁ = 0.
This implies that two of the three neutrinos have very closely placed masses. Since only two of the three splittings are independent, and the expression for the probability above is not sensitive to the sign of Δm² (as sine squared is independent of the sign of its argument), it is not possible to determine the neutrino mass spectrum uniquely from the phenomenon of flavor oscillation: any two out of the three can have closely spaced masses.
Moreover, since the oscillation is sensitive only to the differences (of the squares) of the masses, direct determination of the neutrino mass is not possible from oscillation experiments.
Length scale of the system
The oscillation formula above indicates that an appropriate length scale of the system is the oscillation wavelength λ_osc. We can draw the following inferences:
If L/λ_osc ≪ 1, then sin²(πL/λ_osc) ≈ 0 and oscillation will not be observed; for example, production (say, by radioactive decay) and detection of neutrinos in the same laboratory.
If L = n λ_osc, where n is a whole number, then sin²(πL/λ_osc) = 0 and oscillation will not be observed.
In all other cases, oscillation will be observed; for example, L ≫ λ_osc for solar neutrinos, and L ~ λ_osc for neutrinos from a nuclear power plant detected in a laboratory a few kilometers away.
Neutral kaon oscillation and decay
CP violation through mixing only
The 1964 paper by Christenson et al. provided experimental evidence of CP violation in the neutral Kaon system. The so-called long-lived Kaon (CP = −1) decayed into two pions (CP = (−1)(−1) = 1), thereby violating CP conservation.
K⁰ and K̄⁰ being the strangeness eigenstates (with eigenvalues +1 and −1 respectively), the energy eigenstates are,
|K₁⟩ = (1/√2)(|K⁰⟩ − |K̄⁰⟩), and |K₂⟩ = (1/√2)(|K⁰⟩ + |K̄⁰⟩).
These two are also CP eigenstates with eigenvalues +1 and −1 respectively. From the earlier notion of CP conservation (symmetry), the following were expected:
Because K₁ has a CP eigenvalue of +1, it can decay to two pions or, with a proper choice of angular momentum, to three pions. However, the two pion decay is a lot more frequent.
K₂, having a CP eigenvalue of −1, can decay only to three pions and never to two.
Since the two pion decay is much faster than the three pion decay, K₁ was referred to as the short-lived Kaon K_S, and K₂ as the long-lived Kaon K_L. The 1964 experiment showed that, contrary to what was expected, K_L could decay to two pions. This implied that the long lived Kaon cannot be purely the CP eigenstate K₂, but must contain a small admixture of K₁, thereby no longer being a CP eigenstate. Similarly, the short-lived Kaon was predicted to have a small admixture of K₂. That is,
|K_L⟩ = (|K₂⟩ + ε|K₁⟩)/√(1 + |ε|²), and |K_S⟩ = (|K₁⟩ + ε|K₂⟩)/√(1 + |ε|²),
where ε is a complex quantity and is a measure of the departure from CP invariance. Experimentally, |ε| ≈ 2.2 × 10⁻³.
Writing and in terms of and , we obtain (keeping in mind that ) the form of equation ():
where, .
Since , condition () is satisfied and there is a mixing between the strangeness eigenstates and giving rise to a long-lived and a short-lived state.
CP violation through decay only
The K_S and K_L have two modes of two pion decay: π⁺π⁻ or π⁰π⁰. Both of these final states are CP eigenstates of themselves. We can define the branching ratios as,
η₊₋ = ⟨π⁺π⁻|K_L⟩ / ⟨π⁺π⁻|K_S⟩, and η₀₀ = ⟨π⁰π⁰|K_L⟩ / ⟨π⁰π⁰|K_S⟩.
Experimentally, |η₊₋| ≈ 2.23 × 10⁻³ and |η₀₀| ≈ 2.22 × 10⁻³. That is |η₊₋| ≠ |η₀₀|, implying a nonzero ε′ and unequal amplitudes for the two decay modes, and thereby satisfying condition ().
In other words, direct CP violation is observed in the asymmetry between the two modes of decay.
CP violation through mixing-decay interference
If the final state f is a CP eigenstate (for example π⁺π⁻), then there are two different decay amplitudes corresponding to two different decay paths:
K⁰ → f, and K⁰ → K̄⁰ → f.
CP violation can then result from the interference of these two contributions to the decay as one mode involves only decay and the other oscillation and decay.
Which then is the "real" particle
The above description refers to flavor (or strangeness) eigenstates and energy (or CP) eigenstates. But which of them represents the "real" particle? What do we really detect in a laboratory? Quoting David J. Griffiths:
The mixing matrix - a brief introduction
If the system is a three state system (for example, three species of neutrinos ν_e, ν_μ, ν_τ; three species of quarks d, s, b), then, just like in the two state system, the flavor eigenstates (say |φ₁⟩, |φ₂⟩, |φ₃⟩) are written as a linear combination of the energy (mass) eigenstates (say |m₁⟩, |m₂⟩, |m₃⟩). That is,
(|φ₁⟩, |φ₂⟩, |φ₃⟩)ᵀ = U (|m₁⟩, |m₂⟩, |m₃⟩)ᵀ,
where U is a 3 × 3 unitary mixing matrix.
In case of leptons (neutrinos for example) the transformation matrix is the PMNS matrix, and for quarks it is the CKM matrix.
The off diagonal terms of the transformation matrix represent coupling, and unequal diagonal terms imply mixing between the three states.
The transformation matrix is unitary and appropriate parameterization (depending on whether it is the CKM or PMNS matrix) is done and the values of the parameters determined experimentally.
| Physical sciences | Particle physics: General | Physics |
2084166 | https://en.wikipedia.org/wiki/Dodecanol | Dodecanol | Dodecanol, or lauryl alcohol, is an organic compound produced industrially from palm kernel oil or coconut oil. It is a fatty alcohol. Sulfate esters of lauryl alcohol, especially sodium lauryl sulfate, are very widely used as surfactants. Sodium lauryl sulfate and the related dodecanol derivatives ammonium lauryl sulfate and sodium laureth sulfate are all used in shampoos. Dodecanol is tasteless, colorless, and has a floral odor.
Production and use
In 1993, European demand for dodecanol was around 60,000 tonnes per year. It can be obtained from palm oil or coconut oil fatty acids and methyl esters by hydrogenation. It may also be produced synthetically via the Ziegler process. A classic laboratory method involves Bouveault-Blanc reduction of ethyl laurate.
Dodecanol is used to make surfactants, lubricating oils, and pharmaceuticals. Millions of tons of sodium dodecylsulfate (SDS) are produced annually by sulfation of dodecyl alcohol, followed by neutralization:
CH3(CH2)11OH + SO3 → CH3(CH2)11OSO3H
CH3(CH2)11OSO3H + NaOH → CH3(CH2)11OSO3Na + H2O
Dodecanol is used as an emollient. It is also the precursor to dodecanal, an important fragrance, and 1-bromododecane, an alkylating agent for improving the lipophilicity of organic molecules.
Toxicity
Dodecanol can irritate the skin. It has about half the toxicity of ethanol, but it is very harmful to marine organisms.
Mutual solubility with water
The mutual solubility of 1-dodecanol and water has been quantified as follows.
{| class="wikitable sortable"
|+Mutual solubility of water and 1-dodecanol (98%, melting point 24 °C), Weight %
|-
!Temperature (°C) !! Solubility of dodecanol in water !! Solubility of water in dodecanol
|-
| 29.5 || 0.04 || 2.87
|-
| 40.0 || 0.05 || 2.85
|-
| 50.2 || 0.09 || 2.69
|-
| 60.5 || 0.15 || 2.96
|-
| 70.5 || 0.09 || 2.70
|-
| 80.3 || 0.14 || 2.89
|-
| 90.8 || 0.18 || 2.96
|-
| standard deviation || 0.02 || 0.01
|-
|}
| Physical sciences | Alcohols | Chemistry |
2086634 | https://en.wikipedia.org/wiki/Paper%20bag | Paper bag | A paper bag is a bag made of paper, usually kraft paper. Paper bags can be made either with virgin or recycled fibres to meet customers' demands. Paper bags are commonly used as shopping carrier bags and for packaging of some consumer goods. They carry a wide range of products from groceries, glass bottles, clothing, books, toiletries, electronics and various other goods and can also function as means of transport in day-to-day activities.
History
The first machine to mass-produce paper bags was invented in 1852 by Francis Wolle, a Pennsylvania schoolteacher. Wolle and his brother patented the machine and founded the Union Paper Bag Company.
In 1853, James Baldwin, papermaker of Birmingham and Kings Norton in England, was granted a patent for apparatus to make square-bottomed paper bags. Thereafter he used an image of a flat-bottomed bag as his business logo.
In 1871, inventor Margaret E. Knight designed a machine that could create flat-bottomed paper bags, which could carry more than the previous envelope-style design.
In 1883, Charles Stilwell patented a machine that made square-bottom paper bags with pleated sides, making them easier to fold and store. This style of bag came to be known as the S.O.S., or "Self-Opening Sack".
In 1912, Walter Deubener, a grocer in Saint Paul, Minnesota, used cord to reinforce paper bags and add carrying handles. These "Deubener Shopping Bags" could carry up to 75 pounds (34 kg) at a time, and became quite popular, selling over a million bags a year by 1915. Paper bags with handles later became the standard for department stores, and were often printed with the store's logo or brand colors.
Plastic bags were introduced in the 1970s, and thanks to their lower cost, eventually replaced paper bags as the bag of choice for grocery stores. With the trend towards phasing out lightweight plastic bags, though, some grocers and shoppers have switched back to paper bags.
In 2015, the world's largest paper shopping bag was made in the UK and recorded by Guinness World Records. Also in 2015, the European Union adopted directive (EU) 2015/720, which requires a reduction in the consumption of single-use plastic bags to 90 bags per person per year by 2019 and to 40 by 2025.
In 2018, the “European Paper Bag Day” was established by the platform The Paper Bag, an association of the leading European kraft paper manufacturers and producers of paper bags. The annual action day takes place on 18 October and aims to raise awareness among consumers about paper carrier bags as a sustainable packaging solution. It was launched to encourage more people to act responsibly and use, reuse and recycle paper bags. With different activities at the local level, the association wants to open a dialogue with consumers and give them insights into paper packaging.
In April 2019, the European Union adopted Directive (EU) 2019/904 of the European Parliament and of the Council of 5 June 2019 on the reduction of the impact of certain plastic products on the environment.
Paper sack
A paper sack is a type of paper bag that can be constructed of one or several layers of high quality kraft paper, usually produced from virgin fibre. Paper sacks can also be referred to as industrial paper bags, industrial paper sacks, shipping sacks or multi-wall paper bags or sacks. They are often used for packaging and transporting dry powdery and granulated materials such as fertilizer, animal feed, sand, dry chemicals, flour and cement. Many have several layers of sack papers, printed external layer and inner plies. Some paper sacks have a plastic film, foil, or polyethylene coated paper layer in between as a water-repellant, insect resistant, or rodent barrier.
Multi-wall paper sacks are designed to provide strong product protection, with high elasticity and high tear resistance, for products with high demands for safety and durability. Information such as instructions, logos or trademarks can be printed on the resistant outer surface. Plastic films or different dispersions are sometimes used as inner layers or coatings to provide a barrier against moisture, water vapour, grease, oxygen, odours and bacteria. Paper sacks are produced on paper sack converting machines consisting of tuber and a bottomer.
There are two basic designs of bags: open-mouth bags and valve bags. An open-mouth bag is a tube of paper plies with the bottom end sealed. The bag is filled through the open mouth and then closed by stitching, adhesive, or tape. Valve sacks have both ends closed and are filled through a valve. A typical example of a valve bag is the cement sack.
Properties
Paper sacks are usually made of Kraft paper, which has the advantage of being soft and strong at the same time. The stretch or elongation increases the energy required to break the material. They can carry and protect products up to 50 kg, and adapt easily to the nature of their contents and to handling constraints. Depending on the product, the weight ratio of a paper sack to its contents can be up to 1:250. The strength is due to the arrangement of the fresh fibres used in kraft paper production. The bonding together of the fibres during production improves not only the strength, porosity and elasticity, but also the tear-resistance.
One of the natural and unique characteristics of sack kraft paper is its porosity. Acting as a filter material, high-porosity paper enables the air used in the filling process for dry powdery goods to escape very quickly, without the need for air extraction systems. This makes it possible to achieve filling times as short as 3.5 seconds for a 25 kg sack.
Thanks to different paper sack constructions with glue, barrier, layer or surface concepts, paper sacks can also be moisture resistant. All moisture-proof sacks are compatible with regular paper sack filling machines. In especially adverse weather conditions, an extremely thin bioplastic, plastic or other adequate barrier film can be part of the surface layer in the paper sack construction for particularly effective protection.
Added barrier film liners can also extend the shelf life in a range of different conditions. There are many different sack constructions especially designed to offer a good shelf life when paper sacks are exposed to extreme conditions such as damp, moisture, light, oxygen or carbon dioxide. Also, the correct storage and handling of paper sacks along the supply chain extends their shelf life.
Paper sacks also provide a medium for promotional messages and sophisticated printing designs. Due to their natural, non-slippery texture and their construction, paper sacks can safely be handled, stacked, palletized and stored. User-friendly opening systems, such as a tear-open flap, allow quick and clean access to the contents without the use of tools such as knives.
Construction
Standard brown paper bags are made from kraft paper. Tote-style paper carrier bags, such as those often used by department stores or as gift bags, can be made from any kind of paper, and come in any color. There are two different styles of handles for paper carrier bags: flat handles and cord handles. Paper carrier bags made from virgin kraft paper are developed especially for demanding packaging. Paper bags can be made from recycled paper, with some local laws requiring bags to have a minimum percentage of post-consumer recycled content.
Paper bags can be made to withstand more pressure or weight than plastic bags do.
Sack paper manufacturing
Wood pulp for sack paper is made from softwood by the kraft process. The long fibres provide the paper with its strength. Sack paper is then produced on a paper machine from the wood pulp. Both white and brown grades are made. The paper is microcrepped to give it porosity and elasticity. Microcrepping is done by drying with loose draws, allowing the fibres to shrink, which lets the paper elongate about 4% in the machine direction and 10% in the cross direction before bursting. Machine-direction elongation can be further improved by pressing between very elastic cylinders, causing more microcrepping. The paper may be coated with polyethylene (PE) or different dispersions to ensure an effective barrier against moisture, grease and bacteria. A paper sack can be made of one or several layers of sack paper depending on the toughness needed.
Paper sack manufacturing
The production of paper sacks is an entirely automatic process. It is divided into two main parts: tube forming and bottom folding. The sack kraft paper used for the production is printed on its way to the tube forming. In the first step of tube forming, paper and film (if applied) can be vent-hole perforated to improve the air permeability. They are combined by glue at the cross-pasting unit afterwards and glue is then also applied at the edge of the paper layers. In the next step, paper and film are formed into a continuous tube. Individual tubes are separated by a perforating knife. After accumulating these individual tubes, they are bundled at the shingling conveyor. These bundles are transported from the tuber to the bottomer. There, at least one end of the tube is folded to a bottom that creates the paper sack. Afterwards, the paper sacks are transported to a press section, which ensures efficient paste distribution and adhesion. The production process is monitored by electronic inspection systems and machine operators.
The production process slightly varies depending on the type of sack. All paper sacks are tailor-made and cater to the specific area of usage, product type and transportation needs.
Sack types and sizes
There are many different types of paper sacks available for the various end uses. The most common sack types are valve sacks, pinch bottom sacks, SOS (self opening sacks) sacks and (pasted) open-mouth sacks.
Valve sacks have a valve for filling and sealing in a corner of the paper sack. The valve system in a corner of the sack allows high-speed filling with automatic closing when the sack is filled to its capacity. This type of sack is especially used to pack powdered goods such as cement, flour and cocoa.
The pinch bottom sack is a usually leak-proof open mouth sack, used for products such as chemicals or foodstuffs which must be protected during transport and storage.
An SOS sack has a high stability and makes quick open mouth filling easy, and can also be reclosed. SOS sacks are used for granulated products such as food products or animal feed. They have a rectangular bottom and high stability faculties and can also be used as a compost sack or refuse sack. Pasted open mouth sacks can also be SOS sacks and differentiate themselves due to their glued bottom area, which increases their strength.
Paper sacks are available in different sizes and can carry loads up to 50 kg. The capacity is variable, and paper sacks also come in a variety of calipers, basis weights, and treatments.
Single layer
Paper shopping bags, brown paper bags, grocery bags, paper bread bags and other light duty bags have a single layer of paper. A variety of constructions and designs are available. Many are printed with the names of stores and brands. Paper bags are not waterproof. Types of paper bag include laminated, twisted-handle and flat-tape-handle bags. The laminated bag, whilst not totally waterproof, has a laminate that protects the outside to some degree.
Quality standard and certification
Paper bag durability can be measured in accordance with the European test standard EN13590:2003. This standard is based on scientifically conducted studies and helps retailers to avoid poor-quality carrier bags. The quality certification system for paper bags is based on this standard. The test method subjects the carrier bag to heavy weights while being lifted repeatedly. The size of the paper bag is taken into account because the larger its volume, the heavier the load it must be able to carry. As a result of the certification, the paper bag is marked with the weight and volume it may carry. It is wise to choose a tested and certified paper bag.
Testing and assurance
Filled bags or sacks can be evaluated in the field by careful observation. Laboratory package testing is often conducted using drop testing, shock table testing, puncture testing, etc.
Paper sacks for food require specific quality assurance and hygiene management along the entire supply chain, as they must be in line with several national and international standards regarding the safety and hygiene of the stored food products. All materials of the sack need to be taken into account to fulfil these requirements. In Europe, there are several mandatory procedures to collect, evaluate and document all necessary information. Under certain conditions, migration testing and the issuing of compliance documents is demanded. Most of those requirements concern the conditions of use, storage time and temperature. Information on these aspects from all suppliers is a prerequisite.
Sustainability
The raw material used in papermaking – cellulose fibre extracted from wood – is a renewable and natural resource. However, environmental concerns have been raised about types of wood harvesting, such as clearcutting. Due to their biodegradable characteristics, paper bags degrade in a short period of time (two to five months). When using natural water-based colours and starch-based adhesives, paper bags do not harm the environment.
Most paper bags that are produced in Europe are made from cellulose fibres that are sourced from sustainably managed European forests. They are extracted from tree thinning and from process waste from the sawn timber industry. Sustainable forest management maintains biodiversity and ecosystems and provides a habitat for wildlife, recreational areas and jobs. This sustainable forest management is proven in the FSC® or PEFC™ certifications of paper products. Consumers can look for the FSC and PEFC labels on their paper bags to make sure they are made from sustainably sourced fibres.
As a wood product, paper continues to store carbon throughout its lifetime. This carbon sequestration time is extended when the paper is recycled, because the carbon remains in the cellulose fibres.
Paper bags can be used several times. Paper bag manufacturers recommend reusing paper carrier bags as often as possible in order to further decrease the environmental impact.
The carbon footprint of an average European paper sack has been analysed since 2007. The studies have shown continuous improvement: the carbon impact of paper sacks was reduced by 28% in the eleven years between 2007 and 2018. In 2018, the cradle-to-gate carbon footprint of an average European paper sack was 85 g CO2e per sack. If biogenic emissions and biogenic removals were included in the analysis, it would amount to −35 g CO2e per sack, meaning that on this accounting the sack stores more carbon than its production emits.
Recycling
Paper bags are highly biodegradable and recyclable, and hence do not pose the same environmental footprint as plastic bags do. The fibres are reused 3.6 times on average in Europe, while the world average is 2.4 times.
Plastic or water-resistant coatings or layers make recycling more difficult. Paper bag recycling is done by re-pulping the recovered paper and pressing it into the required shapes.
Safety
Compared to plastic bags, paper bags present less suffocation risk to young children or animals.
Branding and marketing
Paper shopping bags can be used as a vehicle to project the brand image of retailers. Paper is very tactile due to its texture and shape. Its print quality and color reproduction allow for creativity in advertising and development of the brand image. Furthermore, they achieve maximum visibility and great appreciation from customers. Using paper bags gives a signal of commitment to the environment and by using packaging made from renewable, recyclable and biodegradable sources, retailers and brand owners contribute to reducing the use of non-biodegradable shopping bags. Paper carrier bags can be a visible part of corporate social responsibility, and they are in line with a sustainable consumer lifestyle.
Brown bag
While brown is the most common paper bag color (as it is the natural color of the wood fiber used to make kraft paper), the term "brown bag" (especially as a verb) refers to two specific and distinct practices:
The first form of "brown bagging" refers to bringing food from home (regardless of the actual container used) instead of buying a meal at or near one's destination. Most often, it means bringing a packed lunch to school or work. For example, a 1983 study reported that America had 60 million "brown baggers."
Individuals may choose to brown bag to save money; if options to buy food are limited or absent; or to accommodate medical, religious, or lifestyle dietary requirements (e.g. low-fat or low-salt, kosher or halal, vegetarian or vegan). Additionally, while "a substantial minority of brown baggers have access to microwave ovens or refrigerators," those who do not need to ensure their food can stay fresh until mealtime without refrigeration (possibly bringing their own, via ice packs and thermal bags), and be ready to eat without being reheated.
The noun form of the term included in the Merriam-Webster dictionary in 1950 refers to use of a paper bag instead of a lunch box. William Safire traces this to an article in Time magazine.
The second form of "brown bagging" involves drinking alcohol in places where it is not legal to do so (such as a public park, or a venue without a liquor license) while using a brown paper bag (or a black plastic bag, or other suitably opaque covering) to conceal the bottle or can. While it's usually an open secret that a drink handled like this almost always contains alcohol (as there's little reason to hide other drinks), depending on local laws, this can minimize the chance of legal consequences. If there's no law against generally keeping open containers in bags, then doing so does not, in and of itself, give police probable cause to search a bag for alcohol.
Anti-brown-bag law in New York, USA
In 1967, it was stated that "brown-bagging is the genteel disguise ... by a patron to furnish his own liquor when he dines at the local restaurant".
In 1985, the New York State Liquor Authority pushed for legislation (informally referred to as "the anti-brown-bag law") to prevent patrons from bringing alcoholic beverages into food establishments that do not have a liquor license. New York City Mayor Ed Koch, having been "arrested" for doing so months earlier, denounced the effort.
Brown Bag Report
The advertising agent who had developed ads for Marlboro as a filtered "feminine brand" prior to the Marlboro Man campaign developed "The Brown-Bag Report" in 1981, with funding from Swift, Carnation, General Mills and American Can.
Symbolism
While in the United States the lunch box or lunch pail has been used as a symbol of the working class, Safire wrote: "In the metaphor of the modern worker, the brown bag has replaced the lunch pail."
About a third of the brown baggers are schoolchildren.
The "Brown Paper Bag Test" formed part of a colorist discriminatory practice in African-American history, in which an individual's skin tone was compared to the color of a brown paper bag.
Paper bags are occasionally worn over the head as a symbol of embarrassment, as exemplified by the Canadian comedian The Unknown Comic.
Associations
The platform The Paper Bag consists of leading European paper manufacturers and producers of paper bags. It was created in 2017 to represent the interests of the European paper bag industry and to promote the advantages of paper packaging. The Paper Bag is steered by the organisations CEPI Eurokraft and EUROSAC.
EUROSAC is the European Federation of multiwall paper sack manufacturers. It was created in 1952 and is headquartered in Paris. It represents over 75% of European paper sack manufacturers. Its members operate in 20 different countries. Sack manufacturers from all continents and bag manufacturers also contribute to the federation as corresponding members, and more than 20 suppliers (paper, film, machine or glue manufacturers) are registered as associate members.
CEPI Eurokraft is the European Association for producers of sack kraft paper for the paper sack industry and kraft paper for the packaging industry. It was established in the 1930s to represent the Swedish, Norwegian and Finnish kraft paper manufacturers and now has eleven member companies in twelve countries.
Other uses
Paper bags are commonly used for carrying items. However, they have been used for other purposes. In 1911, the English chef Nicolas Soyer wrote a cookbook, Paper-Bag Cookery, about how to use clean, odorless paper bags for cooking, as an extension of the en papillote technique and an alternative to pots and pans.
| Technology | Containers | null |
2086847 | https://en.wikipedia.org/wiki/Ostraciidae | Ostraciidae | Ostraciidae or Ostraciontidae is a family of squared, bony fish belonging to the order Tetraodontiformes, closely related to the pufferfishes and filefishes. Fish in the family are known variously as boxfishes, cofferfishes, cowfishes and trunkfishes. It contains about 23 extant species in 6 extant genera.
Taxonomy
Ostraciidae was first proposed as a family in 1810 by the French polymath Constantine Samuel Rafinesque. In the past this grouping was regarded as a subfamily, the Ostraciinae, along with the subfamily Aracaninae, of a wider Ostraciidae. However, recent phylogenetic studies have concluded that the families Aracanidae and Ostraciidae are valid families but that they are part of the same clade, the suborder Ostracioidei. The 5th edition of Fishes of the World classifies this clade as the suborder Ostracioidea within the order Tetraodontiformes.
Etymology
Ostraciidae takes its name from its type genus, Ostracion, a name which means "little box" and is an allusion to the shape of the body of its type species, O. cubicus.
Description
Ostraciidae boxfishes occur in a variety of different colors, and are notable for the hexagonal or "honeycomb" patterns on their skin. They swim in a rowing manner. Their hexagonal plate-like scales are fused together into a solid, triangular or box-like carapace, from which the fins, tail, eyes and mouth protrude. Because of these heavy armoured scales, Ostraciidae are limited to slow movements, but few other fish are able to eat the adults. Ostraciid boxfish of the genus Lactophrys also secrete poisons from their skin into the surrounding water, further protecting them from predation. Although the adults are in general quite square in shape, young Ostraciidae are more rounded. The young often exhibit brighter colors than the adults. The scrawled cowfish, Acanthostracion quadricornis, can grow up to in length, but is generally smaller at higher latitudes.
Range
Ostraciids occur in the Atlantic, Indian, and Pacific oceans, generally at middle latitudes, although the common or buffalo trunkfish (Lactophrys trigonus) which lives mainly in Florida waters may be found as far north as Cape Cod.
Toxic defences
The various members of this family are able to secrete cationic surfactants through their skin which can act as a chemical defense mechanism. An example of this is pahutoxin, a water-soluble, crystalline chemical toxin that is contained in mucus secreted from the skin of Ostracion lentiginosus and other members of the trunkfish family when they are under stress. Pahutoxin is a choline chloride ester of 3-acetoxypalmitic acid that behaves similarly to steroidal saponins found in echinoderms. When this toxic mucus is released from the fish, it quickly dissolves in the environment and negatively affects any fish in the surrounding area. Because pahutoxin so closely resembles certain detergents, it is possible that adding such detergents to seawater as pollutants could interfere with receptor-mediated processes in marine life.
Classification
The author Keiichi Matsuura lists the following genera and species:
Extant taxa
There are about 25 recognized extant species in six genera:
Acanthostracion Bleeker, 1865
Lactophrys Swainson, 1839
Lactoria D. S. Jordan & Fowler, 1902
Ostracion Linnaeus, 1758
Paracanthostracion Whitley, 1933
Tetrosomus Swainson, 1839
Fossil taxa
Genus Eolactoria Tyler, 1975
Eolactoria sorbinii Tyler 1976 (Lutetian of Monte Bolca, Eocene Italy)
Genus Oligolactoria Tyler & Gregorova, 1991
Oligolactoria bubiki Tyler & Gregorova, 1991 (Rupelian of Moravia, Oligocene Czech Republic)
| Biology and health sciences | Acanthomorpha | Animals |
2088152 | https://en.wikipedia.org/wiki/Ammonium%20nitrite | Ammonium nitrite | Ammonium nitrite is a chemical compound with the chemical formula . It is the ammonium salt of nitrous acid. It is composed of ammonium cations and nitrite anions . It is not used in pure isolated form since it is highly unstable and decomposes into water and nitrogen, even at room temperature.
Preparation
Ammonium nitrite forms naturally in the air and can be prepared by the absorption of equal parts nitrogen dioxide and nitric oxide in aqueous ammonia.
It can also be synthesized by oxidizing ammonia with ozone or hydrogen peroxide, or in a precipitation reaction of barium or lead nitrite with ammonium sulfate, or silver nitrite with ammonium chloride, or ammonium perchlorate with potassium nitrite. The precipitate is filtered off and the solution concentrated. It forms colorless crystals which are soluble in water.
Physical and chemical properties
Ammonium nitrite may explode at a temperature of 60–70 °C, and it decomposes more quickly in concentrated aqueous solution than as a dry crystal. Even at room temperature the compound slowly decomposes into water and nitrogen:
NH4NO2 → N2 + 2 H2O
It decomposes when heated, or in the presence of acid, into water and nitrogen. Ammonium nitrite solution is stable at higher pH and lower temperature. Any decrease in pH below 7.0 may lead to an explosion, since the nitrite is converted to unstable nitrous acid. A safe pH can be maintained by adding an ammonia solution. The mole ratio of ammonium nitrite to ammonia must be above 10%.
| Physical sciences | Nitric oxyanions | Chemistry |
2088221 | https://en.wikipedia.org/wiki/Submillimetre%20astronomy | Submillimetre astronomy | Submillimetre astronomy or submillimeter astronomy (see spelling differences) is the branch of observational astronomy that is conducted at submillimetre wavelengths (i.e., terahertz radiation) of the electromagnetic spectrum. Astronomers place the submillimetre waveband between the far-infrared and microwave wavebands, typically taken to be between a few hundred micrometres and a millimetre. It is still common in submillimetre astronomy to quote wavelengths in 'microns', the old name for micrometre.
Using submillimetre observations, astronomers examine molecular clouds and dark cloud cores with a goal of clarifying the process of star formation from earliest collapse to stellar birth. Submillimetre observations of these dark clouds can be used to determine chemical abundances and cooling mechanisms for the molecules which comprise them. In addition, submillimetre observations give information on the mechanisms for the formation and evolution of galaxies.
From the ground
The most significant limitations to the detection of astronomical emission at submillimetre wavelengths with ground-based observatories are atmospheric emission, noise and attenuation. Like the infrared, the submillimetre atmosphere is dominated by numerous water vapour absorption bands and it is only through "windows" between these bands that observations are possible. The ideal submillimetre observing site is dry, cool, has stable weather conditions and is away from urban population centres. Only a handful of sites have been identified. They include Mauna Kea (Hawaii, United States), the Llano de Chajnantor Observatory on the Atacama Plateau (Chile), the South Pole, and Hanle in India (the Himalayan site of the Indian Astronomical Observatory). Comparisons show that all four sites are excellent for submillimetre astronomy, and of these sites Mauna Kea is the most established and arguably the most accessible. There has been some recent interest in high-altitude Arctic sites, particularly Summit Station in Greenland where the PWV (precipitable water vapor) measure is always better than at Mauna Kea (however Mauna Kea's equatorial latitude of 19 degrees means it can observe more of the southern skies than Greenland).
The Llano de Chajnantor Observatory site hosts the Atacama Pathfinder Experiment (APEX), the largest submillimetre telescope operating in the southern hemisphere,
and the world's largest ground based astronomy project, the Atacama Large Millimeter Array (ALMA), an interferometer for submillimetre wavelength observations made of 54 12-metre and 12 7-metre radio telescopes. The Submillimeter Array (SMA) is another interferometer, located at Mauna Kea, consisting of eight 6-metre diameter radio telescopes. The largest existing submillimetre telescope, the James Clerk Maxwell Telescope, is also located on Mauna Kea.
From the stratosphere
With high-altitude balloons and aircraft, one can get above more of the atmosphere. The BLAST experiment and SOFIA are two examples, respectively, although SOFIA can also handle near infrared observations.
From orbit
Space-based observations at the submillimetre wavelengths remove the ground-based limitations of atmospheric absorption. The first submillimeter telescope in space was the Soviet BST-1M, located in the scientific equipment compartment of the Salyut-6 orbital station. It was equipped with a mirror with a diameter of 1.5 m and was intended for astrophysical research in the ultraviolet (0.2–0.36 microns), infrared (60–130 microns) and submillimeter (300–1000 microns) spectral regions. These observations made it possible to study molecular clouds in space, as well as to obtain information about the processes taking place in the upper layers of the Earth's atmosphere.
The Submillimeter Wave Astronomy Satellite (SWAS) was launched into low Earth orbit on December 5, 1998 as one of NASA's Small Explorer Program (SMEX) missions. The mission of the spacecraft is to make targeted observations of giant molecular clouds and dark cloud cores. The focus of SWAS is five spectral lines: water (H2O), isotopic water (H218O), isotopic carbon monoxide (13CO), molecular oxygen (O2), and neutral carbon (C I).
The SWAS satellite was repurposed in June, 2005 to provide support for the NASA Deep Impact mission. SWAS provided water production data on the comet until the end of August 2005.
The European Space Agency launched a space-based mission known as the Herschel Space Observatory (formerly called Far Infrared and Sub-millimetre Telescope or FIRST) in 2009. Herschel deployed the largest mirror ever launched into space (until December 2021, with the launch of the near-infrared James Webb Space Telescope) and studied radiation in the far infrared and submillimetre wavebands. Rather than an Earth orbit, Herschel entered into a Lissajous orbit around L2, the second Lagrangian point of the Earth-Sun system. L2 is located approximately 1.5 million km from Earth and the placement of Herschel there lessened the interference by infrared and visible radiation from the Earth and Sun. Herschel's mission focused primarily on the origins of galaxies and galactic formation.
| Physical sciences | Radio astronomy | Astronomy |
23009144 | https://en.wikipedia.org/wiki/Poisson%20distribution | Poisson distribution | In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time if these events occur with a known constant mean rate and independently of the time since the last event. It can also be used for the number of events in other types of intervals than time, and in dimension greater than 1 (e.g., number of events in a given area or volume).
The Poisson distribution is named after French mathematician Siméon Denis Poisson. It plays an important role for discrete-stable distributions.
Under a Poisson distribution with the expectation of λ events in a given interval, the probability of k events in the same interval is:
P(k) = (λ^k e^(−λ)) / k!
For instance, consider a call center which receives an average of λ = 3 calls per minute at all times of day. If the calls are independent, receiving one does not change the probability of when the next one will arrive. Under these assumptions, the number k of calls received during any minute has a Poisson probability distribution. Receiving k = 1 to 4 calls then has a probability of about 0.77, while receiving 0 or at least 5 calls has a probability of about 0.23.
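A minimal check of the arithmetic above, using only the Python standard library (the function name is our own):

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) = lam**k * exp(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 3.0  # average calls per minute
p_1_to_4 = sum(poisson_pmf(k, lam) for k in range(1, 5))
print(round(p_1_to_4, 2))        # ~0.77, probability of 1 to 4 calls
print(round(1.0 - p_1_to_4, 2))  # ~0.23, probability of 0 or at least 5 calls
```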
A classic example used to motivate the Poisson distribution is the number of radioactive decay events during a fixed observation period.
History
The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published together with his probability theory in his work Recherches sur la probabilité des jugements en matière criminelle et en matière civile (1837). The work theorized about the number of wrongful convictions in a given country by focusing on certain random variables that count, among other things, the number of discrete occurrences (sometimes called "events" or "arrivals") that take place during a time-interval of given length. The result had already been given in 1711 by Abraham de Moivre in De Mensura Sortis seu de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus. This makes it an example of Stigler's law and it has prompted some authors to argue that the Poisson distribution should bear the name of de Moivre.
In 1860, Simon Newcomb fitted the Poisson distribution to the number of stars found in a unit of space.
A further practical application was made by Ladislaus Bortkiewicz in 1898. Bortkiewicz showed that the frequency with which soldiers in the Prussian army were accidentally killed by horse kicks could be well modeled by a Poisson distribution.
Definitions
Probability mass function
A discrete random variable X is said to have a Poisson distribution with parameter λ > 0 if it has a probability mass function given by:
f(k; λ) = Pr(X = k) = (λ^k e^(−λ)) / k!
where
k is the number of occurrences (k = 0, 1, 2, ...)
e is Euler's number (e ≈ 2.71828)
k! = k(k–1) ··· (3)(2)(1) is the factorial.
The positive real number λ is equal to the expected value of X and also to its variance: λ = E(X) = Var(X).
The Poisson distribution can be applied to systems with a large number of possible events, each of which is rare. The number of such events that occur during a fixed time interval is, under the right circumstances, a random number with a Poisson distribution.
The equation can be adapted if, instead of the average number of events λ, we are given the average rate r at which events occur. Then λ = rt, and:
P(k events in interval t) = ((rt)^k e^(−rt)) / k!
Examples
The Poisson distribution may be useful to model events such as:
the number of meteorites greater than 1-meter diameter that strike Earth in a year;
the number of laser photons hitting a detector in a particular time interval;
the number of students achieving a low and high mark in an exam; and
locations of defects and dislocations in materials.
Examples of the occurrence of random points in space are: the locations of asteroid impacts with earth (2-dimensional), the locations of imperfections in a material (3-dimensional), and the locations of trees in a forest (2-dimensional).
Assumptions and validity
The Poisson distribution is an appropriate model if the following assumptions are true:
, a nonnegative integer, is the number of times an event occurs in an interval.
The occurrence of one event does not affect the probability of a second event.
The average rate at which events occur is independent of any occurrences.
Two events cannot occur at exactly the same instant.
If these conditions are true, then is a Poisson random variable; the distribution of is a Poisson distribution.
The Poisson distribution is also the limit of a binomial distribution, for which the probability of success for each trial equals divided by the number of trials, as the number of trials approaches infinity (see Related distributions).
Examples of probability for Poisson distributions
On a particular river, overflow floods occur once every 100 years on average. Calculate the probability of k = 0, 1, 2, 3, 4, 5, or 6 overflow floods in a 100-year interval, assuming the Poisson model is appropriate.
Because the average event rate is one overflow flood per 100 years, λ = 1.
{| class="wikitable"
|-
! k !! P(k overflow floods in 100 years)
|-
| 0|| 0.368
|-
| 1|| 0.368
|-
| 2|| 0.184
|-
| 3|| 0.061
|-
| 4|| 0.015
|-
| 5|| 0.003
|-
| 6|| 0.0005
|}
The probability for 0 to 6 overflow floods in a 100-year period.
It has been reported that the average number of goals in a World Cup soccer match is approximately 2.5 and the Poisson model is appropriate.
Because the average event rate is 2.5 goals per match, λ = 2.5.
{| class="wikitable"
|-
! k !! P(k goals in a World Cup soccer match)
|-
| 0|| 0.082
|-
| 1|| 0.205
|-
| 2|| 0.257
|-
| 3|| 0.213
|-
| 4|| 0.133
|-
| 5|| 0.067
|-
| 6|| 0.028
|-
| 7|| 0.010
|}
The probability for 0 to 7 goals in a match.
Once in an interval events: The special case of λ = 1 and k = 0
Suppose that astronomers estimate that large meteorites (above a certain size) hit the earth on average once every 100 years (λ = 1 event per 100 years), and that the number of meteorite hits follows a Poisson distribution. What is the probability of k = 0 meteorite hits in the next 100 years?
Under these assumptions, the probability that no large meteorites hit the earth in the next 100 years is P(k = 0) = e^(−1) ≈ 0.37. The remaining 0.63 is the probability of 1, 2, 3, or more large meteorite hits in the next 100 years.
In an example above, an overflow flood occurred once every 100 years (λ = 1). The probability of no overflow floods in 100 years was roughly 0.37, by the same calculation.
In general, if an event occurs on average once per interval (λ = 1), and the events follow a Poisson distribution, then P(0 events in next interval) = e^(−1) ≈ 0.37. In addition, P(exactly one event in next interval) = e^(−1) ≈ 0.37, as shown in the table for overflow floods.
Examples that violate the Poisson assumptions
The number of students who arrive at the student union per minute will likely not follow a Poisson distribution, because the rate is not constant (low rate during class time, high rate between class times) and the arrivals of individual students are not independent (students tend to come in groups). The non-constant arrival rate may be modeled as a mixed Poisson distribution, and the arrival of groups rather than individual students as a compound Poisson process.
The number of magnitude 5 earthquakes per year in a country may not follow a Poisson distribution, if one large earthquake increases the probability of aftershocks of similar magnitude.
Examples in which at least one event is guaranteed are not Poisson distributed; but may be modeled using a zero-truncated Poisson distribution.
Count distributions in which the number of intervals with zero events is higher than predicted by a Poisson model may be modeled using a zero-inflated model.
Properties
Descriptive statistics
The expected value of a Poisson random variable is λ.
The variance of a Poisson random variable is also λ.
The coefficient of variation is λ^(−1/2), while the index of dispersion is 1.
The mean absolute deviation about the mean is
E|X − λ| = 2λ^(⌊λ⌋+1) e^(−λ) / ⌊λ⌋!
The mode of a Poisson-distributed random variable with non-integer λ is equal to ⌊λ⌋, which is the largest integer less than or equal to λ. This is also written as floor(λ). When λ is a positive integer, the modes are λ and λ − 1.
All of the cumulants of the Poisson distribution are equal to the expected value λ. The nth factorial moment of the Poisson distribution is λ^n.
The expected value of a Poisson process is sometimes decomposed into the product of intensity and exposure (or more generally expressed as the integral of an "intensity function" over time or space, sometimes described as "exposure").
Median
Bounds for the median (ν) of the distribution are known and are sharp:
λ − ln 2 ≤ ν < λ + 1/3.
Higher moments
The higher non-centered moments of the Poisson distribution are Touchard polynomials in λ:
E[X^n] = Σ_{k=0}^{n} λ^k {n k},
where the braces { } denote Stirling numbers of the second kind. In other words, the nth moment is obtained by summing λ^k times the number of ways to partition a set of n elements into k non-empty subsets. When the expected value is set to λ = 1, Dobinski's formula implies that the n‑th moment is equal to the number of partitions of a set of size n.
A simple upper bound is:
Sums of Poisson-distributed random variables
If X_i ~ Pois(λ_i) for i = 1, ..., n are independent, then X_1 + X_2 + ... + X_n ~ Pois(λ_1 + λ_2 + ... + λ_n). A converse is Raikov's theorem, which says that if the sum of two independent random variables is Poisson-distributed, then so is each of those two independent random variables.
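A quick Monte Carlo illustration of the summation property (a sketch assuming NumPy is available; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.poisson(2.0, n)   # X ~ Pois(2.0)
y = rng.poisson(3.5, n)   # Y ~ Pois(3.5)
s = x + y                 # should behave like Pois(5.5)

# For a Poisson variable, mean and variance are both lambda,
# so both statistics of the sum should be close to 5.5.
print(s.mean(), s.var())
```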
Maximum entropy
It is a maximum-entropy distribution among the set of generalized binomial distributions with mean and , where a generalized binomial distribution is defined as a distribution of the sum of N independent but not identically distributed Bernoulli variables.
Other properties
The Poisson distributions are infinitely divisible probability distributions.
The directed Kullback–Leibler divergence of from is given by
If is an integer, then satisfies and
Bounds for the tail probabilities of a Poisson random variable can be derived using a Chernoff bound argument.
The upper tail probability can be tightened (by a factor of at least two) as follows:
where is the Kullback–Leibler divergence of from .
Inequalities that relate the distribution function of a Poisson random variable to the Standard normal distribution function are as follows: where is the Kullback–Leibler divergence of from and is the Kullback–Leibler divergence of from .
Poisson races
Let and be independent random variables, with then we have that
The upper bound is proved using a standard Chernoff bound.
The lower bound can be proved by noting that is the probability that where which is bounded below by where is relative entropy (See the entry on bounds on tails of binomial distributions for details). Further noting that and computing a lower bound on the unconditional probability gives the result. More details can be found in the appendix of Kamath et al.
Related distributions
As a Binomial distribution with infinitesimal time-steps
The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed — see law of rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. The Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n ≥ 100 and np ≤ 10. Letting F_B and F_P be the respective cumulative distribution functions of the binomial and Poisson distributions, one has: One derivation of this uses probability-generating functions. Consider a Bernoulli trial (coin-flip) whose probability of one success (or expected number of successes) is λ within a given interval. Split the interval into n parts, and perform a trial in each subinterval with probability λ/n. The probability of k successes out of n trials over the entire interval is then given by the binomial distribution whose generating function is:
G_n(x) = (1 − λ/n + (λ/n)x)^n
Taking the limit as n increases to infinity (with x fixed) and applying the product limit definition of the exponential function, this reduces to the generating function of the Poisson distribution:
G(x) = e^(λ(x−1))
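The limit can also be checked numerically. The sketch below uses only the standard library; the summation cutoff of 40 is an arbitrary point past which both distributions carry negligible mass for λ = 4. It computes the total variation distance between Binomial(n, λ/n) and Poisson(λ) for growing n:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 4.0
for n in (10, 100, 1000):
    p = lam / n
    tv = 0.5 * sum(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
                   for k in range(41))
    print(n, round(tv, 5))  # the distance shrinks as n grows
```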
General
If and are independent, then the difference follows a Skellam distribution.
If and are independent, then the distribution of conditional on is a binomial distribution. Specifically, if then More generally, if X1, X2, ..., X are independent Poisson random variables with parameters 1, 2, ..., then
given it follows that In fact,
If and the distribution of conditional on X = is a binomial distribution, then the distribution of Y follows a Poisson distribution In fact, if, conditional on follows a multinomial distribution, then each follows an independent Poisson distribution
The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution.
For sufficiently large values of λ (say λ > 1000), the normal distribution with mean λ and variance λ (standard deviation √λ) is an excellent approximation to the Poisson distribution. If λ is greater than about 10, then the normal distribution is a good approximation if an appropriate continuity correction is performed, i.e., if P(X ≤ x), where x is a non-negative integer, is replaced by P(X ≤ x + 0.5).
Variance-stabilizing transformation: If X ~ Pois(λ), then Y = 2√X is approximately normally distributed with mean 2√λ and variance 1. Under this transformation, the convergence to normality (as λ increases) is far faster than for the untransformed variable. Other, slightly more complicated, variance stabilizing transformations are available, one of which is the Anscombe transform. See Data transformation (statistics) for more general uses of transformations.
If for every t > 0 the number of arrivals in the time interval [0, t] follows the Poisson distribution with mean λt, then the sequence of inter-arrival times are independent and identically distributed exponential random variables having mean 1/λ (see the sketch below).
The cumulative distribution functions of the Poisson and chi-squared distributions are related in the following ways: and
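The inter-arrival property mentioned above suggests a simple way to simulate a Poisson process: accumulate independent exponential gaps and count the arrivals that fall inside a unit interval. A sketch with the standard library (the rate value is an arbitrary example):

```python
import random

random.seed(1)
lam = 4.0  # arrival rate per unit time
trials = 100_000
counts = []
for _ in range(trials):
    t, arrivals = 0.0, 0
    while True:
        t += random.expovariate(lam)  # i.i.d. Exp(lam) inter-arrival gaps
        if t > 1.0:
            break
        arrivals += 1
    counts.append(arrivals)

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(mean, var)  # both should be close to lam = 4.0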
Poisson approximation
Assume where then is multinomially distributed
conditioned on
This means, among other things, that for any nonnegative function
if is multinomially distributed, then
where
The factor of can be replaced by 2 if is further assumed to be monotonically increasing or decreasing.
Bivariate Poisson distribution
This distribution has been extended to the bivariate case. The generating function for this distribution is
with
The marginal distributions are Poisson(θ1) and Poisson(θ2) and the correlation coefficient is limited to the range
A simple way to generate a bivariate Poisson distribution is to take three independent Poisson random variables Y_1, Y_2, Y_3 with means λ_1, λ_2, λ_3 and then set X_1 = Y_1 + Y_3 and X_2 = Y_2 + Y_3. The probability function of the bivariate Poisson distribution is
Pr(X_1 = k_1, X_2 = k_2) = e^(−λ_1−λ_2−λ_3) (λ_1^{k_1}/k_1!)(λ_2^{k_2}/k_2!) Σ_{k=0}^{min(k_1,k_2)} C(k_1, k) C(k_2, k) k! (λ_3/(λ_1 λ_2))^k
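The three-variable construction is straightforward to simulate (a sketch assuming NumPy; the λ values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2, lam3 = 1.0, 2.0, 0.5
n = 100_000

y1 = rng.poisson(lam1, n)
y2 = rng.poisson(lam2, n)
y3 = rng.poisson(lam3, n)  # shared component induces the correlation

x1 = y1 + y3  # marginally Pois(lam1 + lam3) = Pois(1.5)
x2 = y2 + y3  # marginally Pois(lam2 + lam3) = Pois(2.5)

print(x1.mean(), x2.mean())   # ~1.5 and ~2.5
# Cov(X1, X2) = Var(Y3) = lam3, so the correlation is
# lam3 / sqrt(1.5 * 2.5) ~ 0.258.
print(np.corrcoef(x1, x2)[0, 1])
```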
Free Poisson distribution
The free Poisson distribution with jump size and rate arises in free probability theory as the limit of repeated free convolution
as .
In other words, let be random variables so that has value with probability and value 0 with the remaining probability. Assume also that the family are freely independent. Then the limit as of the law of is given by the Free Poisson law with parameters
This definition is analogous to one of the ways in which the classical Poisson distribution is obtained from a (classical) Poisson process.
The measure associated to the free Poisson law is given by
where and has support
This law also arises in random matrix theory as the Marchenko–Pastur law. Its free cumulants are equal to
Some transforms of this law
We give values of some important transforms of the free Poisson law; the computation can be found in e.g. in the book Lectures on the Combinatorics of Free Probability by A. Nica and R. Speicher
The R-transform of the free Poisson law is given by
The Cauchy transform (which is the negative of the Stieltjes transformation) is given by
The S-transform is given by
in the case that
Weibull and stable count
Poisson's probability mass function can be expressed in a form similar to the product distribution of a Weibull distribution and a variant form of the stable count distribution.
The variable can be regarded as inverse of Lévy's stability parameter in the stable count distribution:
where is a standard stable count distribution of shape and is a standard Weibull distribution of shape
Statistical inference
Parameter estimation
Given a sample of n measured values k_i, i = 1, ..., n, we wish to estimate the value of the parameter λ of the Poisson population from which the sample was drawn. The maximum likelihood estimate is
λ̂_MLE = (1/n) Σ_{i=1}^{n} k_i.
Since each observation has expectation λ, so does the sample mean. Therefore, the maximum likelihood estimate is an unbiased estimator of λ. It is also an efficient estimator since its variance achieves the Cramér–Rao lower bound (CRLB). Hence it is minimum-variance unbiased. Also it can be proven that the sum (and hence the sample mean as it is a one-to-one function of the sum) is a complete and sufficient statistic for λ.
To prove sufficiency we may use the factorization theorem. Consider partitioning the probability mass function of the joint Poisson distribution for the sample into two parts: one that depends solely on the sample (called h(k)), and one that depends on the parameter λ and the sample only through the function T(k) = Σ_{i=1}^{n} k_i. Then T(k) is a sufficient statistic for λ.
The first term, h(k), depends only on the sample. The second term depends on the sample only through T(k). Thus, T(k) is sufficient.
To find the parameter λ that maximizes the probability function for the Poisson population, we can use the logarithm of the likelihood function:
L(λ) = ln Π_{i=1}^{n} f(k_i; λ) = −nλ + (Σ_{i=1}^{n} k_i) ln λ − Σ_{i=1}^{n} ln(k_i!)
We take the derivative of L with respect to λ and compare it to zero:
dL/dλ = −n + (Σ_{i=1}^{n} k_i) / λ = 0.
Solving for λ gives a stationary point:
λ = (Σ_{i=1}^{n} k_i) / n
So λ is the average of the k_i values. Obtaining the sign of the second derivative of L at the stationary point will determine what kind of extreme value λ is.
Evaluating the second derivative at the stationary point gives:
d²L/dλ² = −(Σ_{i=1}^{n} k_i) / λ² = −n² / (Σ_{i=1}^{n} k_i)
which is the negative of n times the reciprocal of the average of the k_i. This expression is negative when the average is positive. If this is satisfied, then the stationary point maximizes the probability function.
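As a tiny numeric illustration (with made-up example data), the MLE is just the sample mean:

```python
# Observed Poisson counts (made-up example data).
sample = [2, 0, 3, 1, 2, 4, 1, 2]

# Maximum likelihood estimate of lambda: the sample mean.
lam_hat = sum(sample) / len(sample)
print(lam_hat)  # 1.875
```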
For completeness, a family of distributions is said to be complete if and only if E[g(T)] = 0 implies that g(T) = 0 almost surely for all λ. If the individual X_i are iid Pois(λ), then T(x) = Σ_{i=1}^{n} X_i ~ Pois(nλ). Knowing the distribution we want to investigate, it is easy to see that the statistic is complete:
E[g(T)] = Σ_{t=0}^{∞} g(t) (nλ)^t e^(−nλ) / t! = 0
For this equality to hold, g(t) must be 0. This follows from the fact that none of the other terms will be 0 for all t in the sum and for all possible values of λ. Hence, E[g(T)] = 0 for all λ implies that g(T) = 0 almost surely, and the statistic has been shown to be complete.
Confidence interval
The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean μ, a confidence interval for μ with confidence level 1 − α is
(1/2) χ²(α/2; 2k) ≤ μ ≤ (1/2) χ²(1 − α/2; 2k + 2),
or equivalently,
F⁻¹(α/2; k, 1) ≤ μ ≤ F⁻¹(1 − α/2; k + 1, 1),
where χ²(p; n) is the quantile function (corresponding to a lower tail area p) of the chi-squared distribution with n degrees of freedom and F⁻¹(p; n, 1) is the quantile function of a gamma distribution with shape parameter n and scale parameter 1. This interval is 'exact' in the sense that its coverage probability is never less than the nominal 1 − α.
When quantiles of the gamma distribution are not available, an accurate approximation to this exact interval has been proposed (based on the Wilson–Hilferty transformation):
where z(α/2) denotes the standard normal deviate with upper tail area α/2.
For application of these formulae in the same context as above (given a sample of n measured values ki, each drawn from a Poisson distribution with mean λ), one would set k = Σ ki,
calculate an interval for μ = nλ, and then derive the interval for λ.
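As a sketch of how these formulae translate into code (assuming SciPy; the function names here are our own), the exact interval can be computed from either the chi-squared or the gamma quantile function:

from scipy.stats import chi2, gamma

def poisson_ci(k, alpha=0.05):
    # Exact interval from chi-squared quantiles, as given above.
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * k) if k > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))
    return lower, upper

def poisson_ci_gamma(k, alpha=0.05):
    # Equivalent form via gamma quantiles (shape n, scale 1).
    lower = gamma.ppf(alpha / 2, k) if k > 0 else 0.0
    upper = gamma.ppf(1 - alpha / 2, k + 1)
    return lower, upper

For a sample of n counts one would apply this to k = Σ ki and divide both endpoints by n to obtain the interval for λ, as described above.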
Bayesian inference
In Bayesian inference, the conjugate prior for the rate parameter λ of the Poisson distribution is the gamma distribution. Let
λ ~ Gamma(α, β)
denote that λ is distributed according to the gamma density g parameterized in terms of a shape parameter α and an inverse scale parameter β: g(λ; α, β) = (β^α / Γ(α)) λ^(α−1) e^(−βλ), for λ > 0.
Then, given the same sample of n measured values ki as before, and a prior of Gamma(α, β), the posterior distribution is λ ~ Gamma(α + Σ ki, β + n).
Note that the posterior mean is linear in the observations and is given by E[λ] = (α + Σ ki) / (β + n).
It can be shown that the gamma distribution is the only prior that induces linearity of the conditional mean. Moreover, a converse result exists which states that if the conditional mean is close to a linear function, then the prior distribution of λ must be close to a gamma distribution in Lévy distance.
The posterior mean E[λ] approaches the maximum likelihood estimate λ̂ in the limit as α → 0, β → 0, which follows immediately from the general expression of the mean of the gamma distribution.
The posterior predictive distribution for a single additional observation is a negative binomial distribution, sometimes called a gamma–Poisson distribution.
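The conjugate update itself is a one-line computation. A minimal sketch, assuming SciPy, with illustrative prior hyperparameters and data:

from scipy.stats import gamma, nbinom

alpha, beta = 2.0, 1.0   # Gamma(shape, rate) prior on lambda (illustrative)
data = [3, 5, 4, 6, 2]   # observed counts (illustrative)

alpha_post = alpha + sum(data)   # posterior shape: alpha + sum of counts
beta_post = beta + len(data)     # posterior rate: beta + n

posterior_mean = alpha_post / beta_post  # linear in the data, as noted above

# SciPy's gamma is parameterized by shape and scale = 1/rate:
posterior = gamma(a=alpha_post, scale=1.0 / beta_post)

# Posterior predictive for one further count: negative binomial
# with r = alpha_post and success probability beta_post / (beta_post + 1).
predictive = nbinom(n=alpha_post, p=beta_post / (beta_post + 1))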
Simultaneous estimation of multiple Poisson means
Suppose X1, ..., Xp is a set of independent random variables from a set of p Poisson distributions, each with a parameter λi, and we would like to estimate these parameters. Then, Clevenson and Zidek show that under the normalized squared error loss, when p > 1, then, similarly to Stein's example for the normal means, the MLE estimator is inadmissible.
In this case, a family of minimax estimators is given for any 0 < c ≤ 2(p − 1) and b ≥ (p − 2 + p⁻¹) as
Occurrence and applications
Some applications of the Poisson distribution to count data (number of events):
telecommunication: telephone calls arriving in a system,
astronomy: photons arriving at a telescope,
chemistry: the molar mass distribution of a living polymerization,
biology: the number of mutations on a strand of DNA per unit length,
management: customers arriving at a counter or call centre,
finance and insurance: number of losses or claims occurring in a given period of time,
seismology: asymptotic Poisson model of risk for large earthquakes,
radioactivity: decays in a given time interval in a radioactive sample,
optics: number of photons emitted in a single laser pulse (a major vulnerability of quantum key distribution protocols, known as photon number splitting).
More examples of counting events that may be modelled as Poisson processes include:
soldiers killed by horse-kicks each year in each corps in the Prussian cavalry. This example was used in a book by Ladislaus Bortkiewicz (1868–1931),
yeast cells used when brewing Guinness beer. This example was used by William Sealy Gosset (1876–1937),
phone calls arriving at a call centre within a minute. This example was described by A.K. Erlang (1878–1929),
goals in sports involving two competing teams,
deaths per year in a given age group,
jumps in a stock price in a given time interval,
times a web server is accessed per minute (under an assumption of homogeneity),
mutations in a given stretch of DNA after a certain amount of radiation,
cells infected at a given multiplicity of infection,
bacteria in a certain amount of liquid,
photons arriving on a pixel circuit at a given illumination over a given time period,
landing of V-1 flying bombs on London during World War II, investigated by R. D. Clarke in 1946.
In probabilistic number theory, Gallagher showed in 1976 that, if a certain version of the unproved prime r-tuple conjecture holds, then the counts of prime numbers in short intervals would obey a Poisson distribution.
Law of rare events
The rate of an event is related to the probability of an event occurring in some small subinterval (of time, space or otherwise). In the case of the Poisson distribution, one assumes that there exists a small enough subinterval for which the probability of an event occurring twice is "negligible". With this assumption one can derive the Poisson distribution from the binomial one, given only the information of expected number of total events in the whole interval.
Let the total number of events in the whole interval be denoted by λ. Divide the whole interval into n subintervals of equal size, such that n > λ (since we are interested in only very small portions of the interval this assumption is meaningful). This means that the expected number of events in each of the n subintervals is equal to λ/n.
Now we assume that the occurrence of an event in the whole interval can be seen as a sequence of n Bernoulli trials, where the i-th Bernoulli trial corresponds to looking whether an event happens in the i-th subinterval, with probability λ/n. The expected number of total events in n such trials would be λ, the expected number of total events in the whole interval. Hence for each subdivision of the interval we have approximated the occurrence of the event as a Bernoulli process of the form B(n, λ/n). As we have noted before, we want to consider only very small subintervals. Therefore, we take the limit as n goes to infinity.
In this case the binomial distribution converges to what is known as the Poisson distribution by the Poisson limit theorem.
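The convergence is easy to observe numerically. A short sketch using only the Python standard library (λ = 3 and the range of k values are arbitrary choices) compares B(n, λ/n) with the Poisson limit as n grows:

from math import comb, exp, factorial

lam = 3.0

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

for n in (10, 100, 10000):
    p = lam / n
    err = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(10))
    print(f"n={n}: max |Binomial - Poisson| = {err:.2e}")  # shrinks as n grows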
In several of the above examples (such as the number of mutations in a given sequence of DNA), the events being counted are actually the outcomes of discrete trials, and would more precisely be modelled using the binomial distribution, that is X ~ B(n, p).
In such cases n is very large and p is very small (and so the expectation np is of intermediate magnitude). Then the distribution may be approximated by the less cumbersome Poisson distribution X ~ Pois(np).
This approximation is sometimes known as the law of rare events, since each of the individual Bernoulli events rarely occurs.
The name "law of rare events" may be misleading because the total count of success events in a Poisson process need not be rare if the parameter is not small. For example, the number of telephone calls to a busy switchboard in one hour follows a Poisson distribution with the events appearing frequent to the operator, but they are rare from the point of view of the average member of the population who is very unlikely to make a call to that switchboard in that hour.
The variance of the binomial distribution is 1 − p times that of the Poisson distribution, so the two are almost equal when p is very small.
The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the "law of small numbers" because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz about the Poisson distribution, published in 1898.
Poisson point process
The Poisson distribution arises as the number of points of a Poisson point process located in some finite region. More specifically, if D is some region of a space, for example Euclidean space R^d, for which |D|, the area, volume or, more generally, the Lebesgue measure of the region, is finite, and if N(D) denotes the number of points in D, then P(N(D) = k) = (λ|D|)^k e^(−λ|D|) / k!.
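This is also the standard recipe for simulating such a process: draw the number of points from a Poisson distribution with mean λ|D|, then scatter that many points uniformly over D. A sketch assuming NumPy, for a rectangular region (the intensity and region dimensions are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
lam = 2.5                      # intensity: expected points per unit area
width, height = 4.0, 3.0       # the region D, here a rectangle
measure = width * height       # |D|, the area of the region

n = rng.poisson(lam * measure)         # N(D) ~ Poisson(lam * |D|)
xs = rng.uniform(0, width, size=n)     # given N(D) = n, the points are
ys = rng.uniform(0, height, size=n)    # uniformly distributed over D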
Poisson regression and negative binomial regression
Poisson regression and negative binomial regression are useful for analyses where the dependent (response) variable is the count of the number of events or occurrences in an interval.
Biology
The Luria–Delbrück experiment tested against the hypothesis of Lamarckian evolution, which should result in a Poisson distribution.
Katz and Miledi measured the membrane potential with and without the presence of acetylcholine (ACh). When ACh is present, ion channels on the membrane would be open randomly at a small fraction of the time. As there are a large number of ion channels each open for a small fraction of the time, the total number of ion channels open at any moment is Poisson distributed. When ACh is not present, effectively no ion channels are open. The membrane potential is . Subtracting the effect of noise, Katz and Miledi found the mean and variance of membrane potential to be , giving . (pp. 94–95)
During each cellular replication event, the number of mutations is roughly Poisson distributed. For example, the HIV virus has 10,000 base pairs and a mutation rate of about 1 per 30,000 base pairs, meaning the number of mutations per replication event is distributed as Pois(1/3). (p. 64)
Other applications in science
In a Poisson process, the number of observed occurrences fluctuates about its mean λ with a standard deviation σ = √λ. These fluctuations are denoted as Poisson noise or (particularly in electronics) as shot noise.
The correlation of the mean and standard deviation in counting independent discrete occurrences is useful scientifically. By monitoring how the fluctuations vary with the mean signal, one can estimate the contribution of a single occurrence, even if that contribution is too small to be detected directly. For example, the charge e on an electron can be estimated by correlating the magnitude of an electric current with its shot noise. If N electrons pass a point in a given time t on the average, the mean current is I = eN/t; since the current fluctuations should be of the order σI = e√N/t (i.e., the standard deviation of the Poisson process), the charge e can be estimated from the ratio tσI²/I.
An everyday example is the graininess that appears as photographs are enlarged; the graininess is due to Poisson fluctuations in the number of reduced silver grains, not to the individual grains themselves. By correlating the graininess with the degree of enlargement, one can estimate the contribution of an individual grain (which is otherwise too small to be seen unaided).
In causal set theory the discrete elements of spacetime follow a Poisson distribution in the volume.
The Poisson distribution also appears in quantum mechanics, especially quantum optics. Namely, for a quantum harmonic oscillator system in a coherent state, the probability of measuring a particular energy level has a Poisson distribution.
Computational methods
The Poisson distribution poses two different tasks for dedicated software libraries: evaluating the distribution P(k; λ), and drawing random numbers according to that distribution.
Evaluating the Poisson distribution
Computing P(k; λ) for given k and λ is a trivial task that can be accomplished by using the standard definition of P(k; λ) in terms of exponential, power, and factorial functions. However, the conventional definition of the Poisson distribution contains two terms that can easily overflow on computers: λ^k and k!. The fraction of λ^k to k! can also produce a rounding error that is very large compared to e^(−λ), and therefore give an erroneous result. For numerical stability the Poisson probability mass function should therefore be evaluated as f(k; λ) = exp(k ln λ − λ − ln Γ(k + 1)),
which is mathematically equivalent but numerically stable. The natural logarithm of the Gamma function can be obtained using the lgamma function in the C standard library (C99 version) or R, the gammaln function in MATLAB or SciPy, or the log_gamma function in Fortran 2008 and later.
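A minimal sketch of this stable evaluation in Python, where math.lgamma plays the role of the lgamma function mentioned above:

from math import exp, lgamma, log

def poisson_pmf(k, lam):
    # exp(k*ln(lam) - lam - ln(Gamma(k+1))) instead of lam**k * e**(-lam) / k!
    return exp(k * log(lam) - lam - lgamma(k + 1))

print(poisson_pmf(1000, 1000.0))  # about 0.0126; the naive lam**k term
                                  # would overflow double precision here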
Some computing languages provide built-in functions to evaluate the Poisson distribution, namely
R: function dpois(x, lambda);
Excel: function POISSON(x, mean, cumulative), with a flag to specify the cumulative distribution;
Mathematica: univariate Poisson distribution as PoissonDistribution[λ], bivariate Poisson distribution as MultivariatePoissonDistribution[{ }].
Random variate generation
The less trivial task is to draw integer random variates from the Poisson distribution with given λ.
Solutions are provided by:
R: function rpois(n, lambda);
GNU Scientific Library (GSL): function gsl_ran_poisson
A simple algorithm to generate random Poisson-distributed numbers (pseudo-random number sampling) has been given by Knuth:
algorithm poisson random number (Knuth):
init:
Let L ← e−λ, k ← 0 and p ← 1.
do:
k ← k + 1.
Generate uniform random number u in [0,1] and let p ← p × u.
while p > L.
return k − 1.
The complexity is linear in the returned value k, which is λ on average. There are many other algorithms to improve this. Some are given in Ahrens & Dieter, see below.
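A direct Python transcription of the algorithm above (a sketch; the function name is our own, and it is suitable only for small λ, for the complexity reason just noted):

import random
from math import exp

def knuth_poisson(lam):
    # Multiply uniforms until the running product drops below e**(-lam).
    L = exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1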
For large values of λ, the value of L = e−λ may be so small that it is hard to represent. This can be solved by a change to the algorithm which uses an additional parameter STEP such that e−STEP does not underflow:
algorithm poisson random number (Junhao, based on Knuth):
init:
Let λLeft ← λ, k ← 0 and p ← 1.
do:
k ← k + 1.
Generate uniform random number u in (0,1) and let p ← p × u.
while p < 1 and λLeft > 0:
if λLeft > STEP:
p ← p × eSTEP
λLeft ← λLeft − STEP
else:
p ← p × eλLeft
λLeft ← 0
while p > 1.
return k − 1.
The choice of STEP depends on the threshold of overflow. For double precision floating point format the threshold is near e700, so 500 should be a safe STEP.
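A Python sketch of this modified algorithm, folding e−λ into the running product STEP units of λ at a time (STEP = 500 as suggested above; the function name is our own):

import random
from math import exp

def knuth_poisson_large(lam, step=500.0):
    lam_left = lam
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        # Rescale p before it can underflow, consuming lam_left gradually.
        while p < 1.0 and lam_left > 0.0:
            if lam_left > step:
                p *= exp(step)
                lam_left -= step
            else:
                p *= exp(lam_left)
                lam_left = 0.0
        if p <= 1.0:
            return k - 1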
Other solutions for large values of λ include rejection sampling and using Gaussian approximation.
Inverse transform sampling is simple and efficient for small values of λ, and requires only one uniform random number u per sample. Cumulative probabilities are examined in turn until one exceeds u.
algorithm Poisson generator based upon the inversion by sequential search:
init:
Let x ← 0, p ← e−λ, s ← p.
Generate uniform random number u in [0,1].
while u > s do:
x ← x + 1.
p ← p × λ / x.
s ← s + p.
return x.
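The same search written as a Python sketch; note that each call consumes a single uniform draw, and the expected number of loop iterations is λ:

import random
from math import exp

def poisson_inversion(lam):
    # Walk up the cumulative probabilities until they exceed u.
    x, p = 0, exp(-lam)
    s = p
    u = random.random()
    while u > s:
        x += 1
        p *= lam / x   # recurrence: P(x) = P(x-1) * lam / x
        s += p
    return x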
| Mathematics | Statistics and probability | null |
5235231 | https://en.wikipedia.org/wiki/Railway%20engineering | Railway engineering | Railway engineering is a multi-faceted engineering discipline dealing with the design, construction and operation of all types of rail transport systems. It encompasses a wide range of engineering disciplines, including civil engineering, computer engineering, electrical engineering, mechanical engineering, industrial engineering and production engineering. A great many other engineering sub-disciplines are also called upon.
History
With the advent of the railways in the early nineteenth century, a need arose for a specialized group of engineers capable of dealing with the unique problems associated with railway engineering. As the railways expanded and became a major economic force, a great many engineers became involved in the field, probably the most notable in Britain being Richard Trevithick, George Stephenson and Isambard Kingdom Brunel. Today, railway systems engineering continues to be a vibrant field of engineering.
Subfields
Mechanical engineering
Command, control & railway signalling
Office systems design
Data center design
SCADA
Network design
Electrical engineering
Energy electrification
Third rail
Fourth rail
Overhead contact system
Civil engineering
Permanent way engineering
Light rail systems
On-track plant
Rail systems integration
Train control systems
Cab signalling
Railway vehicle engineering
Rolling resistance
Curve resistance
Wheel–rail interface
Hunting oscillation
Railway systems engineering
Railway signalling
Fare collection
CCTV
Public address
Intrusion detection
Access control
Systems integration
Professional organisations
In the UK: The Railway Division of the Institution of Mechanical Engineers (IMechE).
In the US: The American Railway Engineering and Maintenance-of-Way Association (AREMA)
In the Philippines: Philippine Railway Engineers' Association, (PREA) Inc.
Worldwide: The Institute of Railway Signal Engineers (IRSE)
| Technology | Disciplines | null |
5240327 | https://en.wikipedia.org/wiki/Dvorak%20technique | Dvorak technique | The Dvorak technique (developed between 1969 and 1984 by Vernon Dvorak) is a widely used system to estimate tropical cyclone intensity (which includes tropical depression, tropical storm, and hurricane/typhoon/intense tropical cyclone intensities) based solely on visible and infrared satellite images. Within the Dvorak satellite strength estimate for tropical cyclones, there are several visual patterns that a cyclone may take on which define the upper and lower bounds on its intensity. The primary patterns used are curved band pattern (T1.0–T4.5), shear pattern (T1.5–T3.5), central dense overcast (CDO) pattern (T2.5–T5.0), central cold cover (CCC) pattern, banding eye pattern (T4.0–T4.5), and eye pattern (T4.5–T8.0).
Both the central dense overcast and embedded eye pattern use the size of the CDO. The CDO pattern intensities start at T2.5, equivalent to minimal tropical storm intensity (40 mph, 65 km/h). The shape of the central dense overcast is also considered. The eye pattern utilizes the coldness of the cloud tops within the surrounding mass of thunderstorms and contrasts it with the temperature within the eye itself. The larger the temperature difference is, the stronger the tropical cyclone. Once a pattern is identified, the storm features (such as length and curvature of banding features) are further analyzed to arrive at a particular T-number. The CCC pattern indicates little development is occurring, despite the cold cloud tops associated with the quickly evolving feature.
Several agencies issue Dvorak intensity numbers for tropical cyclones and their precursors, including the National Hurricane Center's Tropical Analysis and Forecast Branch (TAFB), the NOAA/NESDIS Satellite Analysis Branch (SAB), and the Joint Typhoon Warning Center at the Naval Meteorology and Oceanography Command in Pearl Harbor, Hawaii.
Evolution of the method
The initial development of this technique occurred in 1969 by Vernon Dvorak, using satellite pictures of tropical cyclones within the northwest Pacific Ocean. The system as it was initially conceived involved pattern matching of cloud features with a development and decay model. As the technique matured through the 1970s and 1980s, measurement of cloud features became dominant in defining tropical cyclone intensity and central pressure of the tropical cyclone's low-pressure area. Use of infrared satellite imagery led to a more objective assessment of the strength of tropical cyclones with eyes, using the cloud top temperatures within the eyewall and contrasting them with the warm temperatures within the eye itself. Constraints on short term intensity change are used less frequently than they were back in the 1970s and 1980s. The central pressures assigned to tropical cyclones have required modification, as the original estimates were 5–10 hPa (0.15–0.29 inHg) too low in the Atlantic and up to 20 hPa (0.59 inHg) too high in the northwest Pacific. This led to the development of a separate wind-pressure relationship for the northwest Pacific, devised by Atkinson and Holliday in 1975, then modified in 1977.
As human analysts using the technique lead to subjective biases, efforts have been made to make more objective estimates using computer programs, which have been aided by higher-resolution satellite imagery and more powerful computers. Since tropical cyclone satellite patterns can fluctuate over time, automated techniques use a six-hour averaging period to lead to more reliable intensity estimates. Development of the objective Dvorak technique began in 1998, which performed best with tropical cyclones that had eyes (of hurricane or typhoon strength). It still required a manual center placement, keeping some subjectivity within the process. By 2004, an advanced objective Dvorak technique was developed which utilized banding features for systems below hurricane intensity and to objectively determine the tropical cyclone's center. A central pressure bias was uncovered in 2004 relating to the slope of the tropopause and cloud top temperatures which change with latitude that helped improve central pressure estimates within the objective technique.
Details of the method
In a developing cyclone, the technique takes advantage of the fact that cyclones of similar intensity tend to have certain characteristic features, and as they strengthen, they tend to change in appearance in a predictable manner. The structure and organization of the tropical cyclone are tracked over 24 hours to determine if the storm has weakened, maintained its intensity, or strengthened. Various central cloud and banding features are compared with templates that show typical storm patterns and their associated intensity. If infrared satellite imagery is available for a cyclone with a visible eye pattern, then the technique utilizes the difference between the temperature of the warm eye and the surrounding cold cloud tops to determine intensity (colder cloud tops generally indicate a more intense storm). In each case a "T-number" (an abbreviation for Tropical Number) and a Current Intensity (CI) value are assigned to the storm. These measurements range between 1 (minimum intensity) and 8 (maximum intensity). The T-number and CI value are the same except for weakening storms, in which case the CI is higher. For weakening systems, the CI is held as the tropical cyclone intensity for 12 hours, though research from the National Hurricane Center indicates that six hours is more reasonable. A standard conversion table gives the approximate surface wind speed and sea level pressure that correspond to a given T-number. The amount a tropical cyclone can change in strength per 24-hour period is limited to 2.5 T-numbers per day.
Pattern types
Within the Dvorak satellite strength estimate for tropical cyclones, there are several visual patterns that a cyclone may take on which define the upper and lower bounds on its intensity. The primary patterns used are curved band pattern (T1.0–T4.5), shear pattern (T1.5–T3.5), central dense overcast (CDO) pattern (T2.5–T5.0), banding eye pattern (T4.0–T4.5), eye pattern (T4.5–T8.0), and central cold cover (CCC) pattern. Both the central dense overcast and embedded eye pattern utilize the size of the CDO. The CDO pattern intensities start at T2.5, equivalent to minimal tropical storm intensity (40 mph, 65 km/h). The shape of the central dense overcast is also considered. The farther the center is tucked into the CDO, the stronger it is deemed. Tropical cyclones with maximum sustained winds between and can have their centers of circulation obscured by cloudiness of the central dense overcast within visible and infrared satellite imagery, which makes diagnosis of their intensity a challenge.
The CCC pattern, with its large and quickly developing mass of thick cirrus clouds spreading out from an area of convection near a tropical cyclone center within a short time frame, indicates little development. When it develops, rainbands and cloud lines around the tropical cyclone weaken and the thick cloud shield obscures the circulation center. While it resembles a CDO pattern, it is rarely seen.
The eye pattern utilizes the coldness of the cloud tops within the surrounding mass of thunderstorms and contrasts it with the temperature within the eye itself. The larger the temperature difference is, the stronger the tropical cyclone. Winds within tropical cyclones can also be estimated by tracking features within the CDO using rapid scan geostationary satellite imagery, whose pictures are taken minutes apart rather than every half-hour.
Once a pattern is identified, the storm features (such as length and curvature of banding features) are further analyzed to arrive at a particular T-number.
Usage
Several agencies issue Dvorak intensity numbers for tropical cyclones and their precursors. These include the National Hurricane Center's Tropical Analysis and Forecast Branch (TAFB), the National Oceanic and Atmospheric Administration's Satellite Analysis Branch (SAB), and the Joint Typhoon Warning Center at the Naval Pacific Meteorology and Oceanography Center in Pearl Harbor, Hawaii.
The National Hurricane Center will often quote Dvorak T-numbers in their tropical cyclone products. The following example is from discussion number 3 of Tropical Depression 24 (eventually Hurricane Wilma) of the 2005 Atlantic hurricane season:
BOTH TAFB AND SAB CAME IN WITH A DVORAK SATELLITE INTENSITY ESTIMATE OF T2.5/35 KT. HOWEVER ...OFTENTIMES THE SURFACE WIND FIELD OF LARGE DEVELOPING LOW PRESSURE SYSTEMS LIKE THIS ONE WILL LAG ABOUT 12 HOURS BEHIND THE SATELLITE SIGNATURE. THEREFORE... THE INITIAL INTENSITY HAS ONLY BEEN INCREASED TO 30 KT.
Note that in this case the Dvorak T-number (in this case T2.5) was simply used as a guide but other factors determined how the NHC decided to set the system's intensity.
The Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin–Madison has developed the Objective Dvorak Technique (ODT). This is a modified version of the Dvorak technique which uses computer algorithms rather than subjective human interpretation to arrive at a CI number. This is generally not implemented for tropical depressions or weak tropical storms. The China Meteorological Administration (CMA) is expected to start using the standard 1984 version of Dvorak in the near future. The India Meteorological Department (IMD) prefers using visible satellite imagery over infrared imagery due to a perceived high bias in estimates derived from infrared imagery during the early morning hours of convective maximum. The Japan Meteorological Agency (JMA) uses the infrared version of Dvorak over the visible imagery version. The Hong Kong Observatory and JMA continue to utilize Dvorak after tropical cyclone landfall. Various centers hold on to the maximum current intensity for 6–12 hours, though this rule is broken when rapid weakening is obvious.
Citizen science site Cyclone Center uses a modified version of the Dvorak technique to categorize post-1970 tropical weather.
Benefits and disadvantages
The most significant benefit of the use of the technique is that it has provided a more complete history of tropical cyclone intensity in areas where aircraft reconnaissance is neither possible nor routinely available. Intensity estimates of maximum sustained wind are currently within of what aircraft are able to measure half of the time, though the assignment of intensity of systems with strengths between moderate tropical-storm force () and weak hurricane- or typhoon-force () is the least certain. Its precision has not always held steady, as refinements in the technique led to intensity changes between 1972 and 1977 of up to . The method is internally consistent in that it constrains rapid increases or decreases in tropical cyclone intensity. Some tropical cyclones fluctuate in strength more than the 2.5 T-numbers per day limit allowed by the rule, which can work to the technique's disadvantage and has led to occasional abandonment of the constraints since the 1980s. Systems with small eyes near the limb, or edge, of a satellite image can be biased too low using the technique, which can be resolved through use of polar-orbiting satellite imagery. Subtropical cyclone intensity cannot be determined using Dvorak, which led to the development of the Hebert–Poteat technique in 1975. Cyclones undergoing extratropical transition, losing their thunderstorm activity, see their intensities underestimated using the Dvorak technique. This led to the development of the Miller and Lander extratropical transition technique, which can be used under these circumstances.
| Physical sciences | Storms | Earth science |
21533375 | https://en.wikipedia.org/wiki/Gemini%20%28constellation%29 | Gemini (constellation) | Gemini is one of the constellations of the zodiac and is located in the northern celestial hemisphere. It was one of the 48 constellations described by the 2nd century AD astronomer Ptolemy, and it remains one of the 88 modern constellations today. Its name is Latin for twins, and it is associated with the twins Castor and Pollux in Greek mythology. Its old astronomical symbol is (♊︎).
Location
Gemini lies between Taurus to the west and Cancer to the east, with Auriga and Lynx to the north, Monoceros and Canis Minor to the south, and Orion to the south-west.
In classical antiquity, Cancer was the location of the Sun on the northern solstice (June 21). During the first century AD, axial precession shifted it into Gemini. In 1990, the location of the Sun at the northern solstice moved from Gemini into Taurus, where it will remain until the 27th century AD and then move into Aries. The Sun will move through Gemini from June 21 to July 20 through 2062.
Gemini is prominent in the winter skies of the Northern Hemisphere and is visible the entire night in December–January. The easiest way to locate the constellation is to find its two brightest stars, Castor and Pollux, eastward from the familiar V-shaped asterism (the open cluster Hyades) of Taurus and the three stars of Orion's Belt (Alnitak, Alnilam, and Mintaka). Another way is to mentally draw a line from the Pleiades star cluster, located in Taurus, to Regulus, the brightest star in Leo. In doing so, an imaginary line that is relatively close to the ecliptic is drawn, a line which intersects Gemini roughly at the midpoint of the constellation, just below Castor and Pollux.
When the Moon moves through Gemini, its motion can easily be observed in a single night as it appears first west of Castor and Pollux, then aligns, and finally appears east of them.
Features
Stars
The constellation contains 85 stars of naked eye visibility.
The brightest star in Gemini is Pollux, and the second-brightest is Castor. Castor's Bayer designation as "Alpha" arose because Johann Bayer did not carefully distinguish which of the two was the brighter when he assigned his eponymous designations in 1603. Although the characters of myth are twins, the actual stars are physically very different from each other.
α Gem (Castor) is a sextuple star system 52 light-years from Earth, which appears as a magnitude 1.6 blue-white star to the unaided eye. Two spectroscopic binaries are visible at magnitudes 1.9 and 3.0 with a period of 470 years. A wide-set red dwarf star is also a part of the system; this star is an Algol-type eclipsing binary star with a period of 19.5 hours; its minimum magnitude is 9.8 and its maximum magnitude is 9.3.
β Gem (Pollux) is an orange-hued giant star of magnitude 1.14, 34 light-years from Earth. Pollux has an extrasolar planet revolving around it, as do two other stars in Gemini, HD 50554, and HD 59686.
γ Gem (Alhena) is a blue-white hued star of magnitude 1.9, 105 light-years from Earth.
δ Gem (Wasat) is a long-period binary star 59 light-years from Earth. The primary is a white star of magnitude 3.5, and the secondary is an orange dwarf star of magnitude 8.2. The period is over 1,000 years; the pair is divisible in medium amateur telescopes.
ε Gem (Mebsuta), a double star, includes a primary yellow supergiant of magnitude 3.1, nine hundred light-years from Earth. The optical companion, of magnitude 9.6, is visible in binoculars and small telescopes.
ζ Gem (Mekbuda) is a double star, whose primary is a Cepheid variable star with a period of 10.2 days; its minimum magnitude is 4.2 and its maximum magnitude is 3.6. It is a yellow supergiant, 1,200 light-years from Earth, with a radius that is 60 times solar, making its volume approximately 220,000 times that of the Sun. The companion, a magnitude 7.6 star, is visible in binoculars and small amateur telescopes.
η Gem (Propus) is a binary star with a variable component. 380 light-years away, it has a period of 500 years and is only divisible in large amateur telescopes. The primary is a semi-regular red giant with a period of 233 days; its minimum magnitude is 3.9 and its maximum magnitude is 3.1. The secondary is of magnitude 6.
κ Gem is a binary star 143 light-years from Earth. The primary is a yellow giant of magnitude 3.6; the secondary is of magnitude 8. The two are only divisible in larger amateur instruments because of the discrepancy in brightness.
ν Gem is a double star divisible in binoculars and small amateur telescopes. The primary is a blue giant of magnitude 4.1, 550 light-years from Earth, and the secondary is of magnitude 8.
38 Gem, a binary star, is also divisible in small amateur telescopes, 84 light-years from Earth. The primary is a white star of magnitude 4.8 and the secondary is a yellow star of magnitude 7.8.
U Gem is a dwarf nova type cataclysmic variable discovered by J. R. Hind in 1855.
Mu Gem (Tejat) is the Bayer designation for a star in the northern constellation of Gemini. It has the traditional name Tejat Posterior, which means back foot, because it is the foot of Castor, one of the Gemini twins.
Deep-sky objects
M35 (NGC 2168) is a large, elongated open cluster of magnitude 5, discovered in the year 1745 by Swiss astronomer Philippe Loys de Chéseaux. It has an area of approximately 0.2 square degrees, the same size as the full moon. Its brightness means that M35 is visible to the unaided eye under dark skies; under brighter skies it is discernible in binoculars. The 200 stars of M35 are arranged in chains that curve throughout the cluster; it is 2800 light-years from Earth. Another open cluster in Gemini is NGC 2158. Visible in large amateur telescopes and very rich, it is more than 12,000 light-years from Earth.
NGC 2392 is a planetary nebula with an overall magnitude of 9.2, located 4,000 light-years from Earth. In a small amateur telescope, its 10th magnitude central star is visible, along with its blue-green elliptical disk. It is said to resemble the head of a person wearing a parka.
The Medusa Nebula is another planetary nebula, some 1,500 light-years distant. Geminga is a neutron star approximately 550 light-years from Earth. Other objects include NGC 2129, NGC 2158, NGC 2266, NGC 2331 and NGC 2355.
Meteor showers
The Geminids are a bright meteor shower that peaks on December 13–14, with a maximum rate of approximately 100 meteors per hour, making it one of the richest meteor showers. The Epsilon Geminids peak between October 18 and October 29 and have only recently been confirmed. They overlap with the Orionids, which makes the Epsilon Geminids difficult to detect visually. Epsilon Geminid meteors have a higher velocity than Orionids.
Mythology
In Babylonian astronomy, the stars Castor and Pollux were known as the Great Twins. The Twins were regarded as minor gods and were called Meshlamtaea and Lugalirra, meaning respectively 'The One who has arisen from the Underworld' and the 'Mighty King'. Both names can be understood as titles of Nergal, the major Babylonian god of plague and pestilence, who was king of the Underworld.
In Greek mythology, Gemini was associated with the myth of Castor and Pollux, the children of Leda and Argonauts both. Pollux was the son of Zeus, who seduced Leda, while Castor was the son of Tyndareus, king of Sparta and Leda's husband. Castor and Pollux were also mythologically associated with St. Elmo's fire in their role as the protectors of sailors. When Castor died, because he was mortal, Pollux begged his father Zeus to give Castor immortality, and he did, by uniting them together in the heavens.
Visualizations
Gemini is dominated by Castor and Pollux, two bright stars that appear relatively close together, encouraging the mythological link between the constellation and twinship. The twin above and to the right (as seen from the Northern Hemisphere) is Castor, whose brightest star is α Gem; it is a second-magnitude star and represents Castor's head. The twin below and to the left is Pollux, whose brightest star is β Gem (more commonly called Pollux); it is of the first magnitude and represents Pollux's head. Furthermore, the other stars can be visualized as two parallel lines descending from the two main stars, making it look like two figures.
H. A. Rey has suggested an alternative to the traditional visualization that connected the stars of Gemini to show twins holding hands. Pollux's torso is represented by the star υ Gem, Pollux's right hand by ι Gem, Pollux's left hand by κ Gem; all three of these stars are of the fourth magnitude. Pollux's pelvis is represented by the star δ Gem, Pollux's right knee by ζ Gem, Pollux's right foot by γ Gem, Pollux's left knee by λ Gem, and Pollux's left foot by ξ Gem. γ Gem is of the second magnitude, while δ and ξ Gem are of the third magnitude. Castor's torso is represented by the star τ Gem, Castor's left hand by ι Gem (which he shares with Pollux), Castor's right hand by θ Gem; all three of these stars are of the fourth magnitude. Castor's pelvis is represented by the star ε Gem, Castor's left foot by ν Gem, and Castor's right foot by μ Gem and η Gem; ε, μ, and η Gem are of the third magnitude. The brightest star in this constellation is Pollux.
Astronomy
In Meteorologica (1 343b30) Aristotle mentions that he observed Jupiter in conjunction with and then occulting a star in Gemini. This is the earliest-known observation of this nature. A study published in 1990 suggests the star involved was 1 Geminorum and the event took place on 5 December 337 BC.
When William Herschel discovered Uranus on 13 March 1781 it was located near η Gem. In 1930 Clyde Tombaugh exposed a series of photographic plates centred on δ Gem and discovered Pluto.
Equivalents
In Chinese astronomy, the stars that correspond to Gemini are located in two areas: the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ) and the Vermillion Bird of the South (南方朱雀, Nán Fāng Zhū Què).
In some cultures, the twin in Gemini refers to 'the unborn twin' and represents a spiritual or dual self that exists within.
Astrology
The Sun appears in the constellation Gemini from June 21 to July 20. In tropical astrology, the Sun is considered to be in the sign Gemini from May 22 to June 21, and in sidereal astrology, from June 16 to July 16.
| Physical sciences | Zodiac | Astronomy |
938894 | https://en.wikipedia.org/wiki/Sedimentation | Sedimentation | Sedimentation is the deposition of sediments. It takes place when particles in suspension settle out of the fluid in which they are entrained and come to rest against a barrier. This is due to their motion through the fluid in response to the forces acting on them: these forces can be due to gravity, centrifugal acceleration, or electromagnetism. Settling is the falling of suspended particles through the liquid, whereas sedimentation is the final result of the settling process.
In geology, sedimentation is the deposition of sediments which results in the formation of sedimentary rock. The term is broadly applied to the entire range of processes that result in the formation of sedimentary rock, from initial erosion through sediment transport and settling to the lithification of the sediments. However, the strict geological definition of sedimentation is the mechanical deposition of sediment particles from an initial suspension in air or water.
Sedimentation may pertain to objects of various sizes, ranging from large rocks in flowing water, to suspensions of dust and pollen particles, to cellular suspensions, to solutions of single molecules such as proteins and peptides. Even small molecules supply a sufficiently strong force to produce significant sedimentation.
Principles
Settling
Classification
Classification of sedimentation:
Type 1 sedimentation is characterized by particles that settle discretely at a constant settling velocity. They settle as individual particles and do not flocculate (stick to each other) during settling. Example: sand and grit material
Type 2 sedimentation is characterized by particles that flocculate during sedimentation and because of this their size is constantly changing and therefore their settling velocity is changing. Example: alum or iron coagulation
Type 3 sedimentation is also known as zone sedimentation. In this process the particles are at a high concentration (greater than 1000 mg/L) such that the particles tend to settle as a mass and a distinct clear zone and sludge zone are present. Zone settling occurs in lime-softening sedimentation, activated sludge sedimentation and sludge thickeners.
Sedimentation equilibrium
When particles settling from a suspension reach a hard boundary, the concentration of particles at the boundary is opposed by the diffusion of the particles. The distribution of sediment near the boundary comes into sedimentation equilibrium. Measurements of the distribution yield information on the nature of the particles.
In geology
In geology, the term sedimentation is broadly applied to the entire range of processes that result in the formation of sedimentary rock, from initial formation of sediments by erosion of particles from rock outcrops, through sediment transport and settling, to the lithification of the sediments. However, the term is more particularly applied to the deposition of sediments, and in the strictest sense, it applies only to the mechanical deposition of sediment particles from an initial suspension in air or water. Sedimentation results in the formation of depositional landforms and the rocks that constitute the sedimentary record. The building up of land surfaces by sedimentation, particularly in river valleys, is called aggradation.
The rate of sedimentation is the thickness of sediment accumulated per unit time. For suspended load, this can be expressed mathematically by the Exner equation. Rates of sedimentation vary from less than per thousand years for pelagic sediment to several meters per thousand years in portions of major river deltas. However, long-term accumulation of sediments is determined less by the rate of sedimentation than by the rate of subsidence, which creates accommodation space for sediments to accumulate over geological time scales. Most sedimentation in the geologic record occurred in relatively brief depositional episodes separated by long intervals of nondeposition or even erosion.
In estuarine environments, settling can be influenced by the presence or absence of vegetation. Trees such as mangroves are crucial to the attenuation of waves or currents, promoting the settlement of suspended particles.
Siltation
An undesired increased transport and sedimentation of suspended material is called siltation, and it is a major source of pollution in waterways in some parts of the world. High sedimentation rates can be a result of poor land management and a high frequency of flooding events. If not managed properly, it can be detrimental to fragile ecosystems on the receiving end, such as coral reefs. Climate change also affects siltation rates.
Human-enhanced sedimentation
In chemistry
In chemistry, sedimentation has been used to measure the size of large molecules (macromolecule), where the force of gravity is augmented with centrifugal force in an ultracentrifuge.
In water treatment
| Physical sciences | Other separations | Chemistry |
939133 | https://en.wikipedia.org/wiki/Modular%20programming | Modular programming | Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect or "concern" of the desired functionality.
A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface. Modular programming is closely related to structured programming and object-oriented programming, all having the same goal of facilitating construction of large software programs and systems by decomposition into smaller pieces, and all originating around the 1960s. While the historical usage of these terms has been inconsistent, "modular programming" now refers to the high-level decomposition of the code of an entire program into pieces: structured programming to the low-level code use of structured control flow, and object-oriented programming to the data use of objects, a kind of data structure.
In object-oriented programming, the use of interfaces as an architectural pattern to construct modules is known as interface-based programming.
History
Modular programming, in the form of subsystems (particularly for I/O) and software libraries, dates to early software systems, where it was used for code reuse. Modular programming per se, with a goal of modularity, developed in the late 1960s and 1970s, as a larger-scale analog of the concept of structured programming (1960s). The term "modular programming" dates at least to the National Symposium on Modular Programming, organized at the Information and Systems Institute in July 1968 by Larry Constantine; other key concepts were information hiding (1972) and separation of concerns (SoC, 1974).
Modules were not included in the original specification for ALGOL 68 (1968), but were included as extensions in early implementations, ALGOL 68-R (1970) and ALGOL 68C (1970), and later formalized. One of the first languages designed from the start for modular programming was the short-lived Modula (1975), by Niklaus Wirth. Another early modular language was Mesa (1970s), by Xerox PARC, and Wirth drew on Mesa as well as the original Modula in its successor, Modula-2 (1978), which influenced later languages, particularly through its successor, Modula-3 (1980s). Modula's use of dot-qualified names, like M.a to refer to object a from module M, coincides with notation to access a field of a record (and similarly for attributes or methods of objects), and is now widespread, seen in C++, C#, Dart, Go, Java, OCaml, and Python, among others. Modular programming became widespread from the 1980s: the original Pascal language (1970) did not include modules, but later versions, notably UCSD Pascal (1978) and Turbo Pascal (1983), included them in the form of "units", as did the Pascal-influenced Ada (1980). The Extended Pascal ISO 10206:1990 standard kept closer to Modula-2 in its modular support. Standard ML (1984) has one of the most complete module systems, including functors (parameterized modules) to map between modules.
In the 1980s and 1990s, modular programming was overshadowed by and often conflated with object-oriented programming, particularly due to the popularity of C++ and Java. For example, the C family of languages had support for objects and classes in C++ (originally C with Classes, 1980) and Objective-C (1983), only supporting modules 30 years or more later. Java (1995) supports modules in the form of packages, though the primary unit of code organization is a class. However, Python (1991) prominently used both modules and objects from the start, using modules as the primary unit of code organization and "packages" as a larger-scale unit; and Perl 5 (1994) includes support for both modules and objects, with a vast array of modules being available from CPAN (1993). OCaml (1996) followed ML by supporting modules and functors.
Modular programming is now widespread, and found in virtually all major languages developed since the 1990s. The relative importance of modules varies between languages, and in class-based object-oriented languages there is still overlap and confusion with classes as a unit of organization and encapsulation, but these are both well-established as distinct concepts.
Terminology
The term assembly (as in .NET languages like C#, F# or Visual Basic .NET) or package (as in Dart, Go or Java) is sometimes used instead of module. In other implementations, these are distinct concepts; in Python a package is a collection of modules, while Java 9 introduced a new module concept (a collection of packages with enhanced access control).
Furthermore, the term "package" has other uses in software (for example .NET NuGet packages). A component is a similar concept, but typically refers to a higher level; a component is a piece of a whole system, while a module is a piece of an individual program. The scale of the term "module" varies significantly between languages; in Python it is very small-scale and each file is a module, while in Java 9 a module is large-scale: a collection of packages, which are in turn collections of files.
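As a small illustration of the Python terminology just described (the file and package names here are hypothetical), a module is a single file and a package groups several modules:

# geometry/__init__.py  -- its presence makes the directory a package
# geometry/circle.py    -- one module inside that package:

"""circle: a module exposing one small, self-contained interface."""
from math import pi

def area(radius):
    return pi * radius ** 2

# A client in any other module then uses dot-qualified access:
#     from geometry import circle
#     print(circle.area(2.0))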
Other terms for modules include unit, used in Pascal dialects.
Language support
Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby, Rust, JavaScript, Visual Basic (.NET) and WebDNA.
Conspicuous examples of languages that lack support for modules are C and, historically, C++ and Pascal in their original forms. C and C++ do, however, allow separate compilation and declarative interfaces to be specified using header files. Modules were added to Objective-C in iOS 7 (2013) and to C++ with C++20, while Pascal was superseded by Modula and Oberon, which included modules from the start, and by various derivatives that included modules. JavaScript has had native modules since ECMAScript 2015.
Modular programming can be performed even where the programming language lacks explicit syntactic features to support named modules, like, for example, in C. This is done by using existing language features, together with, for example, coding conventions, programming idioms and the physical code structure. IBM i also uses modules when programming in the Integrated Language Environment (ILE).
Key aspects
With modular programming, concerns are separated such that modules perform logically discrete functions, interacting through well-defined interfaces. Often modules form a directed acyclic graph (DAG); in this case a cyclic dependency between modules is seen as indicating that these should be a single module. In the case where modules do form a DAG they can be arranged as a hierarchy, where the lowest-level modules are independent, depending on no other modules, and higher-level modules depend on lower-level ones. A particular program or library is a top-level module of its own hierarchy, but can in turn be seen as a lower-level module of a higher-level program, library, or system.
When creating a modular system, instead of creating a monolithic application (where the smallest component is the whole), several smaller modules are written separately so when they are composed together, they construct the executable application program. Typically, these are also compiled separately, via separate compilation, and then linked by a linker. A just-in-time compiler may perform some of this construction "on-the-fly" at run time.
These independent functions are commonly classified as either program control functions or specific task functions. Program control functions are designed to work for one program. Specific task functions are written so that they can be applied across various programs.
This makes modular designed systems, if built correctly, far more reusable than a traditional monolithic design, since all (or many) of these modules may then be reused (without change) in other projects. This also facilitates the "breaking down" of projects into several smaller projects. Theoretically, a modularized software project will be more easily assembled by large teams, since no team members are creating the whole system, or even need to know about the system as a whole. They can focus just on the assigned smaller task.
| Technology | Software development: General | null |
939182 | https://en.wikipedia.org/wiki/Messier%207 | Messier 7 | Messier 7 or M7, also designated NGC 6475 and sometimes known as the Ptolemy Cluster, is an open cluster of stars in the constellation of Scorpius. The cluster is easily detectable with the naked eye, close to the "stinger" of Scorpius. With a declination of −34.8°, it is the southernmost Messier object.
M7 has been known since antiquity; it was first recorded by the 2nd-century Greek-Roman astronomer Ptolemy, who described it as a nebula in 130 AD. Italian astronomer Giovanni Batista Hodierna observed it before 1654 and counted 30 stars in it. In 1764, French astronomer Charles Messier catalogued the cluster as the seventh member in his list of comet-like objects. English astronomer John Herschel described it as "coarsely scattered clusters of stars".
Telescopic observations of the cluster reveal about 80 stars within a field of view of 1.3° across. At the cluster's estimated distance of 980 light years this corresponds to an actual diameter of 25 light years. The tidal radius of the cluster is and it has a combined mass of about 735 times the mass of the Sun. The age of the cluster is around 200 million years while the brightest member star is of magnitude 5.6. In terms of composition, the cluster contains a similar abundance of elements other than hydrogen and helium as the Sun.
On August 29, 2006, Messier 7 was used for first light image of the Long Range Reconnaissance Imager (LORRI) telescope on the Pluto-bound New Horizons spacecraft.
As of January 2022, Messier 7 is one of the few remaining Messier objects not photographed by the Hubble Space Telescope. This is mainly due to those objects' angular diameter or lack of scientific significance. Most such objects are open clusters of large angular diameter that would require thousands of photos due to Hubble's small field of view. (For comparison, Hubble's well-known panoramic photo of the Andromeda Galaxy, covering less than half of our galactic neighbor, required approximately 400 individual movements and 7400 exposures.)
Gallery
| Physical sciences | Notable star clusters | Astronomy |
939579 | https://en.wikipedia.org/wiki/Carina%20Nebula | Carina Nebula | The Carina Nebula or Eta Carinae Nebula (catalogued as NGC 3372; also known as the Great Carina Nebula) is a large, complex area of bright and dark nebulosity in the constellation Carina, located in the Carina–Sagittarius Arm of the Milky Way galaxy. The nebula is approximately from Earth.
The nebula has within its boundaries the large Carina OB1 association and several related open clusters, including numerous O-type stars and several Wolf–Rayet stars. encompasses the star clusters and . is one of the youngest known star clusters at half a million years old and contains stars like the O2 supergiant . is the home of many extremely luminous stars, such as and the Eta Carinae star system. , , , , and are also considered members of the association. is the oldest and furthest from , indicating sequential and ongoing star formation.
The nebula is one of the largest diffuse nebulae in our skies. Although it is four times as large as and even brighter than the famous Orion Nebula, the Carina Nebula is much less well known due to its location in the southern sky. It was discovered by Nicolas-Louis de Lacaille in 1752 from the Cape of Good Hope.
The Carina Nebula was selected as one of five cosmic objects observed by the James Webb Space Telescope, as part of the release of its first official science images. A detailed image was made of an early star-forming region of NGC 3324 known as the Cosmic Cliffs.
Discovery and basic information
Nicolas-Louis de Lacaille discovered the nebula on 25 January 1752. Its dimensions are 120×120 arcminutes centered on the coordinates of right ascension and declination . In modern times it is calculated to be around from Earth.
Objects within the Carina Nebula
Eta Carinae
Eta Carinae is a highly luminous hypergiant star. Estimates of its mass range from 100 to 150 times the mass of the Sun, and its luminosity is about four million times that of the Sun.
This object is currently the most massive star that can be studied in great detail, because of its location and size. Several other known stars may be more luminous and more massive, but data on them is far less robust. (Caveat: Since examples such as the Pistol Star have been demoted by improved data, one should be skeptical of most available lists of "most massive stars". In 2006, Eta Carinae still had the highest confirmed luminosity, based on data across a broad range of wavelengths.) Stars with more than 80 times the mass of the Sun produce more than a million times as much light as the Sun. They are quite rare—only a few dozen in a galaxy as big as ours—and they flirt with disaster near the Eddington limit, i.e., the outward pressure of their radiation is almost strong enough to counteract gravity. Stars that are more than 120 solar masses exceed the theoretical Eddington limit, and their gravity is barely strong enough to hold in their radiation and gas, resulting in a possible supernova or hypernova in the near future.
Eta Carinae's effects on the nebula can be seen directly. Dark globules and some other less visible objects have tails pointing directly away from the massive star. The entire nebula would have looked very different before the Great Eruption of the 1840s, which surrounded Eta Carinae with dust and drastically reduced the amount of ultraviolet light it put into the nebula.
Homunculus Nebula
Within the large bright nebula is a much smaller feature, immediately surrounding Eta Carinae itself, known as the Homunculus Nebula (from Latin meaning Little Man). It is believed to have been ejected in an enormous outburst in 1841 which briefly made Eta Carinae the second-brightest star in the sky.
The Homunculus Nebula is a small H II region, with gas shocked into ionized and excited states. It also absorbs much of the light from the extremely luminous central stellar system and re-radiates it as infrared (IR). It is the brightest object in the sky at mid-IR wavelengths.
The distance to the Homunculus can be derived from its observed angular dimensions and calculated linear size, assuming it is axially symmetric. The most accurate distance obtained using this method is . The largest radius of the bipolar lobes in this model is about 22,000 AU, and the axis is oriented 41° from the line of sight, or 49° relative to the plane of the sky, which means it is seen from Earth slightly more "end on" than "side on".
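The source's specific figures are elided above, but the geometry behind the method is the standard small-angle relation; this note is an illustrative sketch rather than the paper's own derivation. Since an object 1 AU across subtends 1 arcsecond at a distance of 1 parsec,

$$d\,[\mathrm{pc}] \approx \frac{\ell\,[\mathrm{AU}]}{\theta\,[\mathrm{arcsec}]},$$

where $\ell$ is the calculated linear size and $\theta$ the observed angular size. With the 22,000 AU lobe radius quoted above, a measured angular radius in arcseconds immediately gives the distance in parsecs.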
Keyhole Nebula
The Keyhole, or Keyhole Nebula, is a small dark cloud of cold molecules and dust within the Carina Nebula, containing bright filaments of hot, fluorescing gas, silhouetted against the much brighter background nebula. John Herschel used the term "lemniscate-oval vacuity" when first describing it, and subsequently referred to it simply as the "oval vacuity". The term lemniscate continued to be used to describe this portion of the nebula until popular astronomy writer Emma Converse described the shape of the nebula as "resembling a keyhole" in an 1873 Appleton's Journal article. The name Keyhole Nebula then came into common use, sometimes for the Keyhole itself, sometimes to describe the whole of the Carina Nebula (signifying "the nebula that contains the Keyhole").
The diameter of the Keyhole structure is approximately . Its appearance has changed significantly since it was first observed, possibly due to changes in the ionizing radiation from Eta Carinae. The Keyhole does not have its own NGC designation. It is sometimes erroneously called NGC 3324, but that catalogue designation refers to a reflection and emission nebula just northwest of the Carina Nebula (or to its embedded star cluster).
Defiant Finger
A small Bok globule in the Keyhole Nebula (at RA 10h44m30s, Dec −59°40') has been photographed by the Hubble Space Telescope and is nicknamed the "Carina Defiant Finger" due to its shape. In Hubble images, light can be seen radiating off the edges of the globule; this is especially visible in the southern tip, where the "finger" is. It is thought that the Defiant Finger is being ionized by the bright Wolf–Rayet star WR 25, and/or Trumpler 16-244, a bright blue supergiant. It has a mass of at least , and stars may be forming within it. Like other interstellar clouds under intense radiation, the Defiant Finger will eventually be completely evaporated; for this cloud the time frame is predicted to be 200,000 to 1,000,000 years.
Trumpler 14
Trumpler 14 is an open cluster with a diameter of , located within the inner regions of the Carina Nebula, approximately from Earth. It is one of the main clusters of the stellar association, which is the largest association in the Carina Nebula. About 2,000 stars have been identified in , and the total mass of the cluster is estimated to be .
Trumpler 15
Trumpler 15 is a star cluster on the north-east edge of the Carina Nebula. Early studies disagreed about the distance, but astrometric measurements by the Gaia mission have confirmed that it is the same distance as the rest of Carina OB1.
Trumpler 16
Trumpler 16 is one of the main clusters of the Carina OB1 stellar association, which is the largest association in the Carina Nebula, and it is bigger and more massive than . The star Eta Carinae is part of this cluster.
Mystic Mountain
Mystic Mountain is the term for a dust–gas pillar in the Carina Nebula, a photo of which was taken by the Hubble Space Telescope on its 20th anniversary. The area was observed by Hubble's Wide Field Camera 3 on 1–2 February 2010. The pillar measures in height; nascent stars inside the pillar fire off gas jets that stream from towering "peaks".
WR 22
WR 22 is an eclipsing binary. The dynamical masses derived from orbital fitting vary from over to less than for the primary and about for the secondary. The spectroscopic mass of the primary has been calculated at or .
WR 25
WR 25 is a binary system in the central portion of the Carina Nebula, a member of the cluster. The primary is a Wolf–Rayet star, possibly the most luminous star in the galaxy. The secondary is hard to detect but thought to be a luminous OB star.
HD 93129
HD 93129 is a triple star system of O-class stars in Carina. All three stars of are among the most luminous in the galaxy; consists of two clearly resolved components, and , and itself is made up of two much closer stars.
HD 93129 A has been resolved into two components. The spectrum is dominated by the brighter component, although the secondary is only 0.9 magnitudes fainter. is an O2 supergiant and Ab is an O3.5 main sequence star. Their separation has decreased from 55 milliarcseconds in 2004 to only 27 mas in 2013, but an accurate orbit is not available.
HD 93129 B is an O3.5 main-sequence star 3 arcseconds away from the closer pair. It is about 1.5 magnitudes fainter than the combined , and is approximately the same brightness as .
HD 93250
HD 93250 is one of the brightest stars in the region of the Carina Nebula. It is only 7.5 arcminutes from Eta Carinae, and is considered to be a member of the same loose open cluster , although it appears closer to the more compact .
HD 93250 is known to be a binary star; however, individual spectra of the two components have never been observed, though they are thought to be very similar. The spectral type of has variously been given as O5, O6/7, O4, and O3. It has sometimes been classified as a main sequence star and sometimes as a giant star. The Galactic O-Star Spectroscopic Survey has used it as the standard star for the newly created O4 subgiant spectral type.
HD 93205
HD 93205 is a binary system of two large stars.
The more massive member of the pair is an O3.5 main sequence star. The spectrum shows some ionized nitrogen and helium emission lines, indicating some mixing of fusion products to the surface and a strong stellar wind. The mass calculated from apsidal motion of the orbits is . This is somewhat lower than expected from evolutionary modelling of a star with its observed parameters.
The less massive member is an O8 main sequence star of approximately . It moves in its orbit at a speed of over and is considered to be a relativistic binary, which causes the apses of the orbit to change in a predictable way.
Catalogued open clusters in Carina Nebula
There are eight known open clusters in the Carina Nebula:
Bochum 10 (Bo 10)
Bochum 11 (Bo 11)
Collinder 228 (Cr 228)
Collinder 232 (Cr 232)
Collinder 234 (Cr 234)
Trumpler 14 (Tr 14, Cr 230)
Trumpler 15 (Tr 15, Cr 231)
Trumpler 16 (Tr 16, Cr 233)
| Physical sciences | Notable nebulae | null |
940304 | https://en.wikipedia.org/wiki/Podocarpus | Podocarpus | Podocarpus () is a genus of conifers, the most numerous and widely distributed of the podocarp family, the Podocarpaceae. Podocarpus species are evergreen shrubs or trees, usually from tall, known to reach at times. The cones have two to five fused cone scales, which form a fleshy, berry-like, brightly coloured receptacle at maturity. The fleshy cones attract birds, which then eat the cones and disperse the seeds in their droppings. About 97 to 107 species are placed in the genus depending on the circumscription of the species.
Species are cultivated as ornamental plants for parks and large gardens. The cultivar 'County Park Fire' has won the Royal Horticultural Society's Award of Garden Merit.
Etymology
The name comes from Greek poús meaning "foot" and karpós meaning "fruit".
Names
Common names for various species include "yellowwood" and "pine", as in the plum pine (Podocarpus elatus) or the Buddhist pine (Podocarpus macrophyllus).
Description
Podocarpus species are evergreen woody plants. They are generally trees, but may also be shrubs. The trees can reach a height of at their tallest. Some shrubby species have a decumbent growth habit. The primary branches form pseudowhorls around the trunk. The bark can be scaly or fibrous and peeling with vertical strips. Terminal buds are distinctive with bud scales that are often imbricate and can be spreading.
The leaves are simple and flattened, and may be sessile or short petiolate. The phyllotaxis or leaf arrangement is spiral, and may be subopposite on some shoots. The leaves are usually linear-lanceolate or linear-elliptic in shape, though they can be broader lanceolate, ovate, or nearly elliptic in some species. Juvenile leaves are often larger than adult leaves, though similar in shape. The leaves are coriaceous and have a distinct midrib. The stomata are usually restricted to the abaxial or underside of the leaf, forming two stomatal bands around the midrib.
Podocarpus spp. are generally dioecious, with the male pollen cones and female seed cones borne on separate individual plants, but some species may be monoecious. The cones develop from axillary buds, and may be solitary or form clusters.
The pollen cones are long and catkin-like in shape. They may be sessile or short pedunculate. A pollen cone consists of a slender rachis with numerous spirally arranged microsporophylls around it. Each triangular microsporophyll has two basal pollen-producing pollen sacs. The pollen is bisaccate.
The seed cones are highly modified with the few cone scales swelling and fusing at maturity. The cones are pedunculate and often solitary. The seed cone consists of two to five cone scales of which only the uppermost one or rarely two nearest the apex of the cone are fertile. Each fertile scale usually has one apical ovule. The infertile basal scales fuse and swell to form a succulent, usually brightly colored receptacle. Each cone generally has only one seed, but may have two or rarely more. The seed is attached to the apex of the receptacle. The seed is entirely covered by a fleshy modified scale known as an epimatium. The epimatium is usually green, but may be bluish or reddish in some species.
Distribution
The natural distribution of the genus consists of much of Africa, Asia, Australia, Central and South America, and several South Pacific islands. The genus occurs from southern Chile north to Mexico in the Americas and from New Zealand north to Japan in the Asia-Pacific region.
Podocarpus and the Podocarpaceae were endemic to the ancient supercontinent of Gondwana, which broke up into Africa, South America, India, Australia-New Guinea, New Zealand, and New Caledonia between 105 and 45 million years ago. Podocarpus is a characteristic tree of the Antarctic flora, which originated in the cool, moist climate of southern Gondwana, and elements of the flora survive in the humid temperate regions of the former supercontinent. As the continents drifted north and became drier and hotter, podocarps and other members of the Antarctic flora generally retreated to humid regions, especially in Australia, where sclerophyll genera such as Acacia and Eucalyptus became predominant. The flora of Malesia, which includes the Malay peninsula, Indonesia, the Philippines, and New Guinea, is generally derived from Asia, but includes many elements of the old Gondwana flora, including several other genera in the Podocarpaceae (Dacrycarpus, Dacrydium, Falcatifolium, Nageia, Phyllocladus, and the Malesian endemic Sundacarpus), and also Agathis in the Araucariaceae.
Classification
The two subgenera, Podocarpus and Foliolatus, are distinguished by cone and seed morphology.
In Podocarpus, the cone is not subtended by lanceolate bracts, and the seed usually has an apical ridge. Species are distributed in the temperate forests of Tasmania, New Zealand, and southern Chile, with a few occurring in the tropical highlands of Africa and the Americas.
In Foliolatus, the cone is subtended by two lanceolate bracts ("foliola"), and the seed usually lacks an apical ridge. The species are tropical and subtropical, concentrated in eastern and southeastern Asia and Malesia, overlapping with subgenus Podocarpus in northeastern Australia and New Caledonia.
Species in family Podocarpaceae have been reshuffled a number of times based on genetic and physiological evidence, with many species formerly assigned to Podocarpus now assigned to other genera. A sequence of classification schemes has moved species between Nageia and Podocarpus, and in 1969, de Laubenfels divided the huge genus Podocarpus into Dacrycarpus, Decussocarpus (an invalid name he later revised to the valid Nageia), Prumnopitys, and Podocarpus.
Some species of genus Afrocarpus were formerly in Podocarpus, such as Afrocarpus gracilior.
Species
Subgenus Podocarpus
section Podocarpus (eastern and southern Africa)
Podocarpus elongatus
Podocarpus latifolius
Podocarpus milanjianus
section Scytopodium (Madagascar, eastern Africa)
Podocarpus capuronii
Podocarpus henkelii
Podocarpus humbertii
Podocarpus madagascariensis
Podocarpus rostratus
section Australis (southeast Australia, New Zealand, New Caledonia, southern Chile)
Podocarpus alpinus
Podocarpus gnidioides
Podocarpus laetus
Podocarpus lawrencei
Podocarpus nivalis
Podocarpus nubigenus
Podocarpus totara
section Crassiformis (northeast Queensland)
Podocarpus smithii
section Capitulatis (central Chile, southern Brazil, the Andes from northern Argentina to Ecuador)
Podocarpus aracensis
Podocarpus glomeratus
Podocarpus lambertii
Podocarpus parlatorei
Podocarpus salignus
Podocarpus sellowii
Podocarpus sprucei
Podocarpus transiens
section Pratensis (southeast Mexico to Guyana and Peru)
Podocarpus oleifolius
Podocarpus pendulifolius
Podocarpus tepuiensis
section Lanceolatis (southern Mexico, Puerto Rico, Lesser Antilles, Venezuela to highland Bolivia)
Podocarpus coriaceus
Podocarpus matudae
Podocarpus rusbyi
Podocarpus salicifolius
Podocarpus steyermarkii
section Pumilis (southern Caribbean islands and Guiana Highlands)
Podocarpus angustifolius
Podocarpus aristulatus
Podocarpus buchholzii
Podocarpus ekmanii
Podocarpus roraimae
Podocarpus urbanii
section Nemoralis (central and northern South America, south to Bolivia)
Podocarpus brasiliensis
Podocarpus celatus
Podocarpus guatemalensis
Podocarpus magnifolius
Podocarpus purdieanus
Podocarpus trinitensis
Subgenus Foliolatus
section Acuminatus (Sikkim, India to Borneo, New Guinea, New Britain, and northern Queensland)
Podocarpus dispermus
Podocarpus hookeri
Podocarpus ledermannii (section type)
Podocarpus marginalis
Podocarpus micropedunculatus
section Bracteatus (Sumatra to Fiji)
Podocarpus atjehensis
Podocarpus bracteatus
Podocarpus confertus
Podocarpus degeneri
Podocarpus pseudobracteatus
section Foliolatus (Nepal to Sumatra, the Philippines, and New Guinea to Tonga)
Podocarpus colliculatus (treated as a synonym of P. sylvestris by Plants of the World Online)
Podocarpus idenburgensis
Podocarpus insularis
Podocarpus neolinearis (syn. Podocarpus linearis )
Podocarpus neglectus
Podocarpus neriifolius (section type)
Podocarpus pallidus
Podocarpus rubens
Podocarpus vanuatuensis
section Globulus (Taiwan to Vietnam, Sumatra and Borneo, and New Caledonia)
Podocarpus annamiensis
Podocarpus beecherae (section type)
Podocarpus globulus
Podocarpus lucienii
Podocarpus nakaii
Podocarpus oblongus
Podocarpus sylvestris
Podocarpus teysmannii (syn. Podocarpus epiphyticus )
section Longifoliolatus (Peninsular Malaysia and Sumatra east to Fiji)
Podocarpus decipiens
Podocarpus decumbens
Podocarpus deflexus
Podocarpus levis
Podocarpus longefoliolatus (section type)
Podocarpus novoguineensis
Podocarpus polyspermus
Podocarpus salomoniensis
section Gracilis (southern China, across Malesia to Fiji)
Podocarpus affinis
Podocarpus glaucus
Podocarpus pilgeri
Podocarpus ramosii
section Macrostachyus (Eastern India to New Guinea)
Podocarpus archboldii (syn. Podocarpus crassigemmis ) (section type)
Podocarpus brassii
Podocarpus brevifolius
Podocarpus indonesiensis (synonym of Podocarpus rubens per Plants of the World Online)
Podocarpus lenticularis
Podocarpus palawanensis
section Rumphius (Hainan, south through Malesia to northern Queensland)
Podocarpus grayae (aka P. grayii and P. grayi)
Podocarpus laubenfelsii
Podocarpus rumphii
section Polystachyus (southern China and Japan, through Malaya to New Guinea and northeast Australia)
Podocarpus chingianus
Podocarpus elatus
Podocarpus fasciculus
Podocarpus macrocarpus
Podocarpus macrophyllus
Podocarpus macrophyllus var. maki (syn. Podocarpus chinensis )
Podocarpus polystachyus
Podocarpus ridleyi
Podocarpus subtropicalis
section Spathoides (southern China to New Caledonia)
Podocarpus borneensis
Podocarpus costalis
Podocarpus forrestii
Podocarpus gibbsiae
Podocarpus laminaris
Podocarpus lophatus
Podocarpus novae-caledoniae
Podocarpus orarius
Podocarpus spathoides
Podocarpus thevetiifolius
Podocarpus tixieri
section Spinulosus (southeast and southwest coasts of Australia)
Podocarpus drouynianus
Podocarpus spinulosus
Allergenic potential
Male Podocarpus spp. are extremely allergenic, and have an OPALS allergy-scale rating of 10 out of 10. Conversely, completely female Podocarpus plants have an OPALS rating of 1, and are considered "allergy-fighting", as they capture pollen while producing none.
Podocarpus resemble yews, and as with yews, the stems, leaves, flowers, and pollen of Podocarpus are all poisonous. Additionally, the leaves, stems, bark, and pollen are cytotoxic. The male Podocarpus blooms and releases this cytotoxic pollen in the spring and early summer.
Uses
The earliest use of P. elongatus dates back to the southern African Middle Stone Age where it was used to produce an adhesive by distillation. Today, several species of Podocarpus are grown as garden trees, or trained into hedges, espaliers, or screens. In the novel Jurassic Park by Michael Crichton, Podocarpus trees (misspelled as "protocarpus") were used on Isla Nublar, Costa Rica, to conceal electric fences from visitors. Common garden species used for their attractive deep-green foliage and neat habits include P. macrophyllus, known commonly as Buddhist pine, fern pine, or kusamaki, P. salignus from Chile, and P. nivalis, a smaller, red-fleshy-coned shrub. Some members of the genera Nageia, Prumnopitys, and Afrocarpus are marketed under the genus name Podocarpus.
The red, purple, or bluish fleshy cones (popularly called "fruits") of most species of Podocarpus are edible, raw or cooked into jams or pies. They have a mucilaginous texture with a slightly sweet flavor. They are slightly toxic, so should be eaten only in small amounts, especially when raw.
Some species of Podocarpus are used in systems of traditional medicine for conditions such as fevers, coughs, arthritis, sexually transmitted diseases, and canine distemper.
| Biology and health sciences | Gymnosperms | null |
940606 | https://en.wikipedia.org/wiki/Population%20growth | Population growth | Population growth is the increase in the number of people in a population or dispersed group. The global population has grown from 1 billion in 1800 to 8.2 billion in 2025. Actual global human population growth amounts to around 70 million annually, or 0.85% per year. As of 2024, the United Nations projects that the global population will peak in the mid-2080s at around 10.3 billion. The UN's estimates have been revised downward in recent years due to sharp declines in global birth rates.
Others have challenged many recent population projections as having underestimated population growth.
The world human population has been growing since the end of the Black Death, around the year 1350. A mix of technological advances that improved agricultural productivity and sanitation, together with medical advances that reduced mortality, increased population growth. In some places this has slowed through the process called the demographic transition, in which many nations with high standards of living have seen a significant slowing of population growth. This contrasts with less developed regions, where population growth continues. Globally, the rate of population growth has declined from a peak of 2.2% per year in 1963.
Population growth alongside increased consumption is a driver of environmental concerns, such as biodiversity loss and climate change, due to overexploitation of natural resources for human development. International policy focused on mitigating the impact of human population growth is concentrated in the Sustainable Development Goals, which seek to improve the standard of living globally while reducing society's impact on the environment and advancing human well-being.
History
World population has been rising continuously since the end of the Black Death, around the year 1350. Population began growing rapidly in the Western world during the industrial revolution. The most significant increase in the world's population has been since the 1950s, mainly due to medical advancements and increases in agricultural productivity.
Haber process
Due to its dramatic impact on the human ability to grow food, the Haber process, named after one of its inventors, the German chemist Fritz Haber, served as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2019.
Thomas McKeown hypotheses
Some of the reasons for the "Modern Rise of Population" were particularly investigated by the British health scientist Thomas McKeown (1912–1988). In his publications, McKeown challenged four theories about the population growth:
McKeown stated that the growth in Western population, particularly surging in the 19th century, was not so much caused by an increase in fertility, but largely by a decline of mortality particularly of childhood mortality followed by infant mortality,
The decline of mortality could largely be attributed to rising standards of living, whereby McKeown put most emphasis on improved nutritional status,
McKeown questioned the effectiveness of public health measures, including sanitary reforms, vaccination and quarantine,
The "McKeown thesis" states that curative medicine measures played little role in mortality decline, not only prior to the mid-20th century but also until well into the 20th century.
Although the McKeown thesis has been heavily disputed, recent studies have confirmed the value of his ideas. His work is pivotal for present day thinking about population growth, birth control, public health and medical care. McKeown had a major influence on many population researchers, such as health economists and Nobel prize winners Robert W. Fogel (1993) and Angus Deaton (2015). The latter considered McKeown as "the founder of social medicine".
Growth rate models
The "population growth rate" is the rate at which the number of individuals in a population increases in a given time period, expressed as a fraction of the initial population. Specifically, population growth rate refers to the change in population over a unit time period, often expressed as a percentage of the number of individuals in the population at the beginning of that period. This can be written as the formula, valid for a sufficiently small time interval:
A positive growth rate indicates that the population is increasing, while a negative growth rate indicates that the population is decreasing. A growth rate of zero indicates that there were the same number of individuals at the beginning and end of the period; a growth rate may be zero even when there are significant changes in the birth rates, death rates, immigration rates, and age distribution between the two times.
A related measure is the net reproduction rate. In the absence of migration, a net reproduction rate of more than 1 indicates that the population of females is increasing, while a net reproduction rate less than one (sub-replacement fertility) indicates that the population of females is decreasing.
Most populations do not grow exponentially; rather, they follow a logistic model. Once a population has reached its carrying capacity, it stabilizes and the exponential curve levels off towards the carrying capacity, which usually happens when the population has depleted most of its natural resources. The growth of the world human population may be said to have followed a linear trend throughout the last few decades.
Logistic equation
The growth of a population can often be modelled by the logistic equation

$$\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),$$

where
P(t) = the population after time t;
t = time a population grows;
r = the relative growth rate coefficient;
K = the carrying capacity of the population; defined by ecologists as the maximum population size that a particular environment can sustain.

As it is a separable differential equation, the population may be solved explicitly, producing a logistic function:

$$P(t) = \frac{K}{1 + Ae^{-rt}},$$

where $A = \frac{K - P_0}{P_0}$ and $P_0$ is the initial population at time 0.
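As an illustration of the closed-form solution above, here is a minimal Python sketch; the parameter values are hypothetical, chosen only to show the curve levelling off at the carrying capacity, not taken from any cited estimate.

```python
import math

def logistic(t: float, p0: float, r: float, k: float) -> float:
    """Closed-form logistic model: P(t) = K / (1 + A*exp(-r*t)), A = (K - P0)/P0."""
    a = (k - p0) / p0              # A is fixed by the initial population P0
    return k / (1 + a * math.exp(-r * t))

# Hypothetical parameters: P0 = 1 billion, r = 2% per year, K = 11 billion.
for t in (0, 50, 100, 200, 400):
    print(f"t = {t:3d} y  P = {logistic(t, 1e9, 0.02, 1.1e10):.3e}")
```

With these made-up numbers the population rises from 1 billion towards the 11-billion carrying capacity and then flattens, which is the qualitative behaviour described above.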
Global population growth rate
The world population growth rate peaked in 1963 at 2.2% per year and subsequently declined. In 2017, the estimated annual growth rate was 1.1%. The CIA World Factbook gives the world annual birthrate, mortality rate, and growth rate as 1.86%, 0.78%, and 1.08% respectively. The last 100 years have seen a massive fourfold increase in the population, due to medical advances, lower mortality rates, and an increase in agricultural productivity made possible by the Green Revolution.
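Since global net migration is zero, these figures are internally consistent with the basic demographic identity (a standard relation, not a claim made in the source):

$$\text{growth rate} = \text{birth rate} - \text{death rate} = 1.86\% - 0.78\% = 1.08\%$$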
The annual increase in the number of living humans peaked at 88.0 million in 1989, then slowly declined to 73.9 million in 2003, after which it rose again to 75.2 million in 2006. In 2017, the human population increased by 83 million. Generally, developed nations have seen a decline in their growth rates in recent decades, though annual growth rates remain above 2% in some countries of the Middle East and Sub-Saharan Africa, and also in South Asia, Southeast Asia, and Latin America.
In some countries the population is declining, especially in Eastern Europe, mainly due to low fertility rates, high death rates and emigration. In Southern Africa, growth is slowing due to the high number of AIDS-related deaths. Some Western European countries might also experience population decline. Japan's population began decreasing in 2005.
The United Nations Population Division projects world population to reach 11.2 billion by the end of the 21st century.
The Institute for Health Metrics and Evaluation projects that the global population will peak in 2064 at 9.73 billion and decline to 8.89 billion in 2100.
A 2014 study in Science concludes that the global population will reach 11 billion by 2100, with a 70% chance of continued growth into the 22nd century. The German Foundation for World Population reported in December 2019 that the global human population grows by 2.6 people every second, and could reach 8 billion by 2023.
Growth by country
According to United Nations population statistics, the world population grew by 30%, or 1.6 billion humans, between 1990 and 2010. In number of people the increase was highest in India (350 million) and China (196 million). The population growth rate was among the highest in the United Arab Emirates (315%) and Qatar (271%).
Many of the world's countries, including many in Sub-Saharan Africa, the Middle East, South Asia and South East Asia, have seen a sharp rise in population since the end of the Cold War. The fear is that high population numbers are putting further strain on natural resources, food supplies, fuel supplies, employment, housing, etc. in some of the less fortunate countries. For example, the population of Chad grew from 6,279,921 in 1993 to 10,329,208 in 2009, further straining its resources. Vietnam, Mexico, Nigeria, Egypt, Ethiopia, and the DRC are witnessing similar population growth.
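As a quick check of the Chad figures quoted above, the implied average compound annual growth rate can be computed directly; this is a minimal sketch, not part of the cited statistics.

```python
# Average compound annual growth rate between two census counts (Chad, per the text).
p1993, p2009 = 6_279_921, 10_329_208
years = 2009 - 1993
rate = (p2009 / p1993) ** (1 / years) - 1
print(f"{rate:.2%} per year")   # prints roughly 3.2% per year
```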
The following table gives some example countries or territories:
| Biology and health sciences | Ecology | Biology |
941270 | https://en.wikipedia.org/wiki/Giganotosaurus | Giganotosaurus | Giganotosaurus is a genus of large theropod dinosaur that lived in what is now Argentina, during the early Cenomanian age of the Late Cretaceous period, approximately 99.6 to 95 million years ago. The holotype specimen was discovered in the Candeleros Formation of Patagonia in 1993 and is almost 70% complete. The animal was named Giganotosaurus carolinii in 1995; the genus name translates to "giant southern lizard", and the specific name honors the discoverer, Ruben Carolini. A dentary bone, a tooth, and some tracks, discovered before the holotype, were later assigned to this animal. The genus attracted much interest and became part of a scientific debate about the maximum sizes of theropod dinosaurs.
Giganotosaurus was one of the largest known terrestrial carnivores, but the exact size has been hard to determine due to the incompleteness of the remains found so far. Estimates for the most complete specimen range from a length of , a skull in length, and a weight of . The dentary bone that belonged to a supposedly larger individual has been used to extrapolate a length of . Some researchers have found the animal to be larger than Tyrannosaurus, which has historically been considered the largest theropod, while others have found them to be roughly equal in size and the largest size estimates for Giganotosaurus exaggerated. The skull was low, with rugose (rough and wrinkled) nasal bones and a ridge-like crest on the lacrimal bone in front of the eye. The front of the lower jaw was flattened and had a downward-projecting process (or "chin") at the tip. The teeth were compressed sideways and had serrations. The neck was strong and the pectoral girdle proportionally small.
Part of the family Carcharodontosauridae, Giganotosaurus is one of the most completely known members of the group, which includes other very large theropods, such as the closely related Mapusaurus, Tyrannotitan and Carcharodontosaurus. Giganotosaurus is thought to have been homeothermic (a type of "warm-bloodedness"), with a metabolism between that of a mammal and a reptile, which would have enabled fast growth. It would have been capable of closing its jaws quickly, capturing and bringing down prey by delivering powerful bites. The "chin" may have helped in resisting stress when a bite was delivered against prey. Giganotosaurus is thought to have been the apex predator of its ecosystem, and it may have fed on juvenile sauropod dinosaurs.
Discovery
In 1993, the amateur Argentine fossil hunter discovered the tibia (lower leg bone) of a theropod dinosaur while driving a dune buggy in the badlands near Villa El Chocón, in the Neuquén province of Patagonia, Argentina. Specialists from the National University of Comahue were sent to excavate the specimen after being notified of the find. The discovery was announced by the paleontologists Rodolfo Coria and Leonardo Salgado at a Society of Vertebrate Paleontology meeting in 1994, where science writer Don Lessem offered to fund the excavation, after having been impressed by a photo of the leg-bone. The partial skull was scattered over an area of about 10 m2 (110 sq ft), and the postcranial skeleton was disarticulated. The specimen preserved almost 70% of the skeleton, and included most of the vertebral column, the pectoral and pelvic girdles, the femora, and the left tibia and fibula.
In 1995, this specimen was preliminarily described by Coria and Salgado, who made it the holotype of the new genus and species Giganotosaurus carolinii (parts of the skeleton were still encased in plaster at this time). The generic name is derived from the Ancient Greek words gigas/ (meaning "giant"), notos/ (meaning "austral/southern", in reference to its provenance) and -sauros/- (meaning "lizard"). The specific name honors Carolini, the discoverer. The holotype skeleton is now housed in the Ernesto Bachmann Paleontological Museum (where it is catalogued as specimen MUCPv-Ch1) in Villa El Chocón, which was inaugurated in 1995 at the request of Carolini. The specimen is the main exhibition at the museum, and is placed on the sandy floor of a room devoted to the animal, along with tools used by paleontologists during the excavation. A mounted reconstruction of the skeleton is exhibited in an adjacent room.
One of the features of theropod dinosaurs that has attracted most scientific interest is the fact that the group includes the largest terrestrial predators of the Mesozoic Era. This interest began with the discovery of one of the first known dinosaurs, Megalosaurus, named in 1824 for its large size. More than half a century later in 1905, Tyrannosaurus was named, and it remained the largest known theropod dinosaur for 90 years, though other large theropods were also known. The discussion of which theropod was the largest was revived in the 1990s by new discoveries in Africa and South America. In their original description, Coria and Salgado considered Giganotosaurus at least the largest theropod dinosaur from the southern hemisphere, and perhaps the largest in the world. They conceded that comparison with Tyrannosaurus was difficult due to the disarticulated state of the cranial bones of Giganotosaurus, but noted that at , the femur of Giganotosaurus was 5 cm (2 in) longer than that of "Sue", the largest known Tyrannosaurus specimen, and that the bones of Giganotosaurus appeared to be more robust, indicating a heavier animal. They estimated the skull to have been about 1.53 m (5 ft) long, and the whole animal to have been 12.5 m (41 ft) long, with a weight of about .
In 1996, the paleontologist Paul Sereno and colleagues described a new skull of the related genus Carcharodontosaurus from Morocco, a theropod described in 1927 but previously known only from fragmentary remains (much of its fossils were destroyed in World War II). They estimated the skull to have been long, similar to Giganotosaurus, but perhaps exceeding that of the Tyrannosaurus "Sue", with a 1.53 m (5 ft) long skull. They also pointed out that carcharodontosaurs appear to have had the proportionally largest skulls, but that Tyrannosaurus appears to have had longer hind limbs. In an interview for a 1995 article entitled "new beast usurps T. rex as king carnivore", Sereno noted that these newly discovered theropods from South America and Africa competed with Tyrannosaurus as the largest predators, and would help in the understanding of Late Cretaceous dinosaur faunas, which had otherwise been very "North America-centric". In the same issue of the journal in which Carcharodontosaurus was described, the paleontologist Philip J. Currie cautioned that it was yet to be determined which of the two animals were larger, and that the size of an animal is less interesting to paleontologists than, for example, adaptations, relationships, and distribution. He also found it remarkable that the two animals were found within a year of each other, and were closely related, in spite of being found on different continents.
In a 1997 interview, Coria estimated Giganotosaurus to have been 13.7 m (45 ft) to 14.3 m (47 ft) long and weighing based on new material, larger than Carcharodontosaurus. Sereno countered that it would be difficult to determine a size range for a species based on few, incomplete specimens, and both paleontologists agreed that other aspects of these dinosaurs were more important than settling the "size contest". In 1998, the paleontologist Jorge O. Calvo and Coria assigned a partial left dentary bone (part of the lower jaw) containing some teeth (MUCPv-95) to Giganotosaurus. The bone had been found in 1987 and collected by Calvo near Los Candeleros in 1988; he described it briefly in 1989, noting that it may have belonged to a new theropod taxon. Calvo and Coria found the dentary to be identical to that of the holotype, though 8% larger at 62 cm (24 in). Though the rear part of it is incomplete, they proposed that the skull of the holotype specimen would have been long, and estimated the skull of the larger specimen to have been long, the longest skull of any theropod.
In 1999, Calvo referred an incomplete tooth (MUCPv-52) to Giganotosaurus; this specimen was discovered near Lake Ezequiel Ramos Mexia in 1987 by A. Delgado, and is therefore the first known fossil of the genus. Calvo further suggested that some theropod trackways and isolated tracks (which he made the basis of the ichnotaxon Abelichnus astigarrae in 1991) belonged to Giganotosaurus, based on their large size. The largest tracks are long with a pace of , and the smallest is long with a pace of . The tracks are tridactyl (three-toed) and have large and coarse digits, with prominent claw impressions. Impressions of the digits occupy most of the track-length, and one track has a thin heel. Though the tracks were found in a higher stratigraphic level than the main fossils of Giganotosaurus, they were from the same strata as the single tooth and as some sauropod dinosaurs that are also known from the strata that yielded Giganotosaurus.
Continued size estimations
In 2001, the physician-scientist Frank Seebacher proposed a new polynomial method of calculating body-mass estimates for dinosaurs (using body-length, depth, and width), and found Giganotosaurus to have weighed (based on the original length estimate). In their 2002 description of the braincase of Giganotosaurus, Coria and Currie gave a length estimate of for the holotype skull, and calculated a weight of by extrapolating from the circumference of the femur-shaft. This resulted in an encephalization quotient (a measure of relative brain size) of 1.9. In 2004, the paleontologist Gerardo V. Mazzetta and colleagues pointed out that though the femur of the Giganotosaurus holotype was larger than that of "Sue", the tibia was shorter at . They found the holotype specimen to have been equal to Tyrannosaurus in size at (marginally smaller than "Sue"), but that the larger dentary might have represented an animal of , if geometrically similar to the holotype specimen. By using multivariate regression equations, these authors also suggested an alternative weight of for the holotype and for the larger specimen, and that the latter was therefore the largest known terrestrial carnivore.
In 2005, the paleontologist Cristiano Dal Sasso and colleagues described new skull material (a snout) of Spinosaurus (the original fossils of which were also destroyed during World War II), and concluded this dinosaur would have been long with a weight , exceeding the maximum size of all other theropods. In 2006, Coria and Currie described the large theropod Mapusaurus from Patagonia; it was closely related to Giganotosaurus and of approximately the same size. In 2007, the paleontologists François Therrien and Donald M. Henderson found that Giganotosaurus would have approached in length and in weight, while Carcharodontosaurus would have approached in length and in weight (surpassing Tyrannosaurus), and estimated the Giganotosaurus holotype skull to have been long. They cautioned that these measurements depended on whether the incomplete skulls of these animals had been reconstructed correctly, and that more complete specimens were needed for more accurate estimates. They also found that Dal Sasso and colleagues' reconstruction of Spinosaurus was too large, and instead estimated it to have been long, weighing , and possibly as low as in length and in weight. They concluded that these dinosaurs had reached the upper biomechanical size limit attainable by a strictly bipedal animal. In 2010, the paleontologist Gregory S. Paul suggested that the skulls of carcharodontosaurs had been reconstructed as too long in general.
In 2012, the paleontologist Matthew T. Carrano and colleagues noted that though Giganotosaurus had received much attention due to its enormous size, and in spite of the holotype being relatively complete, it had not yet been described in detail, apart from the braincase. They pointed out that many contacts between skull bones were not preserved, which led to the total length of the skull being ambiguous. They found instead that the skulls of Giganotosaurus and Carcharodontosaurus were exactly the same size as that of Tyrannosaurus. They also measured the femur of the Giganotosaurus holotype to be long, in contrast to the original measurement, and proposed that the body mass would have been smaller overall. In 2013, the paleontologist Scott Hartman published a Graphic Double Integration mass estimate (based on drawn skeletal reconstructions) on his blog, wherein he found Tyrannosaurus ("Sue") to have been larger than Giganotosaurus overall. He estimated the Giganotosaurus holotype to have weighed , and the larger specimen . Tyrannosaurus was estimated to have weighed , and Hartman noted that it had a wider torso, though the two seemed similar in side view. He also pointed out that the Giganotosaurus dentary that was supposedly 8% larger than that of the holotype specimen would rather have been 6.5% larger, or could simply have belonged to a similarly sized animal with a more robust dentary. He conceded that with only one good Giganotosaurus specimen known, it is possible that larger individuals will be found, as it took most of a century to find "Sue" after Tyrannosaurus was discovered.
In 2014, the paleontologist Nizar Ibrahim and colleagues estimated the length of Spinosaurus to have been over , by extrapolating from a new specimen scaled up to match the snout described by Dal Sasso and colleagues. This would make Spinosaurus the largest known carnivorous dinosaur. In 2019, the paleontologist W. Scott Persons and colleagues described a Tyrannosaurus specimen (nicknamed "Scotty"), and estimated it to be more massive than other giant theropods, but cautioned that the femoral proportions of the carcharodontosaurids Giganotosaurus and Tyrannotitan indicated a body mass larger than other adult Tyrannosaurus. They noted that these theropods were known by far fewer specimens than Tyrannosaurus, and that future finds may reveal specimens larger than "Scotty", as indicated by the large Giganotosaurus dentary. While "Scotty" had the greatest femoral circumference, the femoral length of Giganotosaurus was about 10% longer, but the authors stated it was difficult to compare proportions between large theropod clades.
In 2021, the paleontologist Matías Reolid and colleagues compiled various mass estimates of theropods (including Giganotosaurus) to calculate the average, but did not include Therrien and Henderson's 2007 estimates of Carnotaurus and Giganotosaurus, considering them outliers. This resulted in a body mass range for Giganotosaurus between , with an average of . They also applied the skull length and body length ratio proposed by Therrien and Henderson and reconstructed various digital 3D models of theropods to measure body mass distribution and volume, resulting in the mass of a long Giganotosaurus up to . These researchers found the estimates consistent with the values proposed by previous studies. In 2022, Juan I. Canale and colleagues described the large carcharodontosaurid Meraxes, which has the most completely known Carcharodontosaurine skull, with an estimated length of . Extrapolating from that skull, they estimated the skull of Giganotosaurus to have been long, making it one of the largest known theropod skulls. Henderson suggested in 2023 that there was a close relation between the dimensions of the pelvic area and body size in theropods, allowing size estimates for incomplete specimens. Based on this idea, he found Giganotosaurus to have been long, identical to the estimate proposed in the 1995 description.
Description
Giganotosaurus is thought to have been one of the largest theropod dinosaurs, but the incompleteness of its remains has made it difficult to estimate its size reliably. It is therefore impossible to determine with certainty whether it was larger than Tyrannosaurus, for example, which has historically been considered the largest theropod. Different size estimates have been reached by several researchers, based on various methods, and depending on how the missing parts of the skeleton have been reconstructed. Length estimates for the holotype specimen have varied between , with a skull between long, a femur (thigh bone) between long, and a weight between . Fusion of sutures (joints) in the braincase indicates the holotype specimen was a mature individual. A second specimen, consisting of a dentary bone from a supposedly larger individual, has been used to extrapolate a length of , a skull long, and a weight of . Some writers have considered the largest size estimates for both specimens exaggerated. Giganotosaurus has been compared to an oversized version of the well-known genus Allosaurus.
Skull
Though incompletely known, the skull of Giganotosaurus appears to have been low. The maxilla of the upper jaw had a long tooth row, was deep from top to bottom, and its upper and lower edges were almost parallel. The maxilla had a pronounced process (projection) under the nostril, and a small, ellipse-shaped fenestra (opening), as in Allosaurus and Tyrannosaurus. The nasal bone was very rugose (rough and wrinkled), and these rugosities continued backwards, covering the entire upper surface of this bone. The lacrimal bone in front of the eye had a prominent, rugose crest (or horn) that pointed up at a backwards angle. The crest was ridge-like, and had deep grooves. The postorbital bone behind the eye had a down and backwards directed jugal process that projected into the orbit (eye opening), as seen in Tyrannosaurus, Abelisaurus, and Carnotaurus. The supraorbital bone above the eye that contacted between the lacrimal and postorbital bones was eave-like, and similar to that of Abelisaurus. The quadrate bone at the back of the skull was long, and had two pneumatic (air-filled) foramina (holes) on the inner side.
The skull roof (formed by the frontal and parietal bones) was broad and formed a "shelf", which overhung the short supratemporal fenestrae at the top rear of the skull. The jaw articulated far behind the occipital condyle (where the neck is attached to the skull) compared to other theropods. The condyle was broad and low, and had pneumatic cavities. Giganotosaurus did not have a sagittal crest on the top of the skull, and the jaw muscles did not extend onto the skull roof, unlike in most other theropods (due to the shelf over the supratemporal fenestrae). These muscles would instead have been attached to the lower side surfaces of the shelf. The neck muscles that elevated the head would have attached to the prominent supraoccipital bones on the top of the skull, which functioned like the nuchal crest of tyrannosaurs. A latex endocast of the brain cavity of Giganotosaurus showed that the brain was similar to that of the related genus Carcharodontosaurus, but larger. The endocast was long, wide, and had a volume of .
The dentary of the lower jaw expanded in height towards the front (by the mandibular symphysis, where the two halves of the lower jaw connected), where it was also flattened, and it had a downwards projection at the tip (which has been referred to as a "chin"). The lower side of the dentary was concave, the outer side was convex in upper view, and a groove ran along it, which supported foramina that nourished the teeth. The inner side of the dentary had a row of interdental plates, where each tooth had a foramen. The Meckelian groove ran along the lower border. The curvature of the dentary shows that the mouth of Giganotosaurus would have been wide. It is possible that each dentary had twelve alveoli (tooth sockets). Most of the alveoli were about 3.5 cm (1.3 in) long from front to back. The teeth of the dentary were of similar shape and size, except for the first one, which was smaller. The teeth were compressed sideways, were oval in cross-section, and had serrations at the front and back borders, which is typical of theropods. The teeth were sigmoid-shaped when seen in front and back view. One tooth had nine to twelve serrations per mm (0.039 in). The side teeth of Giganotosaurus had curved ridges of enamel, and the largest teeth in the premaxilla (front of the upper jaw) had pronounced wrinkles (with their highest relief near the serrations).
Postcranial skeleton
The neck of Giganotosaurus was strong, and the axis bone (the neck vertebra that articulates with the skull) was robust. The rear neck (cervical) vertebrae had short, flattened centra (the "bodies" of the vertebrae), with almost hemispherical articulations (contacts) at the front, and pleurocoels (hollow depressions) divided by laminae (plates). The back (dorsal) vertebrae had high neural arches and deep pleurocoels. The tail (caudal) vertebrae had neural spines that were elongated from front to back and had robust centra. The transverse processes of the caudal vertebrae were long from front to back, and the chevrons on the front were blade-like. The pectoral girdle was proportionally shorter than that of Tyrannosaurus, with the ratio between the scapula (shoulder blade) and the femur being less than 0.5. The blade of the scapula had parallel borders, and a strong tubercle for insertion of the triceps muscle. The coracoid was small and hook-shaped.
The ilium of the pelvis had a convex upper border, a low postacetabular blade (behind the acetabulum), and a narrow brevis-shelf (a projection where tail muscles attached). The pubic foot was pronounced and shorter at the front than behind. The ischium was straight and expanded hindwards, ending in a lobe-shape. The femur was sigmoid-shaped, and had a very robust, upwards pointing head, with a deep sulcus (groove). The lesser trochanter of the femoral head was wing-like, and placed below the greater trochanter, which was short. The fourth trochanter was large and projected backwards. The tibia of the lower leg was expanded at the upper end, its articular facet (where it articulated with the femur) was wide, and its shaft was compressed from front to back.
Classification
Coria and Salgado originally found Giganotosaurus to group more closely with the theropod clade Tetanurae than to more basal (or "primitive") theropods such as ceratosaurs, due to shared features (synapomorphies) in the legs, skull, and pelvis. Other features showed that it was outside the more derived (or "advanced") clade Coelurosauria. In 1996, Sereno and colleagues found Giganotosaurus, Carcharodontosaurus, and Acrocanthosaurus to be closely related within the superfamily Allosauroidea, and grouped them in the family Carcharodontosauridae. Features shared between these genera include the lacrimal and postorbital bones forming a broad "shelf" over the orbit, and the squared front end of the lower jaw.
As more carcharodontosaurids were discovered, their interrelationships became clearer. The group was defined as all allosauroids closer to Carcharodontosaurus than Allosaurus or Sinraptor by the paleontologist Thomas R. Holtz and colleagues in 2004. In 2006, Coria and Currie united Giganotosaurus and Mapusaurus in the carcharodontosaurid subfamily Giganotosaurinae based on shared features of the femur, such as a weak fourth trochanter, and a shallow, broad groove on the lower end. In 2008, Sereno and the paleontologist Stephen L. Brusatte united Giganotosaurus, Mapusaurus, and Tyrannotitan in the tribe Giganotosaurini. In 2010, Paul listed Giganotosaurus as "Giganotosaurus (or Carcharodontosaurus) carolinii" without elaboration. Giganotosaurus is one of the most complete and informative members of Carcharodontosauridae.
The following cladogram shows the placement of Giganotosaurus within Carcharodontosauridae according to the paleontologist Andrea Cau, 2024:
Evolution
Coria and Salgado suggested that the convergent evolution of gigantism in theropods could have been linked to common conditions in their environments or ecosystems. Sereno and colleagues found that the presence of carcharodontosaurids in Africa (Carcharodontosaurus), North America (Acrocanthosaurus), and South America (Giganotosaurus), showed the group had a transcontinental distribution by the Early Cretaceous period. Dispersal routes between the northern and southern continents appear to have been severed by ocean barriers in the Late Cretaceous, which led to more distinct, provincial faunas, by preventing exchange. Previously, it was thought that the Cretaceous world was biogeographically separated, with the northern continents being dominated by tyrannosaurids, South America by abelisaurids, and Africa by carcharodontosaurids. The subfamily Carcharodontosaurinae, in which Giganotosaurus belongs, appears to have been restricted to the southern continent of Gondwana (formed by South America and Africa), where they were probably the apex (top) predators. The South American tribe Giganotosaurini may have been separated from their African relatives through vicariance, when Gondwana broke up during the Aptian–Albian ages of the Early Cretaceous.
Paleobiology
In 1999, the paleontologist Reese E. Barrick and the geologist William J. Showers found that the bones of Giganotosaurus and Tyrannosaurus had very similar oxygen isotope patterns, with similar heat distribution in the body. These thermoregulatory patterns indicate that these dinosaurs had a metabolism intermediate between that of mammals and reptiles, and were therefore homeothermic (with a stable core body-temperature, a type of "warm-bloodedness"). The metabolism of a Giganotosaurus would have been comparable to that of a mammalian carnivore, and would have supported rapid growth.
In 2001, the physicist Rudemar Ernesto Blanco and Mazzetta evaluated the cursorial (running) capability of Giganotosaurus. They rejected the hypothesis by James O. Farlow that the risk of injuries involved in such large animals falling while running would limit the speed of large theropods. Instead they posited that the imbalance caused by increasing velocity would be the limiting factor. Calculating the time it would take for a leg to regain balance after the retraction of the opposite leg, they found the upper kinematic limit of the running speed to be . They also found comparison between the running capability of Giganotosaurus and birds like the ostrich based on the strength of their leg-bones to be of limited value, since theropods, unlike birds, had heavy tails to counterbalance their weight.
A 2017 biomechanical study of the running ability of Tyrannosaurus by the biologist William I. Sellers and colleagues suggested that skeletal loads were too great to have allowed adult individuals to run. The relatively long limbs, which were long argued to indicate good running ability, would instead have mechanically limited it to walking gaits, and it would therefore not have been a high-speed pursuit predator. They suggested that these findings would also apply to other long-limbed giant theropods such as Giganotosaurus, Mapusaurus, and Acrocanthosaurus.
Feeding
In 2002, Coria and Currie found that various features of the rear part of the skull (such as the frontwards slope of the occiput and low and wide occipital condyle) indicate that Giganotosaurus would have had a good capability of moving the skull sideways in relation to the front neck vertebrae. These features may also have been related to the increased mass and length of the jaw muscles; the jaw articulation of Giganotosaurus and other carcharodontosaurids was moved hindwards to increase the length of the jaw musculature, enabling faster closure of the jaws, whereas tyrannosaurs increased the mass of the lower jaw musculature, to increase the power of their bite.
In 2005 Therrien and colleagues estimated the relative bite force of theropods and found that Giganotosaurus and related taxa had adaptations for capturing and bringing down prey by delivering powerful bites, whereas tyrannosaurs had adaptations for resisting torsional stress and crushing bones. Estimates in absolute values like newtons were impossible. The bite force of Giganotosaurus was weaker than that of Tyrannosaurus, and the force decreased hindwards along the tooth row. The lower jaws were adapted for slicing bites, and it probably captured and manipulated prey with the front part of the jaws. These authors suggested that Giganotosaurus and other allosaurs may have been generalized predators that fed on a wide spectrum of prey smaller than themselves, such as juvenile sauropods. The ventral process (or "chin") of the lower jaw may have been an adaptation for resisting tensile stress when the powerful bite was delivered with the front of the jaws against the prey.
The first known fossils of the closely related Mapusaurus were found in a bonebed consisting of several individuals at different growth stages. In their 2006 description of the genus, Coria and Currie suggested that though this could be due to a long-term or coincidental accumulation of carcasses, the presence of different growth stages of the same taxon indicated the aggregation was not coincidental. In a 2006 National Geographic article, Coria stated that the bonebed was probably the result of a catastrophic event, and that the presence of mainly medium-sized individuals, with very few young or old, is normal for animals that form packs. Therefore, Coria said, large theropods may have hunted in groups, which would be advantageous when hunting gigantic sauropods.
Paleoenvironment
Giganotosaurus was discovered in the Candeleros Formation, which was deposited during the early Cenomanian age of the Late Cretaceous period, approximately 99.6 to 97 million years ago. This formation is the lowest unit of the Neuquén Group, where it forms part of the Río Limay Subgroup. The formation is composed of coarse and medium-grained sandstones deposited in fluvial environments (associated with rivers and streams) and under aeolian (wind-driven) conditions. Paleosols (buried soils), siltstones, and claystones are also present, some of which represent swamp conditions.
Giganotosaurus was probably the apex predator in its ecosystem. It shared its environment with herbivorous dinosaurs such as the titanosaurian sauropod Andesaurus, and the rebbachisaurid sauropods Limaysaurus and Nopcsaspondylus. Other theropods included the abelisaurid Ekrixinatosaurus, the dromaeosaurid Buitreraptor, and the alvarezsauroid Alnashetri. Other reptiles included the crocodyliform Araripesuchus, sphenodontians, snakes, and the turtle Prochelidella. Other vertebrates included cladotherian mammals, a pipoid frog, and ceratodontiform fishes. Footprints indicate the presence of large ornithopods and pterosaurs as well.
| Biology and health sciences | Theropods | Animals |
942048 | https://en.wikipedia.org/wiki/Adaptation | Adaptation | In biology, adaptation has three related meanings. Firstly, it is the dynamic evolutionary process of natural selection that fits organisms to their environment, enhancing their evolutionary fitness. Secondly, it is a state reached by the population during that process. Thirdly, it is a phenotypic trait or adaptive trait, with a functional role in each individual organism, that is maintained and has evolved through natural selection.
Historically, adaptation has been described from the time of the ancient Greek philosophers such as Empedocles and Aristotle. In 18th and 19th-century natural theology, adaptation was taken as evidence for the existence of a deity. Charles Darwin and Alfred Russel Wallace proposed instead that it was explained by natural selection.
Adaptation is related to biological fitness, which governs the rate of evolution as measured by changes in allele frequencies. Often, two or more species co-adapt and co-evolve as they develop adaptations that interlock with those of the other species, such as with flowering plants and pollinating insects. In mimicry, species evolve to resemble other species; in Müllerian mimicry this is a mutually beneficial co-evolution, as each of a group of strongly defended species (such as wasps able to sting) comes to advertise its defences in the same way. Features evolved for one purpose may be co-opted for a different one, as when the insulating feathers of dinosaurs were co-opted for bird flight.
Adaptation is a major topic in the philosophy of biology, as it concerns function and purpose (teleology). Some biologists try to avoid terms which imply purpose in adaptation, not least because they suggest a deity's intentions, but others note that adaptation is necessarily purposeful.
History
Adaptation is an observable fact of life accepted by philosophers and natural historians from ancient times, independently of their views on evolution, but their explanations differed. Empedocles did not believe that adaptation required a final cause (a purpose), but thought that it "came about naturally, since such things survived." Aristotle did believe in final causes, but assumed that species were fixed.
In natural theology, adaptation was interpreted as the work of a deity and as evidence for the existence of God. William Paley believed that organisms were perfectly adapted to the lives they led, an argument that shadowed Gottfried Wilhelm Leibniz, who had argued that God had brought about "the best of all possible worlds". Voltaire's satirical character Dr. Pangloss is a parody of this optimistic idea, and David Hume also argued against design. Charles Darwin broke with the tradition by emphasising the flaws and limitations which occurred in the animal and plant worlds.
Jean-Baptiste Lamarck proposed a tendency for organisms to become more complex, moving up a ladder of progress, plus "the influence of circumstances", usually expressed as use and disuse. This second, subsidiary element of his theory is what is now called Lamarckism, a proto-evolutionary hypothesis of the inheritance of acquired characteristics, intended to explain adaptations by natural means.
Other natural historians, such as Buffon, accepted adaptation, and some also accepted evolution, without voicing their opinions as to the mechanism. This illustrates the real merit of Darwin and Alfred Russel Wallace, and secondary figures such as Henry Walter Bates, for putting forward a mechanism whose significance had only been glimpsed previously. A century later, experimental field studies and breeding experiments by people such as E. B. Ford and Theodosius Dobzhansky produced evidence that natural selection was not only the 'engine' behind adaptation, but was a much stronger force than had previously been thought.
General principles
What adaptation is
Adaptation is primarily a process rather than a physical form or part of a body. An internal parasite (such as a liver fluke) can illustrate the distinction: such a parasite may have a very simple bodily structure, but nevertheless the organism is highly adapted to its specific environment. From this we see that adaptation is not just a matter of visible traits: in such parasites critical adaptations take place in the life cycle, which is often quite complex. However, as a practical term, "adaptation" often refers to a product: those features of a species which result from the process. Many aspects of an animal or plant can be correctly called adaptations, though there are always some features whose function remains in doubt. By using the term adaptation for the evolutionary process, and adaptive trait for the bodily part or function (the product), one may distinguish the two different senses of the word.
Adaptation is one of the two main processes that explain the observed diversity of species, such as the different species of Darwin's finches. The other process is speciation, in which new species arise, typically through reproductive isolation. An example widely used today to study the interplay of adaptation and speciation is the evolution of cichlid fish in African lakes, where the question of reproductive isolation is complex.
Adaptation is not always a simple matter where the ideal phenotype evolves for a given environment. An organism must be viable at all stages of its development and at all stages of its evolution. This places constraints on the evolution of development, behaviour, and structure of organisms. The main constraint, over which there has been much debate, is the requirement that each genetic and phenotypic change during evolution should be relatively small, because developmental systems are so complex and interlinked. However, it is not clear what "relatively small" should mean; for example, polyploidy in plants is a reasonably common large genetic change. The origin of eukaryotic endosymbiosis is a more dramatic example.
All adaptations help organisms survive in their ecological niches. The adaptive traits may be structural, behavioural or physiological. Structural adaptations are physical features of an organism, such as shape, body covering, armament, and internal organization. Behavioural adaptations are inherited systems of behaviour, whether inherited in detail as instincts, or as a neuropsychological capacity for learning. Examples include searching for food, mating, and vocalizations. Physiological adaptations permit the organism to perform special functions such as making venom, secreting slime, and phototropism, but also involve more general functions such as growth and development, temperature regulation, ionic balance and other aspects of homeostasis. Adaptation affects all aspects of the life of an organism.
The following definitions are given by the evolutionary biologist Theodosius Dobzhansky:
1. Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
2. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
3. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
What adaptation is not
Adaptation differs from flexibility, acclimatization, and learning, all of which are changes during life which are not inherited. Flexibility deals with the relative capacity of an organism to maintain itself in different habitats: its degree of specialization. Acclimatization describes automatic physiological adjustments during life; learning means alteration in behavioural performance during life.
Flexibility stems from phenotypic plasticity, the ability of an organism with a given genotype (genetic type) to change its phenotype (observable characteristics) in response to changes in its habitat, or to move to a different habitat. The degree of flexibility is inherited, and varies between individuals. A highly specialized animal or plant lives only in a well-defined habitat, eats a specific type of food, and cannot survive if its needs are not met. Many herbivores are like this; extreme examples are koalas which depend on Eucalyptus, and giant pandas which require bamboo. A generalist, on the other hand, eats a range of food, and can survive in many different conditions. Examples are humans, rats, crabs and many carnivores. The tendency to behave in a specialized or exploratory manner is inherited—it is an adaptation. Rather different is developmental flexibility: "An animal or plant is developmentally flexible if when it is raised in or transferred to new conditions, it changes in structure so that it is better fitted to survive in the new environment," writes the evolutionary biologist John Maynard Smith.
If humans move to a higher altitude, respiration and physical exertion become a problem, but after spending time in high altitude conditions they acclimatize to the reduced partial pressure of oxygen, such as by producing more red blood cells. The ability to acclimatize is an adaptation, but the acclimatization itself is not. The reproductive rate declines, but deaths from some tropical diseases also go down. Over a longer period of time, some people are better able to reproduce at high altitudes than others. They contribute more heavily to later generations, and gradually by natural selection the whole population becomes adapted to the new conditions. This has demonstrably occurred, as the observed performance of long-term communities at higher altitude is significantly better than the performance of new arrivals, even when the new arrivals have had time to acclimatize.
Adaptedness and fitness
There is a relationship between adaptedness and the concept of fitness used in population genetics. Differences in fitness between genotypes predict the rate of evolution by natural selection. Natural selection changes the relative frequencies of alternative phenotypes, insofar as they are heritable. However, a phenotype with high adaptedness may not have high fitness. Dobzhansky mentioned the example of the Californian redwood, which is highly adapted, but a relict species in danger of extinction. Elliott Sober commented that adaptation was a retrospective concept since it implied something about the history of a trait, whereas fitness predicts a trait's future.
1. Relative fitness. The average contribution to the next generation by a genotype or a class of genotypes, relative to the contributions of other genotypes in the population. This is also known as Darwinian fitness, selection coefficient, and other terms.
2. Absolute fitness. The absolute contribution to the next generation by a genotype or a class of genotypes. Also known as the Malthusian parameter when applied to the population as a whole.
3. Adaptedness. The extent to which a phenotype fits its local ecological niche. Researchers can sometimes test this through a reciprocal transplant.
Sewall Wright proposed that populations occupy adaptive peaks on a fitness landscape. To evolve to another, higher peak, a population would first have to pass through a valley of maladaptive intermediate stages, and might be "trapped" on a peak that is not optimally adapted.
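Wright's picture can be illustrated with a toy simulation. The sketch below is illustrative only: the two-peak landscape, population size, mutation step, and generation count are all invented for the example. A population started in the basin of the lower peak climbs it under fitness-proportional selection and, because intermediate trait values have low fitness, typically remains trapped there rather than crossing the valley to the higher peak.

import math
import random

random.seed(1)

def fitness(x):
    # invented two-peak landscape: lower peak at x = 1, higher peak at x = 4
    return math.exp(-(x - 1.0) ** 2) + 2.0 * math.exp(-(x - 4.0) ** 2)

# population starts in the basin of attraction of the lower peak
population = [random.gauss(0.5, 0.2) for _ in range(200)]
for _ in range(300):
    weights = [fitness(x) for x in population]          # selection weights
    population = random.choices(population, weights=weights, k=len(population))
    population = [x + random.gauss(0.0, 0.05) for x in population]  # small mutations

mean_trait = sum(population) / len(population)
print(f"mean trait: {mean_trait:.2f}")  # typically stays near 1, the lower peak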
Types
Changes in habitat
Before Darwin, adaptation was seen as a fixed relationship between an organism and its habitat. It was not appreciated that as the climate changed, so did the habitat; and as the habitat changed, so did the biota. Also, habitats are subject to changes in their biota: for example, invasions of species from other areas. The relative numbers of species in a given habitat are always changing. Change is the rule, though much depends on the speed and degree of the change.
When the habitat changes, three main things may happen to a resident population: habitat tracking, genetic change or extinction. In fact, all three things may occur in sequence. Of these three effects only genetic change brings about adaptation.
When a habitat changes, the resident population typically moves to more suitable places; this is the typical response of flying insects or oceanic organisms, which have wide (though not unlimited) opportunity for movement. This common response is called habitat tracking. It is one explanation put forward for the periods of apparent stasis in the fossil record (the punctuated equilibrium theory).
Genetic change
Without mutation, the ultimate source of all genetic variation, there would be no genetic change and no subsequent adaptation through evolution by natural selection. Genetic change occurs in a population when mutation introduces new variation, and random genetic drift, migration, recombination, or natural selection then increase or decrease its frequency. One example is that the first pathways of enzyme-based metabolism at the very origin of life on Earth may have been co-opted components of the already-existing purine nucleotide metabolism, a metabolic pathway that evolved in an ancient RNA world. The co-option requires new mutations; through natural selection, the population then adapts genetically to its present circumstances. Genetic changes may produce entirely new structures or gradual changes to visible ones, or they may adjust physiological activity in a way that suits the habitat. The varying shapes of the beaks of Darwin's finches, for example, are driven by adaptive mutations in the ALX1 gene. The coat colors of different wild mouse species match their environments, whether black lava or light sand, owing to adaptive mutations in the melanocortin 1 receptor and other melanin pathway genes. Physiological resistance to the heart poisons (cardiac glycosides) that monarch butterflies store in their bodies to protect themselves from predators is driven by adaptive mutations in the target of the poison, the sodium pump, resulting in target-site insensitivity. These same adaptive mutations, and similar changes at the same amino acid sites, were found to have evolved in parallel in distantly related insects that feed on the same plants, and even in a bird that feeds on monarchs, through convergent evolution, a hallmark of adaptation. Convergence at the gene level across distantly related species can arise because of evolutionary constraint.
Habitats and biota do frequently change over time and space. Therefore, it follows that the process of adaptation is never fully complete. Over time, it may happen that the environment changes little, and the species comes to fit its surroundings better and better, resulting in stabilizing selection. On the other hand, it may happen that changes in the environment occur suddenly, and then the species becomes less and less well adapted. The only way for it to climb back up that fitness peak is via the introduction of new genetic variation for natural selection to act upon. Seen like this, adaptation is a genetic tracking process, which goes on all the time to some extent, but especially when the population cannot or does not move to another, less hostile area. Given enough genetic change, as well as specific demographic conditions, an adaptation may be enough to bring a population back from the brink of extinction in a process called evolutionary rescue. Adaptation does affect, to some extent, every species in a particular ecosystem.
Leigh Van Valen thought that even in a stable environment, because of antagonistic species interactions and limited resources, a species would constantly have to adapt to maintain its relative standing. This became known as the Red Queen hypothesis, as seen in host-parasite interactions.
Existing genetic variation and mutation were the traditional sources of material on which natural selection could act. In addition, horizontal gene transfer is possible between organisms in different species, using mechanisms as varied as gene cassettes, plasmids, transposons and viruses such as bacteriophages.
Co-adaptation
In coevolution, where the existence of one species is tightly bound up with the life of another species, new or 'improved' adaptations which occur in one species are often followed by the appearance and spread of corresponding features in the other species. In other words, each species triggers reciprocal natural selection in the other. These co-adaptational relationships are intrinsically dynamic, and may continue on a trajectory for millions of years, as has occurred in the relationship between flowering plants and pollinating insects.
Mimicry
Bates' work on Amazonian butterflies led him to develop the first scientific account of mimicry, especially the kind of mimicry which bears his name: Batesian mimicry. This is the mimicry by a palatable species of an unpalatable or noxious species (the model), gaining a selective advantage as predators avoid the model and therefore also the mimic. Mimicry is thus an anti-predator adaptation. A common example seen in temperate gardens is the hoverfly (Syrphidae), many of which—though bearing no sting—mimic the warning coloration of aculeate Hymenoptera (wasps and bees). Such mimicry does not need to be perfect to improve the survival of the palatable species.
Bates, Wallace and Fritz Müller believed that Batesian and Müllerian mimicry provided evidence for the action of natural selection, a view which is now standard amongst biologists.
Trade-offs
All adaptations have a downside: a horse's legs are efficient for running on grass, but cannot be used to scratch its back; mammalian hair helps regulate temperature, but offers a niche for ectoparasites; the only flying penguins do is under water. Adaptations serving different functions may be mutually destructive. Compromise and makeshift occur widely, not perfection. Selection pressures pull in different directions, and the adaptation that results is some kind of compromise.
Examples
Consider the antlers of the Irish elk, often supposed to be far too large (in deer, antler size has an allometric relationship to body size). Antlers serve positively for defence against predators, and to score victories in the annual rut, but they are costly in terms of resources. Their size during the last glacial period presumably depended on the relative gain and loss of reproductive capacity in the population of elks during that time. As another example, camouflage to avoid detection is destroyed when vivid coloration is displayed at mating time. Here the risk to life is counterbalanced by the necessity for reproduction.
Stream-dwelling salamanders, such as the Caucasian salamander or the gold-striped salamander, have very slender, elongated bodies, well adapted to life on the banks of fast, small rivers and mountain brooks. The elongated body protects their larvae from being washed away by the current. However, it also increases the risk of desiccation, decreases the salamanders' dispersal ability, and negatively affects their fecundity. As a result, the fire salamander, less perfectly adapted to mountain brook habitats, is in general more successful, with a higher fecundity and a broader geographic range.
The peacock's ornamental train (grown anew in time for each mating season) is a famous adaptation. It must reduce his maneuverability and flight, and is hugely conspicuous; also, its growth costs food resources. Darwin's explanation of its advantage was in terms of sexual selection: "This depends on the advantage which certain individuals have over other individuals of the same sex and species, in exclusive relation to reproduction." The kind of sexual selection represented by the peacock is called 'mate choice,' with an implication that the process selects the more fit over the less fit, and so has survival value. The recognition of sexual selection was for a long time in abeyance, but has been rehabilitated.
The conflict between the size of the human foetal brain at birth (which cannot be larger than about 400 cm3, or else it will not pass through the mother's pelvis) and the size needed for an adult brain (about 1400 cm3) means the brain of a newborn child is quite immature. The most vital things in human life (locomotion, speech) simply have to wait while the brain grows and matures. That is the result of the birth compromise. Much of the problem comes from our upright bipedal stance, without which our pelvis could be shaped more suitably for birth. Neanderthals had a similar problem.
As another example, the long neck of a giraffe brings benefits but at a cost. A giraffe's neck can reach roughly 2 m in length. The benefits are that it can be used for inter-species competition or for foraging on tall trees where shorter herbivores cannot reach. The cost is that a long neck is heavy and adds to the animal's body mass, requiring additional energy to build the neck and to carry its weight around.
Shifts in function
Pre-adaptation
Pre-adaptation occurs when a population has characteristics which by chance are suited for a set of conditions not previously experienced. For example, the polyploid cordgrass Spartina townsendii is better adapted than either of its parent species to their own habitat of saline marsh and mud-flats. Among domestic animals, the White Leghorn chicken is markedly more resistant to vitamin B1 deficiency than other breeds; on a plentiful diet this makes no difference, but on a restricted diet this preadaptation could be decisive.
Pre-adaptation may arise because a natural population carries a huge quantity of genetic variability. In diploid eukaryotes, this is a consequence of the system of sexual reproduction, where mutant alleles get partially shielded, for example, by genetic dominance. Microorganisms, with their huge populations, also carry a great deal of genetic variability. The first experimental evidence of the pre-adaptive nature of genetic variants in microorganisms was provided by Salvador Luria and Max Delbrück who developed the Fluctuation Test, a method to show the random fluctuation of pre-existing genetic changes that conferred resistance to bacteriophages in Escherichia coli. The word is controversial because it is teleological and the entire concept of natural selection depends on the presence of genetic variation, regardless of the population size of a species in question.
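The statistical logic behind the Fluctuation Test can be sketched in a short simulation. The following is an illustration only; the culture count, generation number, and mutation rate are invented toy parameters, and growth is idealized as deterministic doubling. If resistance mutations pre-exist exposure, a mutation arising early in growth produces a "jackpot" culture full of resistant descendants, so mutant counts across cultures show a variance far above the mean; if resistance were instead induced at exposure, counts would be near-Poisson, with variance close to the mean.

import numpy as np

rng = np.random.default_rng(42)
n_cultures, generations, mu = 1000, 20, 2e-7  # invented toy parameters

# Hypothesis 1: mutations arise at random during growth; a mutant appearing
# in generation t leaves 2**(generations - t) resistant descendants at plating.
pre_existing = np.zeros(n_cultures, dtype=np.int64)
for t in range(generations):
    new_mutants = rng.binomial(2 ** t, mu, size=n_cultures)
    pre_existing += new_mutants * 2 ** (generations - t)

# Hypothesis 2: resistance is induced only at exposure, with the same
# expected number of mutants per culture (mu * generations per cell).
induced = rng.binomial(2 ** generations, mu * generations, size=n_cultures)

for name, counts in (("pre-existing", pre_existing), ("induced", induced)):
    print(name, "variance/mean =", round(counts.var() / counts.mean(), 1))
# pre-existing: variance/mean far above 1 (jackpots); induced: close to 1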
Co-option of existing traits: exaptation
Features that now appear as adaptations sometimes arose by co-option of existing traits, evolved for some other purpose. The classic example is the ear ossicles of mammals, which we know from paleontological and embryological evidence originated in the upper and lower jaws and the hyoid bone of their synapsid ancestors, and further back still were part of the gill arches of early fish. The word exaptation was coined to cover these common evolutionary shifts in function. The flight feathers of birds evolved from the much earlier feathers of dinosaurs, which might have been used for insulation or for display.
Niche construction
Animals including earthworms, beavers and humans use some of their adaptations to modify their surroundings, so as to maximize their chances of surviving and reproducing. Beavers create dams and lodges, changing the ecosystems of the valleys around them. Earthworms, as Darwin noted, improve the topsoil in which they live by incorporating organic matter. Humans have constructed extensive civilizations with cities in environments as varied as the Arctic and hot deserts.
In all three cases, the construction and maintenance of ecological niches helps drive the continued selection of the genes of these animals, in an environment that the animals have modified.
Non-adaptive traits
Some traits do not appear to be adaptive as they have a neutral or deleterious effect on fitness in the current environment. Because genes often have pleiotropic effects, not all traits may be functional: they may be what Stephen Jay Gould and Richard Lewontin called spandrels, features brought about by neighbouring adaptations, on the analogy with the often highly decorated triangular areas between pairs of arches in architecture, which began as functionless features.
Another possibility is that a trait may have been adaptive at some point in an organism's evolutionary history, but a change in habitats caused what used to be an adaptation to become unnecessary or even maladapted. Such adaptations are termed vestigial. Many organisms have vestigial organs, which are the remnants of fully functional structures in their ancestors. As a result of changes in lifestyle the organs became redundant, and are either not functional or reduced in functionality. Since any structure represents some kind of cost to the general economy of the body, an advantage may accrue from their elimination once they are not functional. Examples: wisdom teeth in humans; the loss of pigment and functional eyes in cave fauna; the loss of structure in endoparasites.
Extinction and coextinction
If a population cannot move or change sufficiently to preserve its long-term viability, then it will become extinct, at least in that locale. The species may or may not survive in other locales. Species extinction occurs when the death rate over the entire species exceeds the birth rate for a long enough period for the species to disappear. It was an observation of Van Valen that groups of species tend to have a characteristic and fairly regular rate of extinction.
Just as there is co-adaptation, there is also coextinction, the loss of a species due to the extinction of another with which it is coadapted, as with the extinction of a parasitic insect following the loss of its host, or when a flowering plant loses its pollinator, or when a food chain is disrupted.
Origin of adaptive capacities
The first stage in the evolution of life on earth is often hypothesized to be the RNA world in which short self-replicating RNA molecules proliferated before the evolution of DNA and proteins. By this hypothesis, life started when RNA chains began to self-replicate, initiating the three mechanisms of Darwinian selection: heritability, variation of type, and competition for resources. The fitness of an RNA replicator (its per capita rate of increase) would likely have been a function of its intrinsic adaptive capacities, determined by its nucleotide sequence, and the availability of resources. The three primary adaptive capacities may have been: (1) replication with moderate fidelity, giving rise to heritability while allowing variation of type, (2) resistance to decay, and (3) acquisition of resources. These adaptive capacities would have been determined by the folded configurations of the RNA replicators resulting from their nucleotide sequences.
Philosophical issues
Adaptation raises philosophical issues concerning how biologists speak of function and purpose, as this carries implications of evolutionary history – that a feature evolved by natural selection for a specific reason – and potentially of supernatural intervention – that features and organisms exist because of a deity's conscious intentions. In his biology, Aristotle introduced teleology to describe the adaptedness of organisms, but without accepting the supernatural intention built into Plato's thinking, which Aristotle rejected. Modern biologists continue to face the same difficulty. On the one hand, adaptation is purposeful: natural selection chooses what works and eliminates what does not. On the other hand, biologists by and large reject conscious purpose in evolution. The dilemma gave rise to a famous joke by the evolutionary biologist Haldane: "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public." David Hull commented that Haldane's mistress "has become a lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of teleological language; they flaunt it." Ernst Mayr stated that "adaptedness... is an a posteriori result rather than an a priori goal-seeking", meaning that the question of whether something is an adaptation can only be determined after the event.
| Biology and health sciences | Evolution | null |
943374 | https://en.wikipedia.org/wiki/Lagoon%20Nebula | Lagoon Nebula | The Lagoon Nebula (catalogued as Messier 8 or M8, NGC 6523, Sharpless 25, RCW 146, and Gum 72) is a giant interstellar cloud in the constellation Sagittarius. It is classified as an emission nebula and has an H II region.
The Lagoon Nebula was discovered by Giovanni Hodierna before 1654 and is one of only two star-forming nebulae faintly visible to the eye from mid-northern latitudes. Seen with binoculars, it appears as a distinct cloud-like patch with a definite core. Within the nebula is the open cluster NGC 6530.
Characteristics
The Lagoon Nebula is estimated to be between 4,000 and 6,000 light-years from Earth. In Earth's sky, it spans 90' by 40', which translates to actual dimensions of about 110 by 50 light-years. Like many nebulae, it appears pink in time-exposure color photos but is gray to the eye peering through binoculars or a telescope, human vision having poor color sensitivity at low light levels. The nebula contains a number of Bok globules (dark, collapsing clouds of protostellar material), the most prominent of which have been catalogued by E. E. Barnard as B88, B89 and B296. It also includes a funnel-like or tornado-like structure caused by a hot O-type star that emanates ultraviolet light, heating and ionizing gases on the surface of the nebula. The Lagoon Nebula also contains at its centre a structure known as the Hourglass Nebula (so named by John Herschel), which should not be confused with the better-known Engraved Hourglass Nebula in the constellation of Musca. In 2006, four Herbig–Haro objects were detected within the Hourglass, providing direct evidence of active star formation by accretion within it.
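The angular-to-physical conversion follows the small-angle approximation, size ≈ distance × angle (in radians). A minimal check in code, assuming a distance of about 4,200 light-years, a value within the quoted range that reproduces the stated dimensions:

import math

distance_ly = 4200               # assumed; within the 4,000-6,000 ly range
for arcmin, axis in ((90, "long axis"), (40, "short axis")):
    angle_rad = math.radians(arcmin / 60.0)   # arcminutes -> degrees -> radians
    print(f"{axis}: {distance_ly * angle_rad:.0f} light-years")
# long axis ~110 ly, short axis ~49 ly, matching the quoted 110 by 50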
| Physical sciences | Notable nebulae | Astronomy |
944428 | https://en.wikipedia.org/wiki/Nymphaea | Nymphaea | Nymphaea () is a genus of hardy and tender aquatic plants in the family Nymphaeaceae. The genus has a cosmopolitan distribution. Many species are cultivated as ornamental plants, and many cultivars have been bred. Some taxa occur as introduced species where they are not native, and some are weeds. Plants of the genus are known commonly as water lilies, or waterlilies in the United Kingdom. The genus name is from the Greek νυμφαία, nymphaia and the Latin nymphaea, which means "water lily" and were inspired by the nymphs of Greek and Latin mythology.
Description
Vegetative characteristics
Water lilies are aquatic, rhizomatous or tuberous, perennial or annual herbs with sometimes desiccation-tolerant, branched or unbranched rhizomes, which can be stoloniferous, or lacking stolons. The tuberous or fibrous roots are contractile. The leaves are mostly floating, but submerged and emergent leaves occur as well. The shape of the lamina can be ovate, orbicular, elliptic, hastate, or sagittate. The width of the lamina ranges in size from 2.5–3 cm to 40–60 cm. The lamina has a deep sinus and the basal lobes can be overlapping or divergent. The margin of the lamina can be entire, dentate, or sinuate. The leaves can be stipulate, or exstipulate. The petioles are a few centimetres to 5–6 m long, and 0.3–1.9 cm wide.
Generative characteristics
The flowers are emergent, floating, or rarely submerged. The diurnal or nocturnal, chasmogamous or rarely cleistogamous, solitary, hermaphrodite, entomophilous, fragrant or inodorous flowers are mostly protogynous. The flowers have (3–)4(–5) green, sometimes spotted sepals, and about 6–50 lanceolate to spathulate, variously coloured petals, which often grade gradually into the shape of the stamens. The gap between petals and stamens can be present or absent. The androecium consists of 20–750 stamens. The stamens can be petaloid or not petal-like. The gynoecium consists of 5–35 carpels. The carpels usually possess a sterile appendage. The globose, fleshy, spongy, irregularly dehiscent fruit, borne on a terete, glabrous or pubescent, curved or coiled peduncle, bears arillate, globose to elliptic, hairy or glabrous seeds with a smooth surface or longitudinal ridges. Proliferating pseudanthia or tuberous flowers (i.e., sterile, branching, proliferating floral structures for vegetative propagation) can be present or absent.
Cytology
Various ploidy levels have been observed in Nymphaea: 2x, 3x, 4x, 6x, 8x, and 16x. The chromosome count ranges from 28 (in diploids) to 224 (in 16-ploids), consistent with a base chromosome number of x = 14.
Taxonomy
The genus Nymphaea L. was described by Carl Linnaeus in 1753. It has three synonyms: Castalia Salisb. published by Richard Anthony Salisbury in 1805, Leuconymphaea Kuntze published by Otto Kuntze in 1891, and Ondinea Hartog published by Cornelis den Hartog in 1970. The type species is Nymphaea alba L.
Subgenera
The genus Nymphaea has been divided into several subgenera:
Nymphaea subg. Anecphya (Casp.) Conard
Nymphaea subg. Brachyceras (Casp.) Conard
Nymphaea subg. Confluentes S.W.L.Jacobs
Nymphaea subg. Hydrocallis (Planch.) Conard
Nymphaea subg. Lotos (DC.) Conard
Nymphaea subg. Nymphaea (autonym)
Sections
The subgenus Nymphaea subg. Nymphaea has been divided into sections:
Nymphaea sect. Chamaenymphaea (Planch.) Wiersema
Nymphaea sect. Nymphaea (autonym)
Nymphaea sect. Xanthantha (Casp.) Wiersema
Species
As of January 2024, Plants of the World Online accepts 65 species:
Nymphaea abhayana
Nymphaea alba
Nymphaea alexii
Nymphaea amazonum
Nymphaea ampla
Nymphaea atrans
Nymphaea belophylla
Nymphaea × borealis
Nymphaea caatingae
Nymphaea candida
Nymphaea carpentariae
Nymphaea conardii
Nymphaea × daubenyana
Nymphaea dimorpha
Nymphaea divaricata
Nymphaea elegans
Nymphaea elleniae
Nymphaea francae
Nymphaea gardneriana
Nymphaea georginae
Nymphaea gigantea
Nymphaea glandulifera
Nymphaea gracilis
Nymphaea guineensis
Nymphaea harleyi
Nymphaea hastifolia
Nymphaea heudelotii
Nymphaea immutabilis
Nymphaea jacobsii
Nymphaea jamesoniana
Nymphaea kakaduensis
Nymphaea kimberleyensis
Nymphaea lasiophylla
Nymphaea leibergii
Nymphaea lingulata
Nymphaea loriana
Nymphaea lotus
Nymphaea lukei
Nymphaea macrosperma
Nymphaea maculata
Nymphaea manipurensis
Nymphaea mexicana
Nymphaea micrantha
Nymphaea noelae
Nymphaea nouchali
Nymphaea novogranatensis
Nymphaea odorata
Nymphaea ondinea
Nymphaea oxypetala
Nymphaea paganuccii
Nymphaea pedersenii
Nymphaea potamophila
Nymphaea prolifera
Nymphaea pubescens
Nymphaea pulchella
Nymphaea rapinii
Nymphaea rubra
Nymphaea rudgeana
Nymphaea siamensis
Nymphaea stuhlmannii
Nymphaea sulphurea
Nymphaea × sundvikii
Nymphaea tenuinervia
Nymphaea tetragona
Nymphaea thermarum
Nymphaea × thiona
Nymphaea vanildae
Nymphaea vaporalis
Nymphaea violacea
Fossil species
†Nymphaea brongniartii (Caspary) Saporta
Evolutionary relationships
The genus Nymphaea may be paraphyletic in its current circumscription, as the genera Euryale and Victoria have been placed within the genus Nymphaea in several studies.
Ecology
Habitat
Nymphaea occurs in freshwater, as well as brackish water habitats.
Pollination
Flowers of Nymphaea subg. Hydrocallis are pollinated by Cyclocephala beetles. Likewise, beetle pollination by Ruteloryctes morio, a member of the same Cyclocephalini tribe, has been reported in Nymphaea subg. Lotos. The subgenera Nymphaea subg. Anecphya and Nymphaea subg. Brachyceras are pollinated by bees and flies. The subgenus Nymphaea subg. Nymphaea is pollinated by bees, flies and beetles.
Herbivory
Many birds feed on seeds and fruits of Nymphaea.
Invasive species
Outside of its natural habitat, Nymphaea mexicana and hybrids thereof have become invasive weeds. It has been proposed to employ the weevil species Bagous longulus as a biocontrol agent against Nymphaea mexicana in South Africa. Invasive horticultural hybrids can pose a threat to Nymphaea species through introgressive hybridisation. The naturalised hybrids can displace native species and mask their disappearance, as it can be difficult to distinguish between species and naturalised hybrids.
Conservation
Several species are in danger of extinction. Nymphaea thermarum is classified as critically endangered (CR), Nymphaea loriana is classified as endangered (EN), Nymphaea stuhlmannii is classified as endangered (EN), and Nymphaea nouchali var. mutandaensis is also classified as endangered (EN).
Use
Horticulture
Water lilies are not only decorative, but also provide useful shade which helps reduce the growth of algae in ponds and lakes. Many of the water lilies familiar in water gardening are hybrids and cultivars. These cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Escarboucle' (orange-red)
'Gladstoniana' (double white flowers with prominent yellow stamens)
'Gonnère' (double white scented flowers)
'James Brydon' (cupped rose-red flowers)
'Marliacea Chromatella' (pale yellow flowers)
'Pygmaea Helvola' (miniature, with cupped fragrant yellow flowers)
Food
All water lilies are poisonous and contain an alkaloid called nupharin in almost all of their parts.
In India, it has mostly been eaten as a famine food or as a medicinal (both cooked).
In Sri Lanka it was formerly eaten as a type of medicine and its price was too high to serve as a normal meal, but in the 1940s or earlier some villagers began to grow water lilies in the paddy fields left uncultivated during the monsoon season (Yala season), and the price dropped. The tubers are called manel here and eaten boiled and in curries.
In West Africa, usage varied between cultures. In Upper Guinea the rhizomes were considered only famine foods; the tubers were either roasted in ashes, or dried and ground into a flour. The Buduma people ate the seeds and rhizomes. Some tribes ate the rhizomes raw.
The Hausa people of Ghana and Nigeria and the people of Southern Sudan used the tubers of Nymphaea lotus; the seeds (inside the tubers), locally referred to as 'gunsi' in Ghana, are ground into flour.
The plants were also said to be eaten in the Philippines. In the 1950s there were no records of leaves or flowers being eaten.
In a North American species, the boiled young leaves and unopened flower buds are said to be edible. The seeds, high in starch, protein, and oil, may be popped, parched, or ground into flour. Potato-like tubers can be collected from the species N. tuberosa (=N. odorata).
Water lilies were said to have been a major food source for a certain tribe of indigenous Australians in 1930, with the flowers and stems eaten raw, while the "roots and seedpods" were cooked either on an open fire or in a ground oven.
Other uses
Tannins extracted from rhizomes are used in dyeing wool a purple-black or brown colour. The peduncles are used as pipes to smoke tobacco.
Culture
The Ancient Egyptians used the water lilies of the Nile as cultural symbols. Since 1580 it has been popular in the English language to apply the Latin word lotus, originally used to designate a tree, to the water lilies growing in Egypt, and much later the word was used to translate words in Indian texts. The lotus motif is a frequent feature of temple column architecture.
In Egypt, the lotus, rising from the bottom mud to unfold its petals to the sun, suggested the glory of the sun's own emergence from the primaeval slime. It was a metaphor of creation. It was a symbol of the fertility gods and goddesses as well as a symbol of the upper Nile as the giver of life.
A Roman belief existed that drinking a liquid of crushed Nymphaea in vinegar for 10 consecutive days turned a boy into a eunuch.
A Syrian terra-cotta plaque from the 14th–13th centuries BC shows the goddess Asherah holding two lotus blossoms. An ivory panel from the 9th-8th centuries BC shows the god Horus seated on a lotus blossom, flanked by two cherubs.
The French Impressionist painter Claude Monet is known for his many paintings of water lilies in the pond in his garden at Giverny.
N. nouchali is the national flower of Bangladesh and Sri Lanka.
Water lilies are also used as ritual narcotics. According to one source, this topic "was the subject of a lecture by William Emboden given at Nash Hall of the Harvard Botanical Museum on the morning of April 6, 1979".
| Biology and health sciences | Nymphaeales | Plants |
944638 | https://en.wikipedia.org/wiki/Earth%27s%20energy%20budget | Earth's energy budget | Earth's energy budget (or Earth's energy balance) is the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also takes into account how energy moves through the climate system. The Sun heats the equatorial tropics more than the polar regions. Therefore, the amount of solar irradiance received by a certain region is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things. The result is Earth's climate.
Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, surface albedo, clouds, and land use patterns. When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be relatively stable. Global warming occurs when earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater.
Multiple types of measurements and observations show a warming imbalance since at least 1970. The rate of heating from this human-caused forcing is without precedent. The main origin of changes in the Earth's energy budget is human-induced changes in the composition of the atmosphere. During 2005 to 2019 the Earth's energy imbalance (EEI) averaged about 460 TW, or globally 0.90 ± 0.15 W per m2.
It takes time for any changes in the energy budget to result in any significant changes in the global surface temperature. This is due to the thermal inertia of the oceans, land and cryosphere. Most climate models make accurate calculations of this inertia, energy flows and storage amounts.
Definition
Earth's energy budget includes the "major energy flows of relevance for the climate system". These are "the top-of-atmosphere energy budget; the surface energy budget; changes in the global energy inventory and internal flows of energy within the climate system".
Earth's energy flows
In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).
The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere, amounting to about 460 TW, or globally 0.90 W per m2.
Incoming solar energy (shortwave radiation)
The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth that intercepts the radiation. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant, approximately 340 watts per square meter (W/m2). Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements.
Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere, and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A):
ASR = (1 − A) × 340 W/m2 ≈ 240 W/m2.
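A quick numerical check of these figures, assuming the commonly cited solar-constant value of about 1361 W/m2 (the variable names are illustrative):

S0 = 1361.0                      # assumed solar constant, W/m2
toa_average = S0 / 4             # sphere area is 4x the intercepting disc area
reflected = 77.0 + 23.0          # clouds/atmosphere plus surface, W/m2
bond_albedo = reflected / toa_average
asr = (1 - bond_albedo) * toa_average
print(f"mean TOA flux = {toa_average:.0f} W/m2")  # ~340
print(f"Bond albedo = {bond_albedo:.2f}")         # ~0.29
print(f"ASR = {asr:.0f} W/m2")                    # ~240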
Outgoing longwave radiation
Thermal energy leaves the planet in the form of outgoing longwave radiation (OLR). Longwave radiation is electromagnetic thermal radiation emitted by Earth's surface and atmosphere. Longwave radiation is in the infrared band, but the two terms are not synonymous, as infrared radiation can be either shortwave or longwave. Sunlight contains significant amounts of shortwave infrared radiation. A threshold wavelength of 4 microns is sometimes used to distinguish longwave from shortwave radiation.
Generally, absorbed solar energy is converted to different forms of heat energy. Some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the "atmospheric window"; this radiation is able to pass through the atmosphere unimpeded and directly escape to space, contributing to OLR. The remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms, until the atmosphere emits that energy as thermal energy which is able to escape to space, again contributing to OLR. For example, heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes, as well as via radiative heat transport. Ultimately, all outgoing energy is radiated into space in the form of longwave radiation.
The transport of longwave radiation from Earth's surface through its multi-layered atmosphere is governed by radiative transfer equations such as Schwarzschild's equation for radiative transfer (or more complex equations if scattering is present) and obeys Kirchhoff's law of thermal radiation.
A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts = 288 K) and at the middle of the troposphere (Ta = 242 K) that are close to observed average values:
OLR = εσTa^4 + (1 − ε)σTs^4.
In this expression σ is the Stefan–Boltzmann constant and ε represents the emissivity of the atmosphere, which is less than 1 because the atmosphere does not emit within the wavelength range known as the atmospheric window.
Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an effective emissivity value of about ε ≈ 0.78. The strong (fourth-power) temperature sensitivity maintains a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.
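A minimal evaluation of the one-layer expression with the values above (a sketch, using the approximate effective emissivity just discussed):

SIGMA = 5.670374419e-8           # Stefan-Boltzmann constant, W/(m2 K4)
Ts, Ta = 288.0, 242.0            # surface and mid-troposphere temperatures, K
eps = 0.78                       # approximate effective atmospheric emissivity
olr = eps * SIGMA * Ta ** 4 + (1 - eps) * SIGMA * Ts ** 4
print(f"OLR = {olr:.0f} W/m2")   # ~238 W/m2, close to the ~240 W/m2 of ASR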
As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity (ε). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an enhanced greenhouse effect) forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the absorbed solar radiation equals the outgoing longwave radiation, or ASR equals OLR.
Earth's internal heat sources and other minor effects
The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW) and split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the 340 W/m2 of incoming solar radiation.
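The flux arithmetic can be checked directly; the sketch below assumes a standard Earth surface area of about 5.1 × 10^14 m2, and the small difference from the quoted 0.087 W/m2 reflects rounding in the source estimates:

EARTH_SURFACE_M2 = 5.1e14        # assumed standard value, ~4*pi*(6.371e6 m)**2
geothermal_watts = 47e12         # 47 TW from Earth's interior
flux = geothermal_watts / EARTH_SURFACE_M2
print(f"geothermal flux ~ {flux:.3f} W/m2")      # ~0.092 W/m2
print(f"fraction of solar ~ {flux / 340:.4%}")   # ~0.027%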
Human production of energy is even lower at an average 18 TW, corresponding to an estimated 160,000 TW-hr, for all of year 2019. However, consumption is growing rapidly and energy production with fossil fuels also produces an increase in atmospheric greenhouse gases, leading to a more than 20 times larger imbalance in the incoming/outgoing flows that originate from solar radiation.
Photosynthesis also has a significant effect: An estimated 140 TW (or around 0.08%) of incident energy gets captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel.
Other minor sources of energy are usually ignored in these calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun, and the thermal radiation of space. Earlier, Joseph Fourier had claimed that radiation from deep space was significant, in a paper often cited as the first on the greenhouse effect.
Budget analysis
In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation:
ASR = OLR.
Internal flow analysis
To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (= 340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example, a fraction called the albedo of Earth, are directly reflected back to space: 27 from the top of clouds, 2 from snow and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR = 220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface.
The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface.
Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun in order to maintain a net-zero gain of energy by Earth.
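This unit bookkeeping can be verified with a few lines of code; the dictionary names below are illustrative only:

# flows in "units" of 1% of top-of-atmosphere insolation (100 units = 340 W/m2)
reflected = {"cloud tops": 27, "snow and ice": 2, "rest of atmosphere": 6}
absorbed = {"atmosphere": 14, "surface": 51}
surface_out = {"radiated to space": 17, "latent heat": 19,
               "convection/turbulence": 9, "infrared to greenhouse gases": 6}

assert sum(reflected.values()) + sum(absorbed.values()) == 100
to_atmosphere = sum(surface_out.values()) - surface_out["radiated to space"]  # 34
atmosphere_total = absorbed["atmosphere"] + to_atmosphere                     # 48
olr = surface_out["radiated to space"] + atmosphere_total                     # 65
print(f"OLR = {olr} units, ASR = {sum(absorbed.values())} units")  # both 65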
Heat storage reservoirs
Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a slow response to shifts in the atmospheric radiation balance.
The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere. Like atmospheric gases, fluidic ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling.
Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (=160,000 TW-hr) of total primary energy consumed by humans by a factor of at least 20.
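The unit conversions behind these figures are simple arithmetic; the sketch below assumes a heating rate of about 450 TW, a value between the quoted averages that reproduces the stated annual gain:

SECONDS_PER_YEAR = 3.156e7
heating_watts = 450e12            # assumed ~450 TW, consistent with quoted figures
annual_joules = heating_watts * SECONDS_PER_YEAR
print(f"annual heat gain ~ {annual_joules / 1e21:.1f} ZJ")   # ~14 ZJ
human_use_joules = 570e18         # ~570 EJ of primary energy consumed by humans
print(f"ratio ~ {annual_joules / human_use_joules:.0f}x")    # ~25, at least 20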
Heating/cooling rate analysis
Generally speaking, changes to Earth's energy flux balance can be thought of as the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability. Such changes are primarily expressed as observable shifts in temperature (T), clouds (C), water vapor (W), aerosols (A), trace greenhouse gases (G), land/ocean/ice surface reflectance (S), and as minor shifts in insolation (I), among other possible factors. Earth's heating/cooling rate can then be analyzed over selected timeframes (Δt) as the net change in energy (ΔE) associated with these attributes:
ΔE = ΔET + ΔEC + ΔEW + ΔEA + ΔEG + ΔES + ΔEI.
Here the term ΔET, corresponding to the Planck response, is negative-valued when temperature rises due to its strong direct influence on OLR.
The recent increase in trace greenhouse gases produces an enhanced greenhouse effect, and thus a positive ΔEG forcing term. By contrast, a large volcanic eruption (e.g. Mount Pinatubo 1991, El Chichón 1982) can inject sulfur-containing compounds into the upper atmosphere. High concentrations of stratospheric sulfur aerosols may persist for up to a few years, yielding a negative forcing contribution to ΔEA. Various other types of anthropogenic aerosol emissions make both positive and negative contributions to ΔEA. Solar cycles produce ΔEI smaller in magnitude than those of recent ΔEG trends from human activity.
Climate forcings are complex since they can produce direct and indirect feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. These often follow the temperature response. Water vapor acts as a positive feedback with respect to temperature changes, owing to evaporation shifts and the Clausius-Clapeyron relation. An increase in water vapor results in a positive ΔEW due to further enhancement of the greenhouse effect. A slower positive feedback is the ice-albedo feedback. For example, the loss of Arctic ice due to rising temperatures makes the region less reflective, leading to greater absorption of energy and even faster ice melt rates, and thus a positive influence on ΔES. Collectively, feedbacks tend to amplify global warming or cooling.
Clouds are responsible for about half of Earth's albedo and are powerful expressions of internal variability of the climate system. They may also act as feedbacks to forcings, and could be forcings themselves if for example a result of cloud seeding activity. Contributions to ΔEC vary regionally and depending upon cloud type. Measurements from satellites are gathered in concert with simulations from models in an effort to improve understanding and reduce uncertainty.
Earth's energy imbalance (EEI)
The Earth's energy imbalance (EEI) is defined as "the persistent and positive (downward) net top of atmosphere energy flux associated with greenhouse gas forcing of the climate system".
If Earth's incoming energy flux (ASR) is larger or smaller than the outgoing energy flux (OLR), then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation:
EEI = ASR − OLR.
Positive EEI thus defines the overall rate of planetary heating and is typically expressed as watts per square meter (W/m2). During 2005 to 2019 the Earth's energy imbalance averaged about 460 TW or globally 0.90 ± 0.15 W per m2.
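The consistency of the two figures just quoted can be checked directly: multiplying the globally averaged flux by Earth's total surface area (about 5.1 × 10^14 m2) recovers the total power. A minimal sketch of the conversion:

    EARTH_SURFACE_AREA_M2 = 5.101e14  # total surface area of Earth

    def eei_flux_to_power_tw(flux_w_per_m2):
        # Convert a globally averaged imbalance (W/m2) to total power (TW).
        return flux_w_per_m2 * EARTH_SURFACE_AREA_M2 / 1e12

    print(f"{eei_flux_to_power_tw(0.90):.0f} TW")  # ~459 TW, matching the ~460 TW above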
When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, the shift is measurable by orbiting satellite-based instruments. Imbalances that fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. Temperature, sea level, ice mass and related shifts thus also provide measures of EEI.
The biggest changes in EEI arise from changes in the composition of the atmosphere through human activities, which interfere with the natural flow of energy through the climate system. The main changes are from increases in carbon dioxide and other greenhouse gases, which produce heating (positive EEI), and from pollution. The latter refers to atmospheric aerosols of various kinds, some of which absorb energy while others reflect energy and produce cooling (or lower EEI).
It is not (yet) possible to measure the absolute magnitude of EEI directly at top of atmosphere, although changes over time as observed by satellite-based instruments are thought to be accurate. The only practical way to estimate the absolute magnitude of EEI is through an inventory of the changes in energy in the climate system. The biggest of these energy reservoirs is the ocean.
Energy inventory assessments
The planetary heat content that resides in the climate system can be compiled given the heat capacity, density and temperature distributions of each of its components. Most regions are now reasonably well sampled and monitored, with the most significant exception being the deep ocean.
Estimates of the absolute magnitude of EEI have likewise been calculated using the measured temperature changes during recent multi-decadal time intervals. The estimated EEI for the 2006 to 2020 period showed a significant increase above the mean for the full 1971 to 2020 period.
EEI has been positive because temperatures have increased almost everywhere for over 50 years. Global surface temperature (GST) is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending back to at least 1880 show that GST has undergone a steady increase of about 0.18 °C per decade since about 1970.
Ocean waters are especially effective absorbents of solar energy and have a far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, after the year 2000, an expanding network of nearly 4000 Argo robotic floats has measured the temperature anomaly, or equivalently the ocean heat content change (ΔOHC). Since at least 1990, OHC has increased at a steady or accelerating rate. ΔOHC represents the largest portion of EEI since oceans have thus far taken up over 90% of the net excess energy entering the system over time (Δt):
ΔOHC ≈ 0.9 × EEI × Δt.
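A minimal sketch of this inventory arithmetic, assuming the ~90% ocean share given above; the 13 ZJ annual heat gain used here is an illustrative value chosen only to show the conversion:

    SECONDS_PER_YEAR = 3.156e7
    EARTH_SURFACE_AREA_M2 = 5.101e14
    OCEAN_HEAT_FRACTION = 0.9  # oceans' share of the excess energy (per the text)

    def eei_from_ohc(delta_ohc_zj, years=1.0):
        # Estimate EEI (W/m2) from an observed ocean heat content change (ZJ).
        power_w = delta_ohc_zj * 1e21 / (years * SECONDS_PER_YEAR)
        return power_w / (OCEAN_HEAT_FRACTION * EARTH_SURFACE_AREA_M2)

    print(f"{eei_from_ohc(13.0):.2f} W/m2")  # ~0.90 W/m2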
Earth's outer crust and thick ice-covered regions have taken up relatively little of the excess energy. This is because excess heat at their surfaces flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. Much of the heat uptake goes either into melting ice and permafrost or into evaporating more water from soils.
Measurements at top of atmosphere (TOA)
Several satellites measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. These measurements are referenced to the top of atmosphere (TOA) and provide data covering the globe. The NASA Earth Radiation Budget Experiment (ERBE) project involved three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.
NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments have been part of its Earth Observing System (EOS) since March 2000. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. The CERES data showed a marked increase in EEI between 2005 and 2019. Contributing factors included increases in water vapor, decreases in cloud cover and ice, and rising greenhouse gas concentrations, partially offset by the rising temperatures that increase outgoing radiation. Subsequent investigation of the behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend.
Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a rise in radiative forcing over the years 2003 to 2018. About 80% of the increase was associated with the rising concentration of greenhouse gases, which reduced the outgoing longwave radiation.
Further satellite measurements including TRMM and CALIPSO data have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface.
Radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI can be quantified with an accuracy that is not achievable for any single measurement of the absolute imbalance.
Geodetic and hydrographic surveys
Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate. Mean global sea level has likewise risen as a consequence of the ice melt in combination with the overall rise in ocean temperatures.
These shifts have contributed measurable changes to the geometric shape and gravity of the planet.
Changes to the mass distribution of water within the hydrosphere and cryosphere have been deduced using gravimetric observations by the GRACE satellite instruments. These data have been compared against ocean surface topography and further hydrographic observations using computational models that account for thermal expansion, salinity changes, and other factors. Estimates thereby obtained for ΔOHC and EEI have agreed with the other (mostly) independent assessments within uncertainties.
Importance as a climate change metric
Climate scientists Kevin Trenberth, James Hansen, and colleagues have identified the monitoring of Earth's energy imbalance as an important metric to help policymakers guide the pace for mitigation and adaptation measures. Because of climate system inertia, longer-term EEI (Earth's energy imbalance) trends can forecast further changes that are "in the pipeline".
Scientists regard EEI as the most important metric related to climate change: it is the net result of all the processes and feedbacks in play in the climate system. Knowing how much of the extra energy goes into weather systems and rainfall is vital to understanding increases in weather extremes.
In 2012, NASA scientists reported that to stop global warming atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 reached 415 ppm and all long-lived greenhouse gases exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.
| Physical sciences | Climate change | Earth science |
33142291 | https://en.wikipedia.org/wiki/Near-surface%20geophysics | Near-surface geophysics | Near-surface geophysics is the use of geophysical methods to investigate small-scale features in the shallow (tens of meters) subsurface. It is closely related to applied geophysics or exploration geophysics. Methods used include seismic refraction and reflection, gravity, magnetic, electric, and electromagnetic methods. Many of these methods were developed for oil and mineral exploration but are now used for a great variety of applications, including archaeology, environmental science, forensic science, military intelligence, geotechnical investigation, treasure hunting, and hydrogeology. In addition to the practical applications, near-surface geophysics includes the study of biogeochemical cycles.
Overview
In studies of the solid Earth, the main feature that distinguishes geophysics from geology is that it involves remote sensing. Various physical phenomena are used to probe below the surface where scientists cannot directly access the rock. Applied geophysics projects typically have the following elements: data acquisition, data reduction, data processing, modeling, and geological interpretation.
This all requires various types of geophysical surveys. These may include surveys of gravity, magnetism, seismicity, or magnetotellurics.
Data acquisition
A geophysical survey is a set of measurements made with a geophysical instrument. Often a set of measurements are along a line, or traverse. Many surveys have a set of parallel traverses and another set perpendicular to it to get good spatial coverage. Technologies used for geophysical surveys include:
Seismic methods, such as reflection seismology, seismic refraction, and seismic tomography.
Seismoelectrical method
Geodesy and gravity techniques, including gravimetry and gravity gradiometry.
Magnetic techniques, including aeromagnetic surveys and magnetometers.
Electrical techniques, including electrical resistivity tomography, induced polarization and spontaneous potential.
Electromagnetic methods, such as magnetotellurics, ground penetrating radar and transient/time-domain electromagnetics.
Borehole geophysics, also called well logging.
Remote sensing techniques, including hyperspectral imaging.
Data reduction
The raw data from a geophysical survey must often be converted to a more useful form. This may involve correcting the data for unwanted variations; for example, a gravity survey would be corrected for surface topography. Seismic travel times would be converted to depths. Often a target of the survey will be revealed as an anomaly, a region that has data values above or below the surrounding region.
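As a concrete illustration, the Python sketch below applies the standard free-air and Bouguer slab corrections to a hypothetical gravity station. The two numerical constants are textbook values; the station reading, elevation, and crustal density are assumptions made only for the example.

    def free_air_correction(elevation_m):
        # Free-air correction (mGal), added for station height above the datum.
        return 0.3086 * elevation_m

    def bouguer_correction(elevation_m, density_g_cm3=2.67):
        # Bouguer slab correction (mGal), subtracted to remove the attraction of
        # rock between station and datum; 2.67 g/cm3 is a common crustal default.
        return 0.04193 * density_g_cm3 * elevation_m

    observed_mgal = 979000.00  # hypothetical station reading
    h = 250.0                  # hypothetical station elevation (m)
    reduced = observed_mgal + free_air_correction(h) - bouguer_correction(h)
    print(f"{reduced:.2f} mGal")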
Data processing
The reduced data may not provide a good enough image because of background noise. The signal-to-noise ratio may be improved by repeated measurements of the same quantity followed by some sort of averaging such as stacking or signal processing.
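A minimal Python sketch of stacking, using a synthetic anomaly and noise generated purely for illustration, shows the expected improvement of roughly sqrt(N):

    import random
    import statistics

    def stack(traces):
        # Average repeated traces sample-by-sample; uncorrelated noise partially
        # cancels, so the signal-to-noise ratio grows roughly as sqrt(N).
        n = len(traces)
        return [sum(samples) / n for samples in zip(*traces)]

    signal = [0.0, 1.0, 4.0, 1.0, 0.0] * 20  # synthetic noise-free anomaly
    traces = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(100)]
    stacked = stack(traces)

    residual = statistics.pstdev(x - s for x, s in zip(stacked, signal))
    print(f"residual noise after 100-fold stacking: {residual:.2f}")  # ~0.1, was 1.0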
Modeling
Once a good profile is obtained of the physical property that is directly measured, it must be converted to a model of the property that is being investigated. For example, gravity measurements are used to obtain a model of the density profile under the surface. This is called an inverse problem. Given a model of the density, the gravity measurements at the surface can be predicted; but in an inverse problem the gravity measurements are known and the density must be inferred. This problem has uncertainties due to the noise and limited coverage of the surface, but even with perfect coverage many possible models of the interior could fit the data. Thus, additional assumptions must be made to constrain the model.
Depending on the data coverage, the model may only be a 2D model of a profile. Or a set of parallel transects may be interpreted using a 2½D model, which assumes that relevant features are elongated. For more complex features, a 3D model may be obtained using tomography.
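The need for additional assumptions can be illustrated with damped (Tikhonov) least squares, one common way to regularize a linear inverse problem. Everything below (forward operator, true model, noise level, damping factor) is a synthetic stand-in rather than a real survey geometry:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear forward problem d = G m + noise, with more model cells
    # than observations, so the data alone cannot pin down a unique model.
    n_data, n_model = 8, 20
    G = rng.normal(size=(n_data, n_model))  # forward operator
    m_true = np.zeros(n_model)
    m_true[9:12] = 1.0                      # a compact anomaly
    d = G @ m_true + 0.01 * rng.normal(size=n_data)

    # Damped least squares: the added "smallness" assumption selects one model
    # from the infinitely many that fit the data equally well.
    alpha = 0.1
    m_est = np.linalg.solve(G.T @ G + alpha * np.eye(n_model), G.T @ d)
    print(np.round(m_est, 2))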
Geological interpretation
The final step in a project is the geological interpretation. A positive gravity anomaly may be an igneous intrusion, a negative anomaly a salt dome or void. A region of higher electrical conductivity may have water or galena. For a good interpretation the geophysics model must be combined with geological knowledge of the area.
Seismology
Seismology makes use of the ability of vibrations to travel through rock as seismic waves. These waves come in two types: pressure waves (P-waves) and shear waves (S-waves). P-waves travel faster than S-waves, and both have trajectories that bend as the wave speeds change with depth. Refraction seismology makes use of these curved trajectories. In addition, if there are discontinuities between layers in the rock or sediment, seismic waves are reflected. Reflection seismology identifies these layer boundaries by the reflections.
Reflection seismology
Seismic reflection is used for imaging of nearly horizontal layers in the Earth. The method is much like echo sounding. It can be used to identify folding and faulting, and to search for oil and gas fields. On a regional scale, profiles can be combined to get sequence stratigraphy, making it possible to date sedimentary layers and identify eustatic sea level rise.
Refraction seismology
Seismic refraction can be used not only to identify layers in rocks by the trajectories of the seismic waves, but also to infer the wave speeds in each layer, thereby providing some information on the material in each layer.
Magnetic surveying
Magnetic surveying can be done on a planetary scale (for example, the survey of Mars by the Mars Global Surveyor) or on a scale of meters. In the near-surface, it is used to map geological boundaries and faults, find certain ores and buried igneous dykes, locate buried pipes and old mine workings, and detect some kinds of land mines. It is also used to look for human artifacts. Magnetometers are used to search for anomalies produced by targets with a lot of magnetically hard material such as ferrites.
Microgravity surveying
High precision gravity measurements can be used to detect near surface density anomalies, such as those associated with sinkholes and old mine workings, with repeat monitoring allowing near-surface changes over these to be quantified.
Ground-penetrating radar
Ground-penetrating radar is one of the most widely used near-surface geophysical methods in forensic archaeology, forensic geophysics, geotechnical investigation, treasure hunting, and hydrogeology. Penetration depth depends upon local soil and rock conditions as well as the central frequency of the transmitter/receiver antennae utilised.
Bulk ground conductivity
Bulk ground conductivity typically uses transmitter/receiver pairs to obtain primary/secondary EM signals from the surrounding environment (note potential difficulty in urban areas with above-ground EM sources of interference), with collection areas depending upon the antennae spacing and equipment used. There are airborne, land- and water-based systems currently available. They are particularly useful for initial ground reconnaissance work in geotechnical, archaeology and forensic geophysics investigations.
Electrical resistivity
Electrical resistivity surveys measure the resistance (the reciprocal of conductivity) of material (usually soil) between electrical probes, with typical penetration depths of one to two times the electrode separation. Various electrode configurations are used, the most typical employing two current and two potential electrodes in a dipole-dipole array. Resistivity surveys are used for geotechnical, archaeology and forensic geophysics investigations and have better resolution than most conductivity surveys. However, their readings change significantly with soil moisture content, a difficulty in most site investigations with heterogeneous ground and differing vegetation distributions.
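For one widely used alternative configuration, the Wenner array (four collinear electrodes with equal spacing a), apparent resistivity has a simple closed form. The sketch below uses hypothetical voltage and current readings:

    import math

    def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
        # Apparent resistivity (ohm-m) for a Wenner array: rho_a = 2*pi*a*V/I.
        return 2.0 * math.pi * spacing_m * voltage_v / current_a

    # Larger spacings sample deeper ground; apparent resistivity changing with
    # spacing suggests layering (readings here are hypothetical):
    for a, v in [(1.0, 0.50), (2.0, 0.21), (4.0, 0.08)]:
        print(a, round(wenner_apparent_resistivity(a, v, current_a=0.1), 1))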
Applications
Milsom & Eriksen (2011) provide a useful field book for field geophysics.
Archaeology
Geophysical methods can be used to find or map an archaeological site remotely, avoiding unnecessary digging. They can also be used to date artifacts.
In surveys of a potential archaeological site, features cut into the ground (such as ditches, pits and postholes) may be detected, even after filled in, by electrical resistivity and magnetic methods. The infill may also be detectable using ground-penetrating radar. Foundations and walls may also have a magnetic or electrical signature. Furnaces, fireplaces and kilns may have a strong magnetic anomaly because a thermoremanent magnetization has been baked into magnetic minerals.
Geophysical methods were extensively used in recent work on the submerged remains of ancient Alexandria as well as three nearby submerged cities (Herakleion, Canopus and Menouthis). Methods that included side-scan sonar, magnetic surveys and seismic profiles uncovered a story of bad site location and a failure to protect buildings against geohazards. In addition, they helped to locate structures that may be the lost Great Lighthouse and palace of Cleopatra, although these claims are contested.
Forensics
Forensic geophysics is increasingly being used to detect near-surface objects or materials related to either a criminal or civil investigation. The most high-profile targets in criminal investigations are clandestine burials of murder victims, but forensic geophysics can also include locating unmarked burials in graveyards and cemeteries, a weapon used in a crime, or buried drugs or money stashes. Civil investigations more often try to determine the location, amount and (more difficult) the timing of illegally dumped waste, which includes physical contaminants (e.g. fly-tipping) and liquid contaminants (e.g. hydrocarbons). There are many geophysical methods that could be employed, depending upon the target and background host materials. Most commonly ground-penetrating radar is used, but this may not always be the optimal search technique.
Geotechnical investigations
Geotechnical investigations use near-surface geophysics as a standard tool, both for initial site characterisation and to gauge where to subsequently undertake intrusive site investigation (S.I.) involving boreholes and trial pits. In rural areas conventional S.I. methods may be employed, but in urban areas or on difficult sites, targeted geophysical techniques can rapidly characterise a site for follow-up, intensive surface or near-surface investigative methods. Common targets include buried utilities and still-active cables, cleared building foundations, soil type(s) and bedrock depth below ground level, solid and liquid waste contamination, the below-ground locations of mineshafts and relict mines, and even differing ground conditions. Indoor geophysical investigations have even been undertaken. Techniques vary depending upon the target and host materials, as mentioned.
| Physical sciences | Geophysics | Earth science |
3889704 | https://en.wikipedia.org/wiki/Emerging%20technologies | Emerging technologies | Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized. These technologies are generally new but also include old technologies finding new applications. Emerging technologies are often perceived as capable of changing the status quo.
Emerging technologies are characterized by radical novelty (in application even if not in origins), relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity. In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous."
Emerging technologies include a variety of technologies such as educational technology, information technology, nanotechnology, biotechnology, robotics, and artificial intelligence.
New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies.
Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. However, the opinion on the degree of the impact, status and economic viability of several emerging and converging technologies varies.
History of emerging technologies
In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology.
Over centuries innovative methods and new technologies have been developed and opened up. Some of these technologies are due to theoretical research, and others from commercial research and development.
Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of DVD (digital video disc) as a development intended to follow on from the previous optical technology compact disc. By contrast, disruptive technologies are those where a new method replaces the previous technology and makes it redundant, for example, the replacement of horse-drawn carriages by automobiles and other vehicles.
Emerging technology debates
Many writers, including computer scientist Bill Joy, have identified clusters of technologies that they consider critical to humanity's future. Joy warns that the technology could be used by elites for good or evil. They could use it as "good shepherds" for the rest of humanity or decide everyone else is superfluous and push for the mass extinction of those made unnecessary by technology.
Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition. Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around. Man should consequently be regarded as history's main constant and technology as its main variable. However, critics of the risks of technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks.
Much ethical debate centers on issues of distributive justice in allocating access to beneficial forms of technology. Some thinkers, including environmental ethicist Bill McKibben, oppose the continuing development of advanced technology partly out of fear that its benefits will be distributed unequally in ways that could worsen the plight of the poor. By contrast, inventor Ray Kurzweil is among techno-utopians who believe that emerging and converging technologies could and will eliminate poverty and abolish suffering.
Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs.
As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning may ultimately allow computers to do many knowledge-based jobs that require significant education. This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever-larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy.
Examples of emerging technologies
Artificial intelligence
Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with animal-like intelligence. Major AI researchers and textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the study of making intelligent machines".
The central functions (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals. Currently, popular approaches include deep learning, statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
3D printing
3D printing, also known as additive manufacturing, has been posited by Jeremy Rifkin and others as part of the third industrial revolution.
Combined with Internet technology, 3D printing would allow for digital blueprints of virtually any material product to be sent instantly to another person to be produced on the spot, making purchasing a product online almost instantaneous.
Although this technology is still too crude to produce most products, it is rapidly developing, and in 2013 it created a controversy around the issue of 3D printed firearms.
Gene therapy
Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency, though the treatment was somatic – that is, did not affect the patient's germ line and thus was not heritable. This led the way to treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients.
Between September 1990 and January 2014, there were around 2,000 gene therapy trials conducted or approved.
Cancer vaccines
A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general.
On April 14, 2009, The Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy.
Cultured meat
Cultured meat, also called in vitro meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal, with the exception of the fetal calf serum taken from a slaughtered cow. In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013. There remain difficulties to be overcome before in vitro meat becomes commercially available. Cultured meat is currently prohibitively expensive, but it is expected that the cost could be reduced to compete with that of conventionally obtained meat as technology improves. In vitro meat is also an ethical issue. Some argue that it is less objectionable than traditionally obtained meat because it does not involve killing and reduces the risk of animal cruelty, while others disagree with eating meat that has not developed naturally.
Nanotechnology
Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold.
Robotics
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments, factories, warehouses, or kitchens; or resemble humans in appearance, behavior, and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by Hong Kong-based company Hanson Robotics which was activated on April 19, 2015. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics.
Stem-cell therapy
Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering. The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacities offers significant potential for generation of tissues that can potentially replace diseased and damaged areas in the body, with minimal risk of rejection and side effects.
Chimeric antigen receptor (CAR)-modified T cells have risen to prominence among other immunotherapies for cancer treatment, being implemented against B-cell malignancies. Despite the promising outcomes of this innovative technology, CAR-T cells are not exempt from limitations that have yet to be overcome in order to provide reliable and more efficient treatments against other types of cancer.
Distributed ledger technology
Distributed ledger or blockchain technology provides a transparent and immutable list of transactions. A wide range of uses has been proposed for settings where an open, decentralised database is required, ranging from supply chains to cryptocurrencies.
Smart contracts are self-executing transactions which occur when pre-defined conditions are met. The aim is to provide security that is superior to traditional contract law, and to reduce transaction costs and delays. The original idea was conceived by Nick Szabo in 1994, but remained unrealised until the development of blockchains.
Augmented reality
Augmented reality overlays digital graphics onto live footage. The technology has existed since the 20th century, but the arrival of more powerful computing hardware and the implementation of open source software have enabled applications that were previously impossible. Examples include apps such as Pokémon Go, along with Snapchat and Instagram filters, which render fictional elements over real objects.
Multi-use rockets
Reusable rockets, in contrast to single-use rockets that are disposed of after launch, are able to land propulsively and safely in a pre-specified place, where they are recovered to be used again in later launches. Early prototypes include the McDonnell Douglas DC-X tested in the 1990s, but the company SpaceX was the first to use propulsive reusability on the first stage of an operational orbital launch vehicle, the Falcon 9, in the 2010s. SpaceX is also developing a fully reusable rocket known as Starship. Other entities developing reusable rockets include Blue Origin and Rocket Lab.
Development of emerging technologies
As innovation drives economic growth, and large economic rewards come from new inventions, a great deal of resources (funding and effort) go into the development of emerging technologies. Some of the sources of these resources are described below.
Research and development
Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies. | Technology | General | null |
3893437 | https://en.wikipedia.org/wiki/Joint%20%28geology%29 | Joint (geology) | A joint is a break (fracture) of natural origin in a layer or body of rock that lacks visible or measurable movement parallel to the surface (plane) of the fracture ("Mode 1" Fracture). Although joints can occur singly, they most frequently appear as joint sets and systems. A joint set is a family of parallel, evenly spaced joints that can be identified through mapping and analysis of their orientations, spacing, and physical properties. A joint system consists of two or more intersecting joint sets.
The distinction between joints and faults hinges on the terms visible or measurable, a difference that depends on the scale of observation. Faults differ from joints in that they exhibit visible or measurable lateral movement between the opposite surfaces of the fracture ("Mode 2" and "Mode 3" Fractures). Thus a joint may be created by either strict movement of a rock layer or body perpendicular to the fracture or by varying degrees of lateral displacement parallel to the surface (plane) of the fracture that remains "invisible" at the scale of observation.
Joints are among the most universal geologic structures, found in almost every exposure of rock. They vary greatly in appearance, dimensions, and arrangement, and occur in quite different tectonic environments. Often, the specific origin of the stresses that created certain joints and associated joint sets can be quite ambiguous, unclear, and sometimes controversial. The most prominent joints occur in the most well-consolidated, lithified, and highly competent rocks, such as sandstone, limestone, quartzite, and granite. Joints may be open fractures or filled by various materials. Joints infilled by precipitated minerals are called veins and joints filled by solidified magma are called dikes.
Formation
Joints arise from brittle fracture of a rock or layer due to tensile stress. This stress may be imposed from outside; for example, by the stretching of layers, the rise of pore fluid pressure, or shrinkage caused by the cooling or desiccation of a rock body or layer whose outside boundaries remained fixed.
When tensional stresses stretch a body or layer of rock such that its tensile strength is exceeded, it breaks. When this happens the rock fractures in a plane parallel to the maximum principal stress and perpendicular to the minimum principal stress (the direction in which the rock is being stretched). This leads to the development of a single sub-parallel joint set. Continued deformation may lead to development of one or more additional joint sets. The presence of the first set strongly affects the stress orientation in the rock layer, often causing subsequent sets to form at a high angle, often 90°, to the first set.
Types
Joints are classified by their geometry or by the processes that formed them.
By geometry
The geometry of joints refers to the orientation of joints, as either plotted on stereonets and rose diagrams or observed in rock exposures. In terms of geometry, three major types of joints are recognized: nonsystematic joints, systematic joints, and columnar joints.
Nonsystematic
Nonsystematic joints are joints that are so irregular in form, spacing, and orientation that they cannot be readily grouped into distinctive, through-going joint sets.
Systematic
Systematic joints are planar, parallel joints that can be traced for some distance and occur at regular, evenly spaced intervals on the order of centimeters, meters, tens of meters, or even hundreds of meters. As a result, they occur as families of joints that form recognizable joint sets. Typically, exposures or outcrops within a given area or region of study contain two or more sets of systematic joints, each with its own distinctive properties such as orientation and spacing, that intersect to form well-defined joint systems.
Based upon the angle at which joint sets of systematic joints intersect to form a joint system, systematic joints can be subdivided into conjugate and orthogonal joint sets. The angles at which joint sets within a joint system commonly intersect are called dihedral angles by structural geologists. When the dihedral angles are nearly 90° within a joint system, the joint sets are known as orthogonal joint sets. When the dihedral angles are from 30 to 60° within a joint system, the joint sets are known as conjugate joint sets.
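The dihedral-angle thresholds above translate directly into a simple classification rule, sketched below in Python; the tolerance around 90° is an arbitrary illustrative choice:

    def classify_joint_sets(dihedral_angle_deg, tolerance_deg=5.0):
        # Label a pair of intersecting joint sets by their dihedral angle,
        # following the thresholds given in the text.
        if abs(dihedral_angle_deg - 90.0) <= tolerance_deg:
            return "orthogonal joint sets"
        if 30.0 <= dihedral_angle_deg <= 60.0:
            return "conjugate joint sets"
        return "unclassified"

    print(classify_joint_sets(88.0))  # orthogonal joint sets
    print(classify_joint_sets(45.0))  # conjugate joint sets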
Within regions that have experienced tectonic deformation, systematic joints are typically associated with either layered or bedded strata that have been folded into anticlines and synclines. Such joints can be classified according to their orientation in respect to the axial planes of the folds as they often commonly form in a predictable pattern with respect to the hinge trends of folded strata. Based upon their orientation to the axial planes and axes of folds, the types of systematic joints are:
Longitudinal joints – Joints which are roughly parallel to fold axes and often fan around the fold.
Cross-joints – Joints which are approximately perpendicular to fold axes.
Diagonal joints – Joints which typically occur as conjugate joint sets that trend oblique to the fold axes.
Strike joints – Joints which trend parallel to the strike of the axial plane of a fold.
Cross-strike joints – Joints which cut across the axial plane of a fold.
Columnar
Columnar jointing is a distinctive type of jointing in which joints meet at triple junctions at or about 120° angles. These joints split a rock body into long prisms or columns. Typically, such columns are hexagonal, although 3-, 4-, 5- and 7-sided columns are relatively common. The diameter of these prismatic columns ranges from a few centimeters to several metres. They are often oriented perpendicular to the upper surface and base of lava flows and to the contacts of tabular igneous bodies with the surrounding rock. This type of jointing is typical of thick lava flows and shallow dikes and sills. Columnar jointing is also known as columnar structure, prismatic joints, or prismatic jointing. Rare cases of columnar jointing have also been reported from sedimentary strata.
By formation
Joints can be classified according to their origin, under the labels of tectonic, hydraulic, exfoliation, unloading (release), and cooling joints. Different authors have proposed contradictory hypotheses for the same joint sets and types, and joints in the same outcrop may have formed at different times under varied circumstances.
Tectonic
Tectonic joints are joints formed when the relative displacement of the joint walls is normal to the joint plane, as the result of brittle deformation of bedrock in response to regional or local tectonic stresses. Such joints form when directed tectonic stress causes the tensile strength of bedrock to be exceeded through the stretching of rock layers under conditions of elevated pore fluid pressure. Tectonic joints often reflect the local tectonic stresses associated with local folding and faulting, and they occur as both nonsystematic and systematic joints, including orthogonal and conjugate joint sets.
Hydraulic
Hydraulic joints are formed when pore fluid pressure becomes elevated as a result of vertical gravitational loading. In simple terms, the accumulation of sediments, volcanic material, or other overburden causes an increase in the pore pressure of groundwater and other fluids in the underlying rock when those fluids cannot move either laterally or vertically in response to this pressure. This also raises the pore pressure in preexisting cracks, increasing the tensile stress on them perpendicular to the minimum principal stress (the direction in which the rock is being stretched). If the tensile stress exceeds the magnitude of the least principal compressive stress the rock will fail in a brittle manner and these cracks propagate in a process called hydraulic fracturing. Hydraulic joints occur as both nonsystematic and systematic joints, including orthogonal and conjugate joint sets. In some cases, joint sets can be a tectonic-hydraulic hybrid.
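A commonly used simplification of this failure condition states that a tensile (hydraulic) fracture opens when fluid pressure exceeds the least principal compressive stress plus the rock's tensile strength. A minimal sketch with hypothetical stresses in MPa:

    def hydraulic_fracture_opens(pore_pressure, sigma_3, tensile_strength):
        # Simplified tensile-failure criterion: fluid pressure must overcome
        # the least principal compressive stress plus the tensile strength.
        return pore_pressure > sigma_3 + tensile_strength

    # Hypothetical values (MPa) for rock under burial:
    print(hydraulic_fracture_opens(pore_pressure=62.0, sigma_3=55.0,
                                   tensile_strength=5.0))  # True: a crack propagates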
Exfoliation
Exfoliation joints are sets of flat-lying, curved, and large joints that are restricted to massively exposed rock faces in a deeply eroded landscape. Exfoliation jointing consists of fan-shaped fractures varying from a few meters to tens of meters in size that lie sub-parallel to the topography. The vertical gravitational load of a mountain-size bedrock mass drives longitudinal splitting and causes outward buckling toward the free air. In addition, paleostress that was sealed into the granite before it was exhumed by erosion, and that is released by exhumation and canyon cutting, is also a driving force for the actual spalling.
Unloading
Unloading joints or release joints arise near the surface when bedded sedimentary rocks are brought closer to the surface during uplift and erosion; when they cool, they contract and become relaxed elastically. A stress builds up which eventually exceeds the tensile strength of the bedrock and results in jointing. In the case of unloading joints, compressive stress is released either along preexisting structural elements (such as cleavage) or perpendicular to the former direction of tectonic compression.
Cooling
Cooling joints are columnar joints that result from the cooling of lava at the exposed surface of a lava lake or flood basalt flow, or at the sides of a tabular igneous, typically basaltic, intrusion. They exhibit a pattern of joints that meet at triple junctions at or about 120° angles, splitting the rock body into long prisms or columns that are typically hexagonal, although 3-, 4-, 5- and 7-sided columns are relatively common. They form as a cooling front moves inward from the cooled surface into the lava of the lake or flow, or into the magma of a dike or sill.
Fractography
Joint propagation can be studied through the techniques of fractography in which characteristic marks such as hackles and plumose structures are used to determine propagation directions and, in some cases, the principal stress orientations.
Shear fractures
Some fractures that look like joints are actually shear fractures, which in effect are microfaults. They do not form as the result of the perpendicular opening of a fracture due to tensile stress, but through the shearing of fractures that causes lateral movement of the faces. Shear fractures can be confused with joints because the lateral offset of the fracture faces is not visible in the outcrop or in a specimen. Because of the absence of diagnostic ornamentation or the lack of any discernible movement or offset, they can be indistinguishable from joints. Such fractures occur in planar parallel sets at an angle of 60 degrees and can be of the same size and scale as joints. As a result, some "conjugate joint sets" might actually be shear fractures. Shear fractures are distinguished from joints by the presence of slickensides, the products of shearing movement parallel to the fracture surface. Slickensides are fine-scale, delicate ridge-in-groove lineations on fracture surfaces.
Importance
Joints are important not only in understanding the local and regional geology and geomorphology but also in developing natural resources, in the safe design of structures, and in environmental protection. Joints have a profound control on the weathering and erosion of bedrock. As a result, they exert a strong control on how the topography and morphology of landscapes develop. Understanding the local and regional distribution, physical character, and origin of joints is a significant part of understanding the geology and geomorphology of an area. Joints often impart a well-developed fracture-induced permeability to bedrock. As a result, joints strongly influence, even control, the natural circulation (hydrogeology) of fluids within bedrock, e.g. groundwater and pollutants within aquifers, petroleum in reservoirs, and hydrothermal circulation at depth. Thus, joints are important to the economic and safe development of petroleum, hydrothermal, and groundwater resources, and are the subject of intensive research relative to these resources. Regional and local joint systems exert a strong control on how ore-forming hydrothermal fluids (consisting largely of H2O, CO2, and NaCl), which formed most of Earth's ore deposits, circulated within the crust. As a result, understanding their genesis, structure, chronology, and distribution is an important part of finding and profitably developing ore deposits. Finally, joints often form discontinuities that may have a large influence on the mechanical behavior (strength, deformation, etc.) of soil and rock masses in, for example, tunnel, foundation, or slope construction. As a result, joints are an important part of geotechnical engineering in practice and research.
| Physical sciences | Structural geology | Earth science |
30623838 | https://en.wikipedia.org/wiki/Native%20element%20mineral | Native element mineral | Native element minerals are those elements that occur in nature in uncombined form with a distinct mineral structure. The elemental class includes metals, intermetallic compounds, alloys, metalloids, and nonmetals. The Nickel–Strunz classification system also includes the naturally occurring phosphides, silicides, nitrides, carbides, and arsenides.
Elements
The following elements occur as native element minerals or alloys:
Nickel–Strunz Classification -01- Native elements
This list uses the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication).
Abbreviations
"*" – discredited (IMA/CNMNC status).
"?" – questionable/doubtful (IMA/CNMNC status).
"REE" – Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu)
"PGE" – Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt)
03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates:
Neso: insular (from Greek νησος nēsos, island)
Soro: grouping (from Greek σωροῦ sōros, heap, mound (especially of corn))
Cyclo: ring
Ino: chain (from Greek ις [genitive: ινος inos], fibre)
Phyllo: sheet (from Greek φύλλον phyllon, leaf)
Tecto: three-dimensional framework
Nickel–Strunz code scheme NN.XY.##x:
NN: Nickel–Strunz mineral class number
X: Nickel–Strunz mineral division letter
Y: Nickel–Strunz mineral family letter
##x: Nickel–Strunz mineral/group number, x add-on letter
Class: native elements
01.A Metals and intermetallic alloys
01.AA Copper-cupalite family: 05 native copper, 05 lead, 05 native gold, 05 native silver, 05 nickel, 05 aluminium; 10a auricupride, 10b tetra-auricupride; 15 novodneprite, 15 khatyrkite, 15 anyuiite; 20 cupalite, 25 hunchunite
01.AB Zinc-brass family (Cu-Zn alloys): 05 cadmium, 05 zinc, 05 titanium*, 05 rhenium*; 10a brass*, 10a zhanghengite, 10b danbaite, 10b tongxinite*
01.AC Indium-tin family: 05 indium, 10 tin; 15 yuanjiangite, 15 sorosite
01.AD Mercury-amalgam family: 00 amalgam*, 05 mercury; 10 belendorffite, 10 kolymite; 15a paraschachnerite, 15a schachnerite, 15b luanheite, 15c eugenite, 15d moschellandsbergite; 20a weishanite, 20b goldamalgam*; 25 potarite, 30 leadamalgam
01.AE Iron-chromium family: 05 kamacite? (iron var.), 05 iron, 05 chromium; 10 antitaenite*, 10 taenite, 10 tetrataenite; 15 chromferide, 15 wairauite, 15 ferchromide; 20 awaruite, 25 jedwabite
01.AF Platinum-group elements: 05 osmium, 05 rutheniridosmine, 05 ruthenium; 10 palladium, 10 iridium, 10 rhodium, 10 platinum
01.AG PGE-metal alloys: 05 garutiite, 05 hexaferrum; 10 atokite, 10 zvyagintsevite, 10 rustenburgite; 15 taimyrite, 15 tatyanaite; 20 paolovite; 25 plumbopalladinite, 25 stannopalladinite; 30 cabriite; 35 chengdeite, 35 isoferroplatinum; 40 ferronickelplatinum, 40 tetraferroplatinum, 40 tulameenite; 45 hongshiite*, 45 skaergaardite; 50 yixunite, 55 damiaoite, 60 niggliite, 65 bortnikovite, 70 nielsenite
01.B Metallic carbides, silicides, nitrides and phosphides
01.BA Carbides: 05 cohenite; 10 isovite, 10 haxonite; 15 tongbaite; 20 khamrabaevite, 20 niobocarbide, 20 tantalcarbide; 25 qusongite, 30 yarlongite
01.BB Silicides: zangboite; 05 mavlyanovite, 05 suessite; 10 perryite, 15 fersilicite*, 20 ferdisilicite*, 25 luobusaite, 30 gupeiite, 35 hapkeite, 40 xifengite
01.BC Nitrides: 05 roaldite, 10 siderazot, 15 carlsbergite, 15 osbornite
01.BD Phosphides: 05 schreibersite, 05 nickelphosphide; 10 barringerite, 10 monipite; 15 allabogdanite, 15 florenskyite, 15 andreyivanovite; 20 melliniite
01.C Metalloids and nonmetals
01.CA Arsenic group elements: 05 bismuth, 05 native antimony, 05 arsenic, 05 stibarsen; 10 arsenolamprite, 10 pararsenolamprite; 15 paradocrasite
01.CB Carbon-silicon family: 05a graphite, 05b chaoite, 05c fullerite; 10a diamond, 10b lonsdaleite, 15 silicon
01.CC Sulfur-selenium-iodine: 05 sulfur, 05 rosickyite; 10 tellurium, 10 selenium
01.D Nonmetallic carbides and nitrides
01.DA Nonmetallic carbides: 05 moissanite
01.DB Nonmetallic nitrides: 05 nierite, 10 sinoite
01.X Unclassified Strunz elements (metals and intermetallic alloys; metalloids and nonmetals; carbides, silicides, nitrides, phosphides)
01.XX Unknown: 00 hexamolybdenum, 00 tantalum*, 00 brownleeite
| Physical sciences | Minerals | Earth science |
25932382 | https://en.wikipedia.org/wiki/Lipid%20profile | Lipid profile | A lipid profile or lipid panel is a panel of blood tests used to find abnormalities in blood lipid concentrations (such as cholesterol and triglycerides). The results of this test can identify certain genetic diseases and can determine approximate risks for cardiovascular disease, certain forms of pancreatitis, and other diseases.
Lipid panels are usually ordered as part of a physical exam, along with other panels such as the complete blood count (CBC) and basic metabolic panel (BMP).
Components
A lipid profile report typically includes:
Low-density lipoprotein (LDL)
High-density lipoprotein (HDL)
Total triglycerides
Total cholesterol
LDL is usually not measured directly but calculated from the other three using the Friedewald equation. A laboratory can optionally calculate two extra values for the report:
Very low-density lipoprotein (VLDL)
Cholesterol:HDL ratio
Procedure and indication
Recommendations for cholesterol testing come from the Adult Treatment Panel (ATP) III guidelines, and are based on many large clinical studies, such as the Framingham Heart Study.
For healthy adults with no cardiovascular risk factors, the ATP III guidelines recommend screening once every five years. A lipid profile may also be ordered at regular intervals to evaluate the success of lipid-lowering drugs such as statins.
In the pediatric and adolescent population, lipid testing is not routinely performed. However, the American Academy of Pediatrics and the National Heart, Lung, and Blood Institute (NHLBI) recommend that children aged 9–11 be screened once for severe cholesterol abnormalities. This screening can be valuable to detect genetic diseases such as familial hypercholesterolemia that can be lethal if not treated early.
Traditionally, most laboratories have required patients to fast for 9–12 hours before screening. However, studies have questioned the utility of fasting before lipid panels, and some diagnostic labs routinely accept non-fasting samples.
Methods
Friedewald
Typically the laboratory measures only three quantities: total cholesterol, HDL, and triglycerides. A typical procedure, used by NHANES 2004, employs the following measurement methods:
Total cholesterol is measured using a mixture of enzymes. First an esterase converts cholesterol esters into cholesterol and free fatty acid. Then an oxidase oxidizes the cholesterol, producing an H2O2 side-product that changes the color of a dye. The amount of oxidation can be precisely quantified by light absorbance at 500 nm.
Triglyceride concentration is also measured using an enzyme mixture. A lipase releases glycerol from the molecules, which gets oxidized by another enzyme while producing H2O2. The same color-change follows.
HDL is measured in two steps. First a special reagent is added to the serum that binds apoB-containing lipoprotein particles, shielding them from the enzymes in the next step. Then a mixture of PEGylated enzymes is added with dye. The chemical reaction is the same as the total cholesterol measurement, except that the enzymes are blocked from acting on non-HDL lipoproteins by the reagent and their own PEG tails.
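Under the Beer-Lambert law, absorbance at the measurement wavelength is proportional to the concentration of the colored product, so concentrations are in practice read off against calibration standards. A minimal single-point-calibration sketch with hypothetical readings:

    def concentration_from_absorbance(a_sample, a_standard, c_standard):
        # Beer-Lambert law (A = e*l*c): absorbance is proportional to
        # concentration, so a sample's concentration follows by proportion
        # from a standard of known concentration.
        return a_sample / a_standard * c_standard

    # Hypothetical 500 nm readings against a 200 mg/dL cholesterol standard:
    print(concentration_from_absorbance(0.42, 0.45, 200.0))  # ~187 mg/dL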
From these three data LDL may be calculated. According to Friedewald's equation:
[LDL] ≈ [Total cholesterol] − [HDL] − [Triglycerides]/5, with all concentrations in mg/dL.
Other calculations of LDL from those same three data have been proposed which yield some significantly different results.
VLDL can be defined as the total cholesterol that is neither HDL nor LDL. With that definition, Friedewald's equation yields:
[VLDL] ≈ [Triglycerides]/5, in mg/dL.
The alternative calculations mentioned above may yield significantly different values for VLDL.
The Friedewald method is reasonably reliable for the majority of patients, but is notably inaccurate in patients with hypertriglyceridemia (> 400 mg/dL or 4.5 mmol/L). It also underestimates LDL-C in patients with low LDL-C (< 25 mg/dL or 0.6 mmol/L). It does not take into account intermediate-density lipoprotein.
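A minimal Python sketch of the Friedewald calculation, including the validity limits just described (all values in mg/dL; the cutoffs follow the text above):

    def friedewald_ldl(total_cholesterol, hdl, triglycerides):
        # Estimate LDL-C (mg/dL); return None where the equation is unreliable.
        if triglycerides > 400:  # hypertriglyceridemia: estimate is invalid
            return None
        ldl = total_cholesterol - hdl - triglycerides / 5.0  # TG/5 approximates VLDL-C
        if ldl < 25:             # known to underestimate at very low LDL-C
            return None
        return ldl

    print(friedewald_ldl(200, 55, 125))  # 120.0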
A "Martin/Hopkins" variation that takes into how triglycerides-to-VLDL ratio tends to vary with other parameters appears more reliable and accurate.
All-direct
Every part of the lipid panel can be measured directly using ultracentrifugation, which is the gold standard. This type of measurement involves no errors from estimation and can also measure IDL-C and Lp(a)-C levels. Fully direct measurement is more costly, however.
Laboratories may also use proprietary tests for "direct chemical LDL-C" which require no prior separation by centrifugation. These tests are not yet standardized in the US and Europe and lack validation. A specific version of the test seems popular in Japan, however. A number of other LDL-C determination methods have been used in the past or have been proposed for future use.
Implications
This test is used to identify dyslipidemia (various disturbances of cholesterol and triglyceride levels), many forms of which are recognized risk factors for cardiovascular disease and rarely pancreatitis.
A total cholesterol reading can be used to assess an individual's risk for heart disease; however, it should not be relied upon as the only indicator. The individual components that make up the total cholesterol reading (LDL, HDL, and VLDL) are also important in measuring risk.
For instance, someone's total cholesterol may be high, but this may be due to very high levels of HDL ("good cholesterol"), which can help prevent heart disease; the test is mainly concerned with high levels of LDL ("bad cholesterol"). So, while a high total cholesterol level may indicate a problem with cholesterol levels, the components that make up total cholesterol should also be measured.
| Biology and health sciences | Diagnostics | Health |
2899604 | https://en.wikipedia.org/wiki/Honeydew%20%28melon%29 | Honeydew (melon) | The honeydew melon is one of the two main cultivar types in Cucumis melo Inodorus Group. It is characterized by the smooth, often green or yellowish rind and lack of musky odor. The other main type in the Inodorus Group is the wrinkle-rind casaba melon.
Characteristics
A honeydew has a round to slightly oval shape. The flesh is usually pale green in color, while the smooth peel ranges from greenish to yellow. Like most fruit, honeydew has seeds, which contain high levels of polyunsaturated fatty acids. The inner flesh is eaten, often for dessert, and honeydew is commonly found in supermarkets across the world alongside cantaloupe melons and watermelons. In California, honeydew is in season from August until October.
This fruit grows best in semiarid climates and is harvested based on maturity, not size. Maturity can be hard to judge, but it is based upon the rind color ranging from greenish white (immature) to creamy yellow (mature). Maturity can also be judged by the blossom-end giving when pressed with the thumb, in addition to having a pleasant aroma. Quality is also determined by the honeydew having a nearly spherical shape with a surface free of scars or defects. A honeydew should also feel heavy for its size and have a waxy rather than a fuzzy surface. This reflects the integrity and quality of its flesh as the weight can be attributed to the high water content of the ripened fruit.
Nutrition
The honeydew is 90% water, 9% carbohydrates, 0.1% fat, and 0.5% protein. Like most melons, it is an excellent source of vitamin C, with one cup containing 56% of the recommended daily value. The honeydew is also a good source of the B vitamin thiamine, as well as other B vitamins and the mineral potassium. In addition, it is low in calories compared to many other high-potassium fruits such as bananas, with only 36 calories per 100 g. However, the honeydew contains only negligible amounts of most other vitamins and minerals.
Origin and alternate names
"Honeydew" is the American name for the White Antibes cultivar which has been grown for many years in southern France and Algeria.
In China, honeydews are known as Bailan melons. They are famous locally near Lanzhou, the capital city of Gansu province in China's northwest.
According to Chinese sources, the melons were introduced to China by American Secretary of Agriculture, Henry A. Wallace, who donated melon seeds to the locals while visiting in the 1940s (probably 1944). Wallace served as Secretary of Agriculture and Vice President under president Franklin D. Roosevelt. In 1926, Wallace had founded a major seed company (Pioneer Hi-Bred) and popularized the use of hybridized corn. He also had a general background and interest in agriculture.
As a result of Wallace's introduction of the crop, in China the melon is sometimes called the Wallace (Chinese: 华莱士; pinyin: Hualaishi). The Mizo people use the name Hmazil, the Garo people and the Chakma people of Chittagong Hill Tracts use the name Chindire and the Tanchangya people call it Te'e in their local language.
In some parts of Latin America, especially in Chile, the honeydew is nicknamed "Melón tuna" ("prickly pear melon").
| Biology and health sciences | Melons | Plants |
2899822 | https://en.wikipedia.org/wiki/Spinosauridae | Spinosauridae | Spinosauridae (or spinosaurids) is a clade or family of tetanuran theropod dinosaurs comprising ten to seventeen known genera. Spinosaurid fossils have been recovered worldwide, including Africa, Europe, South America and Asia. Their remains have generally been attributed to the Early to Mid Cretaceous.
Spinosaurids were large bipedal carnivores. Their crocodilian-like skulls were long, low and narrow, bearing conical teeth with reduced or absent serrations. The tips of their upper and lower jaws fanned out into a spoon-shaped structure similar to a rosette, behind which there was a notch in the upper jaw that the expanded tip of the lower jaw fit into. The nostrils of spinosaurids were retracted to a position further back on the head than in most other theropods, and they had bony crests on their heads along the midline of their skulls. Their robust shoulders wielded stocky forelimbs, with three-fingered hands that bore an enlarged claw on the first digit. In many species, the upwards-projecting neural spines of the vertebrae (backbones) were significantly elongated and formed a sail on the animal's back (hence the family's etymology), which supported either a layer of skin or a fatty hump.
The genus Spinosaurus, from which the family, one of its subfamilies (Spinosaurinae) and tribes (Spinosaurini) borrow their names, is the longest known terrestrial predator from the fossil record, with an estimated length of up to and body mass of up to (similar to the weight of an African elephant). The closely related genus Sigilmassasaurus may have reached a similar or greater size, though its taxonomy is disputed. Direct fossil evidence and anatomical adaptations indicate that spinosaurids were at least partially piscivorous (fish-eating), with additional fossil finds indicating they also fed on other dinosaurs and pterosaurs. The osteology of spinosaurid teeth and bones has suggested a semiaquatic lifestyle for some members of this clade. This is further indicated by various anatomical adaptations, such as retracted eyes and nostrils, and the deepening of the tail in some taxa, which has been suggested to have aided in underwater propulsion akin to that of modern crocodilians. Spinosaurs are proposed to be closely related to the megalosaurid theropods of the Jurassic, as both groups share many features, such as an enlarged ungual on the first finger and an elongated skull. However, some propose that this group (known as the Megalosauroidea) is paraphyletic and that spinosaurs represent either the most basal tetanurans or basal carnosaurs less derived than the megalosaurids. Some have proposed a combination of the two ideas, with spinosaurs placed in a monophyletic Megalosauroidea inside a more inclusive Carnosauria made up of both allosauroids and megalosauroids.
History of discovery
The first spinosaurid fossil, a single conical tooth, was discovered circa 1820 by British paleontologist Gideon Mantell in the Wadhurst Clay Formation. In 1841, naturalist Sir Richard Owen mistakenly assigned it to a crocodilian he named Suchosaurus (meaning "crocodile lizard"). A second species, S. girardi, was later named in 1897. However, the spinosaurid nature of Suchosaurus was not recognized until a 1998 redescription of Baryonyx.
The first fossils referred to a spinosaurid were discovered in 1912 at the Bahariya Formation in Egypt. Consisting of vertebrae, skull fragments, and teeth, these remains became the holotype specimen of the new genus and species Spinosaurus aegyptiacus in 1915, when they were described by German paleontologist Ernst Stromer. The dinosaur's name meant "Egyptian spine lizard", in reference to the unusually long neural spines not seen previously in any other theropod. In 1934, Stromer referred a partial skeleton, also from the Bahariya Formation, to a new species of Spinosaurus; the specimen has since been alternatively assigned to another African spinosaurid, Sigilmassasaurus. In April 1944, the holotype of S. aegyptiacus was destroyed in an Allied bombing raid during World War II.
In 1983, a relatively complete skeleton was excavated from the Smokejacks pit in Surrey, England. These remains were described by British paleontologists Alan J. Charig and Angela C. Milner in 1986 as the holotype of a new species, Baryonyx walkeri. Since the discovery of Baryonyx, many new genera have been described, the majority from very incomplete remains. However, other finds bear enough fossil material and distinct anatomical features to be assigned with confidence. In 1998, Paul Sereno and colleagues described Suchomimus, a baryonychine from Niger, on the basis of a partial skeleton found in 1997. In 2004, partial jaw bones were recovered from the Alcântara Formation; these were referred to a new genus of spinosaurine, named Oxalaia in 2011 by Alexander Kellner.
In 2021, remains of a spinosaurid believed to represent a new species were reported from the Isle of Wight, an island off the south coast of England. According to the findings, the animal was about 10 meters long and weighed several tons. The bones were found in a geological layer of rock known as the Vectis Formation at Compton Chine, making this the first identifiable theropod from the Vectis Formation. The study was led by Christopher Barker, a doctoral student in vertebrate paleontology at the University of Southampton.
In February 2024, a new spinosaurid was announced under the name Riojavenatrix lacustris. Originally discovered in La Rioja in 2005, it is the fifth spinosaurid species to be discovered in the Iberian Peninsula. It lived around 120 million years ago and was around 7–8 metres long, with a body mass of about 1.5 metric tons.
Description
Although reliable size and weight estimates for most known spinosaurids are hindered by the lack of good material, all known spinosaurids were large animals. The smallest genus known from good material is Irritator, which was between long and around in weight. Ichthyovenator, Baryonyx, and Suchomimus ranged from long, and weighed between . Oxalaia may have reached a length of between and a weight of . The largest known genus is Spinosaurus, which was capable of reaching lengths of and weighed around , making it the longest known theropod dinosaur and terrestrial predator. The closely allied Sigilmassasaurus may have grown to a similar or greater length, though its taxonomic relationship with Spinosaurus is uncertain. This consistency in large body size among spinosaurids could have evolved as a byproduct of their preference for semiaquatic lifestyles, as without the need to compete with other large theropod dinosaurs for food, they would have been able to grow to massive lengths.
Skull
Spinosaurid skulls—similar in many respects to those of crocodilians—were long, low and narrow. As in other theropods, various fenestrae (openings) in the skull aided in reducing its weight. In spinosaurs, however, the antorbital fenestrae were greatly reduced, akin to those of crocodilians. The tips of the premaxillae (frontmost snout bones) were expanded in a spoon shape, forming what has been called a "terminal rosette" of enlarged teeth. Behind this expansion, the upper jaw had a notch bearing significantly smaller teeth, into which the similarly expanded tips of the dentaries (the tooth-bearing bones of the mandible) fit, with a notch behind the expansion of the dentary. The maxillae (main upper jaw bones) were long and formed a low branch under the nostrils that connected to the rear of the premaxillae. The teeth at the frontmost part of the maxillae were small, becoming significantly larger soon after and then gradually decreasing in size towards the back of the jaw. Analysis of the teeth of spinosaurids and their comparison to the teeth of tyrannosaurids suggest that the deep roots of spinosaurids helped to better anchor the teeth of these animals and distribute the stress against lateral forces generated during bites in predation and feeding scenarios.
Despite their highly modified skulls, analysis of the endocasts of Baryonyx walkeri and Ceratosuchops inferodios reveals spinosaurid brains shared a high degree of similarity with those of other non-maniraptoriform theropods.
Lengthwise atop their skulls ran a thin and shallow sagittal crest that was usually tallest near or above the eyes, either becoming shorter or disappearing entirely towards the front of the head. Spinosaurus's head crest was comb-shaped and bore distinct vertical grooves, while those of Baryonyx and Suchomimus looked like small triangular bumps. Irritator's median crest stopped above and behind the eyes in a bulbous, flattened shape. However, given that no fully preserved skulls are known for the genus, the complete shape of Irritator's crest is unknown. Cristatusaurus and Suchomimus (a possible synonym of the former) both had narrow premaxillary crests. Angaturama (a possible synonym of Irritator) had an unusually tall crest on its premaxillae that nearly overhung the tip of the snout with a small forward protrusion.
Spinosaurid nostrils were set far back on the skull, at least behind the teeth of the premaxillae, instead of at the front of the snout as in most theropods. Those of Baryonyx and Suchomimus were large and started between the first and fourth maxillary teeth, while Spinosaurus's nostrils were far smaller and more retracted. Irritator's nostrils were positioned similarly to those of Baryonyx and Suchomimus, and were between those of Spinosaurus and Suchomimus in size. Spinosaurids had long secondary palates, bony and rugose structures on the roof of their mouths that are also found in extant crocodilians, but not in most theropod dinosaurs. Oxalaia had a particularly elaborate secondary palate, while most spinosaurs had smoother ones. The teeth of spinosaurids were conical, with an oval to circular cross section and either absent or very fine serrations. Their teeth ranged from slightly recurved, such as those of Baryonyx and Suchomimus, to straight, such as those of Spinosaurus and Siamosaurus, and the crown was often ornamented with longitudinal grooves or ridges.
Postcranial skeleton
The coracoid bones of the shoulders in spinosaurids were robust and hook shaped. The arms were relatively large and well-built; the radius (long bone of the forearm) was stout and usually only half as long as the humerus (upper arm bone). Spinosaurid hands had three fingers, typical of tetanurans, and wielded an enlarged ungual on the first finger (or "thumb"), which formed the bony core of a keratin claw. In genera like Baryonyx and Suchomimus, the phalanges (finger bones) were of conventional length for large theropods, and bore hook-shaped, strongly curved hand claws. Based on fragmentary material from the forelimbs of Spinosaurus, it appears to have had longer, more gracile hands and straighter claws than other spinosaurids.
The hindlimbs of Suchomimus and Baryonyx were somewhat short and mostly typical of other megalosauroid theropods. Ichthyovenator's hip region was reduced, having the shortest pubis (pubic bone) and ischium (lower and rearmost hip bone) in proportion to the ilium (main hip bone) of any other known theropod. Spinosaurus had an even smaller pelvis and hindlimbs in proportion to its body size; its legs made up just over 25 percent of the total body length. Substantially complete spinosaurid foot remains are only known from Spinosaurus. Unlike most theropods—which walk on three toes, with the hallux (first toe) being reduced and elevated off the ground—Spinosaurus walked on four functional toes, with an enlarged hallux that came in contact with the ground. The unguals of its feet, in contrast with the deeper, smaller and recurved unguals of other theropods, were shallow, long, large in relation to the foot, and had flat bottoms. Based on comparisons with those of modern shorebirds, it is thought that Spinosaurus's feet were probably webbed.
The upward-projecting neural spines of spinosaurid vertebrae (backbones) were very tall, more so than in most theropods. In life, these spines would have been covered in skin or fat tissue and formed a sail down the animal's back, a condition that has also been observed in some carcharodontosaurid and ornithopod dinosaurs. The eponymous neural spines of Spinosaurus were extremely tall, measuring over in height on some of the dorsal (back) vertebrae. Suchomimus had a lower, ridge-like sail across the majority of its back, hip, and tail region. Baryonyx showed a reduced sail, with a few of the rearmost vertebral spines being somewhat elongated. Ichthyovenator had a sinusoidal (wave-like) sail that was separated in two over the hips, with the upper ends of some neural spines being broad and fan-shaped. A neural spine from the holotype of Vallibonavenatrix shows a similar morphology to those of Ichthyovenator, indicating the presence of a sail in this genus as well. One partial skeleton possibly referable to Angaturama also had elongated neural spines on its hip region. The presence of a sail in fragmentary taxa like Sigilmassasaurus is unknown. In members of the subfamily Spinosaurinae, like Ichthyovenator and Spinosaurus, the neural spines of the caudal (tail) vertebrae were tall and reclined, accompanied by also elongated chevrons—long, thin bones that form the underside of the tail. This was most pronounced in Spinosaurus, in which the spines and chevrons formed a large paddle-like structure, deepening the tail significantly along most of its length.
Classification
The family Spinosauridae was named by Stromer in 1915 to include the single genus Spinosaurus. The clade was expanded as more close relatives of Spinosaurus were uncovered. The first cladistic definition of Spinosauridae was provided by Paul Sereno in 1998 (as "All spinosauroids closer to Spinosaurus than to Torvosaurus").
Traditionally, Spinosauridae is divided into two subfamilies. Spinosaurinae, which contains the genera Ichthyovenator, Irritator, Oxalaia, Sigilmassasaurus and Spinosaurus, is marked by unserrated, straight teeth and external nares set further back on the skull than in baryonychines. Baryonychinae, which contains the genera Baryonyx, Cristatusaurus, Suchosaurus, Suchomimus, Ceratosuchops, and Riparovenator, is marked by serrated, slightly curved teeth, smaller size, and more teeth in the lower jaw behind the terminal rosette than in spinosaurines. Others, such as Siamosaurus, may belong to either subfamily but are too incompletely known to be assigned with confidence. Siamosaurus was classified as a spinosaurine in 2018, but the results are provisional and not entirely conclusive.
The subfamily Spinosaurinae was named by Sereno in 1998, and defined by Thomas Holtz and colleagues in 2004 as all taxa closer to Spinosaurus aegyptiacus than to Baryonyx walkeri. The subfamily Baryonychinae was named by Charig & Milner in 1986. They erected both the subfamily and the family Baryonychidae for the newly discovered Baryonyx, before it was referred to Spinosauridae. Their subfamily was defined by Holtz and colleagues in 2004, as the complementary clade of all taxa closer to Baryonyx walkeri than to Spinosaurus aegyptiacus. Examinations in 2017 by Marcos Sales and Cesar Schultz indicate that the South American spinosaurids Angaturama and Irritator were intermediate between Baryonychinae and Spinosaurinae based on their craniodental features and cladistic analysis. A study by Arden and colleagues in 2018 named the tribe Spinosaurini to include Spinosaurus and Sigilmassasaurus, though the validity of the latter as a distinct spinosaurid is debated. In 2021, Barker et al. named the new tribe Ceratosuchopsini within the Baryonychinae to encompass Suchomimus, Riparovenator, and Ceratosuchops.
The 2017 study mentioned above indicates that Baryonychinae may in fact be non-monophyletic. Their cladogram can be seen below.
The next cladogram displays an analysis of Tetanurae by Allain and colleagues in 2012, simplified to show only Spinosauridae:
The 2018 phylogenetic analysis by Arden and colleagues, which included many unnamed taxa, resolved Baryonychinae as monophyletic, and also coined the new term Spinosaurini for the clade of Sigilmassasaurus and Spinosaurus.
In 2021, Chris Barker, Hone, Darren Naish, Andrea Cau, Lockwood, Foster, Clarkin, Schneider, and Gostling described two new spinosaurid species, Ceratosuchops inferodios and Riparovenator milnerae. In the paper, they performed a phylogenetic analysis incorporating a general range of theropods, but mostly focusing on Spinosauridae. The results of the analysis appear below:
| Biology and health sciences | Theropods | Animals |